Gabe Monroy, Microsoft Azure | KubeCon 2017
>> Commentator: Live from Austin, Texas, it's the Cube. Covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and the Cube's ecosystem partners.
>> Hey, welcome back everyone. Live here in Austin, Texas, the Cube's exclusive coverage of KubeCon and CloudNativeCon. Its third year, not even third year, I think it's the second year, and not even three years old as a community, growing like crazy. Over 4500 people here; combining the bulk of the shows, it's double what it was before. I'm John Furrier, co-founder of SiliconANGLE. Stu Miniman, analyst, here. Next is Gabe Monroy, who is lead product manager for containers for Microsoft Azure. Gabe, welcome to the Cube.
>> Thanks, glad to be here. Big fan of the show.
>> Great to have you on. I mean, obviously container madness, we've gotten past that, now it's Kubernetes madness, which really means that the evolution of the industry is really starting to get some clear lines of sight, a straight and narrow if you will. People are starting to see a path towards scale, developer acceleration, more developers coming in than ever before, this cloud native world. Microsoft's doing pretty well with the cloud right now. Numbers are great, hiring a bunch of people. Give us a quick update, big news, what's going on?
>> Yeah, so you know, a lot of things going on. I'm just excited to be here. I think for me, I'm new to Microsoft, right. I came here about seven months ago by way of the Deis acquisition, and I like to think of myself as kind of representing part of this new Microsoft trend. My career was built on open source. I started a company called Deis and we were focused on really Kubernetes-based solutions, and here at Microsoft I'm really doing a lot of the same thing, but with Microsoft's cloud as sort of the vehicle that we're trying to attract developers to.
>> What news do you guys have here, some services?
>> Yeah, so we've got a bunch of things we're talking about. The first is something I'm especially excited about: the virtual kubelet. Now, to tell a little bit of the story here, I think it's actually kind of fascinating. Back in July we launched this thing called Azure Container Instances, and ACI was a first-of-its-kind serverless container service in the cloud. Just run a container; it runs in the cloud. It's micro-billed and it is invisible infrastructure, so part of the definition of serverless there. As part of that, we wanted to make it clear that if you were going to do complex things with these containers you really need an orchestrator, so we released this thing called the ACI Connector for Kubernetes along with it. And we were excited to see people were just so drawn to this idea of serverless Kubernetes, Kubernetes that, you know, didn't have any VMs associated with it, and folks at hyper.sh, who have a similar serverless container offering, took our code base and forked it and did a version of theirs. And you know, Brent and I were thinking together and we were like, "oh man, there's something here, we should explore this." So we got some engineers together, we put a lot of work in, and we've now announced, in conjunction with hyper and others, this virtual kubelet that bridges the world of Kubernetes with the world of these new serverless container runtimes like ACI.
>> Okay, can you explain that a little bit?
>> Sure.
>> People have been coming in saying, wait, does serverless replace it, how does it work, is Kubernetes underneath still?
>> Yeah, so I think the best place to start is the definition of serverless, and I think serverless is really the conflation of three things: it's invisible infrastructure, it is micro-billing, and it is an event-based programming model. That's sort of the classical definition, right.
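A rough sketch of what "just run a container" looks like with ACI from the Azure CLI; the resource group and names below are placeholders, and exact flags may vary by CLI version, so treat this as illustrative rather than authoritative:

```shell
# Run a single container on Azure Container Instances: no VM to manage,
# micro-billed while it runs (resource group and names are placeholders).
az container create \
  --resource-group demo-rg \
  --name hello-aci \
  --image nginx \
  --cpu 1 --memory 1.5

# With the virtual kubelet registered in a cluster, ACI appears as one
# more Kubernetes node, so ordinary pods can be scheduled onto it.
kubectl get nodes
```

The point of the virtual kubelet is that the second half requires no new tooling: the scheduler just sees another node.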
Now what we did with ACI and serverless containers is we took that last one, the event-based programming model, and we said, look, you don't need to do that. If you want to write a container, anything that runs in that container can work, not just functions. And that is, I think, a really important distinction, because I believe the best of serverless is, you know, that micro-billing and invisible infrastructure.
>> Well that's built in, isn't it?
>> Correct, yeah.
>> What are the biggest challenges of serverless? Because first of all its [Inaudible 00:03:58] in the mind of a developer who doesn't want to deal with plumbing.
>> Yes.
>> Meaning networking plumbing, storage, and a lot of the details around configuring. Just program away, be creative, spend their time building.
>> Yes.
>> What are the big differences between those? What are the issues and challenges that serverless has for people adopting it, or is it frictionless at this point?
>> Well, you know, it depends on what you're talking about, right. So I think, you know, for functions it's very simple to get a function service and add your functions and deploy functions and start chaining those together, and people are seeing rapid adoption and that's progressing nicely. But there's also a contingent of folks who are represented here at the show who are really interested in containers as the primitive, and not functions, right. Containers are inclusive of lots of things, functions being one of them; betting on containers as, like, the compute artifact is actually a lot more flexible and solves a lot more use cases.
So we're making sure that we can streamline ease of use for that while also bringing the benefits of serverless. Really, the way I think of this is marrying AKS, our managed Kubernetes service, with ACI, our, you know, serverless containers, so you can get to a place where you can have a Kubernetes environment that has no VMs associated with it, like literally zero VMs. You'd scale the thing down to zero, and when you want to run a pod or container you just pay for a few seconds of time, and then you kill it and you stop paying for it, right.
>> Alright, so talk about customers.
>> Yep.
>> What's the customer experience you guys are going after? Did you have any beta customers? Who's adopting your approach, and can you highlight some examples of some really cool ones? You don't have to name names, or you can; anecdotal data will be good.
>> Yeah, well, you know, on the announcement blog post page we have a really great video of Siemens Healthineers, I believe is the name, but basically a health care company that is using Kubernetes on Azure, AKS specifically, to disrupt the health care market and to benefit real people. And, you know, to me I think it's important that we remember, we're deep in this technology, right, but at the end of the day this is about helping developers who are in turn helping real-world people, and I think that video is a good example of that.
>> And what was their impact, speed? Speed of developers?
>> Yeah, I mean, I think the main thing is agility, right. People want to move faster, and so that's the main benefit that we hear. I think cost is obviously a concern for folks, but I think in practice the people cost of operating some of these systems tends to be a lot higher than the infrastructure costs when you stack them up, so people are willing to pay a little bit of a premium to make it easier on people, and we see that over and over again.
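The "pay for a few seconds" model can be made concrete with some arithmetic; the per-second rates below are invented placeholders, not Azure's actual ACI pricing:

```shell
# Hypothetical per-second rates (placeholders, not real prices):
# $0.0000125/s per vCPU, $0.0000015/s per GB of memory.
# Cost of a pod that runs for 90 seconds with 1 vCPU and 1.5 GB:
awk 'BEGIN {
  cpu_rate = 0.0000125; mem_rate = 0.0000015
  seconds = 90; vcpus = 1; gb = 1.5
  printf "%.7f\n", seconds * (vcpus * cpu_rate + gb * mem_rate)
}'
# prints 0.0013275
```

Whatever the real rates are, the shape of the math is the point: cost scales with seconds of actual pod runtime, not with a VM that keeps billing between runs.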
>> Yeah, Gabe, I want you to speak to kind of the speed of a company the size of Microsoft. So, you know, the Deis acquisition of course was already focused on Kubernetes before coming inside of Microsoft, and we see, I mean, big cloud companies moving really fast on Kubernetes. I've heard complaints from customers like, "I can't get a good roadmap because it's moving so fast."
>> You know, I would say that was one of the biggest surprises for me joining Microsoft, just how fast things move inside of Azure in particular. And I think it's terrific, you know. I think that there's a really good focus on making sure that we're meeting customers where they are and building solutions that meet the market, but also just executing and delivering and doing that with speed. One of the things that is most interesting to me is, like, the geographic spread. Microsoft is in so many different regions, more than any other cloud. Compliance certification, we take all that stuff really seriously, and being able to do all those things, be the enterprise-friendly cloud while also moving at this breakneck pace in terms of innovation, it's really spectacular to watch from the inside.
>> A lot of people don't know that. When they think about Azure they think, "oh, they're copying Amazon," but Microsoft has tons of data centers. They've had browsers, they're all over the world, so it's not like they're foreign to regions, I mean they're everywhere.
>> Microsoft is everywhere, and not only is it not foreign, but I mean, you've got to remember Microsoft is an enterprise software company at its core. We know developers, that is what we do, and going into cloud in this way is just extremely natural for us. And I think that the same can't really be said for everyone who's trying to move into cloud. Like, we've got a history of working with developers, building platforms; we've got an entire division devoted to developer tooling, right.
>> I want to ask you about two things that come up a lot; one is very trendy, one is kind of not so trendy but super important. One is AI.
>> Yes.
>> AI with software is going to impact and disrupt storage, and with virtual kubelets this is going to change the storage game, but it's going to enhance the machine learning and AI capability. The other one is data warehousing, or data analytics. Two very important trends: one is certainly a driver for growth and has a lot of sex appeal, the AI machine learning, but all the analytics being done on cloud, whether it's an IoT device, this is like a nice use case for containers and orchestration. Your comment and reaction to those two trends.
>> Yeah, you know, I think that AI and deep learning generally is something that we see driving a ton of demand for container orchestration. I've worked with lots of customers, including folks like OpenAI, on their Kubernetes infrastructure running on Azure today. Something that Elon Musk actually proudly mentioned; that was a good moment for the containers. (chuckling)
>> Get a free Tesla. Broker us some Teslas, and get that new one, goes from 0 to 100 in 4.5 seconds.
>> Right, yeah.
>> So you've got a good customer, OpenAI. What was the impact for them? What was the big?
>> Well, you know, this is ultimately about empowering people, in this case they happen to be data scientists, to get their job done. The way I look at it is, we're doing our jobs in the infrastructure space if the infrastructure disappears. The more conceptual overhead we're bringing to developers, the more that means we're not doing our job.
>> So the question then specifically is, deep learning and AI, is it enhanced by containers and Kubernetes?
>> Absolutely.
>> What order of magnitude?
>> I don't know, but an order of magnitude enhancement, I would argue.
>> Just underlying that, the really important piece is we're talking about data here.
>> Yes.
>> And one of the things we've been kind of trying to tackle the last couple years of containers is, you know, storage, and that's carried over to Kubernetes. How's Microsoft involved? What's your, you know, prognosis as to where we go with cloud native storage?
>> Yeah, that's a fascinating question. So back in the early days, when I was still contributing to Docker, I was one of the largest external contributors to the Docker project earlier in my career. I actually wrote some of the storage stuff, and so I've been going around since Docker's inception in 2013 saying don't run databases in containers. It's not because you can't, right, you can, but just because you can doesn't mean you should. (chuckling)
>> Exactly.
>> And I think that, you know, as somebody who has worked in my career on the operations side, things like an SLA mean a lot, and so this leads me to another one of our announcements at the show, which is the Open Service Broker for Azure. Now, what we've done, thanks to the Cloud Foundry Foundation, who basically took the service broker concept and spun it out, is we are now able to take the world of Kubernetes and bridge it to the world of Azure services, data services being sort of some of the most interesting. Now, the demo that I like to show for this is WordPress, which by the way sounds silly, but WordPress powers tons of the web today still. WordPress is a PHP application and a MySQL database. Well, if you're going to run WordPress at scale, are you going to want to run that MySQL in a container? Probably not. You're probably going to want to use something like Azure Database for MySQL, which comes with an SLA, backup/restore, DR, and an ops team at Microsoft to manage the whole thing, right. But then the question is, well, I want to use Kubernetes, so how do I do that? Well, with the Open Service Broker for Azure, we actually shipped a Helm chart.
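The Helm flow Monroy walks through might look roughly like this; the repo URL and chart name here are assumptions based on the announcement, not verified values:

```shell
# Add the Azure charts repo (URL assumed) and install WordPress. Behind
# the scenes the Open Service Broker for Azure provisions a managed
# MySQL, with SLA and backups, and injects generated credentials into
# the app, so the developer never touches a password.
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/wordpress --name my-blog
```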
We can helm install Azure WordPress and it will install in Kubernetes the same way you would a container-based system, and behind the scenes it uses the broker to go spin up a Postgres, sorry, a MySQL, and dynamically attach it. Now, the coolest thing to me about this, yeah, is the agility, but I think that one of the underrated features is the security. The developer who does that doesn't ever touch credentials; the passwords are automatically generated and automatically injected into the application, so you get to do things like rotations without ever touching the app.
>> So we're a publisher, we use WordPress, we'd love it. Will this help us with scale if we did Azure?
>> Absolutely. After this is over we'll go set it up. (laughing)
>> I love WordPress, but when it breaks down, well, this is the whole point of where auto scaling shows a little bit of its capabilities in the world, is that with PHP you'd like to have more instances.
>> Yeah.
>> That would be a use case. Okay, Redshift in Amazon wasn't talked about much at re:Invent last week. We don't hear a lot of talk around the data warehouse, which is a super important way to think about collecting data in cloud. Is that going to be an enhanced feature? Because people want to do analytics. There's a huge analytics audience out there; they're moving off of Teradata. You guys have a lot of analytics at Microsoft. They might have moved from Hadoop or Hive or somewhere else, so there's a lot of analytics workloads that would be prime, or at least potentially prime, for Kubernetes.
>> Yeah, I think...
>> Or is that not fully integrated?
>> No, I think it's interesting. I mean, for us, I personally think using something like the service broker, the Open Service Broker API, to bridge to something like a data lake or some of these other Azure hosted services is probably the better way of doing that, because if you're going to run them on containers, these massive data warehouses, yes you can do it, but the operational burden is high, it's really high.
>> So your point about the database earlier.
>> Yeah. Same general point there. Now, can you do it? Do we see people doing it? Absolutely, right.
>> Yeah, they do things sometimes that they shouldn't be doing.
>> Yeah, and of course, back to the deep learning example, those are typically big, large training models that have similar characteristics.
>> Alright, as a newbie inside Azure, not new to the industry and the community,
>> Yep.
>> share some color. What's it like in there? Obviously a number two to Amazon, you guys have great geographic presence, you're adding more and more services every day at Azure. What's the vibe, what's the mojo like over there? Share some inside baseball.
>> Yeah, I've got to say, really, it's a really exciting place to work. Things are moving so fast, we're growing so fast, customers really want what we're building. Honestly, day to day I'm not spending a lot of time looking out; I'm spending a lot of time dealing with enterprises who want to use our cloud products.
>> And of the top things that you have on your PM list, what are the top stack-ranked features people want?
>> I think a lot of this comes down to, in general, I think this whole space is approaching a level of enterprise friendliness and enterprise hardening, where we want to start adding governance, and adding security, and adding role-based access controls across the board, and really making this palatable to high-trust environments. So I think that's a lot of our focus.
>> Stability, ease of use.
>> Stability, ease of use are always there. I think the enterprise hardening, and things like VNet support for all of our services, VNet service endpoints, those are some things that are high on the list.
>> Gabe Monroy, lead product manager for containers at Microsoft Azure. Great to have you on; I'd love to talk more about geographies and moving apps around the network and multi-cloud, but another time. Thanks for the time.
>> Another time.
>> It's the Cube live coverage. I'm John Furrier, co-founder of [Inaudible 00:15:21]. Stu Miniman with Wikibon, back with more live coverage after this short break.
John Gossman, Microsoft Azure - DockerCon 2017 - #DockerCon - #theCUBE
>> Announcer: Live from Austin, Texas, it's theCUBE, covering DockerCon 2017. Brought to you by Docker and support from its ecosystem partners.
>> Welcome back to theCUBE here in Austin, Texas at DockerCon 2017. I'm Stu Miniman, with my cohost for the two days of live broadcast, Jim Kobielus. Happy to welcome back to the program John Gossman, who is the lead architect with Microsoft Azure, and was also part of the keynote this morning. John, I had the pleasure of interviewing you two years ago. We went through the obligatory wait, Microsoft, open source, Linux and Windows, everything living together, it's like cats and dogs. But thanks so much for joining us again.
>> Yeah, well, as I was saying, that's 14 years in cloud years. So there's been a lot of change in that time, but thanks for having me again.
>> Yeah, absolutely. You said it was three years that Microsoft and Docker have been working together. 21 years in dog, or cloud, years, if you will. I think Docker is more whales and turtles, as opposed to dogs. But enough about the cartoons and the animals. Why don't you give our audience a synopsis of the key messages you were trying to get across in the keynote this morning.
>> Okay, well, the very simple message is that what we enabled with this new technology, Hyper-V isolation for Linux containers, is the ability to run Linux containers seamlessly on Windows using the normal Docker experience. It's just docker run busybox, or docker run mysql, or whatever it is, and it just works. And of course, if you know a little more technical detail about containers, you realize that one of the reasons containers are the way they are is that all the containers on a box normally share a kernel. So you can run a Canonical Ubuntu user space on a Red Hat kernel, or vice versa. But Windows and Linux kernels are too different. So if you want to run a Windows container, it's not going to run easily on Linux, and vice versa.
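From a Windows host, the experience Gossman describes might look like the following; image names and flag support depend on the Docker and Windows versions in use, so this is a hedged sketch rather than a guaranteed invocation:

```shell
# With Hyper-V isolation for Linux containers, a Linux image runs inside
# a minimal utility VM, but the user experience is plain `docker run`.
docker run --rm busybox echo "hello from a Linux container on Windows"

# Windows containers can also opt into Hyper-V isolation explicitly:
docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hi
```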
And you can still get this effect, if you want it, by also using a virtual machine. But then you've got the management overhead of managing the virtual machine, managing the containers, all the complexity that that involves. You have to get a VHD or AMI or something like that, as well as a container image, and you lose a lot of that sort of experience.
>> John, first of all, I have to say congratulations to Microsoft. When the announcement was made that Windows containers were going to be developed, I have to say that I and most of my peers were a little bit skeptical as to how fast that would work, the development cycle. Probably because we have lots of experience and it's always, okay, we understand how many man-years this usually takes. But you guys hit it and were delivering, got through the betas. So can you speak to us about where we are with Windows containers? And one of the things people want to kind of understand is, compared to, like, Linux containers, how do you expect the adoption of that, now that it's generally available, to roll out? Do I have to wait for the next server refresh, OS refresh? How do you expect your customers to adopt and embrace it?
>> Well, we were able to get this to work so quickly because, if you remember, Docker didn't actually invent containers. They took a bunch of kernel primitives that were in Linux and put a really great user experience on them. And I'm not taking anything away from Docker by saying that, because oftentimes in the technology industry it's easy to make something that is complicated and powerful, but not easy to use. And Windows already had a lot of those same sort of similar kernel primitives built in. They've had job objects, I think, since Windows 2000. And so it was kind of the same experience. We took the Docker engine, so we got the API; we were using the open source project, so we have complete compatibility.
And then we just had to write basically a new back-end, and that's why it was able to come up rather quickly. And now we're in a mode, you know, where Windows Server updates things more incrementally than we did in the past. So this will just keep on improving as time goes on.
>> Okay, one of the other big announcements in the keynote this morning was LinuxKit. It's an open source project; we actually saw Solomon move it to open source during the keynote, when they laid out the ecosystem for it, like IBM, HPE, Intel and Microsoft. So what does that mean for Microsoft? You are now a provider of Linux? How are we supposed to look at this?
>> Yeah, so we're working with all the Linux vendors. If you saw our blog about the work we did today, we also have announcements from SUSE and Red Hat and Canonical, and the usual people. And one of the things I said in that post is, look, the new model is that you can choose both the Linux container that you want and the kernel that you want to run it on. And we're open to all sorts of things. But we have been working with Docker for a long time on making sure that there was a great experience for running Docker for Linux on Windows, this thing called Docker for Windows, which they developed and we have been helping out on. And that's basically an earlier generation of this same Linux technology. So it's just the next step on that journey.
>> Microsoft's pretty well recognized to have a robust solution for hybrid cloud, because of course you've got your Azure Stack that you're putting on premises, and there's Azure itself; it's really the cloud-first methodology that you've been rolling through and you offer as a service. Are containers really anywhere in your environment, baked in anywhere? How should we be thinking about this going forward?
>> Yeah, absolutely. I mean, one of the points of containers in general, one of the attractive parts of containers, is that they run everywhere.
Including from your laptop, to the various clouds, to bare metal, to virtualized environments. And so we have both things. We want Windows containers, where we're the vendor of the container; we want those to work everywhere. And we also, as the vendors of Azure and Azure Stack, and just Windows Server, System Center, and other older enterprise technologies, want containers to work on all those things. So both directions. I mean, that's kind of the world we're in now, where everything works everywhere.
>> Can you square your container strategy, as reflected in your partnership with Docker, with your serverless compute strategy for Azure Functions? I'm trying to get a sense for Microsoft's overall approach to running containers as it relates to the Azure strategy.
>> In some ways, you can think of the serverless functions model as a step even further. You used to deploy a hardware machine and install everything on it. Next thing, you'd have a virtual machine and you'd install everything. And then you put your code and all its dependencies into the container. And with serverless, with Azure Functions, it's like, well, why do any of that? Just write a function. Now, at the same time, we think there's lots of reasons for the other models. Under the covers, all of these PaaS systems, going all the way back, that's how Docker started: run a container underneath the covers. It's not literally a Docker container, but that same sort of capability is down there in Functions. And we're certainly thinking about how Docker can work in that serverless model in the future.
>> See, one of my core focus areas for Wikibon as an analyst is looking at developers going more deeply into deep learning and machine learning. To what extent is Microsoft already taking its core tools in that area and containerizing them, and enabling access to that functionality through serverless APIs and functions and so forth in Azure?
>> On the serverless stuff, I'm not on the serverless team.
I'm not really qualified to explain everything on their end. I do know that the CNTK team has a Docker container that they put the bits in, and there's the Azure Machine Learning team, who's been working on a lot of these sorts of technologies. I'm just not the right guy to answer that question.
>> As you talk to your customers, where does this fit into the whole discussion? Do containers just happen in the background? Is it helping them with some of their application modernization? Does it help Microsoft change the way we architect things? What's kind of the practitioner, your ultimate end user, viewpoint on this?
>> Well, cloud adoption is at all points on the curve simultaneously, even inside of individual companies, so everybody's in a kind of different place. The two models that I think people have really concentrated on are, on one end, IaaS, infrastructure, where you just bring your existing applications, and on the other, PaaS, where you rewrite the application for a more modern, more cloud-centric architecture. And containers fit kind of squarely in the middle of that in some respects. Because in many ways, and primarily, I see Docker containers as a better form of infrastructure. It is an easier, more portable way to get all your dependencies together and run them everywhere. So a lot of the lift-and-shift work is in there, but once you're in containers, it is also easier to break the components apart and put them back together into a more microservice-oriented, cloud-native model.
>> I think that's a great point, because we've been having this discussion about, okay, there's applications that I'm rewriting, but then I've got this huge amount of applications that I need some way to have the bridge to the future, if you will. Because, I don't know, there's one analyst firm that calls it bimodal, but to the customers we talk to in general, we don't segment everything we do.
I have application-type infrastructure and I need to be able to live across multiple environments. Wrapping versus refactoring.
>> And they do both. But I always prefer to, you know, some people come in and they talk about legacy, and they're developers. I'm a developer, right? Developers, we always want to rewrite everything. And there's a time and place for doing that. But the legacy applications are required for those businesses to work. And if you don't need to refactor a thing, if you can get it into a container or virtual machine, however, and get it into that more modern environment, and then work around it, re-architect it, it's a whole different set of approaches. It's a good conversation to have with a customer, to understand. I've seen people go both too slow, and I've seen people refactor their whole thing and then try to figure out how to get it to work again.
>> So Microsoft has a gigantic user base. What kind of things are you doing to help educate and help the people that had certifications or jobs running Exchange to move towards this new kind of world, and cloud in general, and containers specifically, maybe?
>> Well, we have a ton of stuff. I'm not familiar with the certification programs myself, but we certainly have our Developer Evangelism team out going out training people. We've been trying to improve our documentation. And we have a bunch of guidance on cloud migration and things like that. There is a real challenge, and it's the same problem for our customers and anybody looking at cloud: to re-educate people who have been working in some of these previous models. Which is another reason, again, where the lift-and-shift stuff is, you can make it more like it is on premises, or more like it is on your laptop. It makes that journey a little easier. But we're definitely at one of those points where the industry is changing so fast, I personally have to spend a lot of time on, what's going on? What happened this day?
What's new today? Coming to the conference, I learn new things. >> You bring up a huge challenge that we see. I look at how Docker has their two delivery models. They've got the Community Edition, CE, and the Enterprise Edition, EE. EE feels more like traditional software: it's packaged, it's on a regular release cycle. CE is, well, Solomon talked this morning about the edge pieces. Can I keep up with a release every six months, or can I handle stuff flying at me? Even people inside of Docker can hardly keep up with that pace of change. What do you see? I think back to the major Windows operating system releases we used to have, like the Intel tick-tock on releases. The pace of change is tough for everyone. How are you helping, with your product development, customers take advantage of things and keep up with this rapidly changing ecosystem? >> This is a constant challenge with virtually all software now. We can't afford to only ship things every three years, and at the same time people need stability. So with the major products like Windows, we have these stable branches, where things stay pretty much the same as you go along, and then there's an active branch where the changes and the updates are coming down. The one biggest difference I'd say, and I've been in this industry for a long time, so say between the '90s and now, is that so much of the software actually runs on our servers. When something crashes, we get a crash dump and we can debug the thing, so out in the field we have much more capability for finding what's going on in the customer base than we did 20 years ago. But other than that, it's just a really hard challenge to satisfy both the people that can't have anything change and the people who want everything changing. >> John, you've been watching this for a number of years. What do we still have left to do?
We come back to DockerCon next year, you know, we'll have more people, it'll be a bigger event, but what's the progression? What kind of things are you looking forward to the ecosystem, and yourself, and Docker knocking down to move customers forward? >> The first year was kind of like, what is this thing? The second year was, now the individual Docker container is there, how do you orchestrate them? The next step is, how do we network these things? And there's an initiative now to standardize storage, for storage systems and Docker containers. Monitoring. There are a lot of things still to do; we have a long way to go. On the other side, I think there's this other track, which we talked about today, which is that virtualization and containers are going to blur and blend, and I don't think that seven years from now we're going to be talking about containers or virtual machines. We're just going to say it's some unit of compute, and then there are so many knobs and tweaks: you want it a little more isolated, you want it a little less isolated, you trade off some performance for something else. >> Business capability, in other words the enterprise-architecture framework of business capabilities, will be paramount in terms of composing applications or microservices, from what I understand you saying. >> Yeah, I think where we're really going to get to is a model where we get past the basics of storage and networking and start working up to the next level. So things like Helm, or the DC/OS Universe, or Swarm stacks, where you can describe more of an application; it just keeps moving up. And so I think in seven years we won't be talking so much about this, it'll be some other disruption, right? But we won't be talking about this virtualization layer as much as building apps again.
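The "describe more of an application" idea (Helm charts, DC/OS Universe packages, Swarm stacks) means declaring the whole application rather than running individual containers. A sketch of a Swarm-style stack file; the service names and images are invented for illustration:

```yaml
# Hypothetical stack file: the whole app (web tier, API, database)
# described in one place instead of as individually run containers.
version: "3.3"
services:
  web:
    image: example/storefront:2.1
    ports: ["80:8080"]
    deploy:
      replicas: 3
  api:
    image: example/orders-api:2.1
    deploy:
      replicas: 2
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Deployed with `docker stack deploy`, the orchestrator reconciles the running system toward this description, which is the "moving up the stack" the answer describes.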
>> On the visual composition of microservices, what is Microsoft doing? You said that you long ago entered Microsoft through the Visio acquisition. What's Microsoft doing to enable more visual composition across these functions, across orchestrated team-like environments going forward? >> I think there is some work going on. It's not my area, again, visual composition, despite the fact that I came from Visio. I kind of got away from that space. >> Well, I'm betraying my age. I remember that period. >> All right. Well John, always a pleasure catching up with you, and thank you so much for joining us for this segment. Look forward to watching Microsoft going forward. >> Thanks. Thank you for having me. >> We'll be back with lots more coverage here from DockerCon 2017. You're watching theCUBE.
George Moore, Microsoft Azure Compute | Fortinet Accelerate 2017
>> Narrator: Live from Las Vegas, Nevada, it's theCUBE, covering Accelerate 2017, brought to you by Fortinet. Now, here are your hosts, Lisa Martin and Peter Burris. >> Hi, welcome back to theCUBE. We are SiliconANGLE's flagship program, where we go out to the events and extract the signal from the noise. Today we are with Fortinet at their Accelerate 2017 event in Las Vegas. I'm your host, Lisa Martin, and I'm joined by my cohost, Peter Burris. We are fortunate right now to be joined by George Moore. George is the CSO for Microsoft Azure, which is a big technology alliance partner for Fortinet. George, welcome to theCUBE. >> Nice to be here, thank you. >> We are excited to have you on. You are, as you mentioned, the CSO at Azure, but you are the CSO for all of the Azure compute services. You are one of the founders of the Azure engineering team from back in 2006, and, as we were talking offline, you hold over 40 patents in things like security deployment, interactive design, et cetera. You are a busy guy. >> I am, yes. (laughing) >> One of the things we have been talking about with our guests on the show today, and a great topic in the general session, was the value of data, and how businesses transform into digital businesses. The value in that data has to be critical. I'd love to get your take: as businesses have to leverage that data to become successful digital businesses, we know that securing the perimeter is not the only thing; security needs to be with the data. What is Azure doing to secure the cloud for your customers, and how do you help them mitigate or deal with the proliferation of mobile devices and IoT devices that are connecting to their networks?
Digital disruption is affecting everybody, and it is a huge thing that many companies are struggling to understand, to adapt their business models to, and to really leverage what digital can do for them, and certainly what we are doing in the public cloud with Azure helps that significantly. As you mentioned, there is just a proliferation of devices, a proliferation of data, so how do you have defense in depth, so that you don't just have perimeter-based security but actually have defense in depth at every level? At its heart, it really comes down to: how do you do encryption at rest? How do you keep the data encrypted? Who holds the keys for the data? What is the proliferation of the keys? How are the controls managed for that? Of course, if the data is encrypted, you really want to be able to do things with it. You want to be able to compute over it. You want to be able to do queries, analytics, everything. So there's the question of how you securely exchange the keys. How do you make sure the right virtual machines, the right compute, are running at the time to do the queries? That's the set of controls and security models and services that we provide in Azure that makes it super easy for customers to actually use. >> Azure represents what's called the second big transformation for Microsoft, where the first one might have been associated with Internet Explorer, those amazing things that Microsoft did to transform itself in the 1990s, and it seems to be going pretty well. How is security facilitating this transformation from a customer value proposition? >> Security is absolutely the number one question that every customer has whenever they start talking about the cloud, and so we take that very, very seriously. Microsoft spends over a billion dollars a year on all of our security products all up. We have literally armies of people who do nothing every day but wake up and make sure the product is secure, and that really boils down to two big pieces.
One is how we keep the platform secure: the security controls that we have ourselves, the compliance attestations, and everything to make sure that when customers bring their workloads to us, they are in fact kept secure. Second is the set of security controls that we provide to customers so they can secure their own workloads, integrate their security models with whatever they're running on premises, and have the right security models, attestations, multifactor authentication, identity controls, et cetera for their own workloads. >> Security is very context specific. I'm not necessarily getting into a conversation about industry or whatnot, but in terms of the classifications of services that need to be provided, we were talking a little bit about how some of the services that you provide end up being part of the architecture for other services within the Azure cloud. Talk a little bit about how you envision security evolving over time, as a way of thinking about how different elements of the cloud are going to be integrated and come together, and the role that security is going to play in making that possible and easy. >> You are absolutely right.
Azure is composed of, right now, 80-some-odd different services, and there's definitely a layering where, for example, my components around the compute pieces are used by the higher-order services around HDInsight and some of the analytics services and such, and so the security models we have in place internally for compute are in turn used by those higher-order services. The real value we can provide is having a common customer-facing security model, so there is a common way by which customers can access the control plane, do management operations on these services, and access the endpoints of the services using a common identity model, a common security model, role-based access control, again from a common perspective, logging, auditing, reporting. All of this has to be cohesive, correct, and unified, so that customers aren't facing a tumultuous array of different services that speak different languages, so to speak. >> We are here at Fortinet Accelerate 2017. Tell us how long Microsoft Azure and Fortinet have been working together, and what are you most excited about with some of the announcements from Fortinet today? >> The Microsoft and Fortinet partnership has been going on for quite some time. Specifically in the Azure space, we've had two major thrusts. One is integration with the Azure Security Center, which is a set of services within Azure that provides turnkey access to many, many different vendors, including Fortinet as one of our primary partners. And Fortinet also has all their products in the Azure Marketplace, so that customers can readily, in a turnkey manner, use Fortinet next-generation firewalls and such as virtual machines, incorporate those directly within their workloads, and have a very seamless billing model, a very seamless partnership model, a very seamless go-to-market strategy for how we jointly promote and jointly provide the services.
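Earlier in the conversation George framed encryption at rest around who holds the keys and how they are exchanged. The usual structure behind those questions is envelope encryption: a per-blob data key encrypts the data, and a key-encryption key held by the key service wraps the data key. The sketch below is a toy illustration of that structure only; the XOR "cipher" stands in for a real algorithm such as AES-GCM, and nothing here reflects Azure's actual implementation:

```python
# Toy illustration of envelope encryption, the pattern behind "encryption at
# rest" key management. NOT real cryptography: xor_bytes stands in for a real
# cipher, and the "KMS" is a dict. It only shows the structure: data is
# encrypted with a data-encryption key (DEK), and the DEK is wrapped by a
# key-encryption key (KEK) that only the key service holds.
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """Stand-in 'cipher' (symmetric, self-inverse). Do not use for real data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyKMS:
    """Holds key-encryption keys; callers never see them, only wrap/unwrap."""
    def __init__(self):
        self._keks = {}

    def create_key(self, key_id: str):
        self._keks[key_id] = secrets.token_bytes(32)

    def wrap(self, key_id: str, dek: bytes) -> bytes:
        return xor_bytes(self._keks[key_id], dek)

    def unwrap(self, key_id: str, wrapped: bytes) -> bytes:
        return xor_bytes(self._keks[key_id], wrapped)

def encrypt_blob(kms: ToyKMS, key_id: str, plaintext: bytes) -> dict:
    dek = secrets.token_bytes(32)            # fresh per-blob data key
    ciphertext = xor_bytes(dek, plaintext)   # encrypt data with the DEK
    wrapped_dek = kms.wrap(key_id, dek)      # persist only the wrapped DEK
    return {"ciphertext": ciphertext, "wrapped_dek": wrapped_dek}

def decrypt_blob(kms: ToyKMS, key_id: str, blob: dict) -> bytes:
    dek = kms.unwrap(key_id, blob["wrapped_dek"])
    return xor_bytes(dek, blob["ciphertext"])
```

The point of the pattern is that the bulk data and the wrapped key can live together in storage, while the unwrap operation, and therefore access to the plaintext, stays gated by the key service.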
>> One of the things that one of our guests was talking with us about today was that it's an easy sell, if you will, at the C-level: selling the value of investing in the right infrastructure to secure environments. Looking at that against the fact that there has historically been a challenge, or a concern, with security when it comes to enterprises moving workloads to the cloud, I'm curious about this easy-sell position that cyber security and the rise of attacks creates for the adoption of more enterprise workloads. We are seeing numbers predicting that north of 85% of enterprise workloads will be in the cloud by 2020. Cyber security attacks are becoming more and more common, hitting some pretty big targets, affecting a lot of big names. How much is Microsoft Azure using that as an impetus to drive that adoption higher and higher from an enterprise perspective? >> Absolutely, I see that every day. I give many, many talks at the C-level, to CSOs, CEOs, et cetera, and I can say that in many industries, like banking and the financial sector, 18 months ago banks did not have any interest in public cloud. It was just, "Thank you, we have no interest in cloud." But recently there has been the dawning realization that Azure and the public cloud products are in fact, in many cases, more secure than what the banks and other financial industry sectors can provide themselves, because we are making huge investments on an ongoing basis. We can actually provide better security, better integrated security, than what they can afford on premises. So as a result, we are now seeing literally a stampede of customers coming to us and saying, "Okay, I get it. You can actually have a very, very highly secure environment.
You can provide security controls that go well above and beyond whatever I could do on premises, and it's better integrated than what I could ever pull together on premises." >> One of the reasons for that is the challenge of finding talent: you guys can find a really talented person, bring them in, and that person can build security architectures for your cloud that can then be used by a lot of different customers. So what will be the role of this need for talent in the future? How do clients' people engage your people to ensure that the people side moves forward, and how do you keep scaling that as you scale the cloud? >> Certainly people are always the bottleneck, in virtually every industry and specifically within the computing space. The value that we are seeing from customers is that the people they previously had on premises, working to secure the base-level common infrastructure, are now freed because they don't have to do that work. They can do other interesting things at the application level and move their value-add further up the stack, which means they can innovate more rapidly and add more features more quickly, because they are not having to worry about the lower-level infrastructure pieces that are secured by Azure. So we are seeing the dawning realization that we are moving to a new golden age, with a higher degree of agility in the innovation happening at the application level. Because remember, if you have a compliant workload, PCI compliance within the credit card industry for example, the entire application and its infrastructure have to be part of the compliance boundary. That means when you are building that app, you have to give your auditors the complete stack for them to pass it.
If you only have to worry about this much as opposed to that much, then the amount of work you can do, the amount of integration, the amount of agility, the amount of innovation you can do at that level is orders of magnitude higher. So the value a lot of customers are seeing here is that their talented people can be put to use on more important, higher-order, business-related problems as opposed to lower-level infrastructure issues. >> Let's talk about that for a second, because one of the things we see within our research is that the era of cloud as renting virtual machines is starting to transition as people start renting applications, or application services that they themselves can put together. Part of the reason that's exciting is that it will liberate more developers; it brings more developers into the process of creating value in the cloud. But as they do that, they are going to be doing things that touch an enormous set of resources. So how do you make security easier for developers in Azure? >> The key is that we can do high degrees of integration at the low level between these various services. >> Peter: It goes back to that issue of a cascading of your stuff up into the other Azure services. >> Absolutely. I mean, think about it: we sit on top of a mountain of information. We have analytics and log files that know about virtually everything that's happening in the cloud, and we can have machine learning, machine intelligence and such, that can extract signals from noise that would otherwise be impossible to discover from a single customer's perspective.
If you have a low-and-slow attack by some sort of persistent individual, the very fact that they are trying a slow and low attack means we are able to pull that signal out and extract information that would not really be physically possible, or economically possible, for most companies to extract on premises. >> Does this get embedded into some of the toolkits that we are going to use to build these next-generation cloud-based apps? >> It gets embedded into the toolkits, but it also gets embedded in the set of services like the Azure Security Center: a single pane of glass, integrated with the products from Fortinet and others, where the customer can go and have a single view across all their workloads running within Azure and get comprehensive alerts and understanding from the analytics that we are able to pull out and provide to those customers. >> What's next? >> Security is an ever-evolving field, and the bad guys are always trying new things, so a lot of the innovation that's happening is in the analytics and machine-learning space: being able to pull more out of the log files, refine the algorithms, and basically apply more AI to the logs themselves so that we can provide integrated alerts. For example, if you have a kill chain of an individual coming in, attacking one of your products, and then using lateral movement to get to other products or other services within your environment, we can pull this together in a common log. We can show customers the sequence of this one individual acting across three, or four, or five different services. You have top-level visibility, and we can then give you guidance to say: if you insert separation of duties between these two individuals, you could have broken that kill chain.
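The kill-chain visibility described here, one actor moving across several services and visible only when the logs are viewed together, boils down to correlating per-service audit logs by actor and time. A minimal sketch, with invented field names and events:

```python
# Hypothetical sketch: merge per-service audit logs and group events by actor
# so that a sequence spanning several services becomes visible as one chain.
from collections import defaultdict

def correlate(logs_by_service):
    """logs_by_service: {service_name: [(timestamp, actor, action), ...]}"""
    chains = defaultdict(list)
    for service, events in logs_by_service.items():
        for ts, actor, action in events:
            chains[actor].append((ts, service, action))
    # Sort each actor's events by time to reconstruct the kill chain.
    return {actor: sorted(events) for actor, events in chains.items()}

logs = {
    "web":     [(1, "eve", "login_failed"), (2, "eve", "login_ok")],
    "storage": [(3, "eve", "list_keys")],
    "compute": [(4, "eve", "start_vm"), (2, "alice", "start_vm")],
}
chains = correlate(logs)
# eve's chain crosses three services in order: web -> storage -> compute
```

Real systems add scoring, time windows, and known attack patterns on top, but the core step is the same join across logs that no single service sees on its own.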
We can do proactive guidance to customers to help them secure their own workloads, even if they initially were not deployed in the most secure manner. >> George, we just have a couple of minutes left, but I'd like to get your perspective. You've shown us a tremendous amount of the accomplishments that Azure has made in public cloud and in security. What are the opportunities for partners to sell and resell Azure services? >> Absolutely. Microsoft has historically always worked incredibly well with partners. We have a very large partner ecosystem. >> Peter: It's the biggest. >> It is the biggest, exactly. Okay, I don't want to brag too much, yes. (laughing) >> That's what I'm here for, George. >> We see specifically in the security space that around 40% of partners' revenue is increasingly coming from cloud-based assets, cloud-based sales. We are setting up the necessary partner channels and partner models to make sure that the reseller channels and our partners are an integral part of our environment, that they can get the necessary revenue shares, and that we can give them the leads as the whole system evolves. We absolutely believe that partners are first and foremost in our success, and we are making deep, deep investments in the partner programs to make that possible. >> Well George, we wish you and Microsoft Azure continued success, as well as your partnership with Fortinet. We thank you so much for taking the time to join us on theCUBE today. >> Thank you. >> And for my cohost, Peter Burris, I'm Lisa Martin. Stick around, we will be right back on theCUBE.
Thomas Cornely, Indu Keri & Eric Lockard | Accelerate Hybrid Cloud with Nutanix & Microsoft
>> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me are Eric Lockard, who's the corporate vice president of Microsoft Azure Specialized; Thomas Cornely, the senior vice president of products at Nutanix; and Indu Keri, who's the senior vice president of engineering, NCI and NC2, at Nutanix. Gentlemen, welcome to theCUBE. Thanks for coming on. >> It's good to be here. >> Thanks for having us. >> Eric, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, why not just put everything in the public cloud? >> Yeah, well, the public cloud has a bunch of inherent advantages, right? It has effectively infinite capacity, the ability to innovate without a lot of upfront cost, regions all over the world. So there is a trend towards public cloud, but not everything can go to the cloud, especially right away. There are lots of reasons customers want to have assets on premises: data gravity, sovereignty, and so on. And so really, hybrid is the way to achieve the best of both worlds, to leverage the assets and investments that customers have on premises but also take advantage of the cloud for bursting or regionality or expansion, especially coming out of the pandemic. We saw a lot of this with work-from-home and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, that makes sense. Thomas, if you could talk a little bit, and I don't want to inundate people with the acronyms, but the Nutanix Cloud Clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so, you know, Cloud Clusters on Azure, which we call NC2 to make it simple. So NC2 on Azure is really our solution for hybrid cloud, right?
And when you think about hybrid cloud: it's highly desirable, customers want it. They know this is the right way to do it for them, given that they want workloads on premises, at the edge, and in public clouds, but it's complicated. It's hard to do, right? And the first thing you deal with is silos, right? You have different infrastructure that you have to go and deal with, different teams, different technologies, different areas of expertise, different portals; networking gets complicated, security gets complicated. And so, you've heard me say this already: hybrid can be complex. So what we've done with NC2 on Azure is make that simple, right? We allow teams to have a solution that lets you take any application running on premises and move it, as is, to any Azure region where NC2 is available. Once it's running there, you keep the same operating model, right? And that's actually super valuable: you go and do this in a simple fashion, do it faster, and basically do hybrid in a more cost-effective fashion, you know, for all your applications. And that's really what's special about NC2 on Azure today. >> So Thomas, just a quick follow-up on that. If I understand you correctly, it's an identical experience. Did I get that right? >> This is the key for us, right? When you're running on premises, you are used to a way of doing things: how you run your applications, how you operate them, how you protect them. And what we do here is extend the Nutanix operating model to workloads running in Azure, using the same core stack that you're running on premises, right?
So once you have a cluster deployed with NC2 in Azure, it's going to look like the same cluster you might be running at the edge or in your own data center, using the same tools, the same admin constructs to protect the workloads, make them highly available, do disaster recovery, or secure them. All of that stays the same. But now you are in Azure, and this is what we've spent a lot of time working on with the Microsoft teams: you now have access to that whole suite of Azure services from those workloads. So you get the best of both worlds; we bridge them together, and you get seamless access to those services between what you get from Nutanix and what you get from Azure. >> Yeah, and as you alluded to, this has traditionally been non-trivial, and people have been looking forward to it for quite some time. So Indu, I want to understand, from an engineering perspective, your team had to work with the Microsoft team, and I'm sure this is not just a press release or a PowerPoint; you had to do some engineering work. So what specific engineering work did you guys do, and what's unique about this relative to other solutions in the marketplace? >> Let me start with what's unique about this. I think Thomas and Eric both did a really good job of describing that the best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. You know, one way to think about this is that moving to the public cloud is sort of like remodeling your house. When you start remodeling your house, you find that you start with something, and before you know it, you're trying to remodel the entire house. And that's a little bit what the journey to the public cloud starts to look like when you start to refactor applications, because most of the applications out there today weren't designed for the public cloud to begin with.
NC2 allows you to flip that on its head and say: take your application as is, lift and shift it to the public cloud, and at that point start the refactoring journey. >> And one of the things that we have done really well with NC2 on Azure is that NC2 is not something that sits by Azure's side. It's fully integrated into the Azure fabric, especially the software-defined networking, the SDN piece. What that means is that you don't have to worry about connecting your NC2 cluster to Azure through some sort of network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster, and that makes your refactoring journey so much easier. Your management plane looks the same; your high-performance nodes, the NVMe nodes, look the same. Really, other than the fact that you're doing something in the public cloud, all the Nutanix goodness that you're used to, you continue to receive. There is a lot of secret sauce that we have had to develop as part of this journey, but if we had to pick one piece that really stands out, it is how we take the network complexity of a public cloud, in this case Azure, and make it as familiar to Nutanix customers as the VPC construct, the virtual private cloud construct, so that they can think about on-prem networking and public cloud networking in very similar terms. There's a lot more that's gone on behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say, when I grew up, that if you really want to grow up you have to do two things: you have to build a house, and you have to marry your kid off to someone. And I would tell my dad there's a third: do co-development with a public cloud provider as a partner. This has been just an absolutely amazing journey with Eric and the Microsoft team, and we're very grateful for their >> Support. I need NC2 for my house.
I live in a house that was built in 1687, and we connect old to new; it is a bolt-on. But the secret sauce, I mean, there's a lot there, but is it a PaaS layer? You didn't just wrap it in a container and shove it into the public cloud, you've done more than that, I'm inferring. >>You know, it's actually an infrastructure-layer offering, on top of which you can obviously run various types of platform services. So for example, down the road, if you have a containerized application, you'll actually be able to take it from on-prem and run it on NC two. But the NC two offering itself is an infrastructure-level offering. And the trick is that the storage that you're used to, the high-performance storage that defined Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to, like microsegmentation for security purposes, all of them are available to you on NC two in Azure the same way that you're used to on-prem. And furthermore, managing all of that through Prism, which is our management interface and console, also remains the same. That makes your security model easier, that makes your management challenge easier, and that makes it much easier for an application owner or the IT office to report back to the board that they have started to execute on the cloud mandate, and that they've done it much faster than they would have been able to otherwise. >>Great. Thank you for helping us understand the plumbing. So now, Thomas, maybe we can get to the customers. What are you seeing? What are the use cases that are going to emerge for this solution? >>Yeah, we've had a solution for a while, and this now being on Azure is going to extend the reach of the solution and get us closer to the types of use cases that are unique to Azure, in terms of solutions for analytics and so forth.
But the key use cases for us, the first one to talk about, is migration. You know, we see customers on that cloud journey; they're looking to move applications wholesale from on premises to public cloud. We make this very easy because, in the end, they take the same constructs that are around the application and we make them available now in the Azure region. You can do this for any application. There's no change to the application and no networking change; the same IP will work the same whether you're running on premises or in Azure. >>The app stays exactly the same, managed the same way, protected the same way. So that's a big one. And you know the type of drivers: maybe I want to go do something different, or I want to shut down a location on premises and I need to do that within a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration, and doing it in a simple, very fast manner, is a key use case. Another one, a classic for leveraging public cloud versus doing it on premises, is disaster recovery, and something that we refer to as elastic disaster recovery: being able to configure a secondary site to protect your on-premises workloads, but have that site sitting in Azure as a small site, just enough to hold the data that you're replicating. Then you use the fact that you can now get access to resources on demand in Azure to scale out the environment, fail over workloads, run them with performance, potentially fail them back to on premises, and then shrink back the environment in Azure to again optimize cost and take advantage of the elasticity that you get from public cloud models.
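The elastic disaster recovery lifecycle described here, a small pilot-light site in Azure that only holds replicated data, scales out on demand during a failover, and shrinks back afterward, can be sketched as a minimal model. All class and method names below are illustrative assumptions for the sake of the sketch; in practice NC two drives this workflow through Prism, not through any API shown here.

```python
# Minimal model of the "elastic disaster recovery" flow: keep a small
# pilot-light cluster in Azure that only receives replication, scale it
# out on demand during a failover, then fail back and shrink it again.
from dataclasses import dataclass, field


@dataclass
class DRCluster:
    nodes: int = 3                      # pilot-light size: just enough to hold data
    workloads: list = field(default_factory=list)

    def scale_out(self, extra_nodes: int) -> None:
        # On-demand capacity is the cloud-side advantage: grow only when needed.
        self.nodes += extra_nodes

    def fail_over(self, workloads: list) -> None:
        # Bring the protected on-prem workloads up on the cloud cluster.
        self.workloads.extend(workloads)

    def fail_back(self) -> list:
        # Return workloads to on-prem once the primary site recovers.
        moved, self.workloads = self.workloads, []
        return moved

    def shrink(self, to_nodes: int) -> None:
        # Shrink back toward pilot-light size to optimize cost.
        self.nodes = to_nodes


dr = DRCluster()
dr.scale_out(extra_nodes=9)             # disaster declared: grow to full capacity
dr.fail_over(["erp-vm", "sql-vm", "web-vm"])
print(dr.nodes, dr.workloads)           # 12 nodes, three workloads running in Azure

recovered = dr.fail_back()              # primary site restored
dr.shrink(to_nodes=3)                   # back to the 3-node pilot light
print(dr.nodes, len(recovered))
```

The point of the shape, not the names: capacity cost is only paid between `scale_out` and `shrink`, which is what distinguishes this from a conventionally sized secondary data center.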
>>And then the last one, building on top of that, is the fact that you can now do bursting use cases. Maybe you're running a large environment, typically desktops, the VDI environments that we see running on premises, and you have a seasonal requirement to enable more workers to get access to the same solution. You could do this by sizing for the large burst capacity on premises, wasting resources during the rest of the year. What we see customers do instead is optimize what they're running on premises and get access to resources on demand in Azure, moving the burst workload there: combined desktops running on premises and desktops running on NC two on Azure, same desktop images, same management, same services, as a burst use case. Say you're a retailer that has to take care of your holiday season; that's a great use case that we see over and over again for our customers. And it pretty much complements the notion of: look, I want to go to desktop as a service, but right now I don't want to refactor the entire application stack. I just want to be able to get access to resources on demand in the right place at the right time. >>Makes sense. I mean, this is really all about supporting customers' digital transformations. We all talk about how that was accelerated during the pandemic, and the cloud is a fundamental component of digital transformation. And Eric, you guys have obviously made a commitment between Microsoft and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers measure that? What does success look like? What's the ultimate vision here? >>Well, the ultimate vision is really twofold.
I think the first is really to ease a customer's journey to the cloud, to allow them to take advantage of all the benefits of the cloud, but to do so without having to rewrite their applications, retrain their administrators, or obviate the investment that they already have in platforms like Nutanix. And so the work that the companies have done together here, first and foremost, is really to allow folks to come to the cloud in the way that they want to come to the cloud and take the best of both worlds, right? Leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and capabilities of Azure. Second, it is really to extend some of the cloud capabilities down onto the on-premises infrastructure. And so with investments that we've done together, with Azure Arc for example, we're really extending the Azure control plane down onto on-premises Nutanix clusters, and bringing the capabilities that that provides to the Nutanix customer, as well as various Azure services like our data services and Azure SQL Server. So it's really coming at the problem from two directions. One is from traditional on-prem up into the cloud, and the second is from the cloud, leveraging the investment customers have in on-premises HCI. >>Got it. Thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with Thomas, then Indu, and then Eric, you can bring us home. >>Sure. So the key takeaway is, you know, Nutanix Cloud Clusters on Azure is now GA. This is something that we've had tremendous demand for from our customers, both from the Microsoft side and the Nutanix side, going back years literally, right?
People have been wanting to see this; it is now live, GA, open for business, and you know, we're ready to engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >>Great. Indu? >>Dave, in a prior life, about seven or eight years ago, I was part of a team that took a popular tax preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. If we had had NC two then, it would have saved us half the money, but more importantly, we would have gotten there in one third the time. And that's really the value of this. >>Okay. Eric, bring us home please. >>Yeah, I'll just point out that this is not something that was just bolted on, or something we started yesterday. This is something the teams at both companies have been working on together for years, really. And it's a way of deeply integrating Nutanix into the Azure cloud, with the ultimate goal of, again, providing cloud capabilities to the Nutanix customer in a way that they can take advantage of the cloud and then complement those applications over time with additional Azure services, like storage, for example. So it really is a great on-ramp to the cloud for customers who have significant investments in Nutanix clusters on premises. >>Love the co-engineering and the ability to take advantage of those cloud-native tools and capabilities, real customer value. Thanks, gentlemen. Really appreciate your time. >>Thank you. >>Thank you. >>Okay, keep it right there. You're watching Accelerate Hybrid Cloud, that journey with Nutanix and Microsoft technology, on theCube, your leader in enterprise and emerging tech coverage. >>Organizations are increasingly moving towards a hybrid cloud model that contains a mix of on-premises, public, and private clouds.
A recent study confirms that 83% of businesses agree hybrid multi-cloud is the ideal operating model. Despite its many benefits, deploying a hybrid cloud can be challenging: complex, slow, and expensive, requiring different skills and toolsets and separate, siloed management interfaces. In fact, 87% of surveyed enterprises believe that multi-cloud success will require simplified management of mixed infrastructures. >>With Nutanix and Microsoft, your hybrid cloud gets the best of both worlds: the predictable costs, performance, control, and data sovereignty of a private cloud, and the scalability, cloud services, ease of use, and fractional economics of the public cloud. Whatever your use case, Nutanix Cloud Clusters simplifies IT operations, is faster and lowers risk for migration projects, lowers cloud TCO, provides investment optimization, and offers effortless, limitless scale and flexibility. Choose NC two to accelerate your business in the cloud and achieve true hybrid cloud success. Take a free, self-guided 30-minute test drive of the solution's provisioning steps and use cases at nutanix.com/azure td. >>Okay, so we're just wrapping up Accelerate Hybrid Cloud with Nutanix and Microsoft, made possible by Nutanix, where we just heard how Nutanix is partnering with cloud and software leader Microsoft to enable customers to execute on a true hybrid cloud vision with actionable solutions. We pushed and got the answer that with NC two on Azure, you get the same stack, the same performance, the same networking, the same automation, and the same workflows across on-prem and Azure estates, realizing the goal of simplifying and extending on-prem workloads to any Azure region, to move apps without complicated refactoring, and to be able to tap the full complement of native services that are available on Azure.
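The seasonal burst use case from the panel discussion, sizing on premises for the steady state and renting the peak from the cloud only while it lasts, comes down to simple arithmetic. A rough sketch; the desktop counts and per-desktop prices below are made-up placeholders for illustration, not Azure or Nutanix pricing:

```python
# Compare owning peak VDI capacity year-round against bursting only the
# seasonal excess to cloud capacity rented on demand.

def burst_split(steady_desktops: int, peak_desktops: int) -> tuple[int, int]:
    """Run the steady-state load on-prem; burst only the seasonal excess."""
    burst = max(0, peak_desktops - steady_desktops)
    return steady_desktops, burst


def yearly_cost(steady: int, peak: int, burst_weeks: int,
                onprem_per_desktop_year: float,
                cloud_per_desktop_week: float) -> dict:
    onprem, burst = burst_split(steady, peak)
    return {
        # Option A: size the on-prem environment for the peak, all year.
        "size_for_peak_onprem": peak * onprem_per_desktop_year,
        # Option B: own only steady-state capacity, rent the burst briefly.
        "burst_to_cloud": onprem * onprem_per_desktop_year
                          + burst * cloud_per_desktop_week * burst_weeks,
    }


costs = yearly_cost(steady=1000, peak=1600, burst_weeks=6,
                    onprem_per_desktop_year=500.0,
                    cloud_per_desktop_week=15.0)
print(costs)  # bursting 600 desktops for 6 weeks vs. owning them year-round
```

Under these placeholder numbers, bursting costs 554,000 against 800,000 for sizing on premises for the peak; the gap grows as the burst window shrinks relative to the year.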
Remember, all these videos are available on demand at thecube.net, and you can check out siliconangle.com for all the news related to this announcement and all things enterprise tech. Please go to nutanix.com, where of course there is information about this announcement and the partnership, but there's also a ton of resources to better understand the Nutanix product portfolio: white papers, videos, and other valuable content, so check that out. This is Dave Vellante for Lisa Martin with theCube, your leader in enterprise and emerging tech coverage. Thanks for watching the program, and we'll see you next time.
Thomas Cornely, Induprakas Keri & Eric Lockard | Accelerate Hybrid Cloud with Nutanix & Microsoft
(gentle music) >> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me Eric Lockard, who is the Corporate Vice President of Microsoft Azure Specialized. Thomas Cornely is the Senior Vice President of Products at Nutanix, and Indu Keri, who's the Senior Vice President of Engineering, NCI and NC2 at Nutanix. Gentlemen, welcome to The Cube. Thanks for coming on. >> It's good to be here. >> Thanks for having us. >> Eric, let's, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, I want to just put everything in the public cloud. >> Yeah, well, I mean the public cloud has a bunch of inherent advantages, right? I mean, it has effectively infinite capacity, the ability to, you know, innovate without a lot of upfront costs, you know, regions all over the world. So there is a trend towards public cloud, but you know, not everything can go to the cloud, especially right away. There's lots of reasons. Customers want to have assets on premise, you know, data gravity, sovereignty and so on. And so really hybrid is the way to achieve the best of both worlds, really to kind of leverage the assets and investments that customers have on premise but also take advantage of the cloud for bursting, originality or expansion, especially coming out of the pandemic. We saw a lot of this from work from home and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, makes sense. I want to, Thomas, if you could talk a little bit, I don't want to inundate people with the acronyms, but the Nutanix Cloud Clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so, you know, Cloud Clusters on Azure, which we actually call NC2 to make it simple. And so NC2 on Azure is really our solution for hybrid cloud, right?
And you think about hybrid cloud: highly desirable, customers want it. They, they know this is the right way to do it for them, given that they want to have workloads on premises, at the edge, in public clouds, but it's complicated. It's hard to do, right? And the first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with. You have different teams, different technologies, different areas of expertise. And dealing with different portals, networking gets complicated, security gets complicated. And so you heard me say this already, you know, hybrid can be complex. And so what we've done with NC2 on Azure is we make that simple, right? We allow teams to go and basically have a solution that allows you to go and take any application running on premises and move it as-is to any Azure region where NC2 is available. Once it's running there, you keep the same operating model, right? And so that's actually super valuable, to actually go and do this in a simple fashion, do it faster, and basically do hybrid in a more (indistinct) fashion, you know, for all your applications. And that's what's really special about NC2 today. >> So Thomas, just a quick follow up on that. So you're, you're, if I understand you correctly, it's an identical experience. Did I get that right? >> This is the key for us, right? When you're sitting on premises, you are used to a way of doing things, of how you run your applications, how you operate, how you protect them. And what we do here is we extend the Nutanix operating model to workloads running in Azure, using the same core stack that you're running on premises, right? So once you have a cluster deployed in NC2 on Azure, it's going to look like the same cluster that you might be running at the edge or in your own data center, using the same tools, using the same admin constructs to go protect the workloads, make them highly available, do disaster recovery, or secure them.
All of that becomes the same. But now you are in Azure, and this is what we've spent a lot of time working with Eric and his teams on: you actually have access now to all of those suites of Azure services (indistinct) from those workloads. So now you get the best of both worlds, you know, and we bridge them together and you get seamless access to those services, between what you get from Nutanix and what you get from Azure. >> Yeah. And as you alluded to, this has traditionally been non-trivial, and people have been looking forward to this for quite some time. So Indu, I want to understand from an engineering perspective, your team had to work with the Microsoft team, and I'm sure this was not just a press release or a PowerPoint, you had to do some engineering work. So what specific engineering work did you guys do, and what's unique about this relative to other solutions in the marketplace? >> So let me start with what's unique about this. And I think Thomas and Eric both did a really good job of describing that. The best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. You know, one way to think about this is, moving to the public cloud is sort of like remodeling your house. And when you start remodeling your house, you know, you find that you start with something and before you know it, you're trying to remodel the entire house. And that's a little bit like what the journey to the public cloud starts to look like when you start to refactor applications, because most of the applications out there today weren't designed for the public cloud to begin with. NC2 allows you to flip that on its head and say: take your application as-is and then lift and shift it to the public cloud, at which point you start the refactor journey. And one of the things that we have done really well with NC2 on Azure is that NC2 is not something that sits by Azure's side.
It's fully integrated into the Azure fabric, especially the software-defined networking, SDN, piece. What that means is that, you know, you don't have to worry about connecting your NC2 cluster to Azure through some sort of a network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster. And that makes your refactor journey so much easier. Your management plane looks the same, your high performance nodes, the NVMe nodes, they look the same. And really, I mean, other than the fact that you're doing something in the public cloud, all the Nutanix goodness that you're used to, you continue to receive that. There is a lot of secret sauce that we have had to develop as part of this journey. But if we had to pick one that really stands out, it is how do we take the complexity, the network complexity of a public cloud, in this case Azure, and make it as familiar to Nutanix's customers as the VPC, the virtual private cloud (indistinct) that allows them to really think of their on-prem networking and the public cloud networking in very similar terms. There's a lot more that's done behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say when I grew up that, you know, if you really want to grow up, you have to do two things. You have to, like, build a house, and you have to marry your kid off to someone. And I would add a third: do a co-development with a public cloud partner. This has been just an absolutely amazing journey with Eric and the Microsoft team, and we're very grateful for their support. >> I need NC2 for my house. I live in a house that was built in 1687, and we connect all the new, and it is a bolt on, but the secret sauce, I mean there's, there's a lot there, but is it a (indistinct) layer? You didn't just wrap it in a container and shove it into the public cloud. You've done more than that, I'm inferring.
>> You know, it's actually an infrastructure layer offering on top of (indistinct). You can obviously run various types of platform services. So for example, down the road, if you have a containerized application, you'll actually be able to take it from on prem and run it on NC2. But the NC2 offering itself is an infrastructure level offering. And the trick is that the storage that you're used to, the high performance storage that, you know, defines Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to, like micro segmentation for security purposes, all of them are available to you on NC2 in Azure the same way that you're used to on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console, also remains the same. That makes your security model easier, that makes your management challenge easier, that makes it much easier for an application person or the IT office to be able to report back to the board that they have started to execute on the cloud mandate, and they've done that much faster than they would be able to otherwise. >> Great. Thank you for helping us understand the plumbing. So now Thomas, maybe we can get to, like, the customers. What, what are you seeing, what are the use cases that are going to emerge for this solution? >> Yeah, I mean, you know, we've had a solution for a while, and this now being on Azure is going to extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth. But the key use cases for us, the first one, you know, to talk about is migration. You know, we see customers on that cloud journey. They're looking to go and move applications wholesale from on premises to public cloud.
You know, we make this very easy because in the end they take the same constructs that were around the application, and we make them available now in the Azure region. You can do this for any applications. There's no change to the application, no networking changes; the same IP constructs will work the same whether you're running on premises or in Azure. The app stays exactly the same, managed the same way, protected the same way. So that's a big one. And you know, the type of drivers for (indistinct): maybe I want to go do something different, or I want to go and shut down the location on premises and I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration, and doing that in a simple fashion in a very fast manner, is a key use case. Another one, and this is classic for leveraging public cloud for what you're doing on premises, is IT disaster recovery, and something that we refer to as elastic disaster recovery: being able to go and actually configure a secondary site to protect your on premises workloads, but with that site sitting in Azure as a small site, just enough to hold the data that you're replicating, and then use the fact that you can now get access to resources on demand in Azure to scale out the environment, fail over workloads, run them with performance, potentially fail them back to on premises, and then shrink back the environment in Azure to again optimize cost and take advantage of the elasticity that you get from public cloud models. Then the last one, building on top of that, is just the fact that you can now get bursting use cases: maybe I'm running a large environment, typically desktop, you know, VDI environments that we see running on premises, and I have, you know, a seasonal requirement to go and actually enable more workers to go and get access to the same solution.
You could do this by sizing for the large burst capacity on premises, wasting resources during the rest of the year. What we see customers do is optimize what they're running on premises and get access to resources on demand in Azure, and basically move the workloads, and now get a combination of desktops running on premises and desktops running on NC2 on Azure, same desktop images, same management, same services, and do that as a burst use case during, say, the holiday season if you're a retailer that has to go and take care of that. You know, great use case that we see over and over again for our customers, right? And pretty much complementing the notion of, look, I want to go to desktop as a service, but right now I don't want to refactor the entire application stack. I just want to be able to get access to resources on demand in the right place at the right time. >> Makes sense. I mean, this is really all about supporting customers' digital transformations. We all talk about how that was accelerated during the pandemic, and the cloud is a fundamental component of those digital transformations. Eric, you guys have obviously made a commitment between Microsoft and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers, you know, measure that? What does success look like? What's the ultimate vision here? >> Well, the ultimate vision is really twofold, I think. The first is really to ease a customer's journey to the cloud, to allow them to take advantage of all the benefits of the cloud, but to do so without having to rewrite their applications, retrain their administrators, or obviate the investment that they already have in platforms like Nutanix. And so the work that the companies have done together here, you know, first and foremost is really to allow folks to come to the cloud in the way that they want to come to the cloud, and take really the best of both worlds, right?
Leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and capabilities of Azure. You know, second is really to extend some of the cloud capabilities down onto the on-premise infrastructure. And so with investments that we've done together with Azure Arc, for example, we're really extending the Azure control plane down onto on-premise Nutanix clusters and bringing the capabilities that it provides to the Nutanix customer, as well as various Azure services like our data services and Azure SQL Server. So it's really kind of coming at the problem from two directions. One is from kind of traditional on-premise up into the cloud, and then the second is kind of from the cloud, leveraging the investment customers have in on-premise HCI. >> Got it. Thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with Thomas, and then Indu, and then Eric, you can bring us home. >> Sure. So the key takeaway is, you know, Nutanix Cloud Clusters on Azure is now GA. You know, this is something that we've had tremendous demand for from our customers, both from the Microsoft side and the Nutanix side, going back years literally, right? People have been wanting to go and see this; this is now live, GA, open for business, and you know, we're ready to go and engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >> Great, Indu. >> Sure, Dave. In a prior life, about seven or eight years ago, I was a part of a team that took a popular tax preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. And if we had had NC2 then, it would've saved us half the money, but more importantly, would've gotten there in one third the time. And that's really the value of this. >> Okay. Eric, bring us home please.
>> Yeah, I'll just point out that this is not something that's just bolted on or something we started yesterday. This is something the teams at both companies have been working on together for years, really. And it's a way of deeply integrating Nutanix into the Azure Cloud, with the ultimate goal of, again, providing cloud capabilities to the Nutanix customer in a way that they can, you know, take advantage of the cloud and then complement those applications over time with additional Azure services, like storage, for example. So it really is a great on-ramp to the cloud for customers who have significant investments in Nutanix clusters on premise. >> Love the co-engineering and the ability to take advantage of those cloud native tools and capabilities, real customer value. Thanks, gentlemen. Really appreciate your time. >> Thank you. >> Thank you. >> Okay. Keep it right there. You're watching Accelerate Hybrid Cloud, that journey with Nutanix and Microsoft technology, on The Cube, your leader in enterprise and emerging tech coverage. (gentle music)
The Future Is Built On InfluxDB
>>Time series data is any data that's stamped in time in some way. That could be every second, every minute, every five minutes, every hour, every nanosecond, whatever it might be. And typically that data comes from sources in the physical world, like devices or sensors: temperature gauges, batteries, any device really. Or things in the virtual world: could be software, maybe it's software in the cloud, or data in containers or microservices or virtual machines. So all of these items, whether in the physical or virtual world, they're generating a lot of time series data. Now time series data has been around for a long time, and there are many examples in our everyday lives. All you gotta do is punch up any stock ticker and look at its price over time in graphical form. And that's a simple use case that anyone can relate to, and you can build timestamps into a traditional relational database. >>You just add a column to capture time, and as well, there are examples of log data being dumped into a data store that can be searched and captured and ingested and visualized. Now, the problem with the latter example that I just gave you is that you gotta hunt and peck and search and extract what you're looking for. And the problem with the former is that traditional general purpose databases are designed as sort of a Swiss army knife for any workload. And there are a lot of functions that get in the way and make them inefficient for time series analysis, especially at scale. Like when you think about OT and edge scale, where things are happening super fast, ingestion is coming from many different sources and analysis often needs to be done in real time or near real time. And that's where time series databases come in.
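The two approaches contrasted here, a timestamp column added to a general purpose table versus a purpose-built ingestion format, can be sketched side by side. The snippet below is an illustration only: the measurement, tag, and field names are made up, and the second half shows InfluxDB's documented line protocol text format.

```python
# Sketch: "just add a timestamp column" vs. a purpose-built ingestion format.
# Sensor names and values are illustrative, not from the program above.

import sqlite3

# Relational approach: a general-purpose table with a time column bolted on.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (ts INTEGER, sensor TEXT, celsius REAL)")
db.execute("INSERT INTO readings VALUES (1657000000, 's1', 21.5)")

# Purpose-built approach: InfluxDB's line protocol, the text format the
# database ingests natively. Shape: measurement,tag_set field_set timestamp
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one data point as an InfluxDB line-protocol string."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol("temperature", {"sensor": "s1"}, {"celsius": 21.5},
                        1657000000000000000)
print(line)  # temperature,sensor=s1 celsius=21.5 1657000000000000000
```

The point of the purpose-built format is that the measurement, tags, and timestamp are first-class parts of every point, which is what lets a time series engine optimize time-range queries and storage reclamation in ways a general purpose table cannot.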
>>They're purpose built and can much more efficiently support ingesting metrics at scale, and then comparing data points over time. Time series databases can write and read at significantly higher speeds and deal with far more data than traditional database methods. And they're more cost effective. Instead of throwing processing power at the problem, for example, the underlying architecture and algorithms of time series databases can optimize queries, and they can reclaim wasted storage space and reuse it. At scale, time series databases are simply a better fit for the job. Welcome to Moving the World with InfluxDB, made possible by InfluxData. My name is Dave Vellante and I'll be your host today. InfluxData is the company behind InfluxDB. The open source time series database InfluxDB is designed specifically to handle time series data, as I just explained. We have an exciting program for you today, and we're gonna showcase some really interesting use cases. >>First, we'll kick it off in our Palo Alto studios, where my colleague John Furrier will interview Evan Kaplan, who's the CEO of InfluxData. After John and Evan set the table, John's gonna sit down with Brian Gilmore. He's the director of IOT and emerging tech at InfluxData. And they're gonna dig into where InfluxData is gaining traction, why adoption is occurring, and why it's so robust. And they're gonna have tons of examples and double click into the technology. And then we bring it back here to our east coast studios, where I get to talk to two practitioners doing amazing things in space with satellites and modern telescopes. These use cases will blow your mind. You don't want to miss it. So thanks for being here today. And with that, let's get started. Take it away, Palo Alto. >>Okay. Today we welcome Evan Kaplan, CEO of InfluxData, the company behind InfluxDB. Welcome, Evan. Thanks for coming on. >>Hey John, thanks for having me. >>Great segment here on the InfluxDB story. What is the story?
Take us through the history. Why time series? What's the story? >><laugh> So the history is actually pretty interesting. Um, Paul Dix, my partner in this and our founder, um, super passionate about developers and developer experience. And, um, he had worked on Wall Street building a number of time series kind of platforms, trading platforms for trading stocks. And from his point of view, it was always what he would call a yak shave, which means you had to do a ton of work just to start doing work, which means you had to write a bunch of extrinsic routines. You had to write a bunch of application handling on existing relational databases in order to come up with something that was optimized for a trading platform or a time series platform. And he just developed this real clear point of view: this is not how developers should work. And so in 2013, he went through Y Combinator, and he built something for, he made his first commit to open source InfluxDB at the end of 2013. And he basically, you know, from my point of view, he invented modern time series, which is, you start with a purpose-built time series platform to do these kinds of workloads, and you get all the benefits of having something right outta the box. So a developer can be totally productive right away. >>And how many people in the company? What's the history of employees and stuff? >>Yeah, you know, I always forget the number, but it's something like 230 or 240 people now. Um, I joined the company in 2016 and I love Paul's vision. And I just had a strong conviction about the relationship between time series and IOT. Cuz if you think about it, what sensors do is they speak time series: pressure, temperature, volume, humidity, light. They're measuring, they're instrumenting something over time. And so I thought that would be super relevant over the long term, and I've not regretted it. >>Oh no.
And it's interesting, at that time, go back in the history, you know, the role of databases: well, the relational database was the one database to rule the world. And then as clouds started coming in, you started to see more databases proliferate, types of databases, and time series in particular is interesting. Cuz real time has become super valuable from an application standpoint. OT, which speaks time series, means something. It's like, time matters. >>Time. >>Yeah. And sometimes data's not worth it after the time, sometimes it's worth it. And then you get the data lake. So you have this whole new evolution. Is this the momentum? What's the momentum? I guess the question is, what's the momentum behind >>You mean what's causing us to grow? So >>Yeah, the time series, why is time series >>And the >>Category momentum? What's the bottom line? >>Well, think about it. You think about it from a broad sort of frame, which is, what everybody's trying to do is build increasingly intelligent systems, whether it's a self-driving car or a robotic system that does what you want to do or a self-healing software system. Everybody wants to build increasingly intelligent systems. And so in order to build these increasingly intelligent systems, you have to instrument the system well, and you have to instrument it over time, better and better. And so you need a tool, a fundamental tool, to drive that instrumentation. And that's become clear to everybody, that that instrumentation is all based on time. And so, what happened, what's happened, what's gonna happen? And so you get to these applications like predictive maintenance or smarter systems. And increasingly you want to do that stuff not just intelligently, but fast, in real time.
So millisecond response, so that when you're driving a self-driving car and the system realizes that you're about to do something, essentially you wanna be able to act in something that looks like real time. All systems want to do that, want to be more intelligent, and they want to be more real time. And so we just happen to, you know, we happen to show up at the right time in the evolution of a market. >>It's interesting, near real time isn't good enough when you need real time. <laugh> >>Yeah, it's not, it's not. And it's like, everybody wants it. Even when you don't need it, ironically, you want it. It's like having the feature for, you know, you buy a new television, you want that one feature even though you're not gonna use it. You decide that real time is a buying criteria. >>So you, I mean, what you're saying then is near real time is getting as close to real time as possible, as fast as possible. Right. Okay. So talk about the aspect of data, cuz we're hearing a lot of conversations on The Cube in particular around how people are implementing and actually getting better. So iterating on data, but you have to know when it happened to know how to fix it. So this is a big part of what we're seeing, with people saying, Hey, you know, I wanna make my machine learning algorithms better after the fact, I wanna learn from the data. Um, how do you see that evolving? Is that one of the use cases of sensors, as people bring data in off the network, getting better with the data, knowing when it happened? >>Well, for sure. What you're saying is, none of this is non-linear, it's all incremental. And so if you take something, you know, just as an easy example, if you take a self-driving car, what you're doing is you're instrumenting that car to understand where it can perform in the real world in real time.
And if you do that, if you run the loop, which is: I instrument, I watch what happens, oh, that's wrong, I have to correct for that, I correct for that in the software. If you do that a billion times, you get a self-driving car. But every system moves along that evolution. And so you get the dynamic of, you know, constantly instrumenting, watching the system behave, and correcting it. And a self-driving car is one thing, but even in the human genome, if you look at some of our customers, you know, people doing solar arrays, people doing power walls, like all of these systems are getting smarter. >>Well, let's get into that. What are the top applications? What are you seeing with InfluxDB, the time series, what's the sweet spot for the application use case, and give some customer >>Examples. Yeah. So it's pretty easy to understand on one side of the equation, that's the physical side: sensors are getting cheap. Obviously we know that, and the whole physical world is getting instrumented, your home, your car, the factory floor, your wristwatch, your healthcare, you name it. It's getting instrumented in the physical world. We're watching the physical world in real time. And so there are three or four sweet spots for us, but, but they're all on that side. They're all about IOT. So think about consumer IOT projects like Google's Nest (indistinct), um, Particle sensors, um, even delivery engines like Rappi, who deliver, the Instacart of South America, like anywhere there's a physical location, and that's on the consumer side. And then another exciting space is the industrial side. Factories are changing dramatically over time, increasingly moving away from proprietary equipment to developer-driven systems that run operations, because what has to get smarter when you're building a factory is the systems, they all have to get smarter.
And then lastly, a lot in renewables and sustainability: Tesla, Lucid Motors, Nikola Motors, lots to do with electric cars, solar arrays, windmills, anything that's going to get instrumented, where that instrumentation becomes part of the purpose. >>It's interesting. The convergence of physical and digital is happening with the data and IoT. Look at the IoT use cases: it was proprietary OT systems, now becoming IP-enabled, internet protocol, and now edge compute is getting smaller, faster, cheaper, and AI is going to the edge. You have all kinds of new capabilities that bring that real time and time series opportunity. Are you seeing IoT going to a new level? Where are the IoT dots connecting to as these two cultures merge? Operations, industrial, factory, car, they've got to get smarter. Intelligent edge is a buzzword, but it has to be more intelligent. Where's the action in all this? >>The action is really at the core, at the developer, right? Because it's very hard to get an off-the-shelf system to do these kinds of physical and software interactions. So the action really happens with the developer. What you're seeing is a movement from the world that maybe you and I grew up in, IT or OT, toward that developer-driven capability. All of these IoT systems are bespoke, they don't come out of the box. And so the developer, the architect, the CTO, they define: what's my business, what am I trying to do? Am I trying to sequence a human genome and figure out when these genes express themselves, or am I trying to figure out when the next heart rate reading is going to show up on my Apple Watch? What am I trying to do, and what's the system I need to build?
And so it starts with the developers; that's where all the good stuff happens, which is different than it used to be. It used to be you'd buy an application or a service or a SaaS thing, but with this integration of systems it's all about bespoke, it's all about building something. >>So let's get to the developer real quick. The real highlight point here is the data. I could see a developer saying, okay, I need an application for the IoT edge or the car. Tesla's got applications in the car right there. There's the modern application lifecycle now. So take us through how this impacts the developer. Does it impact their CI/CD pipeline? Is it cloud native? Where does this all go? >>Well, first of all, there was an internal journey we had to go through as a company, which I think is fascinating for anybody who's interested. We went from primarily monolithic software that was open sourced to building a cloud native platform, which meant we had to move from an agile development environment to a CI/CD environment. To the degree that your service moves to the cloud, whether it's Tesla monitoring your car and updating your Powerwalls, or a solar company updating its arrays, you increasingly move from agile development to CI/CD, where you're shipping code to production every day. And it's not just the developers, it's all the infrastructure to support the developers running that service. I think that's also going to happen in a big way. >>With the customer base you have now, as you see it evolving with InfluxDB, are they going to be writing more of the application or relying more on others? Obviously there's an open source component here.
So when you bring in the old way versus the new way: the old way was, I've got a proprietary platform running all this OT stuff, and I write an application that's general purpose. I have some flexibility, it's somewhat brittle, maybe not a lot of robustness, but it does its job. >>Versus the new way, which is what? >>A good way to think about this is: what's the role of the developer, architect, CTO chain within an enterprise? I started my career in the aerospace industry <laugh>, and when you look at what Boeing does to assemble a plane, they build very few of the parts. Instead, they assemble: they buy the engines, they buy the materials. Actually, the wings are the one thing they build themselves, because there's a lot of tech in the wings. They end up being smart assemblers of what becomes a flying airplane, which is a pretty big deal even now. The same thing happens with software people: they have the ability to pull from the best of the open source world. So they'd pull a time series capability from us, then assemble that with some ETL logic from somebody else, or with a Kafka interface to stream the data in. They become very good integrators and assemblers, and they become masters of that bespoke application. I think that's where it goes, because you're not writing native code for everything. >>So they're more flexible, they have faster time to market because they're assembling way faster, and they still get to maintain their core competency, their wings in this case. >>They become increasingly not just coders but designers and developers. Broadly, they become builders is how we like to think of it.
People who start and build stuff. By the way, this is no different than what the people just up the road at Google have been doing for years, or the tier ones like Amazon, building all their own. >>Well, one of the things that's interesting is this idea of developing a system architecture. Systems have consequences when you make changes. So when you have cloud, data center, on-premise, and edge all working together, how does that work across the system? You can't have a wing that doesn't work with the other wing, kind of thing. >>Exactly, and that's where that Boeing, that airplane-building analogy comes in for us. We've been really thoughtful about that, because for IoT it's critical. Our open source edge has the same API as our cloud native offering, which has the same API as our enterprise on-premise edge. Our multiple products have the same API, and they have a relationship with each other: they can talk with each other. So the builder builds it once. When you start thinking about the components people have to use to build these services, you want to make sure that at least that base layer, that database layer, those components talk to each other. >>So let me ask you, I'll put my customer hat on. Hey, I'm dealing with a lot. >>That means you have a PO for me? <laugh> >>A big check, a blank check, if you can answer this question. I've got all this important operational stuff: my factory, my self-driving cars. This isn't trivial, this is my business. How should I be thinking about time series? Because now I have to make these architectural decisions, as you mentioned, and it's going to impact my application development. It's a huge decision point for your customers. What should I care about the most? What's in it for me? Why is time series important? >>Yeah, that's a great question.
So chances are, if you've got a business that's 20 or 25 years old, you were already thinking about time series. You probably didn't call it that: you built something on Oracle or on IBM's Db2, and you made it work within your system. So it's already out there; there are probably hundreds of millions of time series applications today. But as you start to think about this increasing need for real time, about increasing intelligence, about optimizing those systems over time, I hate the word, but digital transformation, then you start with time series. It's a foundational base layer for any system you're going to build. There's no system I can think of where time series shouldn't be the foundational base layer. If you just want to store your data, leave it there, and maybe look it up every five years, that's fine, but that's not time series. Time series is when you're building a smarter, more intelligent, more real time system. The developers now know that, and the more they play a role in building these systems, the more obvious it becomes. >>And since I have a PO for you and a big check: what's the value to me when I implement this? What's the end state? What does it look like when it's up and running? What's the value proposition? >>When it's up and running, you're able to handle the queries, the writing of the data, the downsampling of the data, transforming it in near real time, so that the systems that depend on it, adjusting a solar array, trading energy off of a Powerwall, working on a human genome, work better. So time series is foundational.
It's not doing every action above it, but it's foundational to building a really compelling, intelligent system. I think that's what developers and architects are seeing now. >>Bottom line, final word: what's in it for the customer? What's your statement to someone looking to do something in time series on the edge? >>It's pretty clear to us that if you view yourself as being in the business of building systems, and you want them to be increasingly intelligent, self-healing, autonomous, and you want them to operate in real time, then you start from time series. But I also want to say what's in it for us at Influx: people are doing some amazing stuff. I highlighted some of the energy work, some of the human genome, some of the healthcare. It's hard not to be proud, or feel like, wow, somehow I've been lucky, I've arrived at the right time, in the right place, with the right people to be able to deliver on that. That's also exciting on our side of the equation. >>Yeah, it's critical infrastructure, critical operations. >>Yeah. >>Great stuff, Evan, thanks for coming on. Appreciate this segment. In a moment, Brian Gilmore, director of IoT and emerging technology at InfluxData, will join me. You're watching the Cube, the leader in tech coverage. Thanks for watching. >>Time series data from sensors, systems, and applications is a key source in driving automation and prediction in technologies around the world. But managing the massive amount of timestamped data generated these days is overwhelming, especially at scale.
That's why InfluxData developed InfluxDB, a time series data platform that collects, stores, and analyzes data. InfluxDB empowers developers to extract valuable insights and turn them into action by building transformative IoT, analytics, and cloud native applications, purpose-built and optimized to handle the scale and velocity of timestamped data. InfluxDB puts the power in your hands with developer tools that make it easy to get started quickly with less code. InfluxDB is more than a database: it's a robust developer platform with integrated tooling that's written in the languages you love, so you can innovate faster. Run InfluxDB anywhere you want by choosing the provider and region that best fit your needs across AWS, Microsoft Azure, and Google Cloud. InfluxDB is fast and automatically scalable, so you can spend time delivering value to customers, not managing clusters. Take control of your time series data so you can focus on the features and functionality that give your applications a competitive edge. Get started for free with InfluxDB: visit influxdata.com/cloud to learn more. >>Okay, now we're joined by Brian Gilmore, director of IoT and emerging technologies at InfluxData. Welcome to the show. >>Thank you, John. Great to be here. >>We just spent some time with Evan going through the company and the value proposition of InfluxDB. What's the momentum? Where do you see the value coming from? >>Well, I think we're hitting a point where adoption of the technology is becoming mainstream. We're seeing it in all sorts of organizations, everybody from the most well-funded, advanced, big technology companies to the smaller academics and the startups. The data that emits from that technology is time series, and being able to give them a platform, a tool that's super easy to use, easy to start.
And then, of course, one that will grow with them, that's been key to us, sort of riding along with them as they're successful. >>Evan was mentioning that time series has been on everyone's radar, and it's been in the OT business for years. Go back to 2013, 2014, even five years ago: that convergence of physical and digital coming together, the IP-enabled edge. Edge has always been kind of hyped up, but why now? Is it just evolution, the tech getting better? >>I think it's twofold. Everybody was so focused on cloud over the last ten years <affirmative> that they forgot about the compute available at the edge. Those in OT and on the factory floor especially, who weren't able to take full advantage of cloud through their applications, still needed to leverage that compute at the edge. The big thing we're seeing now, which is interesting, is that there's a hybrid nature to all of these applications: there's definitely some data generated at the edge and some data generated in the cloud, and it's the ability for a developer to tie those two systems together and work with that data in a very unified, uniform way that's giving them the opportunity to build solutions that really deliver value to whatever they're trying to do, whether it's the outer reaches of outer space or optimizing the factory floor. >>Yeah, and you also mentioned the genome: big data is coming to the real world. IoT has been kind of an OT thing in some use cases, but now with the cloud, all companies have an edge strategy.
So what's the secret sauce? Because now this is a hot product for the whole world, not just industrial, but all businesses. >>Well, part of it is just that the technology is becoming more capable, especially on the hardware side. Compute is getting smaller and smaller. We support all the way down to the edge, even to the microcontroller layer with our client libraries, and we work hard to make our applications, especially the database, as small as possible so it can be located as close as possible to the point of origin of the data at the edge. You can run it locally, do your local decision making, and use InfluxDB as an input to the automation, control, and autonomy people are trying to drive at the edge. But when you link it up with everything in the cloud, that's when you get all the cloud-scale capabilities of parallelized AI and machine learning. >>What's interesting is the open source success, something we've talked about a lot on the Cube, how people are leveraging that. You have users in the enterprise, users in the IoT market <affirmative>, but you've got developers now too. How do you see that emerging? How do developers engage? What are some of the things you're seeing developers really getting into with InfluxDB? >>Well, there are the developers who are building companies: the startups and the folks we love to work with who are building new services and products. Especially on the consumer side of IoT, there's a lot of that.
But you've got to pay attention to the enterprise developers as well. There are tons of people with the title of engineer in regular enterprise organizations. They're there for systems integration, for looking at what they would build versus what they would buy. A lot of them come from a strong open source background: they know the communities, they know the top platforms in those spaces, and they're excited to adopt and use them to optimize inside the business, as compared to just building a brand new one. >>It's interesting too, Evan and I were talking about open source versus closed OT systems <affirmative>. How do you support backwards compatibility with older systems while staying open? There are dozens of data formats out there, a bunch of standards and protocols, and new things are emerging. Everyone wants a control plane, everyone wants to leverage the value of data. How do you keep track of it all? What do you support? >>Either through direct connection, like our product Telegraf, which is unbelievable. It's open source, it's an edge agent, you can run it as close to the edge as you'd like, and it speaks dozens of different protocols in its own right, a couple of which, MQTT and OPC UA, are very applicable to these OT use cases. But also, because we are not only open source but open in terms of our ability to collect data, we have a lot of partners who have built really great integrations from their own middleware into InfluxDB. These are companies like Kepware and HighByte, who are real experts in those downstream industrial protocols. That's a business not everybody wants to be in.
It requires very specialized, very hard work and a lot of support. By making those connections and building those ecosystems, we get the best of both worlds: customers can use the platforms they need, right up to the point where they put data into our database. >>What are some of the customer testimonials they share with you? Anecdotes like, wow, that's the best thing I've ever used, this really changed my business, or this tech helped me in other areas. What soundbites do you hear from successful customers? >>It ranges. You've got customers who are finally able to monitor assets at the edge, in the field. We have a customer with tunnel boring machines that go deep into the earth to drill tunnels for cars and trains. They're just excited to be able to stick a database onto those tunnel boring machines, send them into the depths of the earth, and know that when they come out, all of that very high frequency telemetry has been safely stored and can quickly and instantly connect up to their centralized database. Just having that visibility is brand new to them, and that's super important. On the other hand, we have customers way beyond the monitoring use case, who are actually using the historical records in the time series database to, as I think Evan mentioned, forecast things.
For predictive maintenance, that means pulling in the telemetry from the machines, but also all of that external enrichment data, the metadata, the temperatures, the pressures, who is operating the machine, those types of things, and easily integrating with platforms like Jupyter notebooks and all of those scientific computing and machine learning libraries to build and train the models. Then they can send that information back down to InfluxDB to apply it and detect those anomalies. >>I personally think that's a hot area, because if you look at AI right now, it's all about training the machine learning algorithms after the fact. So time series becomes hugely important: the data matters at the first time, and then it gets updated at the new time. It's constant data cleansing, data iteration, data programming. We're starting to see this new use case emerge in the data field. >>I agree, of course. The ability to handle those pipelines of data smartly and intelligently, and to do everything you need to do with that data in stream, before it hits your central repository. We make that really easy for customers. Telegraf not only has the inputs to connect up to all of those protocols and the partner data, it also has a whole bunch of capabilities for processing that data: enriching it, reformatting it, routing it, whatever you need. At that point you're shaping your data exactly the way you want and routing it to different destinations, and that's not something that has really been in the realm of possibility until this point.
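The in-stream pattern Brian describes, watch the telemetry as it arrives and flag readings that break from recent history, can be sketched in a few lines of plain Python. This is an illustrative rolling z-score check, not InfluxDB or Telegraf code; the function names and thresholds are invented for the example:

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=20, threshold=3.0):
    """Flag a reading as anomalous when it sits more than
    `threshold` standard deviations from the recent mean."""
    history = deque(maxlen=window)

    def check(value):
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalous = True
        history.append(value)  # the reading becomes part of the baseline
        return anomalous

    return check

# Steady telemetry, then a spike the detector should catch.
check = make_anomaly_detector(window=10, threshold=3.0)
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 55.0]
flags = [check(r) for r in readings]
```

In a real pipeline this kind of check would run at the edge over the raw stream, with only the flagged points (or a downsampled summary) shipped to the central database.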
>>When Evan was on, it was great. As CEO he sees the big picture with customers, and he's selling as well, so you have that whole CEO perspective. But he brought up this notion that there are multiple personas involved in the InfluxDB world: the system architect, developers, users. Can you talk about that reality as customers start to commercialize and operationalize this? You've got a relationship to the cloud, and the edge is getting super important, but cloud brings a lot of scale to the table. So what is the relationship to the cloud? Can you share your thoughts on the edge and its relationship to the cloud? >>You can think of the edge as the local information: it's generally compartmentalized to a single asset or a single factory line. What people want is to be able to make decisions there at the edge, locally and quickly, minus the latency of taking that large volume of data, shipping it to the cloud, and doing something with it there. We allow them to do exactly that. Then they can downsample that data, or detect the really important metrics or the anomalies, and ship just that to a central database in the cloud, where they can do all sorts of really interesting things with it: get a centralized view of all their global assets, compare asset to asset, and do the things we talked about, predictive analytics or larger-scale anomaly detection. >>So in this model you have a lot of commercial operations, industrial equipment. Yep.
The physical plant, the physical business, and the virtual data cloud all coming together. What's the future for InfluxDB from a tech standpoint? Because you've got open source, there's an ecosystem there, and you have customers who want operational reliability. So you've got organic growth <laugh>. >>Yeah. Again, we got iPhones when everybody was waiting for flying cars, so I don't know that we can perfectly predict what's coming, but I think there are some givens. Those givens are that the world is only going to become more hybrid. We're going to have much more widely distributed situations where you have data being generated in the cloud, data being generated at the edge, and data generated at all points in between, physical locations as well as things that are very virtual. And we're building some technology right now that's going to allow the concept of a database to be much more fluid and flexible, more aligned with what a file would be like. Being able to move the data to the compute for analysis, or move the compute to the data for analysis, those are the types of solutions we'll be bringing to customers over the next little bit. But I also think we have to start thinking about what happens when the edge is actually off the planet. We've got customers, and you're going to talk to two of them in the panel, who are working with data that comes from outside the earth, either in low earth orbit or all the way on the other side of the universe. To be able to process data like that, we've got to build the fundamentals for it right now, on the factory floor, in the mines, and in the tunnels.
So that we'll be ready for that one. >>You bring up a good point, because one thing that's common in the industry right now, and this is kind of new thinking, is that the hyperscalers have always been built by full-stack developers, and even in the old OT world, as Evan was pointing out, they built everything. The world is moving to more assembly, with core competency and intellectual property at the core of the product: faster assembly and building, but also integration. You've got all this new stuff happening, and that's what separates the data complexity from the app. Space, genome, self-driving cars, they all throw off massive data. >>They do. >>So is Tesla, is the car the same as the data layer? >>It's certainly a point of origin. The thing we want to do is let the developers work on the world-changing problems, the things they're trying to solve, whether it's energy or health or any of the other challenges these teams are building against, and we'll worry about the time series data and the underlying data platform so that they don't have to. You talked about it: for them to just be able to adopt the platform quickly and integrate it with their data sources and the other pieces of their applications, it's going to give them much faster time to market on these products, let them be more iterative, and let them do more testing. Ultimately it will accelerate the adoption and the creation of technology. >>You mentioned unification of data earlier in our talk. How about APIs? Developers love APIs in the cloud, unifying APIs. How do you view that? >>Well, we are APIs, that's the product itself.
People like to think of it as having this nice front end, but the front end is built on our public APIs. They allow the developer to build all of those hooks for not only data creation, but data processing, data analytics, and then data extraction, to bring it to other platforms or applications, microservices, whatever it might be. It is a world of APIs right now, and we bring a very useful set of them for managing the time series data all these folks are challenged with. >>Interesting. You and I were talking before we came on camera about how data is going to have this kind of SRE role that DevOps had, site reliability engineers managing a bunch of servers. There's so much data out there now. >>Yeah, it's like wrangling data for sure. One of the best jobs on the planet is going to be that data wrangler: understanding what the data sources are, what the data formats are, how to efficiently move data from point A to point B, and how to process it correctly, so that the end users of that data aren't doing any of that hard upfront preparation, collection, and storage work. >>That's data as code. Data engineering is becoming a new discipline for sure, and the democratization is the benefit to everyone. Data science gets easier; they want to make it easy <laugh>, they want to do the analysis. >>Right, it's a really good point. We try to give our users as many ways as possible to get data in and get data out. We think about it as meeting them where they are.
So we build the client libraries that let them write to us directly from the applications and the languages they're working in, but then they can also pull the data out. At that point, nobody knows the users, the end consumers of that data, better than the people building those applications. So they build the user interfaces that make all of that data accessible for their end users inside their organization. >>Well, Brian, great segment, great insight. Thanks for sharing all the complexities in IoT that you help take away with the APIs, the assembly, and all the system architectures that are changing. Edge is real, cloud is real, mainstream enterprises, and you've got developer traction too, so congratulations. >>Yeah, it's great. >>Any last word you want to share? >>No, just: if you're going to check out InfluxDB, download it, try out the open source, contribute if you can. That's a huge thing, it's part of being in the open source community. But definitely just use it. Once people try it out, they'll understand very quickly. >>So open source, developers, enterprise, and edge all coming together. You're going to hear more about that in the next segment too. Thanks for coming on. >>Thanks. >>When we return, Dave Vellante will lead a panel on edge and data with InfluxDB. You're watching the Cube, the leader in high tech enterprise coverage. >>As a startup, we move really fast, and we find that InfluxDB can move as fast as us. It's a great group, very collaborative, very interested in manufacturing, and we see a bright future in working with Influx. My name is Aaron Seley, I'm the CTO at HighByte.
HighByte is one of the first companies to focus on manufacturing data and apply the concepts of DataOps: treat that data as an asset to deliver to the IT system, to enable applications like overall equipment effectiveness that can help the factory produce better, smarter, and faster. Time series data in manufacturing is really important. If you take a piece of equipment, you have the temperature and pressure at a given moment that you can look at to see the state of what's going on. Without that context and understanding, you can't do what manufacturers ultimately want to do, which is predict the future. >>InfluxDB represents a new way to store time series data with more advanced and, more importantly, more open technologies. The other thing Influx does really well is that once the data's in Influx, it's very easy to get out: they have a modern REST API and other ways to access the data. That would be much more difficult to do with integrations against classic historians. HighByte can model and aggregate data on the shop floor from a multitude of sources, whether that be OPC UA servers, manufacturing execution systems, ERP, et cetera, and then push it seamlessly into Influx to run calculations. Manufacturing is changing with Industry 4.0, and what we're seeing is Influx being part of that equation, being used to store data off the unified namespace. We recommend InfluxDB all the time to customers exploring this new way to share manufacturing data, called the unified namespace, who have open questions around: how do I share this new data coming through my UNS or my MQTT broker? How do I store it and query it over time? We often point to Influx as a solution for that. It's a great brand, a great group of people, and a great technology. >>Okay. We're now going to go into the customer panel, and we'd like to welcome Angelo Fausti,
who's a software engineer at the Vera C. Rubin Observatory, and Caleb McLaughlin, a senior spacecraft operations software engineer at Loft Orbital. Guys, thanks for joining us. Folks, you don't wanna miss this interview. Caleb, let's start with you. You work for an extremely cool company: you're launching satellites into space. Of course, doing that is highly complex and not a cheap endeavor. Tell us about Loft Orbital and what you guys do to attack that problem. >>Yeah, absolutely, and thanks for having me here, by the way. So Loft Orbital is a company, a series B startup now, whose mission basically is to provide rapid access to space for all kinds of customers. Historically, if you want to fly something in space or do something in space, it's extremely expensive. You need to book a launch, build a bus, hire a team to operate it, have big software teams, and then worry about a lot of very specialized engineering. What we're trying to do is change that from a super specialized problem with an extremely high barrier to access into an infrastructure problem, so that getting your programs, your mission, deployed on orbit, with access to different sensors, cameras, radios, and so on, is almost as simple as deploying a VM in AWS or GCP. >>So that's our mission. And just to give a really brief example of the kind of customer we can serve: there's a really cool company called Totum Labs, who is working on building an IoT constellation for the internet of things, basically being able to get telemetry from all over the world. They're the first company to demonstrate indoor IoT, which means you have this little modem inside a container that you can track from anywhere in the world as it's going across the ocean.
So it's really little, and they've been able to stay a small startup focused on their product, that super complicated, cool radio, while we handle the whole space segment for them, which before Loft was really impossible. So that's our mission: providing space infrastructure as a service. We're kind of groundbreaking in this area, and we're serving a huge variety of customers with all kinds of different missions, and obviously generating a ton of data in space that we've gotta handle. >>So amazing, Caleb, what you guys do. Now, I know you were lured to the skies very early in your career, but how did you land in this business? >>Yeah, so just a little bit about me: some people don't necessarily know what they wanna do early in their life. For me, I was five years old and I knew I wanted to be in the space industry. I started in the air force, but I've stayed in the space industry my whole career, and this is actually the fifth space startup I've been a part of. I started out in satellites, spent some time working in the launch industry on rockets, and now I'm back in satellites. And honestly, this is the most exciting of the different space startups that I've been a part of. >>Super interesting. Okay, Angelo, let's talk about the Rubin Observatory. Vera C. Rubin: famous woman scientist, galaxy guru. Now you guys at the observatory are up way up high, and you're gonna get a good look at the southern sky. I know COVID slowed you guys down a bit, but no doubt you continued to code away on the software. I know you're getting close. You've gotta be super excited. Give us the update on the observatory and your role. >>All right.
So yeah, Rubin is a state-of-the-art observatory under construction on a remote mountain in Chile. With Rubin, we'll conduct the Legacy Survey of Space and Time: we are going to observe the sky with an eight meter optical telescope and take a thousand pictures every night with a 3.2 gigapixel camera. And we are going to do that for 10 years, which is the duration of the survey. >>Yeah, amazing project. Now, you're a doctor of philosophy, so you probably spent some time thinking about what's out there, and then you went out to earn a PhD in astronomy and astrophysics. So this is something you've been working on for the better part of your career, isn't it? >>Yeah, that's right. About 15 years. I studied physics in college, then got a PhD in astronomy, and I worked for about five years on another project, the Dark Energy Survey, before joining Rubin in 2015. >>Yeah, impressive. So it seems like your two organizations are looking at space from two different angles. One thing you guys have in common, of course, is software, and you both use InfluxDB as part of your data infrastructure. How did you discover InfluxDB and get into it? How do you use the platform? Maybe Caleb, you could start. >>Yeah, absolutely. So the first company where I extensively used InfluxDB was a launch startup called Astra. We were in the process of designing our first generation rocket there and testing the engines, pumps, everything that goes into a rocket. When I joined the company, our data story was not very mature: we were collecting a bunch of data in LabVIEW, and engineers were taking that over to MATLAB to process it. That's the way a lot of engineers and scientists are used to working.
At first, people weren't entirely sure that that needed to change. But the nice thing about InfluxDB is that it's so easy to deploy, so our software engineering team was able to get it deployed and up and running very quickly, and then quickly backport all of the data we had collected thus far into Influx. What happened next was amazing to see.

The super cool moment with Influx was when we hooked it up to Grafana, the visualization platform we used with Influx, since it works really well together. There was this aha moment for our engineers, who were used to a post-process method of dealing with their data: they could almost instantly discover data they hadn't been able to see before, take the manual processes they would run after a test, throw those all into Influx, and have live data as tests were running. I saw them implementing crazy rocket-equation-type stuff in Influx, and it was totally game changing for how we tested. >>So Angelo, as I was explaining in my open, you could add a column in a traditional RDBMS and do time series, but with the volume of data you're talking about, and the example Caleb just gave, you have to have a purpose-built time series database. Where did you first learn about InfluxDB? >>Yeah, correct. So I work with the data management team, and my first project was to record metrics that measure the performance of our software, the software we use to process the data. I started implementing that in a relational database, but then I realized that I was in fact dealing with time series data and should really use a solution built for that. So I started looking at time series databases, and I found InfluxDB. That was back in 2018.
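The workflow both guests describe, streaming timestamped telemetry and metrics into InfluxDB instead of post-processing files, comes down to writing points in InfluxDB's line protocol. A minimal pure-Python sketch of that encoding follows; the measurement, tag, and field names are hypothetical examples, and a real project would normally use an official InfluxDB client library rather than hand-rolling this:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Encode one point as InfluxDB line protocol:
    measurement,tag=val field=val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def fmt(v):
        # Booleans first (bool is a subclass of int in Python);
        # integers get an 'i' suffix, floats are bare, strings are quoted.
        if isinstance(v, bool):
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"
        if isinstance(v, float):
            return repr(v)
        return f'"{v}"'

    field_str = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical engine-test sample: one chamber-pressure reading.
line = to_line_protocol(
    "engine_test",
    {"stand": "A1", "sensor": "chamber_pressure"},
    {"value": 512.7, "sample": 42},
    1546300800000000000,
)
print(line)
```

Each such line is one timestamped point; batching many of them per write is what makes high-rate test telemetry cheap to ingest.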
Another use for InfluxDB that I'm also interested in is the visits database. If you think about the observations: we are moving the telescope all the time, pointing to specific directions in the sky and taking pictures every 30 seconds. That itself is a time series, and every point in that time series we call a visit. So we want to record the metadata about those visits in Influx. That time series is going to be 10 years long, with about 1,000 points every night. It's actually not too much data compared to other problems; it's really just a different time scale. >>The telescope at the Rubin Observatory is, pun intended, the star of the show. And I believe I read that it's gonna be the first of the next-gen telescopes to come online. It's got this massive field of view, like three orders of magnitude beyond Hubble's widest camera view, which is amazing, right? That's like 40 moons in an image, and it's amazingly fast as well. What else can you tell us about the telescope? >>This telescope has to move really fast, and it also has to carry the primary mirror, which is an eight meter piece of glass, very heavy, plus a camera about the size of a small car. The whole structure weighs about 300 tons. For that to work, the telescope needs to be very compact and stiff. One amazing thing about its design is that this 300 ton structure sits on a tiny film of oil, about the diameter of a human hair, which makes an almost zero friction interface. In fact, a few people can move this enormous structure with only their hands. As you said, another aspect that makes this telescope unique is the optical design. It's a wide-field telescope, so each image has a diameter of about seven full moons, and with that we can map the entire sky in only three days.
And of course, in operations everything is controlled by software and is automatic. There's a very complex piece of software called the scheduler, which is responsible for moving the telescope, and the camera, which is recording 15 terabytes of data every night. >>Hmm. And Angelo, all this data lands in InfluxDB, correct? What are you doing with all that data? >>Yeah, actually not all of it. We are using InfluxDB to record engineering data and metadata about the observations: telemetry, events, and commands from the telescope. That's a much smaller data set compared to the images, but it is still challenging, because you have some high frequency data that the system needs to keep up with, and we need to store this data and have it around for the lifetime of the project. >>Got it. Thank you. Okay, Caleb, let's bring you back in. You've got these dishwasher-size satellites, and you're kind of using a multi-tenant model; I think it's genius. Tell us about the satellites themselves. >>Yeah, absolutely. So we have some satellites in space already that, as you said, are dishwasher or mini-fridge size, and we're working on a bunch more in a variety of sizes, from shoebox to, I guess, a few times larger than what we have today. And we do shoot for effectively a multi-tenant model, where we buy a bus off the shelf. The bus is what you can think of as the core piece of the satellite, almost like a motherboard: it provides the power, it has the solar panels, it has some radios attached to it, and it handles the attitude control, basically steering the spacecraft in orbit. Then we build in house what we call our payload hub, which has all the customer payloads attached and our own edge processing capabilities built into it.
>>So we integrate that and we launch it. And because these things are in low earth orbit, they're orbiting the earth every 90 minutes. That's about seven kilometers per second, several times faster than a speeding bullet. One of the unique challenges of operating spacecraft in low earth orbit is that generally you can't talk to them all the time, so we're managing these things through very brief windows of time when we get to talk to them through our ground sites, either in Antarctica or in the north pole region. >>Talk more about how you use InfluxDB to make sense of this data, through all this tech that you're launching into space. >>When I joined the company, we started off storing all of that, as Angelo did, in a regular relational database. And we found that it was so slow, and the size of our data would balloon over the course of a couple days to the point where we weren't able to even store all of the data we were getting. So we migrated to InfluxDB to store our time series telemetry from the spacecraft: things like power levels, voltages, currents, counts, whatever metadata we need to monitor about the spacecraft. We now store that in InfluxDB, and we can easily store the entire volume of data for the mission life so far without having to worry about the size bloating to an unmanageable amount. >>And we can also seamlessly query large chunks of data. For example, as an operator, I might wanna see how my battery state of charge is evolving over the course of the year. I can have a plot in Influx that loads a year's worth of data in a fraction of a second, because I can intelligently group the data by a sliding time interval.
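Grouping by a time interval, as Caleb describes, is what InfluxDB does server-side (in Flux, via aggregateWindow). Conceptually it is just bucketing timestamps into fixed windows and aggregating each bucket. A rough pure-Python illustration on synthetic battery data, not the actual Loft pipeline:

```python
from collections import defaultdict

def aggregate_window(points, every_s, agg=lambda vs: sum(vs) / len(vs)):
    """Bucket (timestamp_s, value) pairs into fixed windows of `every_s`
    seconds and aggregate each bucket, conceptually what InfluxDB's
    aggregateWindow(every: ..., fn: mean) does on the server."""
    buckets = defaultdict(list)
    for ts, val in points:
        buckets[ts - ts % every_s].append(val)  # window start time
    return {start: agg(vals) for start, vals in sorted(buckets.items())}

# Synthetic battery state-of-charge samples: (timestamp seconds, percent).
samples = [(0, 80.0), (30, 82.0), (60, 79.0), (90, 77.0), (120, 90.0)]
print(aggregate_window(samples, every_s=60))
```

Doing this downsampling in the database rather than the client is why a year of telemetry can render in a fraction of a second: only one aggregated value per window crosses the wire.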
So it's been extremely powerful for us to access the data, and as time has gone on, we've gradually migrated more and more of our operating data into Influx. >>Let's talk a little bit about a term we throw around a lot: data driven. A lot of companies say, oh yes, we're data driven, but you guys really are. I mean, you've got data at the core. Caleb, what does that mean to you? >>Yeah, so I think the clearest example of when I saw this be totally game changing is what I mentioned before at Astra, where our engineers' feedback loop went from a lot of slow research and digging into the data to almost instantaneously seeing the data and making decisions based on it immediately, rather than having to wait for processing. And that's something I've also seen echoed in my current role. To give another practical example: as I said, we have a huge amount of data that comes down every orbit, and we need to be able to ingest all of that data almost instantaneously and provide it to the operator in near real time. About a second's worth of latency is all that's acceptable for us to react to what is coming down from the spacecraft, and building that pipeline is challenging from a software engineering standpoint. >>Our primary language is Python, which isn't necessarily that fast. So, in the goal of being data driven, we've started publishing metrics on how individual pieces of our data processing pipeline are performing into Influx as well, and we do that in production as well as in dev. So we have a kind of production monitoring flow.
And what that has done is allow us to make intelligent decisions on our software development roadmap about where it makes the most sense to focus our development efforts in terms of improving our software efficiency, just because we have that visibility into where the real problems are. Before we started doing this, we sometimes found ourselves chasing rabbits that weren't necessarily the real root cause of the issues we were seeing. Now that we're being more data driven, we're much more effective in where we spend our resources and our time, which is especially critical as we scale from supporting a couple satellites to supporting many, many satellites at once. >>Got it. So you reduced those dead ends. Maybe Angelo, you could talk about what data driven means to you and your teams? >>I would say that having real-time visibility into the telemetry data and metrics is crucial for us. We need to make sure that the images we collect with the telescope have good quality and are within the specifications to meet our science goals. And if they are not, we want to know that as soon as possible and start fixing problems. >>Caleb, what are your sort of event intervals like? >>So I would say that, as of today on the spacecraft, the level of timing we deal with probably tops out at about 20 hertz, 20 measurements per second, on things like our gyroscopes. But I think the core point here, the ability to have high precision data, is extremely important for these kinds of scientific applications. And I'll give an example from when I worked on the rocket at Astra: our baseline data rate for ingesting data during a test was 500 hertz, 500 samples per second.
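The production-monitoring flow Caleb mentioned, publishing per-stage pipeline metrics into Influx, can be as simple as timing each stage and emitting the duration as a time-series point. A hedged sketch of that pattern follows; the stage names and point layout are invented for illustration, and in production the accumulated points would be written to InfluxDB rather than kept in a list:

```python
import time
from contextlib import contextmanager

metrics = []  # stand-in for points that would be written to InfluxDB

@contextmanager
def timed_stage(name):
    """Record how long a pipeline stage takes as a time-series point."""
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics.append({
            "measurement": "pipeline_stage_seconds",
            "tags": {"stage": name},
            "fields": {"duration": time.perf_counter() - start},
            "time": time.time_ns(),
        })

# Hypothetical stages of a telemetry-processing pipeline.
with timed_stage("decode_frames"):
    payload = bytes(range(256)) * 10
with timed_stage("compute_derived"):
    checksum = sum(payload)

print([m["tags"]["stage"] for m in metrics])
```

Tagging each point with the stage name is what lets a dashboard later show exactly which piece of the pipeline is the bottleneck.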
And in some cases we would actually need to ingest much higher rate data, even up to 1.5 kilohertz. So extremely high precision data, where timing really matters a lot. One of the really powerful things about Influx is the fact that it can handle this. >>That's one of the reasons we chose it: there are times when you're looking at the results of a firing and zooming in. I talked earlier about how in my current job we often zoom out to look at a year's worth of data; here you're zooming in to where your screen is occupied by a tiny fraction of a second. And you need to see, as Angelo just said, not just the actual telemetry, which is coming in at a high rate, but the events coming out of our controllers. That can be something like: hey, I opened this valve at exactly this time. We wanna have that at micro or even nanosecond precision, so that we know, okay, we saw a spike in chamber pressure at this exact moment: was that before or after this valve opened? That kind of visibility is critical in these scientific applications, and it's absolutely game changing to be able to see it in near real time, with a really easy way for engineers to visualize the data themselves without having to wait for software engineers to go build it for them. >>Can the scientists do self-serve, or do you have to design and build all the analytics and queries for your scientists? >>Well, from my perspective, that's absolutely one of the best things about Influx, and what I've seen be game changing is that generally anyone can learn to use Influx.
And honestly, most of our users might not even know they're using Influx, because the interface we expose to them is Grafana, an open source graphing tool that is very similar to Influx's own Chronograf. It provides a very intuitive UI for building your queries: you choose a measurement, and it shows a dropdown of available measurements; then you choose the particular field you wanna look at, and again, that's a dropdown. So it's really easy for our users to discover, and there are point-and-click options for doing math and aggregations. You can even do prediction kind of stuff, all within the Grafana user interface, which is really just a wrapper around the APIs and functionality that Influx provides. >>Putting data in the hands of those who have the context, the domain experts, is key. Angelo, is it the same situation for you? Is it self-serve? >>Yeah, correct. As I mentioned before, we have the astronomers making their own dashboards, because they know exactly what they need to visualize. Yeah, I mean, it's all about using the right tool for the job. I think, for us, when I joined the company, we weren't using InfluxDB, and we were dealing with serious issues of the database growing to an incredible size extremely quickly; even querying short periods of data was taking on the order of seconds, which is just not possible for operations. >>Guys, this has been really informative. It's pretty exciting to see how the edge is mountaintops and low earth orbit; space is the ultimate edge, isn't it? I wonder if you could answer two questions to wrap here: what comes next for you guys?
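The point-and-click builder described above ultimately emits a Flux query behind the scenes. As a sketch of what those generated queries look like, here is one assembled as a string in Python; the bucket, measurement, and field names are made up for illustration:

```python
def build_flux_query(bucket, measurement, field, start, every, fn="mean"):
    """Assemble the kind of Flux query a Grafana dropdown builder
    generates: range -> filter -> aggregateWindow."""
    return (
        f'from(bucket: "{bucket}")\n'
        f'  |> range(start: {start})\n'
        f'  |> filter(fn: (r) => r._measurement == "{measurement}")\n'
        f'  |> filter(fn: (r) => r._field == "{field}")\n'
        f'  |> aggregateWindow(every: {every}, fn: {fn})'
    )

# A year of battery state-of-charge, downsampled to one point per day.
query = build_flux_query("spacecraft", "battery", "state_of_charge",
                         start="-1y", every="1d")
print(query)
```

Each dropdown in the UI maps to one clause here, which is why users can build useful queries without ever seeing the query language itself.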
And is there something you're really excited about that you're working on? Caleb, maybe you could go first, and then Angelo, you can bring us home. >>Basically, what's next for Loft Orbital is more satellites and a greater push towards infrastructure. Our mission is to make space simple for our customers and for everyone, and we're scaling the company like crazy to make that happen. It's an extremely exciting time to be in this company and in this industry as a whole, because there are so many interesting applications out there, so many cool ways of leveraging space that people are taking advantage of. And with companies like SpaceX and the now rapidly lowering cost of launch, it's just a really exciting place to be. We're launching more satellites, we're scaling up for some constellations, and our ground system has to be improved to match. So there's a lot of improvement we're working on to really scale up our control software, to make it best in class and capable of handling such a large workload. >>You guys hiring? >><laugh> We are absolutely hiring. We have open positions all over the company: we need software engineers, we need people who do more aerospace-specific stuff. So absolutely, I'd encourage anyone to check out the Loft Orbital website if this is at all interesting. >>All right, Angelo, bring us home. >>Yeah. So what's next for us is really getting this telescope working and collecting data. And when that happens, it's going to be a deluge of data coming out of this camera, and handling all that data is going to be really challenging. I wanna be here for that.
<laugh> I'm looking forward to it. For next year we have an important milestone: our commissioning camera, which is a simplified version of the full camera, is going to be on sky. So most of the system has to be working by then. >>Nice. All right, guys, with that, we're gonna end it. Thank you so much. Really fascinating, and thanks to InfluxDB for making this possible. Really groundbreaking stuff, enabling value creation at the edge, in the cloud, and of course beyond, in space. Really transformational work that you guys are doing, so congratulations. I really appreciate the broader community, and I can't wait to see what comes next from this entire ecosystem. In a moment, I'll be back to wrap up. This is Dave Vellante, and you're watching theCube, the leader in high tech enterprise coverage. >>Telegraf is a popular open source data collection agent. Telegraf collects data from hundreds of systems like IoT sensors, cloud deployments, and enterprise applications. It's used by everyone from individual developers and hobbyists to large corporate teams. The Telegraf project has a very welcoming and active open source community. Learn how to get involved by visiting the Telegraf GitHub page. Whether you want to contribute code, improve documentation, participate in testing, or just show what you're doing with Telegraf, we'd love to hear what you're building. >>Thanks for watching Moving the World with InfluxDB, made possible by InfluxData. I hope you learned some things and are inspired to look deeper into where time series databases might fit into your environment. If you're dealing with large and/or fast data volumes, you wanna scale cost effectively with the highest performance, and you're analyzing metrics and data over time, time series databases just might be a great fit for you. Try InfluxDB out.
You can start with a free cloud account by clicking on the link in the resources below. Remember, all these recordings are going to be available on demand at thecube.net and influxdata.com, so check those out, and poke around InfluxData. They're the folks behind InfluxDB and one of the leaders in the space. We hope you enjoyed the program. This is Dave Vellante for theCube. We'll see you soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian Gilmore | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Angela | PERSON | 0.99+ |
Evan | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
SpaceX | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Antarctica | LOCATION | 0.99+ |
Boeing | ORGANIZATION | 0.99+ |
Caleb | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
Chile | LOCATION | 0.99+ |
Brian | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Evan Kaplan | PERSON | 0.99+ |
Aaron Seley | PERSON | 0.99+ |
Angelo Fasi | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Paul | PERSON | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
two questions | QUANTITY | 0.99+ |
Caleb McLaughlin | PERSON | 0.99+ |
40 moons | QUANTITY | 0.99+ |
two systems | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Angelo | PERSON | 0.99+ |
230 | QUANTITY | 0.99+ |
300 tons | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
500 Hertz | QUANTITY | 0.99+ |
3.2 gig | QUANTITY | 0.99+ |
15 terabytes | QUANTITY | 0.99+ |
eight meter | QUANTITY | 0.99+ |
two practitioners | QUANTITY | 0.99+ |
20 Hertz | QUANTITY | 0.99+ |
25 years | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Python | TITLE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Paul Dix | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
iPhones | COMMERCIAL_ITEM | 0.99+ |
first | QUANTITY | 0.99+ |
earth | LOCATION | 0.99+ |
240 people | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
apple | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
HBI | ORGANIZATION | 0.99+ |
Dave LAN | PERSON | 0.99+ |
today | DATE | 0.99+ |
each image | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
cube.net | OTHER | 0.99+ |
InfluxDB | TITLE | 0.99+ |
one | QUANTITY | 0.98+ |
1000 points | QUANTITY | 0.98+ |
Breaking Analysis: AWS & Azure Accelerate Cloud Momentum
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> Despite all the talk about repatriation, hybrid and multi-Cloud opportunities, and claims that Cloud is an increasingly expensive option for customers, the data continues to show the importance of public Cloud to the digital economy. Moreover, the two leaders, AWS and Azure, are showing signs of accelerated momentum that point to those two giants pulling away from the pack in the years ahead, with each firm showing broad-based momentum across their respective product lines. It's unclear if anything, other than government intervention or self-inflicted wounds, will slow these two companies down this decade. Despite their commanding lead, a winning strategy for companies that don't run their own Cloud continues to be innovating on top of their massive CapEx investments, the most notable example here being Snowflake. Hello, everyone. Welcome to this week's Wikibon CUBE insights powered by ETR. In this Breaking Analysis, we provide our quarterly market share update for the big four hyperscale Cloud providers. And we'll share some new ETR data from their most recent survey. And we'll drill into some of the reasons for the momentum of these two companies and drill further into the database and data warehouse sector to see what, if anything, has changed in that space. First, let's look at some of the noteworthy comments from AWS and Microsoft in their recent earnings updates. We heard from Amazon the following: "AWS has seen a reacceleration of revenue growth as customers have expanded their commitment to the Cloud and selected AWS as their Cloud partner." Notably, AWS revenues increased 39% in Q3 2021. That's a thousand basis point increase in growth relative to Q3 2020. That's an astounding milestone for a company that we expect to surpass $60 billion in revenue this year.
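The "thousand basis point" claim can be sanity-checked with a couple of lines. A sketch, assuming Q3 2020 growth was about 29% (the episode states only the 39% figure and the roughly 1,000 basis point delta, so the 29% is our inferred figure, not a quoted one):

```python
# Convert a growth-rate delta to basis points (1% = 100 bps).
def to_basis_points(delta):
    """Convert a rate difference (as a decimal fraction) to basis points."""
    return delta * 10_000

q3_2021_growth = 0.39  # stated: AWS revenue up 39% in Q3 2021
q3_2020_growth = 0.29  # assumed: implied by the ~1,000 bps acceleration claim

print(round(to_basis_points(q3_2021_growth - q3_2020_growth)))  # about 1000
```

The arithmetic only confirms the claim is internally consistent if prior-year growth was near 29%, which lines up with AWS's publicly reported Q3 2020 trajectory.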
Further, AWS touted the adoption of its custom silicon, and specifically its Graviton2 processors. AWS is fond of emphasizing Graviton's 40% price performance improvements relative to x86 processors, something we've reported on quite extensively. AWS is investing in custom silicon, encouraging ISVs to port their code to the platform so that customers will experience little or no code changes when they migrate. Again, we believe this is a secret weapon for AWS, as its cost structure will continue to improve at a rate faster than competitors that don't have the resources or the skills or the stomach to develop such capabilities. Microsoft, for its part, also saw astoundingly good growth of 48% this past quarter for Azure. This is a company that we forecast will approach $40 billion in IaaS and PaaS public Cloud revenue this year. Microsoft's CEO, Satya Nadella, on its earnings call, emphasized the changing nature of Cloud expanding in a distributed fashion to the edge. He referenced Azure as "the world's computer," building on his statements last year that Microsoft is building out a powerful, ubiquitous, intelligent, sensing and predictive Cloud. Yes, folks, it does feel like we're entering the so-called Metaverse, doesn't it? Okay, to underscore the momentum of these two companies, let's take a look at the ETR breakdown of net score, which measures spending momentum. This chart will be familiar to our listeners. It shows the breakdown of net score for AWS, with the lime green showing new adoptions. That's 11%. The forest green is spending 6% or more in the second half relative to the first half of this year. That's a very robust 53%. The gray is flat spending. That's 30% on a very, very large base. And the pink is spending declines of minus 6% or worse. That's 4%. And the bright red is defections, i.e., those leaving AWS. That's 1%. That's virtually non-existent. You subtract the reds from the greens and you get a net score of 59.
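The net score arithmetic just described can be sketched in a few lines, using the AWS breakdown from this survey. The function and field names are ours for illustration, not ETR's; the percentages are the ones quoted in the episode:

```python
# Net score: greens (new adoptions + increased spend) minus
# reds (decreased spend + defections). Flat spending doesn't count.
def net_score(new, more, flat, less, replacing):
    """Return the ETR-style net score from survey response percentages."""
    return (new + more) - (less + replacing)

aws = net_score(new=11, more=53, flat=30, less=4, replacing=1)
print(aws)  # 59, matching the AWS figure above
```

Note that the gray (flat) bucket drops out of the score entirely, which is why a huge installed base with mostly flat spending can still post an elevated number if defections are near zero.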
Remember, anything over 40 we can still consider to be elevated. Let's look at that same data for Microsoft. Again, you have some new adds, that lime green; that's 7%. The forest green is at 46% of customers spending more, which is an incredible figure for a company with revenues that will in the near term surpass $200 billion. And the red is in the low single digits. Buffered by its enormous PC software profits over the years, Microsoft has powered through its Windows dogma and transitioned into a Cloud powerhouse. Let's now share some of our latest numbers for the big four hyperscale players: AWS, Azure, Alibaba and Google. Here, we show data for these companies from 2018 and our estimates for 2021. This data includes our final figures for AWS, Azure and GCP for Q3, with Alibaba yet to report. Remember, only AWS and Alibaba report IaaS revenue cleanly; Microsoft and Google give us little breadcrumb nuggets that allow us to triangulate with our survey data and other intelligence. But it's our attempt to do an apples-to-apples comparison for those four companies, using AWS and its reporting as a baseline. In Q3, AWS reported more than $16 billion in revenue. We estimate Azure at 10 billion, Alibaba we expect to come in at just under 3 billion, and GCP at 2.5 billion for the quarter. With three quarters of data in, with the exception of Alibaba, we're forecasting AWS to capture 51% of the big four hyperscale revenue. And really, we believe these are the only four hyperscalers. Our expectation is that AWS will surpass 60 billion, with Azure just under 40 billion, Alibaba approaching 11 billion, and Google coming in just under 10 billion for the year. We forecast these four will account for $120 billion this year. That's a 41% increase over 2020, and the same collective growth rate as 2020 relative to 2019. We expect Azure to be 63% of the size of AWS revenue. So it is gaining share.
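The share figures above follow from back-of-envelope arithmetic. A sketch using the episode's own round-number annual estimates (in $ billions); with inputs this coarse the ratios land near, not exactly on, the quoted 51% and 63%, which imply finer-grained underlying estimates:

```python
# The episode's approximate 2021 annual revenue estimates, $B.
est_2021 = {"AWS": 60, "Azure": 40, "Alibaba": 11, "GCP": 10}

total = sum(est_2021.values())                      # ~$121B, near the $120B quoted
aws_share = est_2021["AWS"] / total                 # ~0.50 vs the 51% quoted
azure_vs_aws = est_2021["Azure"] / est_2021["AWS"]  # ~0.67 vs the 63% quoted
print(total, round(aws_share, 2), round(azure_vs_aws, 2))
```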
Both of those companies, however, saw accelerated growth this past quarter, with Alibaba's and GCP's growth rates decelerating relative to last year. Now, let's take a closer look at those growth rates. This chart shows the quarterly growth rates for each of the four going back to the beginning of 2019. Both GCP and Alibaba are showing dramatic declines in growth rates, whereas this past quarter Azure saw accelerated growth, and AWS has now seen an increased rate of growth for the past two quarters. In fact, AWS' growth is about where it was in 2019, when it was around half of its current revenue size. And in 2019 growth was decelerating through the quarters, as you can see, where today that trend has reversed. It's quite amazing. All right, let's take a look at the broader Cloud landscape and bring back some ETR data. This chart that we're showing here shows net score, or spending momentum, on the vertical axis and market share, or presence in the dataset, on the horizontal axis. Note that red dotted line: anything above that we can still consider elevated and impressive. As when we previously shared this data, AWS and Microsoft Azure are up and to the right. Now remember, this chart is not just counting IaaS and PaaS as we showed you earlier; it's whatever customers view as Cloud, whatever they think Cloud is. And so they're likely including Microsoft SaaS in this picture, which is why Microsoft shows larger than AWS despite what we showed you earlier. Nonetheless, these two are well ahead of the pack, and the growth rates indicate that they're pulling away. But we've added some of the other players, most notably VMware Cloud on AWS. It's showing momentum, as is VMware Cloud, which is VMware Cloud Foundation and other on-prem Cloud offerings; even though it's below the red line for the on-prem piece, it's very respectable. VMware Cloud on AWS has been consistently up above that red line. It has popped beneath it in some quarters, but it's very, very strong.
As is, you know, Red Hat OpenShift. It's a little bit below the line, but it is respectable. We've superimposed this, by the way: Red Hat OpenShift in the ETR platform is under the container orchestration taxonomy, but we like to put it in next to the Cloud players for context. That's how Red Hat sort of thinks about this as well; they think about OpenShift as Cloud. And then you can see the other players. Alibaba has got a small sample in the ETR dataset; it just doesn't have enough presence in China. But Dell and HPE have started to show up in the Cloud taxonomy. So buyers are associating their private Clouds with Cloud: Dell's Apex, HPE's GreenLake. So that's a positive. And you can see Oracle, which of course is OCI, Oracle Cloud Infrastructure, and then IBM with its public Cloud. So, it's a positive that these on-prem players are showing up in this data, but the reality is the hyperscalers are growing collectively at 40% annually and the on-prem players are growing in the low single digits. And if you carve out the IaaS business of AWS and Azure, they're larger than most of the on-premises infrastructure players. And all the on-prem players are moving toward an as-a-service model, as I just alluded to. So, undoubtedly, hybrid, multicloud and edge are going to present opportunities for the likes of Dell, HPE, Cisco, VMware, IBM, Red Hat, et cetera. But they also present opportunities for the public Cloud players, who have vibrant ecosystems and marketplaces much more diverse and deep than the traditional vendors. You know, we have a clearer picture of Microsoft's hybrid and edge strategy because the company has such an enormous legacy business; it really had to think about that much more deeply. It wasn't a blank sheet of paper like AWS. It's going to be interesting at re:Invent this year to see if new CEO Adam Selipsky will talk about this.
And it will be good to hear how he's thinking about the next decade, how AWS thinks about hybrid and edge. I guarantee that with their developer affinity and custom silicon capabilities, they're thinking about it differently than traditional enterprise players. And as we've stressed in this segment, they have across-the-board momentum. Now, to quantify that, let's take a look at AWS's portfolio and the spending momentum within its product segments. This chart shows AWS's net scores, or spending momentum, in the areas where AWS participates in the ETR taxonomy. Again, note that red line: anything above 40% is considered an elevated watermark. We're showing data from last October, this past July, and the latest October '21 survey, that yellow bar. What's notable is that the yellow versus the gray bars are up across the board for the most part, and other than Chime, everything is above the 40% mark as well. Now, we've highlighted database because we feel it's one of the most strategic sectors and a real battleground. So we want to drill into that a bit. Here's our familiar X-Y graph showing net score on the Y axis (remember, that's, again, spending momentum) and market share, or pervasiveness in the survey, on the horizontal axis. This data, by the way, includes on-prem and Cloud database and data warehouse, so keep that in mind. Let's start with one of our favorite topics: Snowflake. We've reported again and again and again that we've never seen anything like this. The company's net score has moderated ever so slightly this quarter, but it's still just below 80%. Very highly elevated, well above that 40% mark. Snowflake's presence continues to grow as it gains share in the market. Snowflake is growing revenue in the triple digits, an insane pace, hence its current $115 billion market cap as of this episode.
Now, that said, all three US-based Cloud players are above the 40% line, with AWS and Microsoft having significant presence on the horizontal axis. You see Cockroach Labs, Redis, Couchbase; they're all elevated or highly elevated. Couchbase just went public this summer, so that may help with its presence. MongoDB, they're killing it. They have a $37 billion market cap as of this episode. The stock has been on a tear. You see MariaDB is also in the mix. And then of course you have Oracle, the database leader. Look, they continue to invest in making the Oracle database and other software like MySQL the best solution for mission critical workloads, and they're investing in their Cloud. But you can see overall, they just don't have the momentum from a spending standpoint that the others do, because of the declines in their legacy business. And they've been around a long time. Those declines are not fully offset by the growth in Cloud database and Cloud migration. But look, Oracle is a financial powerhouse with a $250 billion plus market cap, and the stock has done very well this past year, up over 60%. Cloudera is going private, so it can hide the pain of the transitions that it's undergoing between the legacy install bases of Cloudera and Hortonworks. It's just a tough situation. When the companies came together, Cloudera essentially had to dead-end each of those respective platforms and migrate their customers to a more modern stack as part of its Cloud strategy. Ironic that its name is Cloudera. You know, that's always a difficult thing to do. So as a private company, Cloudera can maybe get off that 90 day shot clock and buy some time to invest without getting hammered by the street. And you know, Teradata consistently has not shown up well in the ETR dataset. Its transition to Cloud and cross-Cloud still hasn't shown momentum in the surveys. So, look, right now it's looking like the rich get richer.
So just to quantify that a little bit, let's line up some of the database players and look a little more closely at net score. This chart shows the spending momentum, or lack thereof, with the net score or spending velocity granularity that we described before. Remember: green is spending more, red is spending less, bright red is leaving the platform, bright green is adding the platform. You subtract the red from the green, and that gives you a net score. Snowflake, as we said, tops the list. You can see the granularity there, and you can compare the performance. In a little different view, to understand how these scores are derived: the ideal profile is a solid lime green, a big forest green, a not-too-large gray, and ideally little or no bright red, AKA defections. And you can see the green funnel and the gray increase in prominence as vendor momentum declines. Interestingly, with the exception of Cloudera and Teradata, defections are all in the single digits or nonexistent. In the case of Snowflake and Redis there is no red at all (small samples, though), Couchbase has no defections, and there's very little defection for the giant Microsoft. Incredibly impressive. This speaks to how hard it is to migrate off of a database, no matter how disgruntled you are. The more common scenario is to isolate the database and build new functionality on modern platforms. Okay, so what to watch out for? Well, re:Invent is coming up next month. Oh, this month. It's the first time someone other than Andy Jassy will be keynoting as CEO. 15 years of Cloud; this is the 10th re:Invent, which is always a marker for the direction of the industry. I've said many times that the last decade was largely about IT transformation powered by the Cloud. I believe we're entering a new era of business transformation where the Cloud is going to play a significant role.
But the Cloud is evolving from a set of remote services out there in the Cloud to an omnipresent platform on top of which many customers and technology companies can innovate. And virtually every industry will be impacted by Cloud, however it evolves in the coming decade. The question will be, how fast can you go? And how will players like AWS and Microsoft, and many others that are building on top of these platforms, make it easier for you to go fast? That's what I'll be watching for at re:Invent and beyond. Okay, that's a wrap for today. Remember, these episodes are all available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. Check out ETR's website at etr.plus. We also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me, david.vellante@siliconangle.com. You can DM me @dvellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE insights powered by ETR. Have a great week, everybody. Stay safe, be well. And we'll see you next time. We'll see you at re:Invent. (soft upbeat music)
SUMMARY :
AWS and Azure are showing accelerated momentum that points to the two leaders pulling away from the pack, with the big four hyperscalers forecast to account for $120 billion in revenue this year and Snowflake standing out in the strategic database battleground.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Adam Selipsky | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
$115 billion | QUANTITY | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
$200 billion | QUANTITY | 0.99+ |
$37 billion | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
41% | QUANTITY | 0.99+ |
2.5 billion | QUANTITY | 0.99+ |
10 billion | QUANTITY | 0.99+ |
GCP | ORGANIZATION | 0.99+ |
53% | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
40% | QUANTITY | 0.99+ |
51% | QUANTITY | 0.99+ |
63% | QUANTITY | 0.99+ |
$250 billion | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
4% | QUANTITY | 0.99+ |
48% | QUANTITY | 0.99+ |
60 billion | QUANTITY | 0.99+ |
7% | QUANTITY | 0.99+ |
$60 billion | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
two leaders | QUANTITY | 0.99+ |
$120 billion | QUANTITY | 0.99+ |
39% | QUANTITY | 0.99+ |
more than $16 billion | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Survey Data Shows no Slowdown in AWS & Cloud Momentum
From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. Despite all the chatter about cloud repatriation and the exorbitant cost of cloud computing, customer spending momentum continues to accelerate in the post-isolation economy. If the pandemic was good for the cloud, it seems that the benefits of cloud migration remain lasting in the late stages of COVID and beyond, and we believe this stickiness is going to continue for quite some time. We expect IaaS revenue for the big four hyperscalers to surpass $115 billion in 2021. Moreover, the strength of AWS specifically, as well as Microsoft Azure, remains notable. Such large organizations showing elevated spending momentum, as shown in the ETR survey results, is perhaps unprecedented in the technology sector. Hello everyone, and welcome to this week's Wikibon CUBE insights powered by ETR. In this Breaking Analysis we'll share some fresh July survey data that indicates accelerating momentum for the largest cloud computing firms. Importantly, not only is the momentum broad-based, but it's also notable in key strategic sectors, namely AI and database. There seems to be no stopping the cloud momentum. There's certainly plenty of buzz about the so-called cloud tax, but other than wildly assumptive valuation models and some pockets of anecdotal evidence, you don't really see the supposed backlash impacting cloud momentum. Our forecast calls for the big four hyperscalers (AWS, Azure, Alibaba and GCP) to surpass $115 billion, as we said, in IaaS revenue this year. The latest ETR survey results show that AWS Lambda has retaken the lead among all major cloud services tracked in the dataset, as measured in spending momentum. This is the service with the most elevated scores. Azure overall, Azure Functions, VMware Cloud on AWS and AWS overall also demonstrate very highly elevated performance, all above that of GCP. Now, impressively, AWS momentum in the
all-important Fortune 500, where it has always shown strength, is also accelerating. One concern in the most recent survey data is that the on-prem clouds and so-called hybrid platforms, which we had previously reported as showing an upward spending trajectory, seem to have cooled off a bit. But the data is mixed, and it's a little too early to draw firm conclusions. Nonetheless, while hyperscalers are holding steady, the spending data appears to be somewhat tepid for the on-prem players, particularly for their cloud. We'll study that further after ETR drops its full results on July 23rd. Now, turning our attention back to AWS: the AWS cloud is showing strength across its entire portfolio, and we're going to show you that shortly. In particular, we see notable strength relative to others in analytics, AI, and the all-important database category. Aurora and Redshift are particularly strong, but several other AWS database services are showing elevated spending velocity, which we'll quantify in a moment. All that said, Snowflake continues to lead all database suppliers in spending momentum by a wide margin, which again we'll quantify in this episode. But before we dig into the survey, let's take a look at our latest projections for the big four hyperscalers in IaaS. As you know, we track quarterly revenues for the hyperscalers. Remember, AWS and Alibaba IaaS data is pretty clean and reported in their respective earnings reports. For Azure and GCP we have to extrapolate and strip out a lot of the apps and certain other revenue to make an apples-to-apples comparison with AWS and Alibaba. And as you can see, we have the 2021 market exceeding $115 billion worldwide. That's a torrid 35% growth rate on top of 41% in 2020 relative to 2019.
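The growth figures above can be cross-checked with quick arithmetic; a sketch using the episode's own numbers (in $ billions), backing out each year's growth rate:

```python
# ~$115B in 2021 at 35% growth implies ~$85B in 2020,
# which at 41% growth implies ~$60B in 2019.
rev_2021 = 115.0
rev_2020 = rev_2021 / 1.35  # back out the 35% growth
rev_2019 = rev_2020 / 1.41  # back out the 41% growth
print(round(rev_2020), round(rev_2019))
```

The implied 2019 base of roughly $60 billion is consistent with the worldwide big-four IaaS figures this series has reported in prior episodes.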
Aggressive? Yes, but the data continues to point us in this direction, and until we see some clearer headwinds for the cloud players, this is the call we're making. AWS is perhaps losing a share point or so, but it's also so large that its annual incremental revenue is comparable to Alibaba's and Google's respective cloud businesses in total. The big three U.S. cloud companies all report at the end of July, while Alibaba is mid-August, so we'll update these figures at that time. Okay, let's move on and dig into the survey data. We don't have the data yet on Alibaba, and we're limited as to what we can share until ETR drops its research update on the 23rd, but here's a look at the net score timeline in the Fortune 500 specifically. So we filter the Fortune 500 for cloud computing: you've got Azure in the yellow, AWS in the black, and GCP in blue. Two points here stand out. First, AWS and Microsoft are converging; remember, the customers who respond to the survey probably include a fair amount of application software spending in their cloud answers, so it favors Microsoft in that respect. The second point is that GCP is showing notable deceleration relative to the two leaders. The green callout is because this cut is from an AWS point of view; in other words, GCP declines are a positive for AWS, so that's how it should be interpreted. Now, let's take a moment to better understand the idea of net score. This is one of the fundamental metrics of the ETR methodology. Here's the data for AWS, so we'll use that as a reference point. Net score is calculated by asking customers if they're adding a platform new; that's the lime green bar that you see here in the current survey. They're asking: are you spending 6% or more in the second half relative to the first half of the year? That's the forest green. They're also asking: is spending flat? That's the gray. Or are you spending less? That's the pink. Or are you replacing the platform, i.e., repatriating? So not much
spending going on in replacements. Now, in fairness, one percent of AWS is half a billion dollars, so I can see where some folks would get excited about that, but in the grand scheme of things it's a sliver. So again, we don't see repatriation in the numbers. Okay, back to net score: subtract the reds from the greens and you get net score, which in the case of AWS is 61. Now, just for reference, my personal, subjective elevated net score level is 40, so anything above that is really impressive based on my experience, and to have a company of this size be so elevated is meaningful. Same for Microsoft, by the way, which is consistently well above the 50 mark in net score in the ETR surveys. You can think about that as even more impressive, perhaps, than AWS, because it's triple the revenue. Okay, let's stay with AWS and take a look at the portfolio and the strength across the board. This chart shows net score for the past three surveys. Serverless is on fire, by the way: not just AWS, but Azure and GCP functions as well. But look at the AWS portfolio: every category is well above the 40% elevated red line. The only exception is Chime, and even Chime is showing an uptick (and Chime is meh, if you've ever used Chime). Every other category is well above 50% net score. Very, very strong for AWS. Now, as we've frequently reported, AI is one of the four biggest focus areas from a spending standpoint, along with cloud, containers and RPA. So it stands to reason that the company with the best AI and ML and the greatest momentum in that space has an advantage, because AI is being embedded into apps, data, processes, machines, everywhere. This chart compares the AI players on two dimensions: net score on the vertical axis and market share, or presence in the dataset, on the horizontal axis, for companies with more than 15 citations in the survey. AWS has the highest net score, and what's notable is its presence on the horizontal axis. Databricks is a company we're high on, and it also shows elevated scores above
both Google and Microsoft, who are showing strength in their own right. And then you can see Dataiku, DataRobot, Anaconda and Salesforce with Einstein, all above that 40% mark, and below you can see the positions of SAP with Leonardo, IBM Watson and Oracle, which are well below the 40 line. All right, let's look at the all-important database category for a moment, and we'll first take a look at the AWS database portfolio. This chart shows the database services in AWS's arsenal and breaks down the net score components, with the total net score superimposed on top of the bars. Point one: Aurora is highly elevated, with a net score above 70%; that's due to heavy new adoptions. Redshift is also very strong, as are virtually all AWS database offerings, with the exception of Neptune, which is the graph database. RDS, DynamoDB, ElastiCache, DocumentDB, Timestream and Quantum Ledger Database all show momentum above that all-important 40 line. So while a lot of people criticize the fragmentation of the AWS data portfolio and their right-tool-for-the-right-job approach, the spending metrics tell a story, and that's that the strategy is working. Now let's take a look at the Microsoft database portfolio. There's a story here similar to that of AWS. Azure SQL and Cosmos DB, Microsoft's NoSQL distributed database, are both very highly elevated, as are Azure Database for MySQL and MariaDB, Azure Cache for Redis, and Azure for Cassandra. So Microsoft is giving customers a lot of options, which is kind of interesting. You know, we've often said that Oracle's strategy (they're building the Oracle Database cloud) should be to not just be the cloud for Oracle databases but to be the cloud for all databases. I mean, Oracle's got a lot of specialty capability there, but it looks like Microsoft is beating Oracle to that punch. Not that Oracle is necessarily going there, but we think it should, to expand the appeal of
its cloud. Okay, last data chart that we'll show: this one looks at database disruption. The chart shows how the cloud database companies are doing in IBM, Oracle, Teradata and Cloudera accounts. The bars show the net score granularity as we described earlier, and the ETR callouts are interesting. So first, remember, this is in an AWS context. With 47 responses, ETR rightly indicates that AWS is very well positioned in these accounts, with a 68% net score. But look at Snowflake: it has an 81% net score, which is just incredible. And you can see Google's database is also very strong, in the high-50% range, while Microsoft, even though it's above the 40% mark, is noticeably lower than the others, as is MongoDB, with presumably Atlas, which is surprisingly low, frankly. But back to Snowflake. The ETR callout stresses that Snowflake doesn't have as strong a presence in the legacy database vendor accounts yet. Now, I'm not sure I would put Cloudera in the legacy database category, but okay, whatever. Cloudera is positioning CDP as a hybrid platform, as are all the on-prem players with their respective products and platforms. But it's going to be interesting to see, because Snowflake has flat out said it's not straddling the cloud and on-prem; rather, it's all in on cloud. But there is a big opportunity to connect on-prem to the cloud and across clouds, and Snowflake is pursuing the latter: the cross-cloud, the multi-cloud. Snowflake is betting on incremental use cases that involve data sharing and federated governance, while traditional players are protecting their turf and at the same time trying to compete in cloud native and, of course, across cloud. I think there's room for both, but clearly, as we've shown, cloud has the spending velocity and a tailwind at its back, and AWS, along with Microsoft, seems to be getting stronger, especially in the all-important categories related to machine intelligence, AI and database. Now, to be an
essential infrastructure technology player in the data era it would seem obvious that you have to have database and or data management intellectual property in your portfolio or you're going to be less valuable to customers and investors okay we're going to leave it there for today remember these episodes they're all available as podcasts wherever you listen all you do is search breaking analysis podcast and please subscribe to the series check out etr's website at etr dot plus plus etr plus we also publish a full report every week on wikibon.com and siliconangle.com you can get in touch with me david.velante at siliconangle.com you can dm me at d vallante or you can hit hit me up on our linkedin post this is dave vellante for the cube insights powered by etr have a great week stay safe be well and we'll see you next time you
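The Net Score metric referenced throughout these charts can be approximated in a few lines. This is a hedged sketch: the transcript doesn't spell out ETR's exact bucket definitions, so the formula (adoption and spending-increase shares minus decrease and replacement shares) and the sample counts below are assumptions for illustration only.

```python
# Sketch of an ETR-style Net Score: percentage of respondents adding or
# increasing spend on a platform, minus the percentage decreasing or
# replacing it. Bucket names and counts are hypothetical.
def net_score(responses):
    """responses: dict mapping spending-intention bucket -> respondent count."""
    total = sum(responses.values())
    pct = {k: 100.0 * v / total for k, v in responses.items()}
    return (pct.get("adoption", 0) + pct.get("increase", 0)
            - pct.get("decrease", 0) - pct.get("replacing", 0))

# Invented 47-respondent cut, shaped to land near the AWS callout above.
sample = {"adoption": 10, "increase": 25, "flat": 9, "decrease": 2, "replacing": 1}
print(round(net_score(sample)))  # 68
```

Note that the "flat" bucket counts toward the total but contributes nothing to the score, which is why heavy new adoptions (as with Aurora) push a net score up so sharply.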
Breaking Analysis: Mobile World Congress Highlights Telco Transformation
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Mobile World Congress is alive, theCUBE will be there, and we'll certainly let you know if it's alive and well when we get on the ground. Now, as we approach a delayed Mobile World Congress, it's really appropriate to reflect on the state of the telecoms industry. Let's face it, Telcos have done a really good job of keeping us all connected during the pandemic, supporting work from home and that whole pivot, accommodating the rapid shift to landline traffic, securing the network, and keeping it up and running. But it doesn't change the underlying fundamental dilemma that Telcos face. Telco is a slow-growth, no-growth industry, with revenue expectations in the low single digits. And at the same time, network traffic continues to grow at 20% annually, and last year it grew at 40% to 50%. Despite these challenges, Telcos are still investing in the future. For example, the Telco industry collectively is shelling out more than a trillion dollars in the first half of this decade on 5G and fiber infrastructure. And it's estimated that there are now more than 200 5G networks worldwide. But a lot of questions remain, not the least of which is: can and should Telcos go beyond connectivity and fiber? Can the Telcos actually monetize 5G, or whatever's next beyond 5G? Or is that going to be left to the ecosystem? Now, what about the ecosystem? How is that evolving? And very importantly, what role will the Cloud Hyperscalers play in Telco? Are they infrastructure on which the Telcos can build, or are they going to suck the value out of the market as they have done in the enterprise? Hello everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, it's my pleasure to welcome a long-time telecoms industry analyst and colleague, and the founding director of Lewis Insight, Mr. Chris Lewis.
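A quick worked computation illustrates the dilemma described in the intro: at the cited 20% annual traffic growth, network load doubles in under four years while revenue stays near flat. A minimal sketch of the compound-growth arithmetic:

```python
# Compound-growth sketch: how quickly does traffic double at the cited
# ~20% annual growth rate? (The 20% figure comes from the transcript;
# everything else here is just the standard doubling-time formula.)
import math

annual_growth = 0.20
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"Traffic doubles roughly every {doubling_years:.1f} years")  # ~3.8 years
```

At last year's cited 40% to 50% growth, the same formula gives a doubling time of roughly two years, which is why cost-of-delivery transformation comes up repeatedly in the conversation that follows.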
Chris, welcome to the program. Thanks for coming on. >> Dave, it's a pleasure to be here. Thank you for having me. >> It is really our pleasure. So, we're going to cover a lot of ground today. First thing, we're going to talk about Mobile World Congress. I've never been, you're an expert at that, and what we can expect. And then we're going to review the current state of telecoms infrastructure and where it should go. We're going to dig into transformation. Is it a mandate? Is it aspirational? Can Telcos enter adjacent markets in ways they haven't been able to in the past? And then how about the ecosystem? We're going to talk about that, and then obviously we're going to talk about Cloud, as I said, and we'll riff a little bit on the tech landscape. So Chris, let's get into it. Mobile World Congress, it's back on. What's Mobile World Congress typically like? What's your expectation this year for the vibe compared to previous events? >> Well Dave, the issue of Mobile World Congress is always that we go down there for a week into Barcelona, we stress ourselves building a matrix of meetings in 30-minute slots, and we return at the end of it trying to remember what we'd been told all the way through. The great thing is that the last time we had a live event, with around 110,000 people there, you could see anyone and everyone you needed to within the mobile, and increasingly the adjacent, industry and ecosystem. So, it gave you that once-a-year big download of everything new. Obviously, because it's the Mobile World Congress, a lot of it is around devices, but increasingly over the last few years we saw many, many stands with cars on them because the connected car became an issue, and a lot more software-oriented players there, but always the Telcos, always the people providing the network infrastructure.
Increasingly in the last few years, people providing the software and IT infrastructure, but all of them contributing to what the network should be in the future, what needs to be connected. But of course the reach of the network has been growing. You mentioned during lockdown about connecting people in their homes; well, of course we've also been extending that connection to connect things, whether it's in the home with the different devices, monitoring of doorbells and lights and all that sort of stuff, or in the industrial environment, connecting all of the robots and sensors. So, actually the perimeter, the remit of the industry to connect, has been expanding, and so has the sort of remit of Mobile World Congress. So, we see an awful lot of different suppliers coming in, trying to attach to this enormous market of roughly $1.5 trillion globally. >> Chris, what's the buzz in the industry in terms of who's going to show up? I know a lot of people have pulled out. I've got the Mobile World Congress app and I can see who's attending, and it looks like quite a few people are going to go, but what's your expectation? >> Well, from an analyst point of view, obviously I'm mainly keeping up with my clients and trying to get new clients. I'm looking at it and going, most of my clients are not attending in person. Now, of course, we need the GSMA, we need Mobile World Congress in the future for the industry interaction. But of course, like many people, having adopted and adapted to being online, they're putting a lot of the keynotes online, and a lot of the activities will be online. And of course many of the vendors have also produced their independent content to actually deliver to us as analysts. So, I'm not sure who will be there. Unlike you; you'll be on the ground. You'll be able to report back and let us know exactly who turned up.
But from my point of view, I've had so many pre-briefs already. The difference between this year and previous years: I used to get loads of pre-briefs and then have to go do the briefs as well. This year I've got the pre-briefs, so I can sit back, put my feet up, and wait for your report to come back as to what's happening on the ground. >> You got it. Okay, let's get into it a little bit and talk about Telco infrastructure: the state, where it is today, where it's going. Chris, how would you describe the current state of Telco infrastructure? Where does it need to go? Like, what does the ideal future state look like for Telcos in your view? >> So there's always a bit of an identity crisis when it comes to Telco. I think going forward, the connectivity piece was seen as being table stakes, and then people thought, where can we go beyond connectivity? And we'll come back to that later. But actually, on the connectivity, under the scenario I just described of people, buildings, things, and society, we've got to do a lot more work to make that connectivity extend, to be more reliable, to be more secure. So, the state of the network is that we have been building out infrastructure, which includes fiber to connect households and businesses. It includes that next move in cellular from 4G to 5G. It obviously includes Wi-Fi, wherever we've got that as well. And actually it's in a pretty good state; as you said in your opening comments, they've done a pretty good job keeping us all connected during the pandemic, whether we're a fixed-centric market like the UK with a lot of mobile on top, like the US, or in many markets in Africa and Asia, where we're very mobile-centric.
So, the fact is that every country market is different, so we should never make too many assumptions at a very top level. But it's about building out that network, building out the services, focusing on that connectivity, and making sure we get that cost of delivery right, because competition is pushing us towards not having ever-increasing prices, because we don't want to pay a lot extra every time. But the big issue for me is how do we bring together the IT and the network parts of this story to make sure that we build that efficiency in, and that brings in many questions that we're going to touch upon now, around Cloud and Hyperscalers, and around who plays in the ecosystem. >> Well, as you know, Telco is not my wheelhouse, but hanging around with you, I've learned. You've talked a lot about the infrastructure being fit for purpose. It's easy from an IT perspective: oh yeah, it's fossilized, it's hardened, and it's not really flexible. But the flip side of that coin is, as you're pointing out, it's super reliable. So, the big talk today is, "Okay, we're going to open up the network, open systems, and Open RAN, and open everything, and microservices and containers." And so, the question is this: can you mimic that historical reliability in that open platform? >> Well, for me, this is the big trade-off, and in my great Telco debate every year, I always try and put people against each other to literally debate the future. And one of the things we looked at was a more open network set against this desire of the Telcos to actually have a smaller supplier roster. And of course, as major corporations, these are, on a national basis, very large companies, not large compared to the Hyperscalers for example, but they're large organizations, and they're trying to slim down their organization, slim down the supplier ecosystem. So actually, in some ways, the more open it becomes, the more someone's got to manage and integrate all those pieces together.
And that isn't something we want to do necessarily. So, I see a real tension there between giving more and more to the traditional suppliers, the Nokias, Ericssons, Huaweis, Amdocs, and so on, the Ciscos, and the people coming in breaking new ground, like Mavenir, and the sort of approach that Rakuten and Curve have taken in bringing in more open and more malleable pieces of smaller software. So yeah, it's a real challenge. And I think, as an industry which is notorious for being slow moving, actually we've begun to move relatively quickly, but not necessarily all the way through the organization. We've got plenty of stuff still sitting on mainframes in the back of the organization. But of course, as mobile has come in, we've started to deal much more closely and interactively in real time, God forbid, with the customers. So actually, at that front end, we've had to do things a lot more quickly. And that's where we're seeing the quickest adaptation to what you might see in your IT environment as being much more continuous development, continuous improvement, and that sort of on-demand delivery. >> Yeah, and we're going to get to that in the Cloud space, but I want to now touch on Telco transformation, which is sort of the main theme of this episode. And there's a lot of discussion on this topic. Can Telcos move beyond connectivity and managing fiber? Is this a mandate? Is it a pipe dream that's just aspirational? Can they attack adjacencies to grow beyond the 1% a year? I mean, they haven't been successful historically. What are those adjacencies that might be an opportunity, and how will that ecosystem develop? >> Sure. >> So Chris, can and should Telcos try to move beyond core connectivity? Let's start there. >> I like what you did there by saying pipe dreams. Normally, pipe is a negative term in the telecom world, but pipe dream gives it a real positive feel. So can they move beyond connectivity?
Well, first of all, connectivity is growing in terms of the number of things being connected. So, in that sense, the market is growing. What we pay for that connectivity is not necessarily growing. So, therefore the mandate is absolutely to transform the inner workings and reduce the cost of delivery. That's the internal perspective. The external perspective is that many Telcos around the world have tried to break into those adjacent markets, whether around media, around enterprise, or around IoT, and actually for the most part they've failed. And we've seen some very significant recent announcements from AT&T, Verizon, and BT, beginning to move away from owning content, not delivering content, but owning content. And similarly, they've often struggled to really get into the enterprise market, because it's a well-established channel of delivery bringing all those ecosystem players in. So, rather than the old Telco view of "we're going to move into adjacent markets and control those markets," actually moving into them and enabling fellow ecosystem players to deliver the service is what I think we're beginning to see a lot more of now. And that's the big change; it's actually learning to play with the other people in the ecosystem. I always use the phrase that there's no room for egos in the ecosystem. And I think Telcos went in initially with an ego, thinking, we're really important, we own connectivity. But actually now they're beginning to approach the ecosystem saying, "How can we support partners? How can we support everyone in this ecosystem to deliver the services to consumers, businesses, and whomever in this evolving ecosystem?" So, there are opportunities out there, plenty of them, but of course, like any opportunity, you've got to approach it in the right way. You've got to get the right investment in place.
You've got to approach it with the right open APIs so everyone can integrate with your approach, and approach it, dare I say, with a little bit of humility, to say, "Hey, we can bring this to the table, how do we work together?" >> Well, it's an enormous market. I think you've shared with me it's like 1.4 trillion. And I want to stay on these adjacencies for a minute, because one of the obvious things that Telcos will talk about is managed services. And I know we have to be careful of that term in an IT context; it's different when you're talking about managing connectivity. But there's professional services. That's a logical sort of extension of their business, and probably a safe adjacency, maybe not even an adjacency. But they're not going to get into devices. I mean, they'll resell devices, but they're not going to, I would presume, go back to trying to make devices. But there's certainly the edge, and that's still ill-defined and opaque, but it's huge. If there's 5G, there's the IT component, and that's probably a partnership opportunity. And as you pointed out, there's the ecosystem. But I wonder, how do you think about 5G as an adjacency or indoor opportunity? Is it a revenue opportunity for Telcos, or is that just something that is really aspirational? >> Oh, absolutely it's a revenue opportunity, but I prefer to think of 5G as being a sort of metaphor for the whole future of telecom. So, we usually talk, and MWC would normally talk, about 5G just as a mobile solution. Of course, you can also use this fixed wireless access approach, where the router sits in your house or your building. So, it's a potential replacement for some fixed lines. And of course, it also gives you the ability to build out, let's say in a manufacturing or a campus environment, a private 5G network.
So, many of the early opportunities we're seeing with 5G are actually in that more private network environment, addressing those very low latency and high bandwidth requirements. So yeah, there are plenty of opportunities. Of course, the question here is, is connectivity enough? Especially with your comment around the edge: at the edge we need to manage connectivity, storage, compute, analytics, and of course the applications. So, that's a blend of players. It's not going to be in the hands of one player. So yes, plenty of opportunities, but understanding what comes the other way from the customer base, whether that's you and I in our homes or out and about, or from a business point of view, an office or a campus environment, that's what should be driving it, not the technology itself. And I think this is the trap that the industry has fallen into many times: we've got a great new wave of technology coming, how can we possibly deliver it to everybody, rather than listening to what the customers really require and delivering it in a way consumable by all those different markets. >> Yeah, now of course all of these topics blend together. We try to keep them separate, but we're going to talk about Cloud, we're going to talk about competition. But one of the areas that we don't have a specific agenda item on is data and AI. And of course there's all this data flowing through the network, so presumably it's an opportunity for the Telcos. At the same time, they're not considered AI experts. But when you talk about Edge, they would appear to have the latency advantage because of the last mile and their proximity to various end points. But the Cloud is sort of building out as well. How do you think about data and AI as an opportunity for Telco? >> I think the whole data and AI piece for me sits on top of the cake or pie, whatever you want to call it.
What we're doing with all this connectivity, what we're doing with all these moving parts, is gathering information around it, building automation into the delivery of the service, and using the analytics, whether you call it ML or AI, it doesn't really matter, to deliver a better service, a better outcome. Now, of course, Telcos have had much of this data for years and years, for decades, but they've never used it. So, I think what's happening is the Cloud players are beginning to educate many of the Telcos about how valuable this stuff is. And that then brings in the question of how do we partner with people, using open APIs, to leverage that data. Now, do the Telcos keep hold of all that data? Do they let the Cloud players do all of it? No, it's going to be a combination depending on particular environments, and of course the people owning the devices also have a vested interest in this as well. So, you've always got to look at it end to end, at where the data flows are and where we can analyze it. But I agree that analysis happens on the device and at the Edge, with perhaps less and less going back to the core, which is of course the original sort of mandate of the Cloud. >> Well, we certainly think that most of the Edge is going to be about AI inferencing, and that most of the data is going to stay at the edge. Some will come back, for sure. And that is a big opportunity, whether you're selling compute or connectivity, or maybe storage as well, but certainly insights at the Edge. >> Everything. >> Yeah. >> Everything, yeah. >> Let's get into the Cloud discussion and talk about the Hyperscalers, the big Hyperscaler elephant in the room. We're going to try to dig into what role the Cloud will play in the transformation of telecoms. On Telecom TV at the great Telco debate, you likened the Hyperscalers, Chris, to Dementors from Harry Potter hovering over the industry.
So, the question is, are the Cloud players going to suck the value out of the Telcos? Or are they more like Dobby the elf? They're powerful, they're sometimes friendly, but they're unpredictable. >> Thank you for extending that analogy. Yes, it got a lot of reaction when I used that, but I think it indicates some of the direction of the power shift. We've got to remember here that Telcos are fundamentally national, and they're restricted by regulation, while the Cloud players are global, perhaps not as global as they'd like to be, with some regional restrictions, but the global players, the Hyperscalers, they will use that power and they will extend their reach, and they are extending their reach. If you think about it, they now command some fantastic global networks; in some ways they've replaced some of the Telco international networks, and the submarine investments tend to be done primarily for the Hyperscalers now. So, they're building that out. And as soon as you get onto their network, you suddenly become part of that environment. And that is reducing some of the long-distance spend that might have gone to the Telcos in the past. Now, does that mean they're going to go all the way down and take over the Telcos? I don't believe so, because it's a fundamentally different business digging fiber in people's streets, delivering to the buildings, and putting antennas up. So, there will be a coexistence. And in fact, what we've already seen with Cloud and the Hyperscalers is that they're working much more closely together than people might imagine. Now, you mentioned data in the previous question. Google, probably the best known of the AI and ML providers from the Cloud side, is working with many of the Telcos, even in some cases to have all the data outsourced into the Google Cloud for analytics purposes. They've got the power, the heavy lifting, to do that.
And so, we begin to see that, obviously, with shifting of workloads as appropriate within the Telco networking environment; we're seeing that with AWS, and of course with Azure as well. And Azure of course acquired a couple of companies in Affirmed and Metaswitch, which actually do some of the 5G core and the like there within the connectivity environment. So, it's not clean cut. And to go back to the analogy, those Dementors are swooping around and looking for opportunities, and we know that they will pick up opportunities, and they will extend their reach as far as they can down to that edge. But of course, the edge is where, as you rightly say, the Telcos have the control. They don't necessarily own the customer; I don't believe anyone owns the customer in this digital environment, because digital allows you to move your allegiance and your custom elsewhere anyway. But they do own that access piece, and that's what's important from a national point of view, from an economic point of view. And that's why we've seen some of the geopolitical activity banning Huawei from certain markets and encouraging more innovation through open ecosystem plays. So, there is a tension there between the local Telco, the local market, and the Hyperscaler market, but fundamentally they've got an absolutely brilliant way of working together, using the best of both worlds to deliver the services that we need as an economy. >> Well, and we've talked about this, you and I, in the past: portions of the Telco network could move into the Cloud. And of course the Telcos all run big data centers, and portions of that IT infrastructure could move into the Cloud. But it's very clear they're not going to give up the entire family jewels to the Cloud players. Why would they? But there are portions of their IT that they could move in, particularly the front end. They want to build, like everybody, an abstraction layer.
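The abstraction-layer idea that keeps coming up here, a cloud-hosted front end placed in front of backend systems of record that stay exactly where they are, can be sketched as a simple facade. All class and field names below are hypothetical; this is an illustration of the pattern, not any Telco's actual stack.

```python
# Hypothetical facade sketch: a cloud-side layer exposes a clean API
# while the legacy backend (the "brick-walled" system of record) is
# untouched. Names and values are invented for illustration.
class LegacyBillingSystem:
    """Stand-in for a backend system of record that is not migrated."""
    def fetch_invoice(self, account_id):
        return {"account": account_id, "amount_due": 40.0}

class CustomerFacade:
    """Cloud-hosted abstraction layer: callers never touch the backend directly."""
    def __init__(self, billing):
        self._billing = billing

    def get_bill_summary(self, account_id):
        raw = self._billing.fetch_invoice(account_id)
        return f"Account {raw['account']}: ${raw['amount_due']:.2f} due"

facade = CustomerFacade(LegacyBillingSystem())
print(facade.get_bill_summary("A-100"))  # Account A-100: $40.00 due
```

The design point is the one Dave and Chris make next: the facade can be rewritten, containerized, or moved between clouds at will, while the revenue-bearing backend keeps running unchanged behind it.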
They're not going to move their core systems and their backend Oracle databases; they're going to put a brick wall around those. But they want an abstraction layer, and they want to take advantage of microservices and use the data from those transaction systems. The web front end stuff makes sense to put into the Cloud. So, how do you think about that? >> I think you've hit the nail on the head. You can't move those big backend systems straight away. Gradually, over time, you will, but you've got to go for those easy wins. And certainly in the research I've been doing with many of my clients, they've suggested that front end piece: making sure that you can onboard customers more easily, you can get the right mix of services, you can provide the omnichannel interaction and that customer experience that everybody talks about, for which the industry is not very well known at all, by the way. So, any improvement on that is going to be good from an NPS point of view. So yeah, it's about leveraging what we call BSS/OSS in the telecom world, and actually putting that into the Cloud, leveraging the Hyperscalers, but also, by the way, many of the traditional players who people think haven't moved cloudwards; they are moving cloudwards, and they're embracing microservices and Cloud native. So, what you would have seen if we'd been in person down in Barcelona next week would be a lot of the vendors who perhaps traditionally seemed a bit slow moving having actually done a lot of work to move their portfolios into the Cloud and into Cloud native environments. And yes, as you say, we can use that front end, we can use the API openness that's developed by people at the TM Forum, to make sure we don't have to do the backend straight away; do it over time. Because of course the thing that we're not touching upon here is that the revenue stream is a consistent revenue stream.
So, you don't need to change the backend to keep your revenue stream going; it's an annuity, it keeps delivering every month, we keep paying our 50, 40, whatever bucks a month into the Telco pot. That's why it's such a big market, and people aren't going to stop doing that. So, with the dynamics of the industry, we often spend a lot of time thinking about the inner workings of it and the potential of adjacent markets, whereas actually we keep paying for this stuff, we keep pushing revenue into the pockets of all the Telcos. So, it's not a bad industry to be in; even if they were just pushed back to being in the access market, it's a great business. We need it more and more. Demand is very inelastic; we need it. >> Yeah, it's the mother of all golden geese. We don't have a separate topic on security, and I want to touch on security here; it's such an important topic. And it's top of mind obviously for everybody, Telcos, Hyperscalers. The Hyperscalers have this shared responsibility model, you know it well. A lot of times it's really confusing for customers; they don't realize it until there has been a problem. The Telcos are going to be very much tuned into this. How will all this openness, and we're going to talk about technology in a moment, but how will this transformation, in your view, in the Cloud, with the shared responsibility model, how will that affect the whole security posture? >> Security is a great subject, and I do not specialize in it. I don't claim to be an expert by any stretch of the imagination, but I would say security for me is a bit like AI and analytics. It's everywhere. It's part of everything. And therefore you cannot think of it as a separate add-on issue. So, every aspect, every element, every service you build into your microservices environment has to think about how do you secure that connection, that transaction, how do you secure the customer's data?
Obviously, sovereignty plays a role in that as well, in terms of where the data sits, but at every level of every connection, every hop that we look through, every router jump, we've got to see that security is built in. In some ways it's seen as being a separate part of the industry, but actually, as we collapse parts of the network down, and we're talking about bringing optical and routing together in many environments, security should be talked about in the same breath. So when I talked about Edge, when I talked about connectivity, storage, compute, analytics, I should've said security as well, because I absolutely believe that it's fundamental to every link in the chain, and let's face it, we've got a lot of links in the chain. >> Yeah, 100%. Okay, let's hit on technologies and competition; we kind of blend those together. What technologies should we be paying attention to that are going to accelerate this transformation? We hear a lot about 5G, Open RAN. There's a lot of new tech coming in. What are you watching? Who are the players that we maybe should be paying attention to, some that you really like, that are well positioned? >> We've touched upon it in various of the questions that have preceded this. The sort of Cloudification of the networking environment is obviously really important, as is the automation of the process; we've got to move away from bureaucratic manual processes within these large organizations, because we've got to be more efficient, we've got to be more reliable. So, anything which is related to automation. And then the Open RAN question is really interesting. Once again, you raised this topic: when you go down an Open RAN route, or any open route, it ultimately requires more integration. You've got more moving parts from more suppliers. So, therefore, there are potential security issues there, depending on how it's defined, but everybody is entering the Open RAN market.
There are some names that you will see being pushed regularly next week. I'm not going to push them anymore, because some of them just attract the oxygen of attention, but there are plenty out there. The good news is the key vendors who come from the more traditional side are also absolutely embracing that and accepting the openness. But the piece which probably excites me more, apart from the whole shift towards Cloud and microservices, is the coming together, the openness, between the IT environment and the networking environment. And you see it, for example, in Open RAN, in this thing called the RIC, the RAN Intelligent Controller. We're beginning to find people coming from the IT side able to control elements within the wireless controller piece. Now that starts to say to me we're getting a real handle on it; anybody can manage it. So, more specialization is required, but also understanding how the end-to-end flow works. What we will see, of course, is announcements about new devices. The big guys like Apple and Samsung do their own thing during the year and don't interrupt their beat for MWC, but you'll see a lot of devices being pushed by many other providers, and you'll see many players trying to break into the different elements of the market. But I think mostly you'll see people approaching it from a more and more Cloudified angle, where things are much more about leveraging that Cloud capability and not relying on the sort of rigid and stodgy infrastructure that we've seen in the past. >> Which is kind of interesting, because a lot of the Clouds are walled gardens; at the same time they host a lot of open technologies. And I think as these two worlds collide, IT and the Telco industry, it's going to be interesting to see how the Telco developer ecosystem evolves. And so, that's something that we definitely want to watch. You've got a comment there?
>> Yeah, I think the Telco developers, they've not traditionally been very big in that area at all, have they? They've had their traditional, if you go back to when you and I were kids, the plain old telephone service; they were a one-trick pony, and they've moved on from that. In some ways, I'd like them to move on and to have the one trick of plain old broadband, that we just get broadband delivered everywhere. So, there are some issues about delivering service to all parts of every country, and obviously the globe, whether we do that through satellite; we might see some interesting satellite stuff coming out during MWC. There's an awful lot of birds flying up there trying to deliver signal back to the ground. Traditionally, that's not been very well received, but the change in generation of satellite might help do that. But while there's not traditionally been a lot of developer activity in there, what it does bring to the fore though, Dave, is this issue of players like the Ciscos and Junipers, and all these guys of the world who bring a developer community to the table as well. This is where the ecosystem play comes in, because that's where you get the innovation in the application world, working with channels, working with individual applications. And so it's opening up; it's basically building a massive fabric that anybody can tap into, and that's what becomes so exciting. So, the barriers to entry come down, but I think we will see a settling down, a stabilization of the relationship between the Telcos and the Hyperscalers, because they need each other, as we talked about previously; then the major providers, the Ciscos, Nokias, Ericssons, Huaweis, and the way they interact with the Telcos; and then allowing that level of innovation coming in from the smaller players, whether it's on a national or a global basis. So, it's actually a really exciting environment. >> So I want to continue that theme and just talk about Telco in the enterprise.
And Chris, on this topic, I want to just touch on some things and bring in some survey data from ETR, Enterprise Technology Research, our partner. And of course the Telcos, they've got lots of data centers. And as we talked about, they're going to be moving certain portions into the Cloud, lots of the front-end pieces in particular. But let's look at the momentum of some of the IT players within the ETR dataset, and look at how they compare to some of the Telcos that ETR captures, specifically within the Telco industry. So, we filtered this data on the Telco industry. This is our X-Y graph that we show you oftentimes: on the vertical axis is net score, which measures spending momentum, and on the horizontal axis is market share, which is a measure of pervasiveness in the dataset. Now, this data is for shared accounts just in the Telco sector. So we filtered on certain technology sectors, Cloud, networking, and so it's narrow, a narrow slice of the 1,500 IT respondents; it represents about 133 shared accounts. And a couple of things jump right out. Within the Telco industry, it's no surprise, but Azure and AWS have a massive presence on the horizontal axis, and what's notable is they score very highly on the vertical axis, with elevated spending velocity on their platforms within Telco. Google Cloud doesn't have as much of a presence, but it's elevated as well. Chris was talking about their data posture before. Arista and Verizon, along with VMware, are also elevated, as is Aruba, which is HPE's networking division, but they don't have the presence on the horizontal axis. And you've got Red Hat OpenStack, which is actually quite prominent in Telco, as we've reported in previous segments. It's no surprise you see Akamai there.
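The net score and market share measures described here can be sketched as simple computations over survey responses. This is a hedged illustration only: the response categories and all numbers below are assumptions for the sketch, not ETR's actual methodology.

```python
# Illustrative sketch of a "net score" spending-momentum metric and a
# "market share" pervasiveness metric, computed from toy survey data.
# The response categories here are assumptions, not ETR's real taxonomy.
from collections import Counter

def net_score(responses):
    """Percent of respondents increasing spend minus percent decreasing."""
    counts = Counter(responses)
    total = sum(counts.values())
    increasing = counts["adoption"] + counts["increase"]
    decreasing = counts["decrease"] + counts["replacing"]
    return 100.0 * (increasing - decreasing) / total

def market_share(vendor_citations, total_citations):
    """Pervasiveness: share of all survey citations naming this vendor."""
    return 100.0 * vendor_citations / total_citations

# Ten toy Telco-sector responses for one vendor
responses = ["adoption", "increase", "increase", "flat", "flat",
             "flat", "increase", "decrease", "adoption", "flat"]
print(net_score(responses))    # 40.0
print(market_share(45, 1500))  # 3.0
```

Plotting net score against market share for each vendor reproduces the X-Y layout described above, with spending momentum on the vertical axis and pervasiveness on the horizontal.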
Now remember, this survey is weighted toward enterprise IT, so you have to take that into consideration. But look at Cisco, a very strong presence, nicely elevated, as is Equinix, both higher than many of the others including Dell, though you can see Dell actually has pretty respectable spending in Telco. It's an area that they're starting to focus on more. And then you've got that cluster below: your Juniper, AT&T, Oracle, the rest of HPE, Telum, and Lumen, which is formerly CenturyLink, via IBM. Now again, I'm going to caution you, this is an enterprise-IT-heavy survey, but the big takeaway is the Cloud players have a major presence inside of firms that say they're in the telecommunications industry. And certain IT players like Cisco, VMware, and Red Hat appear to be well positioned inside these accounts. So Chris, I'm not sure if any of this commentary resonates with you, but it seems that the Telcos would love to partner up with traditional IT vendors and Cloud players, and maybe find ways to grow their respective businesses. >> I think some of the data points you brought out there are very important. So yes, we've seen Microsoft Azure and AWS very strong working with Telcos. We've seen Google Cloud Platform actually really aggressively push into the market, certainly in the last 12 to 24 months. So yeah, they're well positioned, and they all come from a slightly different background. As I said, Google with its perhaps more data-centric approach, its analytics tools very useful; AWS with its Outposts reaching out, connecting out; and Microsoft, with its knowledge of the business market, certainly pushing into private networks as well, by the way. And Cisco, of course, is in there; it does have its mass-scale division, a lot of activity there, some of them collapsing some of that routing and optical together, and their big push on silicon.
So, what you've got here is a cross representation of many of the different sorts of suppliers who are active in this market. Now, Telcos are big spenders; the telecom market, as we said, is a $1.4 trillion market. They spend a lot, and they probably have to double-bubble spend at the moment to get over the hump of 5G investment, and to build out fiber where they need to build out. So, anything that relates to that is of course a major spending opportunity, a major market opportunity for players. And we know they need the infrastructure behind it, whether it's in their own data centers or in the Cloud, to deliver against it. So, what I do like about this as an analyst: a lot of people would focus on one particular piece of the market, so they specialize on handsets, or on home markets and home gateways. I tend to sit back and try and look at the big picture, the whole picture. And I think we're beginning to see some very good momentum where companies are building upon, of course, their core business within the telecom industry, and extending it out. But the lines of demarcation are blurring between enterprise, Telco, and indeed moving down into small business. And you think about the SD-WAN market, which came from nowhere to build a much more flexible solution for connecting people over the wide area network, which has been brilliant during the pandemic, because it's allowed us to extend that to the home, but of course also build a campus ready for the future as well. So there are plenty of opportunities out there. I think the big question in my mind is always, going into the Telco, as I said, whether they want to reduce the number of suppliers on the roster, which puts a question mark against some of the open approaches; and then from the Telco to the end customer, because for the Telcos, 30% of their revenue comes from the enterprise market, 60% from the consumer market.
How do they leverage the channel? Which includes all the channels, the security we talked about, all of the IT stuff that you've already touched upon, and the Cloud. It's going to be a very interesting mix and balancing act between different channels to get the services that the customers want. And I think increasingly, customers are more aware of the opportunities open to them to reach back into this ecosystem and say, "Yeah, I want a piece of humans to Telco, but I want it to come to me through my local integrated channel, because I need a bit of their expertise on security." So, a fascinating market, and I think telecom's no longer considered in isolation, but very much as part of that broader digital ecosystem. >> Chris, it's very hard to compress an analysis of a $1.4 trillion business into 30 or 35 minutes, but you're just the guy to help me do it. So, I've got to really thank you for participating today and bringing your knowledge. Awesome. >> You know, it's my pleasure. I love looking at this market. Obviously I love analogies like Harry Potter, which help bring things to life. But at the end of the day, we as people want to be connected, we as businesses want to be connected, and in society we want to be connected. So, the fundamentals of this industry are unbelievably strong. Let's hope that governments don't mess with it too much. And let's hope that the right technology comes through and helps support that world of connectivity going forward. >> All right, Chris, well, I'll be texting you from Mobile World Congress in Barcelona. Many thanks to my colleague, Chris Lewis, who brought some serious knowledge today, thank you. And remember, I publish each week on wikibon.com and siliconangle.com. And these episodes are all available as podcasts. You just have to search for Breaking Analysis podcasts. You can always connect with me on Twitter @dvellante or email me at dave.vellante@siliconangle.com.
And you can comment on my LinkedIn post, and don't forget to check out etr.plus for all the survey data. This is Dave Vellante, for theCUBE Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris Lewis | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
Chris | PERSON | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
BT | ORGANIZATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
40% | QUANTITY | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Ciscos | ORGANIZATION | 0.99+ |
Nokias | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Africa | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Arista | ORGANIZATION | 0.99+ |
60% | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Ericssons | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
30 minutes | QUANTITY | 0.99+ |
50 | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Breaking Analysis - How AWS is Revolutionizing Systems Architecture
From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. AWS is pointing the way to a revolution in systems architecture. Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems design. The secret sauce underpinning these innovations is specialized designs that break the stranglehold of inefficient and bloated centralized processing, and allow AWS to accommodate a diversity of workloads that span cloud and data center, as well as the near and far edge. Hello, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we'll dig into the moves that AWS has been making, which we believe define the future of computing. We'll also project what this means for customers, partners, and AWS's many competitors. Now let's take a look at AWS's architectural journey. The IaaS revolution started by giving easy access, as we all know, to virtual machines that could be deployed and decommissioned on demand. Amazon at the time used a highly customized version of Xen that allowed multiple VMs to run on one physical machine. The hypervisor functions were controlled by x86. Now, according to Werner Vogels, as much as 30% of the processing was wasted, meaning it was supporting hypervisor functions and managing other parts of the system, including the storage and networking. These overheads led to AWS developing custom ASICs that helped to accelerate workloads. Now, in 2013, AWS began shipping custom chips and partnered with AMD to announce EC2 C3 instances. But as the AWS cloud started to scale, they really weren't satisfied with the performance gains that they were getting, and they were hitting architectural barriers that prompted AWS to start a partnership with Annapurna Labs. This was back in 2014, and they launched the EC2 C4 instances in 2015.
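Vogels' figure that roughly 30% of processing went to hypervisor, storage, and networking functions can be turned into simple arithmetic. This is a hedged sketch: the 64-vCPU host size is an illustrative assumption, not an actual EC2 configuration.

```python
# Sketch: how much of a host serves customer workloads before and after
# offloading hypervisor/storage/network functions to dedicated hardware.
# The 64-vCPU host is an assumed example size.

def usable_vcpus(total_vcpus, overhead_fraction):
    """vCPUs left for customer work after system overhead."""
    return total_vcpus * (1 - overhead_fraction)

host = 64
before = usable_vcpus(host, 0.30)  # ~30% consumed by hypervisor and I/O
after = usable_vcpus(host, 0.0)    # nearly everything offloaded to the card

print(round(before, 1))  # 44.8
print(round(after, 1))   # 64.0
```

Recovering those lost vCPUs for customer work, rather than for system plumbing, is the economic argument behind moving those functions onto a dedicated offload card.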
The ASIC in C4 optimized offload functions for storage and networking, but still relied on Intel Xeon as the control point. AWS shelled out a reported $350 million to acquire Annapurna in 2015, which is a meager sum to acquire the secret sauce of its future system design. This acquisition led to a modern version of Project Nitro in 2017. Nitro offload cards were first introduced in 2013. At this time, AWS introduced C5 instances, replaced Xen with KVM, and more tightly coupled the hypervisor with the ASIC. Vogels shared last year that this milestone offloaded the remaining components, including the control plane and the rest of the I/O, and enabled nearly one hundred percent of the processing to support customer workloads. It also enabled a bare-metal version of the compute that spawned the famous partnership with VMware to launch VMware Cloud on AWS. Then in 2018, AWS took the next step and introduced Graviton, its custom-designed Arm-based chip. This broke the dependency on x86 and launched a new era of architecture, which now supports a wide variety of configurations to support data-intensive workloads. Now, these moves preceded other AWS innovations, including new chips optimized for machine learning, training and inferencing, and all kinds of AI. The bottom line is that AWS has architected an approach that offloads the work currently done by the central processing unit in most general-purpose workloads, like in the data center. It has set the stage, in our view, for the future, allowing shared memory, memory disaggregation, and independent resources that can be configured to support workloads from the cloud all the way to the edge. And Nitro is the key to this architecture. To summarize AWS Nitro, think of it as a set of custom hardware and software that runs on an Arm-based platform from Annapurna. AWS has moved the hypervisor, the network, and the storage virtualization to dedicated hardware that frees up the CPU to run more efficiently. This, in our opinion, is where the entire industry is headed. So let's take a look at that. This chart pulls data from the ETR dataset and lays out the key players competing for the future of cloud, data center, and the edge. Now, we've superimposed NVIDIA up top, and Intel. They don't show up directly in the ETR survey, but they clearly are platform players in the mix. We covered NVIDIA extensively in previous Breaking Analysis episodes and won't go too deep there today. The data shows net scores on the vertical axis, that's a measure of spending velocity, and it shows market share on the horizontal axis, which is a measure of pervasiveness within the ETR dataset. We're not going to dwell on the relative positions here; rather, let's comment on the players, starting with AWS. We've laid out how AWS got here, and we believe they are setting the direction for the future of the industry. And AWS is really pushing migration to its Arm-based platforms. Patrick Moorhead, at the Six Five Summit, spoke to Dave Brown, who heads EC2 at AWS, and he talked extensively about migrating from x86 to AWS's Arm-based Graviton 2.
And he announced a new developer challenge to accelerate that migration to Arm instances, Graviton instances. The end game for customers is 40% better price performance, so a customer running 100 server instances can do the same work with 60 servers. Now, there's some work involved by the customers to actually get there, but the payoff, if they can get a 40% improvement in price performance, is quite large. Imagine this: AWS currently offers 400 different EC2 instances, and as we reported earlier this year, nearly 50 percent of the new EC2 instances shipped in 2020 were Arm-based. AWS is working hard to accelerate this pace; it's very clear. Now let's talk about Intel. I'll just say it: Intel is finally responding in earnest, and basically it's taking a page out of Arm's playbook. We're going to dig into that a bit today. In 2015, Intel paid $16.7 billion for Altera, a maker of FPGAs. Now, also at the Six Five Summit, Navin Shenoy of Intel presented details of what Intel is calling an IPU, an infrastructure processing unit. This is a departure from Intel norms, where everything is controlled by a central processing unit. IPUs are essentially smart NICs, as are DPUs, so don't get caught up in all the acronym soup. As we've reported, it's all about offloading work, disaggregating memory, and evolving SoCs (systems on chip) and SoPs (systems on package). But just let this sink in for a moment: Intel's moves this past week, it seems to us anyway, are designed to create a platform that is Nitro-like, and the basis of that platform is a $16.7 billion acquisition. Just compare that to AWS's $350 million tuck-in of Annapurna. That is incredible. Now, Shenoy said in his presentation, rough quote: "We've already deployed IPUs using FPGAs in very high volume at Microsoft Azure, and we've recently announced partnerships with Baidu, JD Cloud, and VMware." So let's look at VMware. VMware is the other really big platform player in this race. In 2020, VMware announced Project Monterey, you might recall that. It's based on the aforementioned FPGAs from Intel, so VMware is in the mix, and it chose to work with Intel most likely for a variety of reasons. One of the obvious ones is that all the software running on VMware has been built for x86, and there's a huge install base there. The other is that Pat was heading VMware at the time when Project Monterey was conceived, so I'll let you connect the dots if you like. Regardless, VMware has a Nitro-like offering, in our view. Its optionality, however, is limited by Intel, but at least it's in the game and appears to be ahead of the competition in this space, AWS notwithstanding, because AWS is clearly in the lead. Now, what about Microsoft and Google? Suffice it to say that we strongly believe that despite the comments Intel made about shipping FPGAs in volume to Microsoft, both Microsoft and Google, as well as Alibaba, will follow AWS's lead and develop an Arm-based platform like Nitro. We think they have to in order to keep pace with AWS. Now, what about the rest of the data center pack? Well, Dell has VMware, so despite the split we don't expect any real changes there. Dell is going to leverage whatever VMware does and do it better than anyone else. Cisco is interesting in that it just revamped its UCS, but we don't see any evidence that it has Nitro-like plans in its roadmap. Same with HPE. Now, both of these companies have history and capabilities around silicon. Cisco designs its own chips today for carrier-class use cases, and HPE, as we've reported, probably has some remnants of The Machine hanging around. But both companies are very likely, in our view, to follow VMware's lead and go with an Intel-based design. What about IBM? Well, we really don't know. We think the best thing IBM could do would be to move the IBM Cloud, of course, to an Arm-based, Nitro-like platform. We think even the mainframe should
move to Arm as well. I mean, it's just too expensive to build a specialized mainframe CPU these days. Now, Oracle, they're interesting. If we were running Oracle, we would build an Arm-based, Nitro-like database cloud, where Oracle the database runs cheaper, faster, and consumes less energy than any other platform that would dare to run Oracle. And we'd go one step further: we would optimize for competitive databases in the Oracle cloud. So we would make OCI run the table on all databases and be essentially the database cloud. But, you know, back to FPGAs. We're not overly excited about the market. AMD is acquiring Xilinx for $35 billion, so I guess that's something to get excited about, and at least AMD is using its inflated stock price to do the deal. But honestly, we think that the Arm ecosystem will obliterate the FPGA market by making it simpler and faster to move to SoCs with far better performance, flexibility, integration, and mobility. So again, we're not too sanguine about Intel's acquisition of Altera and the moves that AMD is making, in the long term. Now let's take a deeper look at Intel's vision of the data center of the future. Here's a chart that Intel showed depicting its vision of the future of the data center. What you see is the IPUs, which are intelligent NICs, and they're embedded in the four blocks shown, communicating across a fabric. You have general-purpose compute in the upper left, machine intelligence apps on the bottom left, storage services up in the top right, and then in the bottom right a variety of alternative processors. And this is Intel's view of how to share resources and go from a world where everything is controlled by a central processing unit to a more independent set of resources that can work in parallel. Now, Gelsinger has talked about all the cool tech that this will allow Intel to incorporate, including PCIe Gen 5 and CXL memory interfaces, which are interfaces that enable memory sharing and disaggregation, and 5G and 6G connectivity, and so forth. So that's Intel's view of the future of the data center. Let's look at Arm's vision of the future and compare them. Now, there are definite similarities, as you can see, especially on the right-hand side of this chart. You've got the blocks of different processor types; these, of course, are programmable. And you notice the high-bandwidth memory, the HBM3, plus the DDRs on the two sides, kind of bookending the blocks. That's shared across the entire system, and it's connected by PCIe Gen 5, CXL, or CCIX multi-die sockets. So, you know, you may be looking and saying, okay, two sets of block diagrams, big deal. Well, while there are similarities around disaggregation, and I guess implied shared memory in the Intel diagram, and of course the use of advanced standards, there are also some notable differences. In particular, Arm is really already at the SoC level, whereas Intel is talking about FPGAs. Neoverse, Arm's architecture, is shipping in test mode and will have in-market product by year-end 2022. Intel is talking about maybe 2024, and we think that's aspirational, or 2025 at best. Arm's roadmap is much more clear. Now, Intel said it will release more details in October, so we'll pay attention then, and maybe we'll recalibrate at that point, but it's clear to us that Arm is way further along. Now, the other major difference is volume. Intel is coming at this from a high-end data center perspective, and presumably plans to push down-market or out to the edge. Arm is coming at this from the edge: low cost, low power, superior price performance. Arm is winning at the edge, and based on the data that we shared earlier from AWS, it's clearly gaining ground in the enterprise. History strongly suggests that the volume approach will win, not only at the low end but eventually at the high end. So we want to wrap by looking at what this means for customers and the partner ecosystem. The first point we'd like to make is:
follow the consumer apps. The capabilities that we see in consumer apps, like image processing, natural language processing, facial recognition, and voice translation, these inference capabilities that are going on today in mobile, will find their way into the enterprise ecosystem. Ninety percent of the cost associated with machine learning in the cloud is around inference. In the future, most AI in the enterprise, and most certainly at the edge, will be inference. It's not today, because it's too expensive. This is why AWS is building custom chips for inferencing: to drive costs down so it can increase adoption. Now, the second point is, we think customers should start experimenting and see what they can do with Arm-based platforms. Moore's Law is accelerating, at least the outcome of Moore's Law, the doubling of performance every 18 to 24 months; it's actually much higher than that now when you add up all the different components in these alternative processors. Just take a look at Apple's A15 chip. And Arm is in the lead in terms of performance, price performance, cost, and energy consumption. By moving some workloads onto Graviton, for example, you'll see what types of cost savings you can drive for which applications, and possibly generate new applications that you can deliver to your business. Put a couple of engineers on the task and see what they can do in two or three weeks. You might be surprised, or you might say, hey, it's too early for us, but you'll find out, and you may strike gold. We would suggest that you talk to your hybrid cloud provider as well and find out if they have a Nitro. We shared that VMware has a clear path, as does Dell, because they're, you know, VMware cousins. What about your other strategic suppliers? What's their roadmap? What's the time frame to move from where they are today to something that resembles Nitro? Do they even think about that? How do they think about it? Do they think it's important to get there? If so, or if not, how are they thinking about reducing your costs and supporting your new workloads at scale? Now, for ISVs: these consumer capabilities that we discussed earlier, all these mobile and automated systems and cars and things like that, biometrics is another example, are going to find their way into your software, and your competitors are porting to Arm. They're embedding these consumer-like capabilities into their apps. Are you? We would strongly recommend that you take a look at that. Talk to your cloud suppliers and see what they can do to help you innovate, run faster, and cut costs. Okay, that's it for now. Thanks to my collaborator David Floyer, who's been on this topic since early last decade. Thanks to the community for your comments and insights. And hey, thanks to Patrick Moorhead and Daniel Newman for some timely interviews from your event; nice job, fellas. Remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts; just search for Breaking Analysis podcasts. You can always connect with me on Twitter @dvellante, or email me at david.vellante@siliconangle.com. I appreciate the comments on LinkedIn and Clubhouse, so follow us; if you see us in a room, jump in and let's riff on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights, powered by ETR. Be well, and we'll see you next time.
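The Graviton migration claim above, the same work from 100 x86 instances done by 60 Arm instances, reduces to simple arithmetic. This is a hedged sketch: the $100-per-instance monthly rate is an invented illustration, not AWS pricing.

```python
# Sketch of the price-performance arithmetic behind the Graviton claim:
# 100 instances of work done by 60 instances implies a 40% cost saving
# at an equal per-instance rate (the rate itself is an assumption).

def monthly_cost(instances, rate_per_instance):
    return instances * rate_per_instance

x86_cost = monthly_cost(100, 100.0)      # 10000.0 for the workload on x86
graviton_cost = monthly_cost(60, 100.0)  # 6000.0 for the same work

savings_pct = 100.0 * (x86_cost - graviton_cost) / x86_cost
print(savings_pct)  # 40.0
```

In practice per-instance rates differ between instance families, so the real saving depends on the specific instance types compared; the point of the sketch is only that fewer instances for the same work translates directly into the quoted percentage.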
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
2013 | DATE | 0.99+ |
2015 | DATE | 0.99+ |
dave brown | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
2020 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
david floyer | PERSON | 0.99+ |
60 servers | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
last year | DATE | 0.99+ |
18 | QUANTITY | 0.99+ |
microsoft | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
daniel newman | PERSON | 0.99+ |
35 billion dollars | QUANTITY | 0.99+ |
alibaba | ORGANIZATION | 0.99+ |
16.7 billion dollars | QUANTITY | 0.99+ |
16.7 billion dollar | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
second point | QUANTITY | 0.99+ |
ninety percent | QUANTITY | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
october | DATE | 0.99+ |
350 million dollars | QUANTITY | 0.99+ |
dave vellante | PERSON | 0.99+ |
2024 | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
nvidia | ORGANIZATION | 0.99+ |
amd | ORGANIZATION | 0.99+ |
boston | LOCATION | 0.99+ |
first point | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
24 months | QUANTITY | 0.99+ |
apple | ORGANIZATION | 0.98+ |
30 | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
graviton | TITLE | 0.98+ |
each week | QUANTITY | 0.98+ |
nearly 50 percent | QUANTITY | 0.98+ |
aws | ORGANIZATION | 0.98+ |
earlier this year | DATE | 0.98+ |
100 server instances | QUANTITY | 0.98+ |
amazon | ORGANIZATION | 0.98+ |
two sides | QUANTITY | 0.98+ |
intel | ORGANIZATION | 0.98+ |
400 different | QUANTITY | 0.97+ |
early last decade | DATE | 0.97+ |
ORGANIZATION | 0.97+ | |
ORGANIZATION | 0.97+ | |
40 improvement | QUANTITY | 0.97+ |
x86 | TITLE | 0.96+ |
last decade | DATE | 0.96+ |
cisco | ORGANIZATION | 0.95+ |
oracle | ORGANIZATION | 0.95+ |
chenoy | PERSON | 0.95+ |
40 better | QUANTITY | 0.95+ |
vmware | ORGANIZATION | 0.95+ |
350 million dollar | QUANTITY | 0.94+ |
nitro | ORGANIZATION | 0.92+ |
Adam Glick & Andrew Glinka, Dell Technologies | Dell Technologies World 2021
>> Welcome to theCUBE's coverage of Dell Technologies World 2021, the digital experience. I'm Lisa Martin. I've got two guests here with me today. Adam Glick is here, Senior Director of Portfolio Marketing for Apex at Dell Technologies. Adam, welcome to theCUBE. >> Lisa, it's great to be here with you. >> Likewise. And Andrew Glinka is here, VP of Competitive Intelligence at Dell Technologies as well. Andrew, welcome to you as well. >> Thank you. Glad to be here. >> So the last Dell Technologies World was only about six months or so ago, and sadly I was sitting in this same room doing that. We're not in Vegas at the convention center, but hopefully one day we will be soon. But a lot of news there. Adam, it was about Apex and this big transformation about what Dell wants to do. Give us a little bit of a history and what's transpired in the last six months. >> Well, a lot of things have happened in the past six months with what we were calling Project Apex before. Probably the first, most obvious one is we've removed "Project" from the name as we've made the offering generally available. We've also added a lot to it. There are a lot of new pieces of technology that are part of Apex now. We've talked about bringing in the cloud, bringing the custom solutions, you'll hear a lot about that at Dell Technologies World this time, and really packaging that all up together in a single experience for customers, giving them something that's super simple, agile, and gives them all the control that they want to use their infrastructure where they want it, all of that as a service. >> Big changes, Andrew. Let's go over to you now. Talk to me about some of the players in the market. >> Well, the as-a-service market is growing incredibly fast and will continue to grow over the next number of years. And what we're seeing is a lot of players trying to enter that market because it is growing so fast.
So you have some of the traditional infrastructure players entering: HP has their offer out in the market, and Pure Storage, NetApp, and many others. And you also have the public cloud providers like Amazon Web Services, Google, and Microsoft Azure that are starting to develop on-prem capabilities to kind of validate this hybrid-cloud, as-a-service, everything-everywhere model. So it's a rapidly growing market, a lot is changing, and a lot of players are entering this space very quickly. >>So a lot of acceleration we've seen with respect to digital transformation, Andrew, in the last year. Talk to me about how Apex compares to those infrastructure players you mentioned, Pure Storage, NetApp, HP. Talk to me about the comparison there. >>Yeah, so one of the things, as we continue to develop Apex, is that we're going to offer the broadest portfolio of as-a-service solutions for customers, all with different consumption models. We'll be offering outcome-based and meter-based as well as custom solutions, which is a little bit different than what others can provide, all delivered using market-leading technology and all Dell supported. So we're not using a third party to deliver any of the as-a-service offering; it's all Dell supported. There are some other very tactical things, like a single rate, so we don't charge for overusage or charge extra, which is different than some, and also it's all self-service. Through the console you can place an order for a new system or an upgraded system, and you're avoiding the lengthy sales cycles and all the back and forth. With just a couple of questions answered, you can get the outcome that you're looking for. >>Adam, talk to me about how Apex compares to the public cloud providers; customers obviously have that choice as well, AWS, Google Cloud Platform. What's the comparison and contrast there? >>So when we think about what's going on with public cloud providers, we really look at them as partners and people that we work with.
There's a Venn diagram, if you think about it, and the reality is that although there is some overlap between us, there's also a lot of differentiated value that we bring, and it's about how we work together on those pieces. The most obvious of those is when you're thinking about things like hybrid cloud and how people work together to make sure that they've got a cloud that meets their needs, both on prem, in their colo locations, out at the edge, as well as whatever they're doing with public cloud. And so we're looking at how we bring all those pieces together. There are certain things that work better in certain places, and certain ones that work better than others. We do a lot of things around the simplicity of billing to make that easy for customers, giving them really high-performance ways to work that meet the needs of a lot of workloads that might have regulatory requirements or specific performance needs, high-performance computing, things like that. But it works together. And that's really the point: what customers tell us is that they have needs on premises, they have needs for things in their private cloud and colos, and they also have needs in the public cloud. How do they bring that together? So we're working to say, how do we bridge that gap to create the best possible outcome for customers? We work on partnerships, like the partnership that we announced with Equinix, to bring together colocation facilities around the world and bring Apex services to customers easily when they want to, say, reduce the latency between what they're running and controlling within their own hardware stacks and what might be running in the public cloud. It's a merger of both that really helps customers get the best of all that they need, because at the end of the day, the goal is helping our customers get the best IT outcomes for their businesses. >>Right?
And you mentioned hybrid cloud, and we talk about that so often; customers are in that hybrid world for many reasons. So basically what you're saying is there are partnerships that Dell Technologies has, with Apex and the other hyperscalers, so that when customers come in, most likely already using some of those other platforms, they can work with Apex to develop a solution that works very synergistically. >>Yeah, we're helping them pull together what they need. And if you take a look, 72% of organizations say that they're taking a hybrid cloud approach. They want to be able to bring the best of both worlds to what they're doing and really choose what's right for them. Where do they need to really control what's happening with their data? Where do they want to maintain and control their costs, and also be able to access the other services out there that they might need? How do they bring those together? Those ways that we work together for the benefit of customers, bridging those two pieces, is really what we're aiming to do here. >>Excellent. So Andrew, let's go back over to you. I want to talk about workloads here, because when we look at some of the numbers, the 80/20 rule with the cloud, 80% of workloads are still on prem, and customers need to determine which workloads should go to the cloud. How does Apex work with customers to facilitate making those decisions about the workloads that are best suited for Apex versus cloud? >>Well, I think that's the beauty of it: it's very flexible. Some of those traditional workloads that are still on prem can be run as a service without a whole lot of change. You don't have to re-platform, you don't have to re-engineer them, and you can move them into an as-a-service model and continue to run them easily.
But then there's a whole lot of new development, like high-performance computing and AI and machine learning, particularly at the edge, where Gartner says that by 2025, 75% of all data will be processed at the edge. As these new capabilities are built out, customers have been asking us to run that infrastructure and these new workloads in an as-a-service model. So high-performance computing, AI/ML, and these edge workloads are fantastic use cases to get started with as a service, and that can certainly extend back into some of the more traditional workloads they've been running. >>Adam, can you talk to us a little bit about what's transpired in the last six months from the customer's lens? We talked a lot in the last year about the acceleration of digital transformation and so many businesses having to pivot multiple times, with a lot of acceleration of getting to cloud in order to survive. Talk to me about the customer experience, what you've seen in the last six months. >>What we've heard a lot from our customers is that they're really looking for the benefits of consumption as a service, especially as you see the financial impacts that happened over the past year. People are looking at ways to preserve capital, at ways they can maintain what they want to do, or perhaps even grow and accelerate, and take advantage of new opportunities in ways that don't require large capital purchases. The ability to go in and purchase as a service is something we've heard from multiple customers is really attractive to them as they look at the new opportunities that have opened up: how do they expand on those, as well as preserve the capital they have, continue with the projects that they're looking at, but take a more agile approach to those things.
And so the as-a-service offerings that we've been talking to our customers about have really been something they've been excited about, and they come to us asking, hey, what do you have? What's the roadmap? How can we have more of those kinds of things? That's why we're so excited at Dell Technologies World to be talking about how we're making even more Apex services available to our customers. >>And I'm just curious: in the last year we've seen so many industries, every industry really, rocked by a very dynamic market, some of them like healthcare and government especially. Have you seen any industries in particular really take a leading edge here in working with you on Apex? >>One of the most interesting things that I've seen from the customers I've been talking to is that it really is broad-ranging. I've talked to governmental customers who are interested in expanding what they're doing but are very concerned about things like data locality and data sovereignty; that's very interesting to them. I've talked to manufacturing organizations that are looking at how to expand their operations into Asian manufacturing, for instance, going from operating within the United States to expanding their operations more quickly. I've talked to healthcare organizations looking at how to deliver digital healthcare; as you think about how much more is happening virtually, what does that mean in terms of healthcare, both for people doing virtual visits with their doctors and even things like digital surgery? So there are so many things happening; I could talk to you about dozens of industries. But the takeaway I've had is that there's no one industry; it's really something that has impacted operations globally, and different folks.
They look at different things in different ways. I talked to a company that runs trains, an actual train company; they do logistics, and they're looking at edge scenarios: how do they do train inspections faster to provide better turnaround times for their trains? Because there's a limited amount of track, if they miss a maintenance window, that's time where they not only have to wait for the next window, they have to wait for all the other trains to pass too. So it's really breathtaking, just the scope of all that's changing in IT and all the opportunities that are coming up as people think about what consuming IT services as a service can mean for them. >>Yeah, amazing opportunities. And you talked about the virtual piece; there's so much of it that's going to persist, in a good way, silver linings, right? Andrew, I want to go back over to you. When we talked about Apex at Dell Technologies World 2020, six months ago, this was kind of revolutionary, really looking at it as a big change to Dell's future strategy. Talk to me about that. >>Well, it's a change for the entire company, having to rethink how we deliver all these services and outcomes to customers. It's not just about the product. The product is now the service and the service is the product, so it's very different in how we approach it: thinking more about how we can help our customers achieve these outcomes and deliver the services that get them there, which is a little different than just developing the products themselves. That's been a big thing that we've been taking on, making sure that we deliver these outcomes for our customers. >>Yeah. And then, Adam, last question for you: talk to me, from the same perspective, about how Dell intends to compete in the future and what customers can expect. Also, how can they engage? Is this something that is available through channel partners? Dell direct?
>>So this is the beginning of a huge journey and transformation, as Andrew spoke about. This is a transformation not only of what we're providing, but a transformation across all of Dell. We're looking at how we expand the Apex portfolio to bring a portfolio of options to our customers. We're starting with storage and cloud and some custom solutions, but we really have a vision of bringing all of Dell's business products into services for our customers. It's a huge transformation, and it's something I'm incredibly excited about because it really aligns what we do with what our customers do. We've never had an opportunity to be so closely connected with our customers and create great outcomes for them. We're just at the beginning of this, and it's an incredible path we're on that's providing amazing value for the people we've already started working with. For people who want to find out more about it, you can certainly come to our website, delltechnologies.com/apex. People who have a relationship with Dell already can contact their sales representative, who will be more than happy to talk to them about what their current needs are and what Apex can do to help them continue their digital transformation and create better outcomes for their organization. >>Excellent. Adam, Andrew, thank you for joining me today to talk about what's going on, from Project Apex to Apex, and the tremendous amount of opportunity that it's helping customers in any industry uncover. We look forward to seeing down the road some of the great customer outcomes that come from this. I thank you both for joining me today. >>Thank you very much. Thank you. >>For Adam Glick and Andrew Glinka, I'm Lisa Martin. You're watching the Cube's coverage of Dell Technologies World 2021, The Digital Experience.
Yusef Khan, Io Tahoe | Enterprise Data Automation
>>From around the globe, it's the Cube, with digital coverage of enterprise data automation, an event series brought to you by Io Tahoe. Hi everybody, we're back. We're talking about enterprise data automation. The hashtag is #DataAutomated, and we're going to really dig into data migrations. Data migrations are risky, they're time consuming, and they're expensive. Yusef Khan is here. He's the head of partnerships and alliances at Io Tahoe, coming again from London. Hey, good to see you, Yusef. Thanks for coming on. >>Thank you. >>So your role is interesting. We're talking about data migrations, and you're the head of partnerships. What is your role specifically, and how is it relevant to what we're going to talk about today? >>I work with various businesses, such as cloud companies, systems integrators, and companies that sell operating systems and middleware, all of whom are often quite well embedded within a company's IT infrastructure and have existing relationships. Because what we do fundamentally makes migrating to the cloud easier and data migration easier, a lot of businesses are interested in partnering with us, and we're interested in partnering with them. >>So let's set up the problem a little bit, and then I want to get into some of the data. I said that migrations are risky, time consuming, and expensive. They're often a blocker for organizations trying to really get value out of their data. Why is that? >>I think all migrations have to start with knowing the facts about your data, and you can try to do this manually. But when you have an organization that may have been going for decades or longer, it will probably have a pretty large legacy data estate, with everything from on-premise mainframes to stuff that's already in the cloud, and probably hundreds, if not thousands, of applications and potentially hundreds of different data stores. Now, their understanding of what they have
is often quite limited, because you can try to draw manual maps, but they're outdated very quickly. Every time the data changes, the map is out of date, and people obviously leave organizations over time, so the kind of tribal knowledge that gets built up is limited as well. You can try to map all that manually; you might need a DBA, a data analyst, or a business analyst, and they would go in and explore the data for you. But doing that manually is very, very time consuming; it can take teams of people months and months. Or you can use automation, just like Webster Bank did with Io Tahoe, and they managed to do it with a relatively small team in a timeframe of days. >>Yeah, we talked to Paul from Webster Bank; awesome discussion. So I want to dig into this migration. Let's pull up the graphic and we'll talk about what a typical migration project looks like. What you see here is very detailed. I know it's a bit of an eye test, but let me call your attention to some of the key aspects of this, and then, Yusef, I want you to chime in. At the top here you see that area graph; that's operational risk for a typical migration project, and you can see the timeline and the milestones. The blue bar is the time to test, so you can see the second step, data analysis, taking 24 weeks, so, you know, very time consuming. We won't dig into the stuff in the middle, the fine print, but there's some real good detail there. Go down to the bottom: that's labor intensity, and you can see high is that sort of brown color. A number of phases (data analysis, data staging, data prep, the trial, the implementation, post-implementation fixes, and the transition to BAU, which is business as usual) are all very labor intensive. So what are your takeaways from this typical migration project? What do we need to know, Yusef?
>>I mean, I think the key thing is, when you don't understand your data upfront, it's very difficult to scope and set up a project, because you go to business stakeholders and decision makers and you say, okay, we want to migrate these data stores, we want to put them in the cloud most often, but actually you probably don't know how much data is there. You don't necessarily know how many applications it relates to, or the relationships between the data. You don't know the flow of the data, the direction in which the data is moving between different data stores and tables. So you start from a position of pretty high risk, and to alleviate that risk you could be stacking a project team with lots and lots of people to do the next phase, which is analysis. So you set up a project which has a pretty high cost: a big project, more people, heavier governance, obviously. And then in the phase where they're trying to do lots and lots of manual analysis and mapping, as we all know, the idea of trying to relate data that's in different data stores, relating individual tables and columns, is very, very time consuming and expensive. You might be hiring in resource from consultants or systems integrators externally; you might need to buy or use third-party tools, as I said earlier. The people who understood some of those systems may have left a while ago. So you're in a high-risk, quite high-cost situation from the off, and the same issues follow through the project. What we're doing at Io Tahoe is automating a lot of this process from the very beginning, because we can do the initial data discovery run, for example, automatically. You very quickly have an automated, validated data map, and the data flow has been generated automatically, with much less time and effort and much less cost. Dramatically less.
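One common technique behind the kind of automated relationship discovery described above is measuring value overlap between columns across data stores to flag likely join keys. This is a hypothetical, minimal sketch of that idea, not Io Tahoe's actual implementation (which isn't public); the table names and data are purely illustrative.

```python
# Sketch: infer likely relationships between tables by comparing the
# distinct values of every column pair and flagging high-overlap pairs.

def value_overlap(col_a, col_b):
    """Jaccard overlap between the distinct values of two columns."""
    a, b = set(col_a), set(col_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def discover_relationships(tables, threshold=0.5):
    """Scan every cross-table column pair and return likely relationships."""
    candidates = []
    names = list(tables)
    for i, t1 in enumerate(names):
        for t2 in names[i + 1:]:
            for c1, v1 in tables[t1].items():
                for c2, v2 in tables[t2].items():
                    score = value_overlap(v1, v2)
                    if score >= threshold:
                        candidates.append((f"{t1}.{c1}", f"{t2}.{c2}", round(score, 2)))
    return candidates

# Illustrative data: two small "tables" as column -> values mappings.
tables = {
    "customers": {"id": [1, 2, 3, 4], "region": ["UK", "US", "UK", "DE"]},
    "orders": {"customer_id": [1, 2, 2, 3], "amount": [10, 20, 15, 30]},
}
print(discover_relationships(tables))
```

A real tool would sample large columns, compare data types first, and use ML to score matches, but the core idea (scoring candidate relationships instead of asking analysts to map them by hand) is the same.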
>>Okay, so I want to bring back that first chart and call your attention again to that area graph, the blue bars, and then down below that, labor intensity. Now let's bring up the same chart, but with automation injected, so you now see the same project accelerated by Io Tahoe. Okay, great, and we're going to talk about this, but look what happens to the operational risk: a dramatic reduction in that graph. Then look at the bars, those blue bars: data analysis went from 24 weeks down to four weeks. And then look at the labor intensity. All of these were high: data analysis, data staging, data prep, the trial, post-implementation fixes, and the transition to BAU. All of those went from high labor intensity, which we've now attacked, down to low labor intensity. Explain how that magic happened. >>Take the example of a data catalog. Every large enterprise wants to have some kind of repository where they put all their understanding about their data, an enterprise data catalog, if you like. Imagine trying to do that manually. You need to go into every individual data store. You need a DBA or business analyst for each data store; they need to go in and extract the data, table by table, individually; they need to cross-reference that with other data stores and schemas and tables. You'd probably build the mother of all Excel spreadsheets. It would be a very, very difficult exercise to do. In fact, one of our reflections as we automate lots of these things is that it accelerates the ability to migrate, but in some cases it also makes it possible at all for enterprise customers with legacy systems. Take banks, for example. They quite often end up staying on mainframe systems that they've had in place for decades.
They're not migrating away from them, because they're not able to actually do the work of understanding the data, deduplicating the data, deleting data that isn't relevant, and then confidently going forward to migrate. So they stay where they are, with all the attendant problems of systems that are out of support. To go back to the data catalog example: whatever you discover in data discovery has to persist in a tool like a data catalog. And so we automate the population of data catalogs, including not only our own but others'. The only alternative to this kind of automation is to build out a very large project team of business analysts, DBAs, project managers, and process analysts to work with the data, to make sure the process of gathering data is correct, to put it in the repository, to validate it, etc. We've gone into organizations and we've seen them ramp up teams of 20 to 30 people, at costs of millions of pounds a year, on a timeframe of 15 to 20 years, just to try to get a data catalog done. That's something that we can typically do in a timeframe of months, if not weeks, and the difference is using automation. If you do what I've just described in the manual way, you make migrations to the cloud prohibitively expensive; whatever saving you might make from shutting down your legacy data stores will get eaten up by the cost of doing it, unless you go with the more automated approach. >>Okay, so the automated approach reduces risk because you're going to stay on the project plan, ideally; it's all those out-of-scope surprises that come up with the manual processes that kill you in the rework. And then there's that data catalog: people are afraid that their family-jewels data is not going to make it through to the other side. So that's something that you're addressing, and then you're also not boiling the ocean. You're really taking the pieces that are critical, and the stuff you don't need
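The automated catalog population described above boils down to introspecting each store's schema in one pass instead of having analysts document it by hand. Here is a minimal, hypothetical illustration using SQLite's introspection interface; a real estate would span many store types and capture far more metadata (lineage, sensitivity, owners), so treat this purely as a sketch of the idea.

```python
# Sketch: populate a tiny "data catalog" by introspecting a database schema,
# recording every table and its columns automatically.

import sqlite3

def build_catalog(conn):
    """Return {table: [(column, declared_type), ...]} for every user table."""
    catalog = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        catalog[table] = [(c[1], c[2]) for c in cols]
    return catalog

# Illustrative in-memory database standing in for one data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
print(build_catalog(conn))
```

Running the same introspection across hundreds of stores and persisting the result is what turns a multi-year manual documentation effort into a repeatable, automated job.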
you don't have to pay for. >>It's a very good point. One of the other things that we do, and we have specific features to do it, is to automatically analyze data for duplication at a row or record level, and for redundancy at a column level. So, as you say, before you go into a migration process you can understand: actually, this stuff is replicated, and we don't need it. Quite often, if you put data in the cloud, you're paying, obviously, for storage or for compute time; the more duplicated data you have in there, that is pure cost you should take out before you migrate. Again, if you're trying to do that process of understanding what's duplicated manually across tens or hundreds of data stores, it takes months, if not years. Use machine learning to do it in an automatic way and it's much, much quicker. The other thing I'd say about the costs and benefits of Io Tahoe: every organization we work with has a lot of money in existing, sunk cost in their IT, systems like Oracle or data lakes, which they've spent a good deal of time and money investing in. What we do, by enabling them to transition everything to their strategic future repositories, is accelerate the value of that investment and the time to value of that investment. So we're trying to help people get value out of their existing investments in the data estate, and close down the things that they don't need, to enable them to go to a kind of brighter future.
Data's plentiful, but insights aren't, and that is what's going to drive competitive advantage over the next decade and beyond. >>Yeah, definitely. And you could only really do that if you get your data estate cleaned up in the first place. Um, I worked with the managed teams of data scientists, data engineers, business analysts, people who are pushing out dashboards and trying to build machine learning applications. You know, you know, the biggest frustration for lots of them and the thing that they spend far too much time doing is trying to work out what the right data is on cleaning data, which really you don't want a highly paid thanks to scientists doing with their time. But if you sort out your data stays in the first place, get rid of duplication. If that pans migrate to cloud store, where things are really accessible on its easy to build connections and to use native machine learning tools, you're well on the way up to date the maturity curve on you can start to use some of those more advanced applications. >>You said. What are some of the pre requisites? Maybe the top few that are two or three that I need to understand as a customer to really be successful here? Is it skill sets? Is it is it mindset leadership by in what I absolutely need to have to make this successful? >>Well, I think leadership is obviously key just to set the vision of people with spiky. One of the great things about Ayatollah, though, is you can use your existing staff to do this work. If you've used on automation, platform is no need to hire expensive people. Alright, I was a no code solution. It works out of the box. You just connect to force on your existing stuff can use. It's very intuitive that has these issues. User interface? >>Um, it >>was only to invest vast amounts with large consultants who may well charging the earth. Um, and you already had a bit of an advantage. 
If you've got existing staff who are close to the data, subject matter experts or users, they can very easily learn how to use the tool, and then they can go in and write their own data quality rules. They can really make a contribution from day one. When we go into organizations, one of the great things about the whole experience is that we can get tangible results back within the day, usually within an hour or two: okay, we've started to map relationships, here's the data map of the data we've analyzed, here are our initial findings on where the sensitive data is. Because it's automated, because it's running algorithms, it's fast, and that's what people have really come to expect. >>And you know this because you're dealing with the ecosystem: we're entering a new era of data, and many organizations, to your point, just don't have the resources to do what Google and Amazon and Facebook and Microsoft did over the past decade to become data-dominant, trillion-dollar-market-cap companies. Incumbents need to rely on technology companies to bring that automation, that machine intelligence, to them so they can apply it. They don't want to be AI inventors; they want to apply AI to their businesses. That's what was really so difficult in the early days of so-called big data: there was just too much complexity out there, and now companies like Io Tahoe are bringing the tooling and platforms that allow companies to really become data driven. Your final thoughts, please, Yusef. >>That's a great point, Dave. In a way, it brings us back to where we began, in terms of partnerships and alliances. I completely agree. We're at a really exciting point where we can take applications like Io Tahoe's into enterprises and help them really leverage the value of these types of machine learning algorithms.
And we work with all the major cloud providers, AWS, Microsoft Azure, Google Cloud Platform, IBM and Red Hat and others, and I think for us the key thing is that we want to be the best in the world at enterprise data automation. We don't aspire to be a cloud provider or even a workflow provider. But what we want to do is really help customers with their data, with our automated data functionality, in partnership with some of those other businesses, so we can leverage the great work they've done in the cloud, the great work they've done on workflows, on virtual assistants and in other areas, and we help customers leverage those investments as well. But at heart we're really targeted at just being the best enterprise data automation business in the world. >> Massive opportunities, not only for technology companies, but for those organizations that can apply technology for business advantage. Yusef Khan, thanks so much for coming on theCUBE. >> Appreciate it. >> All right. And thank you for watching everybody. We'll be right back right after this short break.
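The "write your own data quality rules" idea Yusef describes above can be sketched in a few lines of code. This is an illustrative sketch only, not Io-Tahoe's actual API; the rule helpers, record fields, and patterns here are invented for the example.

```python
import re

# Hypothetical sketch of declarative data quality rules -- not Io-Tahoe's
# actual API. Each rule is a predicate applied to one field of every record.

def not_null(value):
    # Fails on missing or empty values.
    return value is not None and value != ""

def matches(pattern):
    # Build a check that passes when the whole value matches the pattern.
    compiled = re.compile(pattern)
    return lambda value: isinstance(value, str) and bool(compiled.fullmatch(value))

def run_rules(records, rules):
    """Apply {field: [checks]} to each record; report the first failing
    check per field as a (record_index, field) pair."""
    failures = []
    for i, record in enumerate(records):
        for field, checks in rules.items():
            for check in checks:
                if not check(record.get(field)):
                    failures.append((i, field))
                    break
    return failures

customers = [
    {"id": "C001", "email": "a@example.com"},
    {"id": "", "email": "not-an-email"},
]
rules = {
    "id": [not_null, matches(r"C\d{3}")],
    "email": [not_null, matches(r"[^@\s]+@[^@\s]+\.[^@\s]+")],
}
print(run_rules(customers, rules))  # -> [(1, 'id'), (1, 'email')]
```

The point of the no-code framing is that a subject matter expert only supplies the rule table at the bottom, not the plumbing above it.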
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Amazon | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Yusef Khan | PERSON | 0.99+ |
Seth | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
20 months | QUANTITY | 0.99+ |
Aziz | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
tens | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Webster Bank | ORGANIZATION | 0.99+ |
24 weeks | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
four weeks | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Io Tahoe | PERSON | 0.99+ |
Marley | PERSON | 0.99+ |
Harrison | PERSON | 0.99+ |
Data Lakes | ORGANIZATION | 0.99+ |
Siri | TITLE | 0.99+ |
Excel | TITLE | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
second step | QUANTITY | 0.99+ |
15 20 years | QUANTITY | 0.98+ |
Tahoe | PERSON | 0.98+ |
One | QUANTITY | 0.98+ |
first chart | QUANTITY | 0.98+ |
an hour | QUANTITY | 0.98+ |
Red Hat | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
Tom | PERSON | 0.96+ |
hundreds of bases | QUANTITY | 0.96+ |
first | QUANTITY | 0.95+ |
next decade | DATE | 0.94+ |
first place | QUANTITY | 0.94+ |
Iot | ORGANIZATION | 0.94+ |
Iot | TITLE | 0.93+ |
earth | LOCATION | 0.93+ |
day one | QUANTITY | 0.92+ |
Mackel | ORGANIZATION | 0.91+ |
today | DATE | 0.91+ |
Ayatollah | PERSON | 0.89+ |
£234 million a year | QUANTITY | 0.88+ |
data | QUANTITY | 0.88+ |
Iot | PERSON | 0.83+ |
hundreds of | QUANTITY | 0.81+ |
thousands of applications | QUANTITY | 0.81+ |
decades | QUANTITY | 0.8+ |
I o ta ho | ORGANIZATION | 0.75+ |
past decade | DATE | 0.75+ |
Microsoft Azure | ORGANIZATION | 0.72+ |
two great ones | QUANTITY | 0.72+ |
2030 people | QUANTITY | 0.67+ |
Doctor | PERSON | 0.65+ |
States | LOCATION | 0.65+ |
Iot Tahoe | ORGANIZATION | 0.65+ |
a year | QUANTITY | 0.55+ |
Yousef | PERSON | 0.45+ |
Cloud Platform | TITLE | 0.44+ |
Cube | ORGANIZATION | 0.38+ |
Scott Raynovich, Futuriom | Future Proof Your Enterprise 2020
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (smooth music) >> Hi, I'm Stu Miniman, and welcome to this special exclusive presentation from theCUBE. We're digging into Pensando and their Future Proof Your Enterprise event. To help kick things off, welcoming in a friend of the program, Scott Raynovich. He is the principal analyst at Futuriom coming to us from Montana. I believe first time we've had a guest on the program in the state of Montana, so Scott, thanks so much for joining us. >> Thanks, Stu, happy to be here. >> All right, so we're going to dig a lot into Pensando. They've got their announcement with Hewlett Packard Enterprise. Might help if we give a little bit of background, and definitely I want Scott and I to talk a little bit about where things are in the industry, especially what's happening in networking, and how some of the startups are helping to impact what's happening on the market. So for those that aren't familiar with Pensando, if you followed networking I'm sure you are familiar with the team that started them, so they are known, for those of us that watch the industry, as MPLS, which are four people, not to be confused with the protocol MPLS, but they had very successfully done multiple spin-ins for Cisco, Andiamo, Nuova and Insieme, which created Fibre Channel switches, the Cisco UCS, and the ACI product line, so multiple generations to the Nexus, and Pensando is their company. They talk about Future Proof Your Enterprise is the proof point that they have today talking about the new edge. John Chambers, the former CEO of Cisco, is the chairman of Pensando. Hewlett Packard Enterprise is not only an investor, but also a customer in OEM piece of this solution, and so very interesting piece, and Scott, I want to pull you into the discussion. The waves of technology, I think, the last 10, 15 years in networking, a lot it has been can Cisco be disrupted? 
So software-defined networking was let's get away from hardware and drive towards more software. Lots of things happening. So I'd love your commentary. Just some of the macro trends you're seeing, Cisco's position in the marketplace, how the startups are impacting them. >> Sure, Stu. I think it's very exciting times right now in networking, because we're just at the point where we kind of have this long battle of software-defined networking, like you said, really pushed by the startups, and there's been a lot of skepticism along the way, but you're starting to see some success, and the way I describe it is we're really on the third generation of software-defined networking. You have the first generation, which was really one company, Nicira, which VMware bought and turned into their successful NSX product, which is a virtualized networking solution, if you will, and then you had another round of startups, people like Big Switch and Cumulus Networks, all of which were acquired in the last year. Big Switch went to Arista, and Cumulus just got purchased by... Who were they purchased by, Stu? >> Purchased by Nvidia, who interestingly enough, they just picked up Mellanox, so watching Nvidia build out their stack. >> Sorry, I was having a senior moment. It happens to us analysts. (chuckling) But yeah, so Nvidia's kind of rolling up these data center and networking plays, which is interesting because Nvidia is not a traditional networking hardware vendor. It's a chip company. So what you're seeing is kind of this vision of what they call in the industry disaggregation. Having the different components sold separately, and then of course Cisco announced the plan to roll out their own chip, and so that disaggregated from the network as well. When Cisco did that, they acknowledged that this is successful, basically. They acknowledged that disaggregation is happening. 
It was originally driven by the large public cloud providers like Microsoft Azure and Amazon, which started the whole disaggregation trend by acquiring different components and then melding it all together with software. So it's definitely the future, and so there's a lot of startups in this area to watch. I'm watching many of them. They include ArcOS, which is an exciting new routing vendor. DriveNets, which is another virtualized routing vendor. This company Alkira, which is going to do routing fully in the cloud, multi-cloud networking. Aviatrix, which is doing multi-cloud networking. All these are basically software companies. They're not pitching hardware as part of their value add, or their integrated package, if you will. So it's a different business model, and it's going to be super interesting to watch, because I think the third generation is the one that's really going to break this all apart.
One you didn't mention, but that caused a huge impact in the industry, and something that Pensando's responding to, is Amazon with Annapurna Labs. Annapurna Labs, a small Israeli company, is really driving a lot of the innovation when it comes to compute and networking at Amazon. Graviton compute and Nitro are what power their Outposts solutions, so if you look at Amazon, they buy lots of pieces. It's that mixture of hardware and software. In the early days people thought that they just bought kind of off-the-shelf white boxes and did it cheap, but really we see Amazon hyper-optimizes what they're doing. So Scott, let's talk a little bit about Pensando if we can. Amazon with the Nitro solutions built out Outposts, which is their hybrid solution, so the same stack that they put in Amazon they can now put in customers' data centers. What Pensando's positioning is, well, other cloud providers and enterprises, rather than having to buy something from Amazon, we're going to enable that. So what do you think about what you've seen and heard from Pensando, and what's the need in the market for these types of solutions?
They've acquired a couple companies for $1 billion. They acquired Metaswitch, they acquired Affirmed Networks, and so all these public cloud providers are pushing their cloud out to the edge with this infrastructure, a combination of software and hardware, and that's the opportunity that Pensando is going after with this Outposts theme, and it's very interesting, Stu, because the coopetition is very tenuous. A lot of players are trying to occupy this edge. If you think about what Amazon did with public cloud, they sucked up all of this IT compute power and services applications, and everything moved from these enterprise private clouds to the public cloud, and Amazon's market cap exploded, right, because they were basically sucking up all the money for IT spending. So now if this moves to the edge, we have this arms race of people that want to be on the edge. The way to visualize it is a mini cloud. Whether this mini cloud is at the edge of Costco, so that when Stu's shopping at Costco there's AI that follows you in the store, knows everything you're going to do, and predicts you're going to buy this cereal and "We're going to give you a deal today. "Here's a coupon." This kind of big brother-ish AI tracking thing, which is happening whether you like it or not. Or autonomous vehicles that need to connect to the edge, and have self-driving, and have very low latency services very close to them, whether that's on the edge of the highway or wherever you're going in the car. You might not have time to go back to the public cloud to get the data, so it's about pushing these compute and data services closer to the customers at the edge, and having very low latency, and having lots of resources there, compute, storage, and networking. And that's the opportunity that Pensando's going after, and of course HPE is going after that, too, and HPE, as we know, is competing with its other big mega competitors, primarily Dell, the Dell/VMware combo, and the Cisco... 
The Cisco machine. At the same time, the service providers are interested as well. By the way, they have infrastructure. They have central offices all over the world, so they are thinking that can be an edge. Then you have the data center people, the Equinixes of the world, who also own real estate and data centers that are closer to the customers in the metro areas, so you really have this very interesting dynamic of all these big players going after this opportunity, putting in money, resources, and trying to acquire the right technology. Pensando is right in the middle of this. They're going after this opportunity using the P4 networking language, and a specialized ASIC, and a NIC that they think is going to accelerate processing and networking of the edge. >> Yeah, you've laid out a lot of really good pieces there, Scott. As you said, the first incarnation of this, it's a NIC, and boy, I think back to years ago. It's like, well, we tried to make the NIC really simple, or do we build intelligence in it? How much? The hardware versus software discussion. What I found interesting is if you look at this team, they were really good, they made a chip. It's a switch, it's an ASIC, it became compute, and if you look at the technology available now, they're building a lot of your networking just in a really small form factor. You talked about P4. It's highly programmable, so the theme of Future Proof Your Enterprise. With anything you say, "Ah, what is it?" It's a piece of hardware. Well, it's highly programmable, so today they position it for security, telemetry, observability, but if there's other services that I need to get to edge, so you laid out really well a couple of those edge use cases and if something comes up and I need that in the future, well, just like we've been talking about for years with software-defined networking, and network function virtualization, I don't want a dedicated appliance. 
It's going to be in software, and a form factor like Pensando does, I can put that in lots of places. They're positioning they have a cloud business, which they sell direct, and expect to have a couple of the cloud providers using this solution here in 2020, and then the enterprise business, and obviously a huge opportunity with HPE's position in the marketplace to take that to a broad customer base. So interesting opportunity, so many different pieces. Flexibility of software, as you relayed, Scott. It's a complicated coopetition out there, so I guess what would you want to see from the market, and what is success from Pensando and HPE, if they make this generally available this month, it's available on ProLiant, it's available on GreenLake. What would you want to be hearing from customers or from the market for you to say further down the road that this has been highly successful? >> Well, I want to see that it works, and I want to see that people are buying it. So it's not that complicated. I mean I'm being a little superficial there. It's hard sometimes to look in these technologies. They're very sophisticated, and sometimes it comes down to whether they perform, they deliver on the expectation, but I think there are also questions about the edge, the pace of investment. We're obviously in a recession, and we're in a very strange environment with the pandemic, which has accelerated spending in some areas, but also throttled back spending in other areas, and 5G is one of the areas that it appears to have been throttled back a little bit, this big explosion of technology at the edge. Nobody's quite sure how it's going to play out, when it's going to play out. Also who's going to buy this stuff? Personally, I think it's going to be big enterprises. It's going to start with the big box retailers, the Walmarts, the Costcos of the world. 
By the way, Walmart's in a big competition with Amazon, and I think one of the news items you've seen in the pandemic is all these online digital ecommerce sales have skyrocketed, obviously, because people are staying at home more. They need that intelligence at the edge. They need that infrastructure. And one of the things that I've heard is the thing that's held it back so far is the price. They don't know how much it's going to cost. We actually ran a survey recently targeting enterprises buying 5G, and that was one of the number one concerns. How much does this infrastructure cost? So I don't actually know how much Pensando costs, but they're going to have to deliver the right ROI. If it's a very expensive proprietary NIC, who pays for that, and does it deliver the ROI that they need? So we're going to have to see that in the marketplace, and by the way, Cisco's going to have the same challenge, and Dell's going to have the same challenge. They're all racing to supply this edge stack, if you will, packaged with hardware, but it's going to come down to how is it priced, what's the ROI, and are these customers going to justify the investment is the trick. >> Absolutely, Scott. Really good points there, too. Of course the HPE announcement, big move for Pensando. Doesn't mean that they can't work with the other server vendors. They absolutely are talking to all of them, and we will see if there are alternatives to Pensando that come up, or if they end up signing with them. All right, so what we have here is I've actually got quite a few interviews with the Pensando team, starting with, I talked about MPLS. We have Prem Jain and Soni Jiandani, who are the P and the S in MPLS, as part of it. Both co-founders, Prem is the CEO. We have Silvano Gai who, as anybody that's followed this group knows, writes the book on it.
If you watched all the way this far and want to learn even more about it, I actually have a few copies of Silvano's book, so if you reach out to me, easiest way is on Twitter. Just hit me up at @Stu. I've got a few copies of the book about Pensando, in which you can go through all the details of how it works, the programmability, what changes and everything like that. We've also, of course, got Hewlett Packard Enterprise, and while we don't have any customers for this segment, Scott mentioned many of the retail ones. Goldman Sachs is kind of the marquee early customer, so we did talk with them. I have Randy Pond, who's the CFO, talking about how they've actually seen an increase beyond what they expected at this point of being out of stealth, only a little over six months, even more, which is important considering that it's tough times for many startups coming out in the middle of a pandemic. So watch those interviews. Please hit us up with any other questions. Scott Raynovich, thank you so much for joining us to help talk about the industry, and this Pensando partnership extending with HPE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Scott | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Walmarts | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Scott Raynovich | PERSON | 0.99+ |
Annapurna Labs | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Montana | LOCATION | 0.99+ |
Nuova | ORGANIZATION | 0.99+ |
Andiamo | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Pensando | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
John Chambers | PERSON | 0.99+ |
Prem | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Costco | ORGANIZATION | 0.99+ |
Randy Pond | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Cumulus | ORGANIZATION | 0.99+ |
$1 billion | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Stu | PERSON | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
John Chambers | PERSON | 0.99+ |
Nicira | ORGANIZATION | 0.99+ |
Silvano | PERSON | 0.99+ |
more than $1 billion | QUANTITY | 0.99+ |
Jane | PERSON | 0.99+ |
first generation | QUANTITY | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
ACI | ORGANIZATION | 0.99+ |
Alkira | ORGANIZATION | 0.99+ |
Big Switch | ORGANIZATION | 0.99+ |
third generation | QUANTITY | 0.99+ |
Ajay Vohora, Io-Tahoe | Enterprise Data Automation
>> Narrator: From around the globe, it's theCUBE! With digital coverage of enterprise data automation. An event series brought to you by Io-Tahoe. >> Okay, we're back, welcome back to Data Automated, Ajay Vohora is CEO of Io-Tahoe. Ajay, good to see you, how are things in London? >> Things are doing well, things are doing well, we're making progress. Good to see you, hope you're doing well, and pleasure being back here on theCUBE. >> Yeah, it's always great to talk to you, we're talking enterprise data automation, as you know, within our community we've been pounding the whole DataOps conversation. A little different, though, we're going to dig into that a little bit, but let's start with, Ajay, how are you seeing the response to COVID, and I'm especially interested in the role that data has played in this pandemic. >> Yeah, absolutely, I think everyone's adapting, both socially and in business, the customers that I speak to, day in, day out, that we partner with, they're busy adapting their businesses to serve their customers, it's very much a game of ensuring that we can serve our customers to help their customers, and the adaptation that's happening here is trying to be more agile, trying to be more flexible, and there's a lot of pressure on data, lot of demand on data to deliver more value to the business, to serve that customer. >> Yeah, I mean data, machine intelligence and cloud are really three huge factors that have helped organizations in this pandemic, and the machine intelligence or AI piece, that's what automation is all about, how do you see automation helping organizations evolve, maybe faster than they thought they might have to? >> For sure, I think the necessity of these times, there's, as they say, there's a lot of demand on doing something with data. A lot of businesses talk about being data-driven. It's interesting, I sort of look behind that when we work with our customers, and it's all about the customer.
My peers, CEOs, investors, shareholders, the common theme here is the customer, and that customer experience starts and ends with data. Being able to move from a point that is reacting to what the customer is expecting, and taking it to that step forward where you can be proactive in serving that customer's expectations, that's definitely come alive now in the current times. >> Yeah, so as I said, we were talking about DataOps a lot, the idea being DevOps applied to the data pipeline, but talk about enterprise data automation, what is it to you and how is it different from DataOps? >> Yeah, great question, thank you. I think we've all got more and more awareness around DevOps as it's applied to processes, methodologies that have become more mature over the past five years, managing change, managing application life cycles, managing software development. DevOps has been great at breaking down those silos between different roles and functions, and bringing people together to collaborate. And we definitely see that those tools, those methodologies, those processes, that kind of thinking, lending itself to data with DataOps is exciting, we're excited about that, and shifting the focus from being IT versus business users to, who are the data producers and who are the data consumers, and in a lot of cases it can sit in many different lines of business. So with DataOps, those methods, those tools, those processes, what we look to do is build on top of that with data automation, it's the nuts and bolts of the algorithms, the models behind machine learning, the functions, that's where we invest our R&D. And bringing that in to build on top of the methods, the ways of thinking that break down those silos, and injecting that automation into the business processes that are going to drive a business to serve its customer.
It's a layer beyond DevOps, DataOps, taking it to that point where, the way I like to think about it, it's the automation behind the automation. I'll give you an example of a bank where we've done a lot of work to move them into accelerating their digital transformation, and what we're finding is that as we're able to automate the jobs related to data, and managing that data, and serving that data, that feeds into them as a business automating their processes for their customer. So it's definitely having a compound effect. >> Yeah, I mean I think that DataOps for a lot of people is somewhat new, the whole DevOps, the DataOps thing is good and it's a nice framework, good methodology, there is obviously a level of automation in there, and collaboration across different roles, but it sounds like you're talking about sort of supercharging it if you will, the automation behind the automation. You know, organizations talk about being data-driven, you hear that thrown around a lot. A lot of times people will sit back and say "We don't make decisions without data." Okay, but really, being data-driven is, there's a lot of aspects there, there's cultural, but there's also putting data at the core of your organization, understanding how it affects monetization, and as you know well, silos have been built up, whether it's through M&A, data sprawl, outside data sources, so I'm interested in your thoughts on what data-driven means and specifically how Io-Tahoe plays there. >> Yeah, sure, I'd be happy to talk that through, David. We've come a long way in the last three or four years, we started out with automating some of those tasks that are simple to codify but have a high impact on an organization, across a data lake, across a data warehouse. Those data-related tasks that help classify data. And a lot of our original patents and IP portfolio that we built up is very much around there.
Automating, classifying data across different sources, and then being able to serve that for some purpose. So originally, some of those simpler challenges that we helped our customers solve were around data privacy. I've got a huge data lake here, I'm a telecoms business, so I've got millions of subscribers, and quite often a chief data officer's challenge is, how do I cover the operational risk here, where I've got so much data, I need to simplify my approach to automating, classifying that data. The reason is, you can't do that manually, we can't throw people at it, and the scale of that is prohibitive. Quite often, if you were to do it manually, by the time you've got a good picture of it, it's already out of date. So in starting with those simple challenges that we've been able to address, we've then gone on and built on that to see, what else do we serve? What else do we serve for the chief data officer, chief marketing officer, and the CFO, and in these times, where those decision-makers have a lot of choices in the platform options that they take, the tooling, they're very much looking for that Swiss army knife. Being able to do one thing really well is great, but more and more, where that cost pressure challenge is coming in, is about how do we offer more across the organization, bring in those lines of business activities that depend on data, not just IT. >> So we like, in theCUBE sometimes we like to talk about okay, what is it, and then how does it work, and what's the business impact? We kind of covered what it is, I'd love to get into the tech a little bit in terms of how it works, and I think we have a graphic here that gets into that a little bit. So guys, if you could bring that up, I wonder, Ajay, if you could tell us, what is the secret sauce behind Io-Tahoe, and if you could take us through this slide.
>> Ajay: Sure, I mean right there in the middle, the heart of what we do, it's the intellectual property that we built up over time, that takes from heterogeneous data sources, your Oracle relational database, your mainframe, your data lake, and increasingly APIs and devices that produce data. And it now creates the ability to automatically discover that data, classify that data, and after it's classified, then have the ability to form relationships across those different source systems, silos, different lines of business, and once we've automated that, then we can start to do some cool things, such as put some context and meaning around that data. So it's moving it now from being data-driven, and increasingly where we have really smart, bright people in our customer organizations who want to do some of those advanced knowledge tasks, data scientists, and quants in some of the banks that we work with. The onus is on them, putting everything we've done there with automation, classifying it, relationships, understanding data quality, the policies that you can apply to that data, and putting it in context. Once you've got the ability to power a professional who's using data, to be able to put that data in context and search across the entire enterprise estate, then they can start to do some exciting things, and piece together the tapestry, the fabric, across their different systems. Could be CRM, ERP systems such as SAP, and some of the newer cloud databases that we work with, Snowflake is a great one. >> Yeah, so this is, you're describing sort of one of the reasons why there's so many stovepipes in organizations, 'cause data is kind of locked into these silos and applications, and I also want to point out that previously, to do discovery, to do that classification that you talked about, form those relationships, to glean context from data, a lot of that, if not most of that, in some cases all of that would've been manual.
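The discover, classify, and relate flow Ajay outlines can be sketched very roughly in code. This is an illustrative toy, not Io-Tahoe's actual algorithms; the source names, patterns, and the 0.8 match threshold are assumptions made up for the example.

```python
import re

# Toy sketch of discover -> classify -> relate across heterogeneous sources.
# Patterns and threshold are illustrative, not Io-Tahoe's real models.

PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def classify_column(values):
    """Label a column by the first pattern that most of its values match."""
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in values if isinstance(v, str) and pattern.fullmatch(v))
        if values and hits / len(values) >= 0.8:
            return label
    return "unclassified"

def discover(sources):
    """Classify every column in every source system into a catalog."""
    catalog = {}
    for source, table in sources.items():
        catalog[source] = {col: classify_column(vals) for col, vals in table.items()}
    return catalog

def relate(catalog):
    """Link columns across sources that received the same classification."""
    by_label = {}
    for source, cols in catalog.items():
        for col, label in cols.items():
            if label != "unclassified":
                by_label.setdefault(label, []).append(f"{source}.{col}")
    return {label: cols for label, cols in by_label.items() if len(cols) > 1}

sources = {
    "crm":     {"contact": ["a@x.com", "b@y.org"], "notes": ["hi", "ok"]},
    "billing": {"email":   ["a@x.com", "c@z.net"]},
}
catalog = discover(sources)
print(relate(catalog))  # -> {'email': ['crm.contact', 'billing.email']}
```

The interesting step is the last one: once columns are classified the same way, a relationship between a CRM silo and a billing silo falls out automatically, which is the "form relationships across source systems" idea in the slide.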
And of course it's out of date so quickly, nobody wants to do it because it's so hard, so this again is where automation comes into the idea of really becoming data-driven. >> Sure, I mean if I look back maybe five years ago, we had a prevalence of data lake technologies at the cutting edge, and those have started to converge and move to some of the cloud platforms that we work with, such as Google and AWS. And I think, very much as you've said, those manual attempts to try and grasp what is such a complex challenge at scale quickly run out of steam, because once you've got your fingers on the details of what's in your data estate, it's changed. You've onboarded a new customer, you've signed up a new partner, a customer has adopted a new product that you've just launched, and that slew of data keeps coming, so to keep pace with that, the only answer really here is some form of automation. And what we've found is if we can tie automation with what I said before, the expertise, the subject matter experience that sometimes goes back many years within an organization's people, that augmentation between machine learning, AI, and the knowledge that sits inside the organization really tends to unlock a lot of value in data. >> Yeah, so you know well, Ajay, you can't, as a smaller company, be all things to all people, so the ecosystem is critical. You're working with AWS, you're working with Google, you've got Red Hat, IBM as partners. What is attracting those folks to your ecosystem, and give us your thoughts on the importance of ecosystem.
>> Yeah, that's fundamental. I mean when I came into Io-Tahoe here as CEO, one of the trends that I wanted us to be part of was being open, having an open architecture. That allowed for one thing that was close to my heart, which was: as a CEO, a CIO, you've got a budget, a vision, and you've already made investments into your organization, and some of those are pretty long term bets, they could be going out five, 10 years sometimes, with a CRM system, training up your people, getting everybody working together around a common business platform. What I wanted to ensure is that we could openly plug in, using APIs that were available, to a lot of that sunk investment, and the cost that has already gone into managing an organization's IT for business users to perform. So, part of the reason why we've been able to be successful with some of our partners like Google, AWS, and increasingly a number of technology players such as Red Hat, MongoDB is another one that we're doing a lot of good work with, and Snowflake, is that those investments have been made by the organizations that are our customers, and we want to make sure we're adding to that, and then leveraging the value that they've already committed to. >> Okay, so we've talked about what it is and how it works, now I want to get into the business impact. What I would be looking for from this would be: can you help me lower my operational risk? I've got tasks that I do, many are sequential, some are in parallel, but can you reduce my time to task, and can you help me reduce the labor intensity, and ultimately my labor cost, so I can put those resources elsewhere? And ultimately I want to reduce the end to end cycle time, because that is going to drive telephone-number ROI. So am I missing anything, can you do those things? Maybe you can give us some examples of the ROI and the business impact.
>> Yeah, I mean the ROI, David, is built upon three things that I've mentioned. It's a combination of leveraging the existing investment, the existing estate, whether that's on Microsoft Azure, or AWS, or Google, IBM, and putting that to work, because the customers that we work with have made those choices. On top of that, it's ensuring that we have got the automation working right down to the level of the data, at the column level or the file level. So we don't just deal with metadata; it's being very specific, at the most granular level. So as we run our processes and the automation, classification, tagging, applying policies from across the different compliance and regulatory needs an organization has to the data, everything that then happens downstream from that is ready to serve a business outcome. It could be a customer who wants that experience on a mobile device, a tablet, or face to face, within a store. And being able to provision the right data, and enable our customers to do that for their customers, with the right data that they can trust, at the right time, just in that real time moment where a decision or an action is being expected, that's driving the ROI to be in some cases 20x plus, and that's really satisfying to see, that kind of impact. It's taking years down to months, and in many cases months of work down to days, and in some cases hours, the time to value. I'm impressed with how quickly, out of the box, with very little training, a customer can pick up our tool and use features such as search, data discovery, knowledge graph, and identifying duplicate and redundant data. Straight off the bat, within hours. >> Well it's why investors are interested in this space. I mean they're looking for a big total available market, they're looking for a significant return, you've got to have 10x, and 20x is better. So that's exciting, and obviously strong management, and a strong team. I want to ask you about people, and culture.
So you've got people, process, technology. We've seen with this pandemic that the processes are really unpredictable, and the technology has to be able to adapt to any process, not the reverse; you can't force your process into some static software, so that's very, very important. But at the end of the day, you've got to get people on board. So I wonder if you could talk about this notion of culture, and a data-driven culture. >> Yeah, that's so important. I mean, current times are forcing the necessity of the moment to adapt, but as we start to work our way through these changes, and adapt and work with our customers to adapt to these changing economic times, what we're seeing here is the ability to have the technology complement, in a really smart way, what those business users and IT knowledge workers are looking to achieve together. So, I'll give you an example. Quite often the data operations teams in the companies that we are partnering with have a lot of inbound inquiries on a day to day level: "I really need this set of data because I think it can help "my data scientists run a particular model," or "What would happen if we combine these two different "silos of data and get some enrichment going?" Now those requests can sometimes take weeks to realize. What we've been able to do with the power of (audio glitches) technology is to get those answers addressed by the business users themselves, and now, with our customers, they're coming to the data and IT folks saying "Hey, I've now built something in a development environment, "why don't we see how that can scale up "with these sets of data?" I don't need terabytes of it, I know exactly the columns and the fields in the data that I'm going to use, and that cuts out a lot of wastage, and time, and cost, to innovate.
>> Well that's huge. I mean the whole notion of self-service is the lines of business actually feeling like they have ownership of the data, as opposed to IT or some technology group owning the data, because then you've got data quality issues, or if it doesn't line up with their agenda, you're going to get a lot of finger pointing. So that is a really important piece of it. I'll give you the last word, Ajay, your final thoughts if you would. >> Yeah, we're excited to be on this path, and I think we've got some great customer examples here, where we're having a real impact at a really fast pace, whether it's helping them migrate to the cloud, helping them clean up their legacy data lake, and quite often now, the conversation is around data quality. More of the applications that we enable to work more proficiently with data, it could be RPA, robotic process automation, or a lot of the APIs that are now available in the cloud platforms, a lot of those are dependent on data quality. And being able to automate for business users, so they take accountability for looking at the trend of their data quality over time and getting those signals, is really driving trust, and that trust in data is helping, in turn, the IT teams, the data operations teams they partner with, do more, and more quickly. So it comes back to culture, being able to apply the technology in such a way that it's visual, it's intuitive, and helping, just like DevOps has with IT, DataOps, putting the intelligence in at the data level, to drive that collaboration. We're excited. >> You know, you remind me of something. I lied, I don't want to go yet, if it's okay. I know we're tight on time, but you mentioned a migration to the cloud, and I'm thinking about the conversation with Paula from Webster Bank. Migrations are a nasty word for organizations, and we saw this with Webster. How are you able to help minimize the migration pain, and why is that something that you guys are good at?
>> Yeah, I mean there are many large, successful companies that we've worked with, Webster's a great example. I'd like to give you the analogy that you've got a lot of bright people in your teams, if you're running a business as a CEO, and it's a bit like a living brain. But imagine if those different parts of your brain were not connected; that would certainly diminish how you're able to perform. So, what we're seeing, particularly with migration, is where banks, retailers, manufacturers have grown over the last 10 years, through acquisition, and through different initiatives to drive customer value, that sprawl in their data estate hasn't been fully dealt with. It's sometimes been a good thing to leave whatever you've acquired or created in situ, side by side with that legacy mainframe, and your Oracle ERP. And what we're able to do very quickly with that migration challenge is shine a light on all the different parts of the data estate, at the column level, or at the file level if it's a data lake, and show an enterprise architect, a CDO, how everything's connected, where there may not be any documentation. The bright people that created some of those systems have long since moved on, or retired, or been promoted into other roles, and within days, being able to automatically generate and keep refreshed the state of that data, across that landscape, and put it into context, then allows you to approach a migration with confidence that you're dealing with the facts, rather than what we've often seen in the past, which is teams of consultants and business analysts and data analysts spending months getting an approximation, a good idea of what it could be in the current state, and trying their very best to map that to the future target state. Now with Io-Tahoe you're able to run those processes within hours of getting started, and build that picture, visualize that picture, and bring it to life.
The ROI starts off the bat with finding data that should've been deleted, data that there are copies of, and being able to allow the architect to act on that, whether it's what we have working on GCP, or a migration to any of the clouds such as AWS, or, quite often now, a multicloud landscape. We're seeing, yeah. >> Yeah, that visi-- That visibility is key to reducing operational risk, giving people confidence that they can move forward, and being able to do that and update that on an ongoing basis means you can scale. Ajay Vohora, thanks so much for coming to theCUBE and sharing your insights and your experiences, great to have you. >> Thank you David, look forward to talking again. >> All right, and keep it right there everybody. We're here with Data Automated on theCUBE, this is Dave Vellante, and we'll be right back right after this short break. (calm music)
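The "finding data that there are copies of" step Ajay mentions can be approximated by content fingerprinting: hash each column's contents and flag collisions across systems. A hypothetical sketch; real tools work at much finer granularity, with sampling and fuzzy matching:

```python
import hashlib

def fingerprint(values):
    """Order-insensitive hash of a column's contents."""
    digest = hashlib.sha256()
    for v in sorted(str(x) for x in values):
        digest.update(v.encode())
        digest.update(b"\x00")  # separator so ("ab","c") != ("a","bc")
    return digest.hexdigest()

def find_duplicates(datasets):
    """Group (system, column) pairs whose contents hash identically --
    candidates for redundant copies across a sprawling data estate."""
    seen = {}
    for system, columns in datasets.items():
        for column, values in columns.items():
            seen.setdefault(fingerprint(values), []).append((system, column))
    return [group for group in seen.values() if len(group) > 1]
```

Here the same customer IDs held in both a mainframe extract and a data lake would surface as one duplicate group, regardless of row order or column naming.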
Awards Show | DockerCon 2020
>> From around the globe. It's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello and welcome to DockerCon 2020. I'm John Furrier here in the DockerCon virtual studios. It's CUBE studios, it's theCUBE virtual meets the DockerCon 2020 virtual event, with my co-hosts, Jenny Barocio and Peter McKee, as well as Brett Fisher, over on the Captains channel doing his sessions. This is the wrap up of a long day of continuous, amazing, action packed DockerCon 2020. Jenny and Peter, what a day, we still got the energy. We can go another 24 hours, let's do it now. This is the wrap up. So exciting day, tons of sessions, great feedback. Twitter's on fire, the chats and engagements are on fire, but this is the time where we do the most coveted piece, the community awards. So Jenny, this is the time for you to deliver the drum roll for the community awards, take it away. >> Okay, (mumbles) over the past few years we have been able to recognize those in the community that deliver so much to everyone else. And even though we're wrapping up here, there is still other content going on, because we just couldn't stop till five o'clock. Peter, what's happening right now? >> Yeah, so over in the Devs in Action channel, we have running Docker Daemon with rootless mode. That's still going on, should be a great talk. And then in the How To channel, we have transforming open source into live service with Docker. They're still running now, two great talks.
So, I really just want to take a moment again to thank the Docker team, the attendees, our sponsors and our community leaders and captains. They've been all over the virtual conference today, just like they would have been at a real conference. And I love the energy. You know, as an organizer planning a virtual event, there's always the concern of how it's going to work. Right, this is new for lots of people, but I'm in Florida and I'm thrilled with how everyone showed up today. Yeah, for sure. And to the community done some excellent things, Marcus, over them in the Captain's channel, he has built out PWD play with Docker. So, if you haven't checked that out, please go check that out. We going to be doing some really great things with that. Adding some, I think I mentioned earlier in the day, but we're adding a lot of great content into their. A lot more labs, so, please go check that out. And then talking about the community leaders, you know, they bring a lot to the community. They put there their free time in, right? No one paying them. And they do it just out of sheer joy to give back to the community organizing events. I don't know if you ever organized an event Jenny I know you have, but they take a lot of time, right? You have to plan everything, you have to get sponsors, you have to find out place to host. And now with virtual, you have to figure out how you're going to deliver the feel of a meetup in virtually. And we just had our community summit the other day and we heard from the community leaders, what they're doing, they're doing some really cool stuff. Live streaming, Discord, pulling in a lot of tools to be able to kind of recreate that, feel of being together as a community. So super excited and really appreciate all the community leaders for putting in the extra effort one of these times. >> Yeah, for really adapting and continuing in their mission and their passion to share and to teach. 
So, we want to recognize a few of those awesome community leaders, and I think we get to it right now. Peter, are you ready? >> Set, let's go for it, right away. >> All right, so, the first community leaders are from Docker Bangalore, and they are rocking it. Sangam Biradar, Ajeet Singh Raina and Saiyam Pathak, thank you all so much for your commitment to this community. >> All right, and the next one we have is Docker Penang. Thank you so much, Sujay Pillai, did a great job. >> Got to love that picture and that shirt, right? >> Yeah. >> All right, next up, we'd love to recognize Docker Rio, Camila Martins, Andre Fernande, long time community leaders. >> Yeah, if I ever get a chance, I have a bunch of them that I want to go travel and visit, but Rio is on top of the list, I think. >> And then also-- >> Rio, maybe that could be part of the award, it's, you get to. >> I can deliver. >> Go there, bring them their awards in person now, as soon as we can do that again. >> That would be awesome, that'd be awesome. Okay, the next one is Docker Guatemala and Marcos Cano, really appreciate it, and that is awesome. >> Awesome, Marcos has organized and put on so many meetups this last year. Really, really amazing. All right, next one is Docker Budapest and Lajos Papp, Karoly Kass and Bence Lvady, awesome. So, the mentorship and leadership coming out of this community is fantastic, and you know, we're so thrilled. >> All right, and then we go to Docker Algeria. Yeah, we've got some great leaders all over the world, it's so cool to see. But Ayoub Benaissa, look at that great picture in the background, thank you so much. >> I think we need some clap sound effects here. >> Yeah, where's Beth. >> I'm clapping. >> Lets, lets. >> Alright. >> Last one, Docker Chicago, Mark Panthofer. After Chicago, Docker Milwaukee and Docker Madison, one meetup is not enough for Mark.
So, Mark, thank you so much for spreading your Docker knowledge throughout multiple locations. >> Yeah, and on behalf of Docker, thank you to all of our winners and all of our community leaders. We really, really appreciate it. >> All right, and the next award I have the pleasure of giving is the Docker Captain's Award. And if you're not familiar with captains, Docker Captains are recognized by Docker for their outstanding contributions to the community. And this year's winner was selected by his fellow captains for his tireless commitment to that community. On behalf of Docker and the captains, and I'm sure the many, many people that you have helped, all 13.3 million of them on Stack Overflow and countless others on other platforms, the 2020 Tip of the Captain's Hat award winner is Brandon Mitchell, so, so deserving. And luckily Brandon made it super easy for me to put together this slide, because he took his free DockerCon selfie wearing his Captain's Hat, so it worked out perfectly. >> Yeah, I have seen Brandon not only on Stack Overflow, but in our community Slack answering questions, just in the general area where everybody is. The questions are random. You have everybody from intermediate to beginners, and Brandon is always in there answering questions. It's a huge help. >> Yeah, always in there answering questions, sharing code, always providing feedback to the Docker team. Just such a great voice, both in and out, for Docker. I mean, we're so proud to have you as a captain, Brandon. And I'm so excited to give you this award. All right, so, that was the most fun, right? We get to do the community awards. Do you want to do any sort of recap on the day? >> What was your favorite session? What was your favorite tweet? Favorite tweet was absolutely Peter screenshotting his parents. >> Mom, mom, my dear mom, it's sweet though, that's sweet. I appreciate it, can't believe they gave me an award. >> Yeah, I mean, have they ever seen you do a work presentation before?
>> No, they've seen me lecture my kids a lot, and I can go on about life's lessons, and I'm not sure if it's the same thing, but yeah. >> I don't think so. >> No, they have never seen me. >> Peter, you've got to get the awards for the kids. That's the secret to success, you know, the captain awards and the community household awards for the kids. >> Yeah, well I am grooming my second daughter, she teaches Go to afterschool kids, and I never thought she would be interested in programming, 'cause when she was younger she wasn't interested, but yes, she's super interested now, so I'm going to bring her into the community, yeah. >> All right, well, great awards. Jenny, are there any more awards, are we good on the awards? >> Nope, we are good on the awards, but certainly not the thank yous for today. It's an absolute honor to put on an event like this and have the community show up, have our speakers show up, have the Docker team show up, right? And I'm just really thrilled, and I think the feedback has been phenomenal so far. And so I just really want to thank our speakers and our sponsors, and know that, you know, while DockerCon may be over, what we did here today never ends. So, thank you, let's continue the conversation. There's still things going on, and tons of sessions on demand now, you can catch up, okay. >> One more thing, I have to remind everybody. I mentioned it earlier, but I've got to say it again: go back, watch the keynote. And I'll say at this time there is an Easter egg in there. I don't think anybody's found it yet. But if you do, tweet me, and there might be a surprise. >> Well you guys-- >> Are you watching your tweet feed right now? Because you're going to get quite a few. >> Yeah, it's probably blowing up right now. >> Well you've got to get on a keynote deck for sure. Guys, it's been great, you guys have been phenomenal. It's been a great partnership, the co-creation of this event.
And again, what blows me away is the global reach of the event, the interaction, the engagement, and the cost was zero to attend. And that's all possible because of the sponsors. Again, shout out to Amazon Web Services, Microsoft Azure, NGINX, Cockroach Labs and Snyk, the Platinum sponsors. And also we had some ecosystem sponsors. And if you liked the event, go to the sponsors and say hello and say thank you. They're all listed on the page, hit their sessions, and they really make it possible. So, all this effort on all sides has been great. So, awesome, I learned a lot. Thanks everyone for watching. Peter, you want to give a final word, and then I'll give Jenny the final, final word. >> No again, yes, thank you, thank you everybody. It's been great, theCUBE has been phenomenal. The people behind the scenes have been just utterly professional. And thank you Jenny, if anybody doesn't know, you guys don't know how much Jenny shepherds this whole process through. She's our captain internally, making sure everything stays on track and gets done. You cannot even imagine what she does. It's incredible, so thank you, Jenny. I really, really appreciate it. >> Jenny, take us home, wrap this up, DockerCon 2020. >> All right. >> In the books, but it's going to be on demand. It's 365 days a year now, come on, final word. >> It's not over, it's not over. Community, we will see you tomorrow. We will continue to see you, thank you to everyone. I had a great day, I hope everyone else did too. And happy DockerCon 2020, see you next year. >> Okay, that's a wrap, see you on the internet, everyone. I'm John, for Jenny and Peter, thank you so much for your time and attention throughout the day. If you were coming in and out, remember, those sessions were on a calendar, but now they're a catalog of content to consume. Have a great evening. Thanks for watching. (upbeat music)
Docker Algeria | ORGANIZATION | 0.78+ |
Simon Taylor, HYCU | CUBE Conversation March 2020
>> Announcer: From theCUBE Studios (upbeat music) in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to a special CUBE Conversation. I'm Stu Miniman, coming to you from our Boston Area studio, and today, March 31st, 2020, is World Backup Day. Joining me is one of our CUBE alumni, Simon Taylor, who's the CEO of HYCU. Simon, we had you a couple weeks ago in our studio, of course. Today we have you joining us remotely. Thank you so much for joining us, and great to see you. >> Great to see you, as well Stu, as always. >> So, there's certain dates that everybody circles on the calendar and gets ready. In your industry, I have to imagine, normally World Backup Day would be a huge party, cake, and everything like that. Just, in all jest, it's my understanding, HYCU did not create World Backup Day, but it is gratuitous to talk about that, and it's something that's been around for a few years. Let's start there, thank you for joining. >> Yeah, absolutely Stu, and again, thanks for having me back on. You're going to get sick of having me on your wonderful program if we keep visiting too much, but I do appreciate it. I'm actually calling in today, as you can imagine, from Vermont, as we sort of escape the city here and get out in the countryside away from all the hectic and the crisis. You know, it's speaking of the crisis, I think that World Backup Day really could not have come at a better time, in some senses. It's so important, and I think it was created to help the world remember that their data loss is such a major issue, and that if we don't watch out for our data, if we don't backup our data, if we don't put proper data protection practices into place, we really can have a problem. 
And I think when we are moving to work-from-home environments, when we are moving out of the office, when the world is in such a state of flux, as it is today with Coronavirus, these are the moments when you want to really know that your data is protected, that you're safe. You know, we're seeing a rise in ransomware attacks. We're seeing all sorts of things that are tangential to the crisis with COVID-19, and I think, you know, us all taking a moment to kind of realize what an issue data loss is in the world, there just couldn't be a more important time to do that than during a crisis like this one. >> Yeah, Simon, unfortunately, it's scary times also for the IT department because bad actors are definitely making even more attacks right now, in the midst of the global pandemic, something that people are concerned about. When I've been looking out at the community, there's been conversations about, you know, "What does this mean for digital transformation, "and cloud adoption?" And some of the things I'm hearing, especially over in Europe, is there were certain companies, and if you look at certain countries, take Germany for one example, where they might have been a little bit slow to say, "Uh, I'm not sure "if I want to do the cloud." Well, if everybody's working from home for a little bit, and IT needs to keep the business running, there's been a push even faster to the cloud, and one of the main things we're going to talk about today is your partnership with Microsoft Azure. So love to hear what you're hearing from your customers out there, especially ones that normally it's, "Oh, I'm going to make my plan," and we know how fast, or slow, the enterprise normally moves, and now there's a little bit of an acceleration to say, "Hey, we need to get involved in the cloud, and my backup, my data protection is absolutely even more critical when I go to the public cloud." >> Oh my gosh, Stu, absolutely. You're right on all counts. 
I mean, one of the most horrific issues that we're seeing over, and over, and over again with our customers, and it's such a shame, is that there are bad actors out there. There are bad individuals out there who are trying to take advantage of this crisis, and what they're doing is they're understanding that they can now exploit the fact there's so much work from home. We're seeing more man-in-the-middle attacks. We're seeing more customers who are calling us up and saying, "I've just been hit with a ransomware, "all my data's locked down. "Somebody didn't follow protocol when they were working "from home, and boom, all of the sudden we're being asked "to pay a million dollars in bitcoin, "and what do I do?" I'm really, really proud of my team for stepping up during the crisis. We've actually seen more than 10 different customers, just in the last month, who've called us up and have said, "You know, I'm supposed to pay this bitcoin ransom. "Can you get my data back?" In all 10 out of 10 cases, because of how natively integrated HYCU is into the platforms we support, we were actually able to recover that data within the next few hours to days, get it back for them before they had to pay out those ransoms. So again, not just a plug for HYCU, a plug for backup and recovery in general. A plug for everybody who's thinking about, "How do I keep myself safe when I'm moving to the cloud?" Absolutely, this is the time to keep yourself safe with proper data protection strategies. You know, I think the second thing that you bring up, very rightly so, is that there were a lot of countries, Germany's one of them but there's many, who had sort of been on the back foot during this crisis, and had always expected that on-prem was going to be a majority of their infrastructure for the foreseeable future. They were all dipping their toe in the cloud water, the cloud pond, as it were, but you didn't see a lot of folks in Europe who were 100% committing to cloud. 
Well, wow, has that changed. You know, as we moved to work from home, you need a lot of that dynamic scaling that only true cloud environments, public cloud, can provide. But I think the second thing, and maybe more importantly, is we don't want to see our IT departments having to go into the office. We don't want to see them having to put themselves, and potentially their families, at risk simply to go in and manage data. So being able to work off of infrastructure-as-a-service is hugely critical during the crisis, and the fact that HYCU is a natively integrated service into those different enterprise and public clouds means that you can do all of it remotely. And I think this is where HYCU's simplicity, a pillar and a guiding principle for the company, has become so important. I can't tell you how many customers have called us up and said, "I wanted to be at home with my family. "The other backup companies, the legacy deployments we had, "just simply wouldn't have allowed me to stay at home. "I would have needed to go back to the office. "Do you have something that's as-a-service?" That certainly brings me to, I think, your third point, which is we are absolutely thrilled here today, on World Backup Day, in the midst of this crisis, to be announcing the launch of HYCU for Azure. And again, this is a natively integrated service: customers can literally skip the VPN to their data center, go directly to their cloud, go directly to Azure, turn on HYCU for Azure in the marketplace, and boom, you're going to have all of that wonderful, natively integrated, purpose-built backup and recovery as a service. You're going to have all of that application support. You're going to have all of the things you've become, sort of, used to, Stu, when we talk about HYCU, natively integrated into Azure as well.
And again, I think because of this crisis, because we want people to stay at home, we want to flatten that curve, the fact that we've got this new service for Azure, which is so important for everybody, I think is just critical at this particular time. >> Yeah, definitely hugely important. We've been talking to you and HYCU for a number of years, Simon. Of course, you started out very focused on, really, a Nutanix environment, broadened out to really the virtualization environment, and you're really going with your customers heavily into a cloud environment. So, Azure, really important. When I was at Microsoft Ignite last year, with CEO Satya Nadella, I could sum up his main theme in one word, and that was trust. So, number one, it was a knock against a certain company in the cloud that mainly drives its revenue from ads, but when he talked about customers and partners, he wanted Microsoft to really be the company that people trust in that environment. We've seen Microsoft, one of the biggest movers from an application standpoint, the real push to Office 365, get people to really embrace and trust SaaS. Would love to hear about the early customers you've been working with for this announcement, why it's so important that HYCU is not only supporting and integrating with Azure, but in the Azure marketplace, and what your customers are telling you. >> Gosh, that's such a great question, Stu. You know, first, just talking about Satya Nadella, I really think the world of him. I think he does truly believe it, when he, you know, if you've read his book, "Hit Refresh", you do start to see a man who truly cares, not just about the bottom line, or even the top line, for that matter, but really strives to drive real customer value. And I think one of the things he really did at Microsoft is he talks a lot about how the solutions that they're selling have real world effects across so many different industries.
It's the net result of the technology that I think he cares about, as opposed to just the sum of its parts. So it's really, really interesting, I think, when we think about him and his leadership style, to think about how HYCU kind of fits into that. And you know, one of the things I'm really proud to announce is that here, on World Backup Day, in the midst of this horrible pandemic, what we're doing, as we launch HYCU for Azure, is we're actually going to give it entirely free, no cost, no strings attached, to the entire world for the next three months. And the reason we're choosing to do that is we believe that data protection is so important, that in a situation like this, it's incredibly important that people don't take life-threatening risks, things that could threaten not only them, but their families, by going in to the office to do this. You know, and I think one of the great things about HYCU, and also HYCU with Protege, our multi-cloud data management platform, is that you can now migrate your data from on-prem to the cloud with the touch of a button from home. You can literally go sign up, it's free of charge for all the HYCU backup you want for the next three months. You know, get on there and protect your data, that's number one. You know, that adds real value to customers in the midst of this crisis. Number two is you can use HYCU Protege to then migrate entire workloads, keep them safe, whether it's applications, whether it's databases. You know, we want customers to know that they can trust a third party, like HYCU, to be able to automate the process of migration to the cloud, and then you know, in the midst of a crisis like this everybody's thinking about disaster recovery. Well, guess what? We can even move data back on-prem, using a runbook, and we can actually drive true disaster recovery preparedness, as well, all for Azure customers.
You know, Stu, you and I were talking about this offline a few minutes ago, but the reality is, we've interviewed our customers, and 72% of them, and that's in 71 countries now around the world, funny enough, but 72% of those customers are Azure customers, as well. So when we talk about our on-prem business, 72% of our on-prem business is also using Azure. So the ability to dynamically move these workloads to the cloud, move them back again for DR, as well as protect that data wherever it's sitting, and do all of that from home, with simplicity, and for the next three months at no cost, I think that's how we're trying to drive value and trust into the Azure marketplace. >> Yeah, first of all, Simon, that's really a lot of good pieces here. It almost becomes a little bit trite when we talk about, "Oh, well, I want to build optionality "into my product, I want to be ready to change "and adjust things." But the environment and landscape that we're living with today is, we understand, companies need to be able to react really fast, and they need to be able to adjust to this changing landscape. So what they're doing last month, versus what they're doing today, versus what they might be doing in a couple of months, you know, I don't want to get locked in. I don't want to make any big decision. So therefore, it's great to see you're giving customers flexibility there. They've got both the free usage of the software, but also that migration built in to HYCU Protege. You know, how do I move my data around? How do I make sure it's still protected? So important that, you know, we've been talking for years about the ability to make changes fast and to move with speed, but you know, I think today's landscape really just puts a fine point on it (laughs), you know, we've been planning for this, in some ways, and this might have been the exact thing we're planning for, but this is the reason that this technology's so important. >> It's so well said, Stu.
I mean, honestly, just like you said, we started out with purpose-built backup and recovery for Nutanix, and then we added GCP, we added VMware, now of course, we're launching Azure, but in each case, we said it's got to be natively integrated, it's got to be super simple, we've got to automate every process we can. We want to make sure that customers can wake up in the morning, log in to their cloud infrastructure, whether it's GCP, or now Azure, you know, and turn this on as a service. We always say, "There's nothing to download "when it's a true service," right? And I think that's so important now. It used to be kind of a talking point, but I think now people are really seeing the true value, which is when you don't need to go in to your data center, when you don't need to VPN in, when you don't need to figure out all the rest of this architecture. Well, when people are moving enormous amounts of data, and buying so much VDI, and deploying all these work-from-home modules to, sort of, protect their infrastructure, and create an environment that works for the current conditions, the last thing they should have to do is put themselves at risk for the backup. I think because this is purpose-built, because it's a true service, because it's a natural extension, really, of the cloud provider they've chosen, or multiple, I think we make that really, really easy for customers, and we're very proud of the work we've done on that front.
You know, people are feeling this sort of potential for a really, really systemic downturn in the economy, but at the same time, there are really urgent needs in terms of acquiring mission-critical infrastructure to support the move to work from home. And I think that's caused massive shifts in the way people are thinking about purchasing technology, and specifically infrastructure technology, in the marketplace. People truly want services now. You know, before it was something that maybe drove the valuation of a company, et cetera, et cetera, but now people are saying, "Hey, it's nothing to do with that at all. "I just want a service that I can scale up, "and I can scale down. "I want it now, I want it fast, and I want it simple." So I think anything that's natively integrated, and is acting as a true SaaS offering, has a real advantage in today's marketplace. I think the second thing is, that as customers are moving in droves to VDI, you know, I think there's a lot of talk right now about whether it's ever going to move 100% back. I think as people are discovering how effective and powerful we can be as we work from home, I mean, Stu, look at us right now having this very conversation, I think it's amazing what we're able to achieve with the technology that's out there, and I think that's really reduced the panic. And I think it's something that people aren't talking about: imagine what the panic would have been if we didn't have Zoom, if we weren't able to do a GoToMeeting, if we weren't able to log in with a VPN and access our infrastructure. I mean, the entire world would have shut down like this. Now there's arguments being made that it may still shut down, et cetera, but you know, we have at least delayed that process. I think we've created a lot of support for the economy and the environment through all of the technology that the marketplace is presenting to customers.
And I think the next step in that is making sure that we recognize a couple of things. You know, we're seeing, again, a huge rise in ransomware attacks. There are many, many bad actors out there looking to exploit and take advantage of this situation, which is why I say you don't need to buy our product, but please, if you've got Azure, go and turn on the backup. Well, why wouldn't you? Protect your data, make sure it's recoverable. God forbid anything bad happens, or you do get attacked, make sure you can get that data back from a third party. Make sure it's really easy to recover. Make sure all your mission-critical applications and databases are supported. And I think if we do those things, and we work together to protect our customers, and for just a very short period of time, really don't worry so much about how much money we're going to make off of them, but think about how to protect them, truly, I think that's where the value is, and I think that's how we as human beings, can sort of do a better job of protecting each other. >> All right, well, Simon, thank you so much for all the updates. Happy World Backup Day. You know, I definitely look forward to chatting with you soon, and thanks for joining, and please be safe. >> Stu, always a pleasure. Please stay healthy, as well, take care. >> All right, I'm Stu Miniman, and you've been watching theCUBE here with some of our remote interviews. Check out thecube.net for everything online, and thank you for watching theCUBE. (upbeat digital music)
Vertica in Eon Mode: Past, Present, and Future
>> Paige: Hello everybody and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled Vertica in Eon Mode: Past, Present and Future. I'm Paige Roberts, open source relations manager at Vertica, and I'll be your host for this session. Joining me is Vertica engineer, Yuanzhe Bei, and Vertica Product Manager, David Sprogis. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait till the end. Just type your question or comment as you think of it in the question box below the slides and click Submit. We'll hold a Q&A session at the end of the presentation. We'll answer as many of your questions as we're able to during that time, and any questions that we don't address, we'll do our best to answer offline. If you wish, after the presentation you can visit the Vertica forums to post your questions there, and our engineering team is planning to join the forums to keep the conversation going, just like the Dev Lounge at a normal, in-person BDC. So, as a reminder, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides, if you want to see them bigger. And yes, before you ask, this virtual session is being recorded and will be available to view on demand this week. We are supposed to send you a notification as soon as it's ready. All right, let's get started. Over to you, Dave. >> David: Thanks, Paige. Hey, everybody. Let's start with a timeline of the life of Eon Mode. About two years ago, a little bit less than two years ago, we introduced Eon Mode on AWS, pretty specifically for the purpose of rapid scaling to meet the cloud economics promise.
It wasn't long after that we realized that workload isolation, a byproduct of the architecture, was very important to our users, and going to the third tick, you can see that the importance of that workload isolation was manifest in Eon Mode being made available on-premise using Pure Storage FlashBlade. Moving to the fourth tick mark, we took steps to improve workload isolation, with a new type of subcluster which Yuanzhe will go through, and to the fifth tick mark, the introduction of secondary subclusters for faster scaling, and other improvements which we will cover in the slides to come. Getting started with why we created Eon Mode in the first place. Let's imagine that your database is this pie, the pecan pie, and we're loading pecan data in through the ETL cutting board in the upper left hand corner. We have a couple of free floating pecans, which we might imagine to be data supporting external tables. As you know, Vertica has a query engine capability as well, which we call external tables. And so if we imagine this pie, we want to serve it with a number of servers. Well, let's say we wanted to serve it with three servers, three nodes: we would need to slice that pie into three segments, and we would serve each one of those segments from one of our nodes. Now because the data is important to us and we don't want to lose it, we're going to be saving that data on some kind of RAID storage or redundant storage. In case one of the drives goes bad, the data remains available because of the durability of RAID. Imagine also that we care about the availability of the overall database. Imagine that a node goes down, perhaps the second node goes down; we still want to be able to query our data, and through nodes one and three, we still have all three shards covered, and we can do this because of buddy projections. Each neighbor, each node's neighbor, contains a copy of the data from the node next to it. And so in this case, node one is sharing its segment with node two.
So node two can cover node one, node three can cover node two and node one back to node three. Adding a little bit more complexity, we might store the data in different copies, each copy sorted for a different kind of query. We call this projections in Vertica and for each projection, we have another copy of the data sorted differently. Now it gets complex. What happens when we want to add a node? Well, if we wanted to add a fourth node here, what we would have to do, is figure out how to re-slice all of the data in all of the copies that we have. In effect, what we want to do is take our three slices and slice it into four, which means taking a portion of each of our existing thirds and re-segmenting into quarters. Now that looks simple in the graphic here, but when it comes to moving data around, it becomes quite complex because for each copy of each segment we need to replace it and move that data on to the new node. What's more, the fourth node can't have a copy of itself that would be problematic in case it went down. Instead, what we need is we need that buddy to be sitting on another node, a neighboring node. So we need to re-orient the buddies as well. All of this takes a lot of time, it can take 12, 24 or even 36 hours in a period when you do not want your database under high demand. In fact, you may want to stop loading data altogether in order to speed it up. This is a planned event and your applications should probably be down during this period, which makes it difficult. With the advent of cloud computing, we saw that services were coming up and down faster and we determined to re-architect Vertica in a way to accommodate that rapid scaling. Let's see how we did it. So let's start with four nodes now and we've got our four nodes database. Let's add communal storage and move each of the segments of data into communal storage. Now that's the separation that we're talking about. What happens if we run queries against it? 
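Before the talk turns to how queries run against communal storage, the re-segmentation cost described above can be sketched in a few lines. This is a toy illustration, not Vertica's actual segmentation scheme: it assigns rows to segments by simple modulo hashing and counts how many rows would have to relocate when a fourth node is added.

```python
def segment(key: int, n_segments: int) -> int:
    # Toy segmentation: assign a row's key to a segment by modulo.
    return key % n_segments

keys = range(10_000)
# A row must move whenever its segment under 3-way slicing differs
# from its segment under 4-way slicing.
moved = sum(1 for k in keys if segment(k, 3) != segment(k, 4))
print(f"{moved / 10_000:.0%} of rows change segments")  # roughly 75%
```

With this naive scheme, about three quarters of the rows land on a different segment, and every copy of every projection has to be rewritten accordingly, which is why re-segmenting a large Enterprise Mode cluster can take 12 to 36 hours.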
Well, it turns out that the communal storage is not necessarily performant, and so the IO would be slow, which would make the overall queries slow. In order to compensate for the low performance of communal storage, we need to add back local storage. Now, it doesn't have to be RAID because this is just an ephemeral copy, but with the data files local to the node, the queries will run much faster. In AWS, communal storage really does mean an S3 bucket, and here's a simplified version of the diagram. Now, do we need to store all of the data from the segment in the depot? The answer is no, and the graphic inside the bucket has changed to reflect that. It looks more like a bullseye, showing just a segment of the data being copied to the cache, or to the depot, as we call it, on each one of the nodes. How much data do you store on the node? Well, it would be the active data set: the last 30 days, the last 30 minutes, or whatever period of time you're working with. The active working set is the hot data, and that's how large you want to size your depot. By architecting this way, when you scale up, you're not re-segmenting the database. What you're doing is adding more compute and more subscriptions to the existing shards of the existing database. So in this case, we've added a complete set of four nodes. So we've doubled our capacity and we've doubled our subscriptions, which means that now two nodes can serve the yellow shard, two nodes can serve the red shard, and so on. In this way, we're able to run twice as many queries in the same amount of time. So you're doubling the concurrency. How high can you scale? Well, can you scale to 3X, 5X? We tested this in the graphics on the right, which shows concurrent users on the X axis by the number of queries executed in a minute along the Y axis. We've grouped execution in runs of 10 users, 30 users, 50, 70, up to 150 users. Now focusing on any one of these groups, particularly up around 150.
You can see through the three bars, starting with the bright purple bar: three nodes and three segments. As you add nodes, to the middle purple bar, six nodes and three segments, you've almost doubled your throughput, up to the dark purple bar, which is nine nodes and three segments, and our tests show that you can go to 5X with a pretty linear performance increase. Beyond that, you do continue to get an increase in performance, but your incremental performance begins to fall off. Eon architecture does something else for us, and that is it provides high availability, because each of the nodes can be thought of as ephemeral, and in fact, each node has a buddy subscription in a way similar to the prior architecture. So if we lose node four, we're losing the node responsible for the red shard, and now node one has to pick up responsibility for the red shard while that node is down. When a query comes in, and let's say it comes into node one and node one is the initiator, then it will look for participants. It'll find a blue shard and a green shard, but when it's looking for the red, it finds itself, and so node number one will be doing double duty. This means that your performance will be cut approximately in half for the query. This is acceptable until you are able to restore the node. Once you restore it, and once the depot becomes rehydrated, then your performance goes back to normal. So this is a much simpler way to recover nodes in the event of node failure. By comparison, in Enterprise Mode, the older architecture, when we lose the fourth node, node one takes over responsibility for the first shard, the yellow shard, and the red shard. But it is also responsible for rehydrating the entire data segment of the red shard to node four. This can be very time consuming and imposes even more stress on the first node, so performance will go down even further. Eon Mode has another feature, and that is you can scale down completely to zero.
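Before the talk moves on to hibernating the cluster entirely, the buddy-subscription failover described above can be sketched as a toy model. The layout here is an illustrative simplification (four nodes, four shards, zero-based numbering), not Vertica's actual subscription logic.

```python
def subscriptions(node: int) -> set:
    # Each node subscribes to its own shard plus a "buddy" copy of the
    # shard owned by its ring neighbor (a simplified Eon layout).
    return {node, (node - 1) % 4}

def coverage(up_nodes):
    # For each shard, pick the first live node holding a subscription to it.
    return {shard: next(n for n in up_nodes if shard in subscriptions(n))
            for shard in range(4)}

plan = coverage([0, 1, 2])   # node 3, owner of shard 3 (the "red" one), is down
print(plan)                  # {0: 0, 1: 1, 2: 2, 3: 0} -- node 0 does double duty
```

Every shard stays covered, but the buddy node now serves two shards, which is why query performance roughly halves until the failed node is restored and its depot rehydrated.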
We call this hibernation. You shut down your database, and your database will maintain full consistency in a rest state in your S3 bucket, and then when you need access to your database again, you simply recreate your cluster and revive your database, and you can access your database once again. That concludes the rapid scaling portion of why we created Eon Mode. To take us through workload isolation is Yuanzhe Bei. Yuanzhe? >> Yuanzhe: Thanks Dave, for presenting how Eon works in general. In the next section, I will show you another important capability of Vertica Eon Mode: workload isolation. Dave used a pecan pie as an example of a database. Now let's say it's time for the main course. Does anyone still have a problem with food touching on their plates? Parents know that it's a common problem for kids. Well, we have a similar problem in databases as well. There could be multiple different workloads accessing your database at the same time. Say you have ETL jobs running regularly, while at the same time there are dashboards running short queries against your data. You may also have the end-of-month report running, and there can be ad hoc data scientists connecting to the database to do whatever data analysis they want, and so on. How to make these mixed workloads not interfere with each other is a real challenge for many DBAs. Vertica Eon Mode provides you the solution. I'm very excited to introduce you to an important concept in Eon Mode called subclusters. In Eon Mode, nodes belong to predefined subclusters rather than the whole cluster. DBAs can define different subclusters for different kinds of workloads and redirect those workloads to the specific subclusters. For example, you can have an ETL subcluster, a dashboard subcluster, a report subcluster, and an analytic machine learning subcluster. Eon Mode subclusters are designed to achieve three main goals. First of all, strong workload isolation.
That means any operation in one subcluster should not affect or be affected by other subclusters. For example, say the subcluster running the reports is quite overloaded, and at the same time the data scientists are running heavy analytic and machine learning jobs on the analytics subcluster, making it very slow, even stuck or crashed. In such a scenario, your ETL and dashboard subclusters should not be impacted, or at the very least should be only minimally impacted, by this crisis, which means your ETL jobs should not lag behind and your dashboards should respond in a timely manner. We have done a lot of improvements in this category as of the 10.0 release and will continue to deliver more. Secondly, fully customized subcluster settings. That means any subcluster can be set up and tuned for very different workloads without affecting other subclusters. Users should be able to tune certain parameters up or down based on the actual needs of the individual subcluster's workload requirements. As of today, Vertica already supports a few settings that can be done at the subcluster level, for example the depot pinning policy, and we will continue extending this to things like resource pools (mumbles) in the near future. Lastly, Vertica subclusters should be easy to operate and cost efficient. What that means is that subclusters should be easy to turn on, turn off, add, or remove, and should be available for use according to rapidly changing workloads. Let's say in this case, you want to spin up more dashboard subclusters because you need to run more reports; we can do that. You might need to run several report subclusters because you might want to run multiple reports at the same time. On the other hand, you can shut down your analytic machine learning subcluster because no data scientists need to use it at this moment.
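This per-workload isolation idea can be sketched in a few lines. The subcluster names and the router below are made up for illustration; this is not Vertica's API, just the shape of the idea: each workload type has its own subcluster, and stopping one never touches the others.

```python
# Hypothetical registry of subclusters, one per workload type.
subclusters = {
    "etl":       {"running": True},
    "dashboard": {"running": True},
    "analytics": {"running": True},
}

def route(workload: str) -> str:
    # Send each workload only to its dedicated subcluster; a stopped
    # subcluster never receives work, and other workloads are unaffected.
    if not subclusters[workload]["running"]:
        raise RuntimeError(f"subcluster '{workload}' is stopped")
    return workload

subclusters["analytics"]["running"] = False   # nobody needs it right now
print(route("etl"))   # ETL still runs; the analytics shutdown did not affect it
```

Because routing is strictly by workload type, shutting down the analytics subcluster to save cost cannot slow down or break ETL or dashboards, which is exactly the isolation guarantee described above.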
So we have automated a lot of the changes and improvements in this category, which I'll explain in detail later, and one of the ultimate goals is to support auto scaling. To sum up, what we really want to deliver for subclusters is very simple. You just need to remember that accessing subclusters should be just like accessing individual clusters. Well, these subclusters do share the same catalog, so you don't have to worry about stale data and don't need to worry about data synchronization. That's a nice goal, and Vertica's upcoming 10.0 release is certainly a milestone towards it, which will deliver a large part of the capability in this direction, and we will continue to improve it after the 10.0 release. In the next couple of slides, I will highlight some issues with workload isolation in the initial Eon release and show you how we resolved them. The first issue: when we initially released our first, so-called subcluster mode, it was implemented using fault groups. Well, fault groups and subclusters have something in common: yes, they are both defined as a set of nodes. However, they are very different in all the other ways. So, that was very confusing in the first place, when we implemented this. As of version 9.3.0, we decided to detach the subcluster definition from fault groups, which enabled us to further extend the capability of subclusters. Fault groups in pre-9.3.0 versions will be converted into subclusters during the upgrade, and this was a very important step that enabled us to provide all of the amazing improvements on subclusters that follow. The second issue in the past was that it was hard to control the execution groups for different types of workloads. There are two types of problems here, and I will use some examples to explain. The first is about controlling group size.
Say you allocate six nodes for your dashboard subcluster, and what you really want is what's on the left: three pairs of nodes forming three execution groups, with each pair of nodes subscribing to all four shards. However, that's not what you get. What you actually get is on the right side: the first four nodes subscribe to one shard each, and the remaining two nodes subscribe to two dangling shards. So you don't really get three execution groups; you only get one, and the two extra nodes add no value at all. The solution is to use subclusters. Instead of having one subcluster with six nodes, you can split it into three smaller ones. Each subcluster is guaranteed to subscribe to all the shards, and you can then load balance across these three subclusters. In this way you achieve three real execution groups. The second issue is that session participation is non-deterministic. Any session will just pick four random nodes from the subcluster, as long as they cover one shard each. In other words, you don't really know which set of nodes will make up your execution group. Why is that a problem? In this case, the fourth node is double-booked by two concurrent sessions. You can imagine that resource usage becomes imbalanced and the performance of both queries suffers. What is even worse is when the queries of the two concurrent sessions target different tables: depot efficiency is reduced, because both sessions try to fetch the files of the two tables into the same depot, and if your depot is not large enough, they will evict each other, which is very bad. You can solve this the same way, by declaring subclusters, in this case two subclusters with a load balancer group across them. The reason this solves the problem is that session participation does not cross the subcluster boundary.
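The shard-coverage arithmetic above can be sketched in a few lines. This is a toy model, not Vertica's actual subscription algorithm; the node and shard counts are the illustrative figures from the slide (six nodes, four shards).

```python
# Toy sketch of why one 6-node subcluster over 4 shards yields a single
# execution group, while three 2-node subclusters yield three.
# (Illustrative only -- not Vertica's real subscription logic.)

def cyclic_subscriptions(nodes, num_shards):
    """One big subcluster: shards spread over nodes cyclically, one per node."""
    return {n: [n % num_shards] for n in range(nodes)}

def subcluster_subscriptions(nodes_per_sc, num_shards):
    """Small subcluster: nodes split the shards so the group covers them all."""
    shards = list(range(num_shards))
    per_node = num_shards // nodes_per_sc
    return {n: shards[n * per_node:(n + 1) * per_node]
            for n in range(nodes_per_sc)}

def execution_groups(subs, num_shards):
    """Count disjoint node sets whose combined subscriptions cover every shard."""
    groups, covered = 0, set()
    for shard_list in subs.values():
        covered.update(shard_list)
        if covered == set(range(num_shards)):
            groups += 1
            covered = set()
    return groups

# One 6-node subcluster over 4 shards: only one full execution group,
# and the last two nodes' dangling subscriptions never complete a second.
print(execution_groups(cyclic_subscriptions(6, 4), 4))   # 1
# Three 2-node subclusters: each one is a complete execution group.
print(sum(execution_groups(subcluster_subscriptions(2, 4), 4)
          for _ in range(3)))                            # 3
```

The point of the split is exactly this guarantee: each small subcluster covers all shards by construction, so a load balancer in front of them gets three independent execution groups instead of one.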
So there won't be a case where any node is double-booked. As for the depot: if you use subclusters, avoid the load balancer group, and carefully send the first workload to the first subcluster and the second to the second subcluster, the result is that depot isolation is achieved. The first subcluster will maintain the data files for the first query, and you don't need to worry about those files being evicted by the second kind of session. Here comes the next issue: scaling down. In the old way of defining subclusters, you may have several execution groups in one subcluster, and you want to shut down one or two of them to save cost. Here comes the pain: because you don't know which nodes may be used by which session at any point, it is hard to find the right time to hit the shutdown button on any of the instances. If you do and get unlucky, say in this case you pull down the first four nodes, one of the sessions will fail because it is using node two and node four at that point. The user of that session will notice because their query fails, and we know that for many businesses this is a critical problem and not acceptable. Again, with subclusters this problem is resolved, for the same reason: a session cannot cross the subcluster boundary. All you need to do is first stop sending queries to the first subcluster, and then you can shut down the instances in that subcluster. You are guaranteed not to break any running sessions. Now you're happy and you want to shut down more subclusters, and then you hit issue four: the whole cluster goes down. Why? Because the cluster loses quorum. As a distributed system, Vertica needs more than half of the nodes to be up in order to commit and keep the cluster up. This prevents catalog divergence, which is important. But do you still want to shut down those nodes?
After all, what's the point of keeping those nodes up, costing you money, if you are not using them? So Vertica has a solution: you can define a subcluster as secondary to allow it to shut down without worrying about quorum. In this case, you can define the first three subclusters as secondary and the fourth one as primary. By doing so, the secondary subclusters are no longer counted towards quorum, because we changed the rule: instead of requiring more than half of all nodes to be up, it only requires more than half of the primary nodes to be up. Now you can shut down your second subcluster, and even shut down your third subcluster as well, and the remaining primary subcluster keeps running healthily. There are actually more benefits to defining secondary subclusters beyond the quorum concern. Because secondary subclusters no longer have voting power, they don't need to persist the catalog anymore. This means those nodes are faster to deploy and can be dropped and re-added without worrying about catalog persistence. For subclusters that only need to run read-only queries, it is best practice to define them as secondary. Commits will also be faster on a secondary subcluster, so queries running there will see fewer spikes. The primary subcluster, as usual, handles everything: it is responsible for consistency and runs the background tasks, so DBAs should make sure the primary subcluster is stable and assumed to be running all the time. Of course, you need at least one primary subcluster in your database. Now, with secondary subclusters that users can start and stop as they need, which is very convenient, another issue comes up: if an ETL transaction is running and, in the middle of it, a subcluster starts and comes up, in older versions there was no catalog resync mechanism to bring the new subcluster up to date.
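The quorum arithmetic just described is simple enough to sketch. The node counts below are illustrative (a 16-node cluster where only 4 nodes are primary), not taken from any particular deployment.

```python
# Minimal sketch of the primary-only quorum rule described above.
# (Illustrative figures: 16 nodes total, 4 of them primary.)

def has_quorum(up, voting_total):
    """Quorum requires strictly more than half of the voting nodes to be up."""
    return up > voting_total // 2

PRIMARY, SECONDARY = 4, 12

# Old rule: every node votes, so stopping all 12 secondaries loses quorum.
print(has_quorum(up=PRIMARY, voting_total=PRIMARY + SECONDARY))  # False
# New rule: only primary nodes vote, so the database stays up.
print(has_quorum(up=PRIMARY, voting_total=PRIMARY))              # True
```

Note that exactly half is not enough: with 4 primary nodes, at least 3 must stay up, which is why the talk stresses keeping the primary subcluster stable and always running.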
When that happens, Vertica rolls back the ETL session to keep the data consistent. This is actually quite disruptive, because real-world ETL workloads can sometimes take hours, and rolling back at the end means a large waste of resources. We resolved this issue in the 9.3.1 version by introducing a catalog resync mechanism for exactly this situation: ETL transactions no longer roll back, but instead take some time to resync the catalog and then commit, and the problem is resolved. The last issue I would like to talk about is subscriptions. Especially for a large subcluster, the startup time used to be quite long, because the subscription commits were serialized. In one of our internal tests with a large catalog, committing a subscription could take five minutes. A secondary subcluster is better, because it doesn't need to persist the catalog during the commit, but each commit still takes about two seconds. So what's the problem? Let's do the math and look at this chart. The X axis is the time in minutes and the Y axis is the number of nodes subscribed. The dark blue represents your primary subcluster and the light blue represents the secondary subcluster. Say the subcluster has 16 nodes in total. If you start a secondary subcluster, it will spend about 30 seconds in total, because 2 seconds times 16 is 32. That's not actually a long time, but if you expect a secondary subcluster to start super fast to react to rapidly changing workloads, 30 seconds is no longer trivial. What is even worse is the primary subcluster side. Let's assume each commit takes five minutes: by the time you are committing the sixth node's subscription, all the other nodes have already waited 30 minutes for the GCLX, the global catalog lock, and Vertica will crash a node if it cannot get the GCLX for 30 minutes. So the end result is that your whole database crashes.
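The back-of-the-envelope math above can be written down directly. The per-commit times are the illustrative figures from the talk (about 2 seconds per secondary commit, about 5 minutes per primary commit); the batched figure anticipates the 10.0 fix described next.

```python
# Sketch of the serialized subscription-commit cost described above.
# Figures are the talk's illustrative numbers, not measured benchmarks.

def serial_commit_time(nodes, per_node):
    """Pre-10.0 behavior: subscription commits happen one node at a time."""
    return nodes * per_node

def batched_commit_time(nodes, per_node):
    """10.0 behavior: subscriptions batched, all nodes commit concurrently."""
    return per_node

# Secondary subcluster, 16 nodes at ~2 s per commit:
print(serial_commit_time(16, 2), "seconds serialized")   # 32 seconds serialized
# Primary subcluster at ~5 min per commit: by the sixth node's commit,
# the remaining nodes have waited 30 minutes -- the GCLX crash threshold.
print(serial_commit_time(6, 5), "minutes of waiting")    # 30 minutes of waiting
# Once commits are batched, startup collapses to a single commit's cost:
print(batched_commit_time(16, 2), "seconds batched")     # 2 seconds batched
```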
That's a serious problem, we know it, and that's why we planned the fix for 10.0: all the subscriptions are batched up and all the nodes commit at the same time, concurrently. By doing that, the primary subcluster can finish committing in five minutes instead of crashing, and the secondary subcluster can finish in seconds. That summarizes the highlights of the improvements we have made as of 10.0, and I hope you are already excited about the emerging Eon deployment pattern shown here: a primary subcluster that handles data loading, ETL jobs and Tuple Mover jobs is the backbone of the database, and you keep it running all the time. Alongside it, you define different secondary subclusters for different workloads, provision them when the workload requirement arrives, and de-provision them when the workload is done to save operational cost. So, can't wait to play with subclusters? Here are some Admin Tools commands you can start using, and for more details, check out our Eon subcluster documentation. Thanks everyone for listening, and I'll hand back to Dave to talk about Eon on-prem. >> David: Thanks Yuanzhe. At the same time that Yuanzhe and the rest of the dev team were working on the improvements that Yuanzhe just described, among others, this guy, John Yovanovich, stood on stage and told us about his deployment at AT&T, where he was running Eon Mode on-prem. Now, this was only six months after we had launched Eon Mode on AWS, so when he told us that he was putting it into production on-prem, we nearly fell out of our chairs. How was this possible? We took a look back at Eon and determined that the workload isolation and the improvements to operations, for restoring nodes and other things, had sufficient value that John wanted to run it on-prem. And he was running it on the Pure Storage FlashBlade.
Taking a second look at the FlashBlade, we thought: alright, does it have the performance? Yes, it does. The FlashBlade is a collection of individual blades, each with NVMe storage on it, which is not only performant but scalable. We then asked: is it durable? The answer is yes. Data safety is implemented with N+2 redundancy, which means that up to two blades can fail and the data remains available. With this, we realized DBAs can sleep well at night knowing that their data is safe; after all, Eon Mode outsources durability to the communal storage data store. Does FlashBlade have the capacity for growth? Yes, it does. You can start as low as 120 terabytes and grow as high as about eight petabytes, so it certainly covers the range for most enterprise usages. And operationally, it couldn't be easier to use. When you want to grow your database, you can simply pop new blades into the FlashBlade unit, and you can do that hot. If one goes bad, you can pull it out and replace it hot. So you don't have to take your data store down, and therefore you don't have to take Vertica down. Knowing all of this, we got behind Pure Storage and partnered with them to implement the first version of Eon on-premise. That changed our roadmap a little bit. We had imagined starting with Amazon, then going to Google, then to Azure, and at some point to Alibaba Cloud. But as you can see from the left column, we started with Amazon and went to Pure Storage, and from Pure Storage we went to MinIO, launching Eon Mode on MinIO at the end of last year. MinIO is a little different from Pure Storage: it's software only, so you can run it on pretty much any x86 servers and cluster them with storage to serve up an S3 bucket.
It's a great solution for up to about 120 terabytes. Beyond that, we're not sure about the performance implications because we haven't tested it, but for your dev environments or small production environments, we think it's great. With Vertica 10, we're introducing Eon Mode on Google Cloud. This means not only running Eon Mode in the cloud, but also being able to launch it from the marketplace. We're also offering Eon Mode on HDFS with version 10. If you have a Hadoop environment and you want to breathe fresh life into it with the high performance of Vertica, you can do that starting with version 10. Looking forward, we'll be moving Eon Mode to Microsoft Azure. We expect to have something breathing in the fall, offering it to select customers for beta testing, and then we expect to release it sometime in 2021. Following that, further on the horizon is Alibaba Cloud. To be clear, we will be putting Vertica in Enterprise Mode on Alibaba Cloud in 2020, but Eon Mode is going to trail behind; whether it lands in 2021 or not, we're not quite sure at this point. Our goal is to deliver Eon Mode anywhere you want to run it, on-prem or in the cloud, or both, because the hybrid capability, the ability to run in both your on-prem environment and in the cloud, is one of the great value propositions of Vertica. What's next? I've got three priority and roadmap slides; this is the first of the three. We're going to start with improvements to the core of Vertica, beginning with query crunching, which allows you to run long-running queries faster by getting nodes to collaborate; you'll see that coming very soon. We'll be making improvements to large clusters, and specifically large cluster mode. The management of large clusters, over 60 nodes, can be tedious, and we intend to improve that, in part by creating a third network channel to offload some of the communication that we're now loading onto Spread, our agreement protocol. We'll be improving depot efficiency.
We'll be pushing more controls down to the subcluster level, allowing you to control your resource pools at the subcluster level, and we'll be pairing tuple moving with data loading. From an operational flexibility perspective, we want to make it very easy to shut down and revive primaries and secondaries, on-prem and in the cloud. Right now it's a little tedious, though very doable; we want to make it as easy as a walk in the park. We also want to allow you to revive into a different-size subcluster. And last but not least, in fact probably most important, the ability to change shard count. This has been a sticking point for a lot of people, and it puts a lot of pressure on the early decision of how many shards your database should have. Whether it lands in 2020 or 2021, we know it's important to you, so it's important to us. Ease of use is also important to us, and we're making big investments in the Management Console to improve managing subclusters, as well as to help you manage your load balancer groups. We also intend to grow and extend Eon Mode to new environments. Now we'll take questions and answers.
Vertica Big Data Conference Keynote
>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference keynote session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number, who have registered to attend this virtual event. We were determined to balance your health, safety and peace of mind with the excitement of the Vertica BDC. This is a very unique event, because as I hope you all know, we focus on engineering and architecture, best-practice sharing, and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow and Pure Storage. Our partnerships are so important to us and to everyone in the audience, because together we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers, and he'll share the exciting news about our Vertica 10 announcement and how it will benefit our customers. Then you'll hear from Amy Fowler, VP of Strategy and Solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together, modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure-Vertica symphony that plays live every day at AT&T. Here we go. Colin, over to you. >> Colin: Well, thanks a lot, Joy. And I want to echo Joy's thanks to our sponsors and to so many of you who have helped make this happen.
This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston for the Vertica Big Data Conference and Winning with Data, but I think all of you and our team have done a great job scrambling and putting together a terrific virtual event, so I really appreciate your time. I also want to remind people that we will make both the slides and the full recording available afterwards, so for any of those who weren't able to join live, it will still be available. Well, things have been pretty exciting here, and in the analytics space in general, certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform and our people, where we can actually make the biggest difference, is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So as we look at the market and where we play, there are really three market trends, some recent and some not so recent but certainly picking up, that have become critical for every industry that wants to Win Big With Data. We've heard this loud and clear from our customers and from the analysts that cover the market. If I were to summarize these three areas, this really is the core focus for us right now. We know that there is massive data growth, and if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility are critical.
And we all need the benefits that machine learning, all the way up to end-to-end data science, can bring to every single use case, but only if it can really be operationalized at scale, accurately and in real time. The power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. One of the first industry trends that we've all been following, probably for over the last decade, is Hadoop, and specifically HDFS. So many companies have invested time, money, and more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk more about shortly. But HDFS itself was designed for petabytes of data, leveraging low-cost commodity hardware and the ability to capture a wide variety of data formats from a wide variety of data sources and applications. And I think what people really wanted was to store that data before having to define exactly what structures it should go into. So over the last decade or so, the focus for most organizations has been figuring out how to capture, store and, frankly, manage that data. And as a platform to do that, I think Hadoop was pretty good. It certainly changed the way a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, cloud object storage has also given every organization another option for collecting, storing and managing even more data. That has led to huge growth in data storage, obviously, up on public clouds like Amazon with S3, Google Cloud Storage and Azure Blob Storage, just to name a few. And when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of data in this type of object storage is very real.
And as I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in data, and all these new places to put it, every organization we talk to is facing even more challenges around data silos. Sure, the data silos are certainly getting bigger, and hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. Between the new data lakes and the many different cloud object stores, combined with all sorts of data types and the complexity of managing all this, the business value extracted has been very limited. This takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, with some of the announcements we have made today plus roadmap announcements I'll share throughout this presentation, is to ensure that all the time, money and effort that has gone into storing that data turns into business value. So how are we going to do that? With a unified analytics platform that analyzes the data wherever it is, HDFS, cloud object storage, external tables in any format, ORC, Parquet, JSON, and of course our own native ROS Vertica format. Analyze the data in the right place, in the right format, using a single unified tool. This is something Vertica has always been committed to, and as you'll see in some of our announcements today, we're doubling down on that commitment. Let's talk a little more about the public cloud. This is certainly the second trend, the second wave maybe, of data disruption along with object storage. And there are a lot of advantages when it comes to public cloud. There's no question that the public clouds give rapid access to compute and storage, with the added benefit of eliminating the data center maintenance that so many companies want to get out of themselves. But maybe the biggest advantage I see is the architectural innovation.
The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing in the exact needs on demand as workloads change. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And you're seeing that trend proliferate all over the place, not just up in the public cloud; that architecture is really becoming the next-generation architecture for on-premise data centers as well. But there are a lot of concerns, and I think we're all aware of them. Many times, for different workloads, there are higher costs, especially for workloads like analytics, which tend to run all the time. Just like the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar siloed challenges as well. Initially, there was a belief that they were cheaper than data centers, and when you added in all the costs, it looked that way; and again, for certain elastic workloads, that is the case. But I don't think that's true across the board overall, even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regard to cloud, and even some SaaS vendors, around shared data catalogs across all the customers and not enough separation. Security concerns are out there; you can read about them, and I'm not going to jump on that bandwagon, but we hear about them.
And then, of course, one of the things we hear most from our customers is that each cloud stack is starting to feel even more locked-in than the traditional data warehouse appliance. As everybody knows, the industry has been running away from appliances as fast as it can, so customers are not eager to get locked into another, quote unquote, virtual appliance up in the cloud. They really want to make sure they have flexibility in which clouds they go to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching compute from one cloud with, say, storage from another cloud, which I think is something we'll hear a lot more about. For us, that's why we've got big bet number two. We love the cloud. We love the public cloud. We love the private clouds, on-premise and with other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds our customers choose, and to make it portable across those clouds. We have supported on-premises and all public clouds for years, and today we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'll touch on more as we go. So I'm super excited about big bet number two. And finally, as I mentioned, for all the hype around machine learning, I actually think the most important thing, the third trend that Team Vertica is determined to address, is the need to bring business-critical analytics, machine learning and data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive.
So to train, score and evaluate all the different models and unlock the full power of predictive analytics was tough. Today you have those massive data volumes, and you have relatively cheap processing power and storage to make that dream a reality. If you think about it, with all the data that's available to every company, the real need is to operationalize the speed and scale of machine learning so that organizations can actually take advantage of it where they need to. We've seen this for years with Vertica: going back to some of the most advanced gaming companies in the early days, they were incorporating this with live data directly into their gaming experiences. Well, every organization wants to do that now. And accuracy and real-time action are key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz and a ton of hype spanning every acronym you can imagine, yet most companies are struggling, due to separate teams, different tools, silos, and the limitations many platforms face: driving down-sampling to get a small subset of the data to try to create a model that then doesn't apply, or compromising accuracy and making it virtually impossible to replicate models and understand decisions. And if there's one thing we've learned when it comes to data, it's the value of prescriptive data at the atomic level, being able to show an "N of 1" as we refer to it, meaning individually tailored data. No matter what it is, healthcare, entertainment experiences like gaming, or others, being able to get at the granular data and make these decisions, to make that scoring work, applies to machine learning just as much as it applies to giving somebody a next best offer. But the opportunity has never been greater.
The need is to integrate this end-to-end workflow and support the right tools without compromising on accuracy. Think of it as no down-sampling, using all the data; that really is key to machine learning success. It should be no surprise, then, that the third big bet from Vertica is one we've actually been working on for years, and we're so proud to be where we are today, helping data disruptors across the world operationalize machine learning. This big bet has the potential to truly unlock the potential of machine learning, and today we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, with the volume of data and performance at scale available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from it. Of course, there are always things we add, most of the time customer-driven, based on what our customers are asking us to do. But I think we've also done a great job of not trying to be all things to all people, even as hype cycles flare up around us; we absolutely love participating in these different areas without getting completely distracted. There is a variety of query tools, data warehouses and analytics platforms in the market; we all know that. There are tools and platforms offered by the public cloud vendors, and by other vendors that support one or two specific clouds. There are appliance vendors, who I was referring to earlier, who deliver packaged data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytics platform that can do all of this, that can bring it all together. We can analyze the data wherever it is: in HDFS, S3 object storage, or Vertica itself.
Natively, we support multiple clouds and on-premise deployments. And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now, while still giving them the option to change, move, and evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. I know it's a mouthful, but it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you, continue to bet on us and see the value that we are delivering and will continue to deliver. Here are a couple of examples of some of our customers who are powered by Vertica. It's the scale of data. It's the millisecond response times. Performance and scale have always been a huge part of what we are about, though not the only thing; think of the functionality, all the capabilities that we add to the platform, the ease of use, the flexibility, obviously, with the deployment. But look at some of the numbers under these customers on this slide. I've shared a lot of different stories about these customers, which, by the way, still amaze me every time I talk to one and get the updates; you can see the power and the difference that Vertica is making. Equally important, if you look at a lot of these customers, they are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud. They're using it in a hybrid way. They're using it in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product and, frankly, our roadmap and our vision exactly what they are. It's been quite a journey. And that journey continues now with the Vertica 10 release.
The Vertica 10 release is obviously a massive release for us. But if you look back, you can see that we are building on that native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebraker about the vision of Vertica as a software-only solution and the importance of separating the company from hardware innovation. At the time, Mike basically said to me, "There's so much R&D and innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. One of the most recent hardware innovations that we embraced is certainly the separation of compute and storage. As I said previously, the public cloud providers offered this next generation architecture, really to ensure that they could provide customers exactly what they needed, more compute or more storage, and charge for each, respectively. The separation of compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though. It fundamentally redefines the next generation data architecture for on-premise and for pretty much every way people are thinking about computing today. And that goes for software too. Object storage is an example of a cost-effective means for storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages, including the opportunity to manage much more dynamic, flexible workloads, and, more importantly, to truly isolate those workloads from others.
And by the way, once you start having something that can truly isolate workloads, then you can have the conversations around autonomic computing, around setting up some nodes, some compute resources on the data, that won't affect any of the other data, to do some things on their own, maybe some self-analytics by the system, etc. A lot of things that many of you know we've already been exploring in terms of our own system data in the product. But it was May 2018, believe it or not, it seems like a long time ago, when we first announced Eon Mode, and I want to make something very clear about Eon Mode. It's a mode, a deployment option for Vertica customers. And I think this is another huge benefit that we don't talk about enough: unlike a lot of vendors in the market who will nickel-and-dime you and charge you for every single add-on, you name it, you get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade; it comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either Enterprise Mode or Eon Mode, which is a question I know comes up sometimes. Our first announcement of Eon was obviously for AWS customers, including the Trade Desk and AT&T, most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode not only allowed Vertica to scale elastically with the specific compute and storage that was needed, but it really dramatically simplified database operations, including things like workload balancing, node recovery, compute provisioning, etc. So one of the most popular functions is the ability to isolate workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica in Enterprise Mode, have been able to do lots of different workload isolation, it's never been as strong as in Eon Mode.
Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode, and not just up in the cloud. In partnership with one of our most valued partners and a platinum sponsor here, whom Joy mentioned at the beginning, we announced Vertica in Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product; it's one Vertica with yet more deployment options. With Pure Storage, Vertica in Eon Mode is not limited in any way by variable cloud network latency. The performance is actually amazing when you take the benefits of separating compute from storage and you run it with a Pure environment on-premise. Vertica in Eon Mode has a super smart cache layer that we call the depot. It's a big part of our secret sauce around Eon Mode. And combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers. A lot of our customers are already benefiting from that, and we're super excited about it. But as I said, this is a journey. We don't stop, and we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage, and on Google Cloud. Now, we were talking about HDFS, and a lot of our customers who have invested quite a bit in HDFS, especially as a place to store data, have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage.
Vertica's own ROS format data can be stored in HDFS, and the full functionality of Vertica, its complete analytics, geospatial, pattern matching, time series, machine learning, everything that we have in there, can be applied to this data. And on the same HDFS nodes, Vertica can also analyze data in ORC or Parquet format using external tables. We can also execute joins between the ROS data and the data the external tables hold, which powers a much more comprehensive view. So again, it's that flexibility to be able to support our customers wherever they need us to support them, on whatever platform they have. Vertica 10 gives us a lot more ways that we can deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode and the power that it brings, with that separation, with that workload isolation, on whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10, and I'm definitely not going to be able to cover everything. But we also introduced complex data types, as an example. And complex data types fit very well into Eon as well, into this separation. They significantly reduce the data pipeline and the cost of moving data, and bring much better support for unstructured data, which a lot of our customers mix with structured data, of course, and they leverage the columnar execution that Vertica provides. So you get complex data types in Vertica now: a lot more data, stronger performance. It goes great with the broader Eon Mode announcements that we made. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine, as a database, but for ML as well.
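For readers curious what that looks like in practice, here is a minimal sketch of the external-table pattern described above: defining a Vertica external table over Parquet files in HDFS, then joining it against a native table. The table names, columns, and paths are hypothetical, not from the talk:

```sql
-- Hypothetical external table over Parquet files already stored in HDFS.
-- The data stays in place; Vertica reads it at query time.
CREATE EXTERNAL TABLE clicks_ext (
    user_id   INT,
    click_ts  TIMESTAMP,
    url       VARCHAR(2048)
) AS COPY FROM 'hdfs:///data/clicks/*.parquet' PARQUET;

-- Join the external Parquet data against a native (ROS-format) Vertica table
-- to get the "more comprehensive view" described in the keynote.
SELECT u.account_id, COUNT(*) AS clicks
FROM   users u
JOIN   clicks_ext c ON c.user_id = u.user_id
GROUP  BY u.account_id;
```

The same external-table mechanism works for ORC files by replacing the `PARQUET` keyword with `ORC`.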
It didn't take us long to realize that there's a lot more to operationalizing machine learning than just those algorithms. It's the data preparation, it's the model training, it's the scoring, the shaping, the evaluation. That is so much of what machine learning, and frankly, data science, is about. Everybody always wants to jump to the sexy algorithms, but we handle those tasks very, very well, and it makes Vertica a terrific platform to do that. Still, a lot of work in data science and machine learning is done in other tools; I mentioned that there are just so many tools out there, and we want people to be able to take advantage of all that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML. We can now import and export PMML models. It's a huge step for us in operationalizing machine learning projects for our customers, allowing models to be built outside of Vertica, yet imported in and then applied to the full scale of data with all the performance that you would expect from Vertica. We are also integrating more tightly with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And we've integrated with TensorFlow, allowing data scientists to build models in their preferred language, to take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community, by operationalizing ML with the accuracy, performance and scale of Vertica for our customers. Again, there are a lot of steps when it comes to the workflow of machine learning. These are some of them that you can see on the slide, and it's definitely not linear either. We see this as a circle.
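As a rough illustration of the import-then-score workflow described above, here is a sketch using Vertica's PMML functions. The model file, model name, and table columns are hypothetical placeholders for illustration only:

```sql
-- Import a PMML model that was trained outside Vertica
-- (for example, in scikit-learn or R) into the database.
SELECT IMPORT_MODELS('/models/churn_logreg.pmml'
                     USING PARAMETERS category='PMML');

-- Score the full table in-database with the imported model,
-- with no down-sampling and no data movement.
SELECT customer_id,
       PREDICT_PMML(age, tenure_months, monthly_spend
                    USING PARAMETERS model_name='churn_logreg') AS churn_score
FROM   customers;

-- Models trained inside Vertica can also be exported as PMML
-- for use in other tools.
SELECT EXPORT_MODELS('/models/out', 'my_vertica_model'
                     USING PARAMETERS category='PMML');
```

The point of the pattern is that training can happen wherever the data scientists are most productive, while scoring happens where the data lives.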
And companies that do it well just continue to learn, they continue to rescore, they continue to redeploy, and they want to operationalize all of that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years. Frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So, again, being able to deploy everywhere, being able to take advantage of Vertica not just as a business analyst or a business user, but as a data scientist or as an operations or BI person. We want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. We talked about those three market trends: the need to unify the silos, the need for hybrid, multiple-cloud deployment options, and the need to operationalize business-critical machine learning projects. Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release. And, of course, even after Vertica 10, that is also true, although Vertica 10 is pretty awesome. From the first line of code, we've always been focused on performance and scale. And like any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, some of the big things that we're already working on include the next generation execution engine, and we're already seeing incredible early performance from it.
And this is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale. How do we improve? There are so many parts of the core server, so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines that we have and make them better in the current environment. And it's not an easy thing to do when you're doing that while also expanding into new environments to take advantage of the different deployments, which is a great segue to this slide. Because if you think about today, we're obviously already available with Eon Mode on Amazon AWS and on Pure, and actually MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS. And coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just running in these clouds; we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds, as an example, on Amazon, Vertica by the Hour. So for us, it's not just about running, it's about taking advantage of the ecosystems that all these cloud providers offer, and really optimizing the Vertica experience as part of them. Optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool, which our Customer Success Team created to actually use our own smarts in Vertica: to take data that customers give to us and help them automatically tune their environment.
You can imagine that we're taking that to the next level in a lot of different endeavors around how Vertica as a product can actually be smarter, because we all know that simplicity is key. There just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, there are other things that we all hear about, whether it's Kubernetes and containerization; you can imagine that that works very well with Eon Mode and separating compute and storage. But innovation happens everywhere. We innovate around our community documentation. Many of you have taken advantage of the Vertica Academy, and the numbers there are through the roof in terms of the number of people coming in and certifying on it. So there are a lot of things within the core products, and a lot of activity and action beyond the core products, that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a platform, a data platform; it's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, has got to take advantage of this data; she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and, frankly, new ways to get that predictive analytics UI and interface, beyond just the standard BI tools, in front of her at the right time. And there's a lot of activity, I'll tease you with that, going on in this organization right now about how we can do that and deliver it for our customers. We're in a great position to be able to see exactly how this data is consumed and used, and to start with this core platform that we have and build out from there. Look, I know the plan wasn't to do this as a virtual BDC. But I really appreciate you tuning in. I really appreciate your support.
I think if there's any silver lining to us not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in person in the future. But if I could leave you with anything, know this: since that first release of Vertica, and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies. And for us, true north, and the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers, knowing that you depend on us to deliver that unified analytics strategy? We will deliver that performance and scale, not only today, but tomorrow and for years to come. We've added a lot of great features to Vertica, and I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of the things that we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations. And people always seem to be amazed at Team Vertica's willingness to jump in, their aptitude for certain technical capabilities, and their understanding of the business. I think sometimes we take that for granted, but that is the team that we have as Team Vertica. We are incredibly excited about Vertica 10, and I think you're going to love the Virtual Big Data Conference this year.
I encourage you to tune in. Maybe one other benefit: I know some people were worried about not being able to see different sessions because they were going to overlap with each other; well, now, even if you can't attend live, you'll be able to watch those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020. Please, you and your families and your co-workers, be safe during these times. I know we will get through it, and analytics is probably going to help with a lot of that; we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better. And thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you, Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference, and also my thanks for your support as we've had to pivot to this being a virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal. I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It is key to our goal of trying to enable and help customers become much more data-driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets enterprises begin to leverage machine learning properly, and also offers the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is that we genuinely believe we are the best at doing all of those things.
And that's why we've announced publicly, and are executing internally, an incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist, and getting that innovation into your hands faster. The idea is that speed is key. It's not a question of if companies have to become data-driven organizations, it's a question of when. So that speed now is really important. And that's why we believe the Big Data Conference gives you a great opportunity to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal customers. You'll hear from Uber, from the Trade Desk, from Philips, and from AT&T, as well as many, many others. And just hearing how those customers are using the power of Vertica to accelerate their own plans, I think, is the highlight. I encourage you to use this opportunity to its full. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference. We look forward to your engagement, and we look forward to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica is so clear, and it makes a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, who is one of our BDC Platinum Sponsors and one of our most valued partners. It was a proud moment for me when we announced Vertica in Eon Mode for Pure Storage FlashBlade, and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started. >> Amy: Well, thank you, Joy, so much for having us.
And thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends happening right now in the big data analytics market. From the end of the Hadoop hype cycle, to the new cloud reality, and even the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure, and in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of the data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than a distributed file store. And when we combine that data with the massive volume of data in cloud object storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now, when you look at the infrastructure data lakes are traditionally built on, it is often direct-attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one-gig Ethernet and slow spinning disk. But today, those barriers do not exist. All-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with the disaggregation of compute and storage. Compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not.
But from a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second and equally important recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based pricing and elastic scaling without the control that many companies need to manage the financial impact, is causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off an all-in cloud strategy and move toward hybrid deployments. Which is kind of funny in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, I won't name them, but I'll admit they are my favorite, told us several years ago that they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting. First of all, in a lot of cases applications have been built to leverage object storage platforms like S3. So they need that object protocol, but they may also need it to be fast. And "fast object" may have been an oxymoron only a few years ago; this is an area of the market where Pure and FlashBlade have really taken a leadership position.
Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience. And for us, that means two main things. Number one is simplicity and ease of use. If you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model: the ability to pay for what you need when you need it, and seamlessly grow your environment over time, totally nondisruptively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next-gen hardware. To scale nondisruptively over long periods of time, five to 10 years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it. And we offer something for FlashBlade called Pure as a Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud. In our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy, and there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can visualize data analytics as it is traditionally deployed as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end.
But the way this manifests in most environments is a series of silos that get built up. So data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform that could deliver the multi-dimensional performance this diverse set of applications requires didn't exist three years ago. And that's why the application vendors pointed you towards bespoke things like the DAS environments we talked about earlier. The fact that better options exist today is why we're seeing them move towards supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum, and allow organizations to bring the model to the data instead of creating separate silos, that's exactly what FlashBlade is built for. Small files, large files, high throughput, low latency, and scale to petabytes in a single namespace. And importantly, delivering that in a single namespace is what we're focused on for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for your teams in your organization. And together, Pure Storage and Vertica have delivered that experience to a wide range of customers.
From a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, to a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real-time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well. He's coming up soon. We have been really excited to build this partnership with Vertica, and we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next generation data center architectures and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here, and I'm excited to go through this presentation today, and in a unique fashion, because as I was thinking through how I wanted to present the partnership that we have formed between Pure Storage, Vertica and AT&T, I wanted to emphasize how well we all work together and how these three components have really driven home my desire for a harmonious, to use your word, relationship. So I'm going to move forward here. The theme of today's presentation is the Pure-Vertica Symphony, live at AT&T.
And if anybody is a Westworld fan, you can appreciate the sheet music on the right hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and to make our customers happier overall. So as we look back, as early as just a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists, or data analysts. For the theme, we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. The best way to describe that is, and I think this might resonate with a lot of people in your organizations: how many organizations are chasing a customer 360 view in your company? Well, I can tell you that I have at least four in my company, and I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data. And there's just so much money to be spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing and care, sure, they all copied each other's data. But they didn't actually communicate with each other as they were copying the data. So the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is, the more we replicate the data, the more problems we have chasing, or conquering, the goal of a single version of truth.
In fact, I kid that at AT&T we have actually adopted a multiple versions of truth theory, which is not where we want to be, but it is where we are. We are conquering that, though, with the synergies between Pure Storage and Vertica. This is what it leaves us with, and this is where we were challenged: each one of our siloed business units had their own storage, their own dedicated storage, and some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did. Others are running out of space, but can't add any more because their budgets aren't being replenished. So if you look at it from this side view here, we have a limited amount of compute, or fixed compute, dedicated to each one of these silos. And that's because of the desire to own your own. And the other part is that you are limited, or wasting space, depending on where you are in the organization. So the synergies aren't just about the data, but actually the compute and the storage. And I wanted to tackle that challenge as well. So I was tackling the data, I was tackling the storage, and I was tackling the compute all at the same time. My ask across the company was: can we all just please play together, okay? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they would argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could start retiring data sources. I also knew that if I brought all the compute together, they would all be happy. But I didn't want them to trample on each other. And in fact, that was one of the things that all the business units really enjoy.
They enjoy the silo of having their own compute, and more or less being able to control their own destiny. Well, Vertica's subclustering allows just that. This is exactly what I was hoping for, and I'm glad they came through. And finally, how did I solve the problem of the single account table? Well, when you can separate compute and storage, as Vertica in Eon Mode does, you don't need dedicated storage. We store the data on FlashBlades, which you see on the left and right hand side of our container, which I can describe in a moment. Okay, so what we have here is a container full of compute, with all the Vertica nodes sitting in the middle. Two loader subclusters, we'll call them, sit on the sides, dedicated to just putting data onto the FlashBlades, which sit on both ends of the container. Now today, I have two storage racks, "dedicated" might not be the right word, one on the left and one on the right, and I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, so that work can continue in case one rack were to go down. That being said, there's no reason I won't add a couple more of them here in the future, so I can have, say, a five to 10 petabyte storage setup. And I'll have my DR elsewhere, because the DR shouldn't be in the same container. Okay, so I'll DR outside of this container. So I brought them all together, I leveraged subclustering, and I leveraged the separation of storage and compute. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I reduced latency, and I reduced our data quality issues, AKA ticketing, okay. I was able to expand as workloads grew, and to leverage elasticity within this cluster. As you can see, there are racks and racks of compute.
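The Eon Mode layout John describes, several isolated compute subclusters all reading one communal object store, can be sketched as a toy model. All class and method names here are illustrative, not Vertica's actual API; it only shows the design point that there is one copy of the account table while each business unit keeps its own compute.

```python
# Toy model of Eon-Mode-style disaggregation: every subcluster reads the
# same communal store, so there is a single copy of the "account table"
# while each business unit keeps isolated compute. Illustrative only.

class CommunalStore:
    """Shared object storage (the FlashBlade role): one copy of the data."""
    def __init__(self):
        self.tables = {}

    def write(self, table, rows):
        self.tables.setdefault(table, []).extend(rows)

    def read(self, table):
        return list(self.tables.get(table, []))

class Subcluster:
    """Isolated compute (one business unit): queries the shared store."""
    def __init__(self, name, store, nodes):
        self.name, self.store, self.nodes = name, store, nodes

    def ingest(self, table, rows):
        self.store.write(table, rows)

    def count(self, table):
        return len(self.store.read(table))

store = CommunalStore()
loader = Subcluster("loader", store, nodes=4)      # loader subcluster ingests
loader.ingest("account", [{"id": i} for i in range(1000)])

sales = Subcluster("sales", store, nodes=8)
care = Subcluster("care", store, nodes=2)

# Both units see the same single account table, with no per-silo copies.
assert sales.count("account") == care.count("account") == 1000
```

The point of the sketch is that retiring a redundant per-silo copy is free once every subcluster resolves reads against the same communal store.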
We set up what we'll call the fixed capacity that each of the business units needed, and then I'm able to ramp up and release the compute that's necessary for each one of my clients based on their workloads throughout the day. And while some of the compute, like the instruments you see on the right that have, more or less, already dedicated themselves, is reserved, all the rest is free for anybody to use. So in essence, what I have is a concert hall with a lot of seats available. If I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can do the same with my loader nodes. I can expand my loader nodes to have their own symphony all to themselves, and not compete with any of the workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators do their jobs. This has been a big transformation for them. They have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues and watch the workloads, so that we can help ramp up and trim down the cluster and subclusters as necessary. It has been an exciting transformation for our DBAs, who I now need to classify as something maybe like DCAs. I don't know, I'll have to work with HR on that. But I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this, where everything is moving in harmony, and we have lots of seats open for extra musicians. We are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony, live at AT&T.
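The auto-detect scripts John describes, watch the queues, then grow or shrink a subcluster, reduce to a simple control loop. Here is a minimal sketch; the thresholds, the fixed floor and ceiling, and the `resize` hook are all hypothetical, and a real version would call Vertica's management tooling instead of a callback.

```python
def target_nodes(queue_depth, current, min_nodes=2, max_nodes=16,
                 per_node=10):
    """Pick a subcluster size from queued work: roughly one node per 10
    queued queries, clamped between the fixed floor and the ceiling."""
    want = max(min_nodes, -(-queue_depth // per_node))  # ceiling division
    return min(max_nodes, want)

def reconcile(subclusters, resize):
    """One pass of the control loop: move each subcluster toward its
    target size. `resize(name, n)` is the hypothetical management hook."""
    for name, state in subclusters.items():
        goal = target_nodes(state["queue"], state["nodes"])
        if goal != state["nodes"]:
            resize(name, goal)
            state["nodes"] = goal

clusters = {"sales": {"queue": 55, "nodes": 2},
            "care": {"queue": 0, "nodes": 8}}
changes = []
reconcile(clusters, lambda name, n: changes.append((name, n)))

# sales ramps up for its burst of queries; idle care trims back to the floor
assert clusters["sales"]["nodes"] == 6
assert clusters["care"]["nodes"] == 2
```

Running the loop on a schedule is what turns DBAs into the "conductors" of the talk: they tune the thresholds per workload rather than hand-resizing clusters.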
(soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I do really like the idea of engaging HR to change the title to Data Conductor. That's fantastic. I've always believed that music brings people together, and now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break, and we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time. We have some really exciting sessions planned later today, and then again, as you can see, on Wednesday. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue to participate in the sessions that are listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, log in and choose your session. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one, or can't listen to some of these live sessions due to your timezone. All the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone, and there's a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials Certification. And it's all free, because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage that those projects deliver.
Now, if you have questions or want to engage with our Vertica engineering team, we're waiting for you on the Vertica forum. We'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference Keynote Session. Enjoy the rest of the BDC, because there's a lot more to come.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephen | PERSON | 0.99+ |
Amy Fowler | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
John Yavanovich | PERSON | 0.99+ |
Amy | PERSON | 0.99+ |
Colin Mahony | PERSON | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
John Yovanovich | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Joy King | PERSON | 0.99+ |
Mike Stonebreaker | PERSON | 0.99+ |
John | PERSON | 0.99+ |
May 2018 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Wednesday | DATE | 0.99+ |
Colin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Vertica Academy | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
Joy | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Stephen Murdoch | PERSON | 0.99+ |
Vertica 10 | TITLE | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Philips | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
AT&T. | ORGANIZATION | 0.99+ |
September 2019 | DATE | 0.99+ |
Python | TITLE | 0.99+ |
www.vertica.com/bdc2020 | OTHER | 0.99+ |
One gig | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Werner Vogels Keynote Analysis | AWS re:Invent 2019
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019, brought to you by Amazon Web Services along with its ecosystem partners.
>> Hello everyone, welcome back to theCUBE's day three coverage of AWS re:Invent in Las Vegas. It's theCUBE's coverage, and I want to thank Intel for being the headline sponsor for theCUBE's two sets. Without Intel, we wouldn't make it happen. We're here extracting the signal from the noise as usual, wall-to-wall SiliconANGLE and theCUBE coverage. I'm John Furrier, with Stu Miniman, doing a keynote analysis of Werner Vogels. Stu, you know Werner's keynotes, they've always got the same format: Jassy kicks it off, you get the partner thing on day two, and then they let Werner nerd out on all the good stuff. Uh, containers, Kubernetes, all the under-the-hood stuff. So let's jump into the keynote analysis. What's your take? What's Werner's posture this year? What's the vibe? What's the overall theme of the keynote?
>> Well, first of all, John, to answer the question that everybody asks when Werner takes the stage: this year's t-shirt was Posse. Werner usually either has a Seattle band or a Dutch DJ, something like that, so he always delivers for the geek crowd there. And really, after sitting through Werner's keynote, I think everybody walks out with an AWS certification, because architecturally we dig into all these environments. So right, you mentioned they started out with a master class on how Amazon built their hypervisor. Super important. Nitro underneath is the secret sauce. When they bought Annapurna Labs, we knew those chips would be super important going forward, and this is what is going to be the driver for Outposts. Outposts is the building block for many of the other services announced this week, and absolutely the number one thing I'm hearing in the ecosystem is around Outposts, but also Fargate and Firecracker, microVMs and managing containers.
>> Um, they had some enterprises up on stage talking about transformation, picking up on the themes that Andy started with his three hour keynote just yesterday. But um, it's lighter on the news. One of the bigger things out there, and we will poke Amazon about how open and transparent they are about what they're doing, is one of the things they announced: the Amazon Builders' Library. So it's not just getting up on stage and saying, hey, we've got really smart people and we architected these things and you need to use all of our tools, but hey, this is how we do things. Reminded me a little bit of, you know, echoes of what I heard from GitLab, who of course is fully open source, fully transparent. But you know, Amazon is making progress; it's Adrian Cockcroft and that team that have moved Amazon along on open source. The container group, I had a great interview yesterday with Deepak Singh and Abby Fuller, actually has a roadmap up on containers, so they're sharing a lot of deep knowledge, and good customers talked about how they're taking advantage, transforming their business. And serverless, I mean, John, coming out of Andy's keynote, I was like, there wasn't a lot of security and there wasn't a lot of serverless. And while serverless has been something that we know is transforming Amazon underneath the covers, we finally got to hear a little bit more about not just Lambda, but yes, Lambda, and the rest of it as to how serverless is transforming things underneath.
>> You know, Andy Jassy's got a long three hour keynote, 30 announcements, so he has to save some minutes there. So for Werner, we were expecting to go a little bit deeper into this transformational architecture. What did you learn about what they're proposing, what they're saying, or continuing to say, around how enterprises should be reborn in the cloud? Because that's the conversation here, and again, the memes that are developing are: take the T out of cloud native. It's cloud naive. If you're not doing it right, you're going to be pretty naive. And then reborn in the cloud is the theme. So cloud native, born in the cloud, that's proven; reborn in the cloud is kind of the theme we're hearing. Did he show anything? Did he talk about what that architecture is for transformation? Right.
>> He did actually, and it was funny. I was watching the social stream while things were going on. There was actually a Cube alumni that I follow, that we've interviewed at this show, and he's like, if we've heard one of these journeys to, you know, transformation, haven't we heard them all? And I said, you know, while the high level message may be similar, I'm going to transform, I'm going to use data, when you looked at what they were doing, it is significant. You know, Vanguard, you know, the financial institutions, Dave Vellante commenting that, you know, the big banks, John, we know Goldman Sachs, we know JP Morgan, these banks have huge IT budgets and very smart staffs there. Years ago they would have said, oh, we don't need to use those services, we'll do it ourselves. Well, Vanguard talked about how they're transforming, rearchitecting into microservices.
>> I love your term, being reborn cloud native, because that is the architecture. Are you cloud native, or, I used to call it, you're kind of cloud native, or kinda, you know, a little bit of a cloud. Naive is a great term too. So we've been digging in, and it is resonating. Look, transformation is hard. Trying to move the organization faster than it will naturally happen is painful. There's skillsets, there's those organizational pieces, there are politics inside the company that can slow you down, and the enterprise is not known for speed. The enterprises that will continue to exist going forward better have taken this methodology. They need to be more agile and move.
>> Well, the thing about the cloud naive framing that I like, and first of all, I agree with reborn in the cloud, we coined the term on theCUBE, but um, that's kind of got this born again kind of vibe to it, which I think is what they're trying to say. But the cloud naive piece is some of the conversations we're hearing in the community and the customer base of these clouds, which is, there are, and Jassy said it in his keynote, there are now two types of developers and customers: the ones that want the low level building blocks, and the ones who want more custom or solution oriented packages. So if you look at Microsoft Azure and Oracle, of the clouds, they're trying to appeal to the folks that are classic IT. Some are saying that that's a naive approach, because it's a false sense of cloud, a false sense of security. They got a little cloud, but is it really true cloud? Is it really true cloud native? So it's an interesting confluence between what true cloud is from a cloud native standpoint, and yet all the big success stories are transformations, not transitions. And so to me, I'm watching this IT market, which is going to have trillions of dollars in it: are they just transitioning, you know, old IT with a new coat of paint, or is it truly a skill, truly an architectural transformation, and does it impact the business model? That to me is the question. What's your reaction to that?
>> Yeah, so John, I think actually the best example of that cloud native architecture is the thing we're all talking about this week, but it is misunderstood. AWS Outposts was announced last year. It is GA with the AWS native services this year, and the VMware version is going to come out early in 2020. But here's why I think it is super exciting but misunderstood. When Microsoft did Azure Stack, they said, we're going to give you an availability zone, basically, in your data center. It wasn't that, though; it was trying to extend the operational model, but it was a different stack. It was different hardware. They had to put these things together, and really it's been a failure. The architectural design point of Outposts is different. It is the same stack. It is an extension of your availability zone, so don't think of it as, I've got the cloud in my data center. It's no, no, no: what I need for low latency and locality, it's here. But starting off, there is no S3 in it, and we were like, wait, what do you mean there's no S3 in it? I want to do all these services and everything. Oh yeah, your S3 bucket is in your local AZ, so why would you say it's sharing? If you are creating data and doing data, of course I want it in my S3 bucket. They are going to add S3 next year, but they are going to be very careful about what services do and don't go on it. This is not, oh, Amazon announces lots of things, of course it's on Outposts. It has the security, it has the operational model, it fits into the whole framework. It can be disconnected some, but it is very different.
>> I actually think it's a little bit of a disservice. You can actually go see the rack; I took a selfie with it and put it out on Twitter, and it's cool gear. We all love to, you know, see the rack and see the cables and things like that. But you know, my recommendation to Amazon would be: just put a black curtain around it, because, pay no attention to what's in here. Amazon manages it for you, and yes, it's Amazon gear with the Nitro chip underneath there. So customers should not have to think about it. It's just, when they're doing that architecture, which from an application standpoint is a hybrid architecture, John, some services stay more local because of latency, but for others it's that transformation. And it's moving the cloud to the edge and to my data center; things are much more mobile and can change and move over.
>> Well, Stu, you mentioned hybrid. I think to me the Outposts announcement, in terms of unpacking it, is all about validation of hybrid. You know, VMware's got a smile on their face. Sanjay Poonen came in, because, you know, Gelsinger was kind of pitching hybrid, and we were challenging him, but truly this means cloud operations has come. This is now very clear. There's no debate, and this is what multi-cloud ultimately will look like. Hybrid cloud and public cloud is now the architecture of IT. There's no debate, because Outposts is absolute verification that the cloud operating model, with the cloud as the center of gravity, for all the reasons, scale, lower cost, management, but moving cloud operations on premises or to the edge, proves hybrid is here to stay. And that's where the money is.
>> So John, there's a small nuance I'll point out there, because with hybrid we often think of public and private as equal. The Amazon positioning is: it's Outposts, an extension of what we're doing. The public cloud is the main piece; the edge and the outposts are just extensions where we're reaching out. As opposed to, if I look at, you know, what VMware's doing, I've got my data center footprint, you look at the HCI solutions out there: Outposts is not an HCI competitor, and people looking at this misunderstand the fundamental architecture in there. Absolutely, hybrid is real and edge is important. Amazon is extending their reach, but all I'm saying is that nuance is still there. Amazon has matured their thinking on hybrid, and even multi-cloud; when you talk to Andy, he actually will talk about multi-cloud. But still, at the center of gravity is the public cloud and the Amazon services. It's not saying, oh yeah, you know, let's wrap our arms around all of your existing environment.
>> Well, the reason why I like the cloud naive line, take the T out of cloud native and you get cloud naive, is because there is a lot of negativity around what cloud actually is about, forget Outposts, cloud itself. And if you look at, like, Microsoft, for instance, I love Microsoft, I think they do amazing work. They're catching up as fast as they can, and they play the card, well, we are large scale too. But the difference between Amazon and Microsoft Azure is very clear. Microsoft had these data centers for MSN, IE browsers, global infrastructure around the world for themselves, and literally overnight they have to serve other people. And if you look at Gartner's results, their downtime has been pretty much at an all time high. So what you're seeing is the inefficiencies and the diseconomies of scale of Microsoft trying to copy Amazon, because they now have to serve millions of customers anywhere. This is what Jassy was telling me in my one-on-one, which is, there's no compression algorithm for experience. What he's basically saying is, when you try to take shortcuts, there are diseconomies of scale. Amazon's got years of economies of scale, and they're launching new services. So Jassy's bet is on the capabilities. The problem is, Microsoft's sales force is out there, and Amazon can't compete with that presence; they're going into their customers saying, we've got you covered. And frankly, that's working real well.
>> Yeah. So, so, so John, we had theCUBE at Microsoft Ignite, I've done that show for the last few years, and my takeaway at Microsoft this year was: they build bridges. If you are, you know, mostly legacy, everything in my data center, versus cloud native, they're going to build your bridge. They have five different developer groups to work with you where you are, and they'll go there. Amazon is a little bit more aggressive with cloud native transformation: you need to change your mindset. So Microsoft's a little bit more moderate, and it is safer for companies to just say, well, I trust Microsoft, and I've worked with Microsoft, and I've got an enterprise license agreement, so I'll slowly make change. But here's the challenge, John: we know if you really want to change your business, you can't get there incrementally. Transformation's important for innovation. So the battle is amazing. You can't be wrong for betting on either Microsoft or Amazon these days. Architecturally, I think Amazon clearly has the broadest and deepest offering out there, and they keep improving their environments.
>> Well, the economies of scale versus diseconomies of scale discussion is huge, because ultimately, if Microsoft stays on the path of just, you know, we've got it too, and they continue down that path, they could be on the wrong side of history. And I'll tell you why I see that, and why I'm evaluating Microsoft: one, they have the data centers. So can they retool fast enough? Can they eliminate that technical debt? Because ultimately they're making a bet, and the true bet is, if they become just an IT transition, they, in my opinion, will lose in the long run. Microsoft's going all in on, nope, we're not the old guard, we're the new guard. So there's an interesting line being formed there too. And if Microsoft doesn't get cloud native, and doesn't bring true scale and true reliability at the level of Amazon's capabilities, then they're just another IT solution. And they could fall right on their face on that.
>> And John, when we first came to this show in 2013, it was very developer centric, and the question was, could Amazon be successful in wooing the enterprise? You look around this show, and the answer is a resounding yes. Amazon is there. They have not lost the developers, and they're doing the enterprise. When you talk to Andy, you talk about the bottoms up and the top down leadership, and working there and across the board. As opposed to Google: Google has been trying, and not making great progress, moving to the enterprise, and that has been challenging.
>> Oh, I've got to tell you this too. Last night I was out, and I got some really good information on JEDI. I was networking around, kind of going in incognito mode, doing the normal thing, and I found someone who was sharing some really critical information around JEDI. Here's what I learned. This is around Microsoft: Microsoft won the JEDI deal without the capabilities to deliver on the contract. This was a direct quote from someone inside the DOD and inside the intelligence community, from whom I got some clear information, and I said to him, I go, how's that possible? He says, Microsoft won on the fact that they say they could do it. They have not yet proven any capabilities for JEDI. And he even said, quote, they don't even have the data centers to support the deal. So here you have the dynamic: we say we can do it, versus Amazon is doing it. This is ultimately the true test of cloud naive versus cloud native. Ask the clouds: show me the proof, not just, we could do it, and I'll go with it.
>> You've done great reporting on JEDI. It has been a bit of a train wreck to watch what's going on in the industry with that, because we know, uh, Microsoft needs to get a certain certification. They've got less than a year; the clock is ticking to be able to support some of those environments. Amazon could support that today. So we knew when this started that this was Amazon's business, and then there was the executive office going in and basically making sure that Amazon did not win it. So we said, there's a lot of business out there, we know Amazon is doing well, and on the government deals, Gelsinger was on record from VMware talking about lots of them.
>> Well, here's the thing. I also talked to someone inside the CIA community who told me that the spending in the CIA is flat, okay? And the spending is flat, but the demand for mission support is going exponential. So the cloud fits that bill. On the JEDI side, what we're hearing is the DOD folks love this architecture. It was not jury-rigged for Amazon; it's jury-rigged for the workload. So they're all worried that it's going to get scuttled, and they don't want that project to fail. There's huge support, and I think JEDI supports the workload-transformation thinking, because it's completely different. And that's why everyone was running scared, because the old guard was getting crushed by it. But no one wants that deal to fail. They want it to go forward. So it's going to be very interesting dynamics, Stu: if Microsoft can't deliver the goods, Amazon's back in the driver's seat on the deal.
>> And John, I guess, you know, my final takeaway: we talked a bunch about Outposts, but that is a building block. AWS Local Zones, starting first in LA for the telco and media group; AWS Wavelength, working with the 5G providers, we had Verizon on the program here. Amazon is becoming the everywhere cloud, and they really, as Dave said in your opening analysis there, deliver shock and awe, Amazon delivers year after year.
>> Maybe the logo should be "everything everywhere," 'cause they've got a lot of capabilities. You said the everywhere cloud; they've got everything in the store too. Great stuff. Great on the keynote from Werner Vogels, again, more technology. I'm super excited around the momentum around Kubernetes; you know we love that. We think cloud native is going to be absolutely legit and continue to be on a tear in 2020 and beyond. I think the 5G Wavelength piece is going to change the network constructs, because that's going to introduce new kinds of policy. Managing data and compute at the edge will create new opportunities at the networking layer, which, for us, you know, we love that. So I think the IoT edge is going to be super, super valuable. We even had BlackBerry on, their car group, talking about the software inside the car. I mean, that's a moving mobile device of industrial strength; it's industrial IoT. So industrial IoT, IoT edge, Outposts, hybrid, dude, we called this, what year? Yeah, we called that in 2013.
>> And John, it's great to help our audience get a little bit more cloud native in their education and, uh, you know, make sure that we're not as naive anymore.
>> Stu, you're not naive. You're certainly cloud native, born in the cloud, Stu. It's us, born here, our seventh year here at Amazon Web Services. Want to thank Intel for being our headline sponsor. Without Intel's support, we would not have the two stages and be bringing all the wall-to-wall coverage. Thanks for supporting our mission, Intel, we really appreciate it. Give them a shout out. We've got Andy Jassy coming on for an exclusive at three o'clock on day three. Stay with us for more coverage, live in Vegas for re:Invent 2019. We'll be right back.
Phoummala Schmitt, Microsoft | Microsoft Ignite 2019
>> Narrator: Live from Orlando, Florida, it's theCUBE! Covering Microsoft Ignite. Brought to you by Cohesity. >> Good afternoon everyone and welcome back to theCUBE's live coverage of Microsoft Ignite, one of Microsoft's biggest shows of the year, with 26,000 people here in Orlando. I'm your host, Rebecca Knight, co-hosting alongside Stu Miniman. We are joined by Phoummala Schmitt. She is the Senior Cloud Advocate, Microsoft Azure Engineering. Thank you so much for coming on the show. >> Well thank you for having me. >> Rebecca: For coming back on the show. >> Yeah, last year we were here-- Well, actually, we were what, a month earlier last year? It's November. >> Rebecca: We were indeed, we were indeed. >> Hoping the weather was better, but still warm. >> Well we're not getting much fresh air, but we're going to talk today about cloud governance. This is something companies face as they move to the cloud, often as an experiment, and then suddenly it's live. How do you make sure that your governance is in order, and how do you help companies wrap their brains around getting things buttoned up? >> Part of it is enabling developers and operations. See, governance typically is a negative, right? Oh my gosh, governance! It's a road blocker. We have to stop thinking in that way and think of it as an enabler; instead of governance, they're guardrails. We put those guardrails in place in the beginning and enable our developers. Now you've got control and speed, because everything is about speed right now, because if you are not, you know, developing at speed, you're not at velocity. You're not meeting the business, and then developers are off doing their own thing, and then, oftentimes, when you're doing things really, really fast, you forget about the little things. Like leaving a port open, or you're doing a POC and you're like, oh, we'll come back and fix all that stuff later, let's just get this out the door. And then the next thing you know, you're like, oh, wait!
What happened here? >> Phoummala, it reminds me of a lot of things when you talk about rolling out DevOps. I need to think about things like security, governance, and compliance as part of what I'm doing, and if I'm going to be releasing code constantly, it's not something that I can go back to later, 'cause you're never going to catch up, you're always going to be, you know, N minus X behind what you're doing. So, organizationally, what do companies need to do to make sure that governance is taken care of just as part of the ongoing day-to-day activity and development? >> Well, building in checks and balances, right? So we put those guardrails in place. Let's start with infrastructure guardrails. Your ports, do those audits, just making sure that what you have on premises is the same in the cloud. Once you do that, that's like one checkbox you've done. And then there's the app development portion of it. That's where we're going to get developers thinking, let's build security into our application. It's going to make life a lot easier, like you said, than going back and trying to put, you know, new code in. And then when you're doing DevOps, which is just like a combination of everything, keeping governance in mind helps the flow of all those different transactions. Personally, I think DevOps is probably the hardest, in terms of just maintaining governance, because you do have different teams working together. You know, it's these different principles all coming together, but it comes down to doing things right. You know, doing what's right, ultimately, because at the end of the day, if there's something that's missing, the next thing you know, you're on the front page. Nobody wants to be on the front page. And it's those little things. Like, checking permissions, just making sure that we have the right identity access management.
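The port-audit guardrail Schmitt describes, checking that what's open in the cloud matches what was open on premises, boils down to a set comparison. A minimal sketch (the port lists are hard-coded for illustration; in practice they would come from a firewall export and an NSG audit, not from any specific Azure API):

```python
# Hypothetical guardrail check: flag cloud-exposed ports that were never
# open on premises. Inputs are illustrative, not pulled from a real audit.

def audit_ports(on_prem_open: set, cloud_open: set) -> set:
    """Return ports exposed in the cloud that were not open on premises."""
    return cloud_open - on_prem_open

on_prem = {443, 22}
cloud = {443, 22, 3389}  # e.g. RDP left open during a POC and forgotten

unexpected = audit_ports(on_prem, cloud)
print(sorted(unexpected))  # the "little things" the audit is meant to catch
```

Running a check like this on every deployment, rather than "coming back to fix it later," is the guardrail-instead-of-roadblock idea in miniature.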
And it's just throwing in some audits, just making sure our ports are closed, multifactor; you can audit and check, you know, your root accounts, your administrative accounts. Little things like that, just making sure that we have the proper authentication, multifactor, all that good stuff, and then you just start building upon that once you have a little bit of governance in play. >> Well, Phoummala, I think, you know, identity management's one of the real strengths that Microsoft has, you know? So, maybe give us a little viewpoint as to how that's gone from, you know, identity just about Outlook or Office 365 to, you know, today's environment where my users can be anywhere, my applications are everywhere, and I still need to make sure that, you know, those corporate guidelines and identity go with me wherever I am and whatever I'm doing? >> So from an Azure standpoint, for identity management we have Azure AD, we've got all that component, but when you're coming into Azure, we like to emphasize using RBAC, role-based access control. Let's just make sure that the people we're giving access to have access to what they really need. Building those roles out, and people can have multiple roles. I mean, it's as simple as that, right? We start off just defining, what's your job? Right, Stu, you've got a job, what are your roles? Let's just make sure we give you those roles and then we build upon that. If you need a little bit more, okay. And then you can give external users access as well, and you can give them roles, but just giving anybody full access to everything... Do you really need it? And it's the same thing with, you know, Office and email and SharePoint. So we're just taking those concepts from those applications and putting them into access into the Azure infrastructure. And then developers can actually build that into their applications, as well.
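The role-based model described above can be sketched in a few lines: users get roles, and roles carry only the permissions the job needs. The role and permission names here are made up for illustration; they are not actual Azure RBAC built-in roles.

```python
# Minimal RBAC sketch: least privilege by default.
# Role names and permission strings are hypothetical, not Azure's.

ROLES = {
    "reader":   {"vm.read"},
    "operator": {"vm.read", "vm.restart"},
    "owner":    {"vm.read", "vm.restart", "vm.delete", "role.assign"},
}

def can(user_roles: list, permission: str) -> bool:
    """A user may act if any one of their roles grants the permission."""
    return any(permission in ROLES[role] for role in user_roles)

stu = ["operator"]               # give Stu only what his job requires
print(can(stu, "vm.restart"))    # allowed: part of the operator role
print(can(stu, "vm.delete"))     # denied: no blanket full access
```

The "do you really need it?" question maps to starting everyone at the narrowest role and adding grants only when a job demands them.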
>> One of the things that we keep talking a lot about, because Satya Nadella was talking a lot about it, is trust, and that is really the bedrock of good governance and making sure that people have confidence in your systems and that things are going to be done right, as you say. How much does that play into your work with customers and clients, in terms of there's just an inherent trust right now that Microsoft is worthy of this and it is sort of the grown-up in the room when it comes to big technology? >> Trust is huge. If you have trust in us as a customer, that's, you know, that's amazing. We're going to give you the tools, we're going to give you the features, so that you and your customers have trust. You know, Azure Policy. I mean, that's just one component of governance, and it's-- Policy isn't about complete control, it's about auditing. Just checking, right? Checks and balances, 'cause that's really what governance is. Those checks and balances make sure that your operations are meeting your business needs. So if we can just do those little checks, simple trust check marks, it goes a long way. And then we've got Azure Blueprints, which is our governance at scale. So we've taken everything that we've learned about governance in general, those different tools that we had, and now you're just going to stamp it. Every time you build a new subscription you're just going to roll out governance, and it's just-- I don't want to say as easy as a button, but it sort of is, right? You can do it through the portal. And everything that you've built as a team, the roles that you've created, the policies you've submitted, from your audit checks to controlling who creates what and where they can create it from, because, you know, GDPR, that's huge. We can actually help you control where your resources are being deployed. I mean, that's going to be huge for most organizations right now.
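The location guardrail mentioned for GDPR, restricting which regions resources can be deployed to, is the classic "allowed locations" pattern. A sketch of the idea (the rule dictionary loosely imitates the shape of an Azure Policy definition, but both the rule and the tiny evaluator below are illustrative, not part of any Azure SDK):

```python
# Hypothetical policy sketch: deny deployments outside approved regions.
# Region names and the evaluator are assumptions for illustration only.

allowed_locations = ["northeurope", "westeurope"]

policy = {
    "if": {"field": "location", "notIn": allowed_locations},
    "then": {"effect": "deny"},
}

def evaluate(location: str) -> str:
    """Return the policy effect for a requested deployment location."""
    violates = location not in policy["if"]["notIn"]
    return policy["then"]["effect"] if violates else "allow"

print(evaluate("westeurope"))  # inside the approved regions
print(evaluate("eastus"))      # blocked before it ever deploys
```

Stamping a rule like this onto every new subscription, rather than auditing after the fact, is the "governance at scale" point: the checkbox is enforced at creation time.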
So, knowing that we have the right tools in place for you to run your business, that's trust. >> Phoummala, give us a little bit of a walk around the show in your shoes. You're speaking at the show, you're hosting people on Channel 9, you're behind the scenes helping a lot of people. Give us what you're most looking forward to, what you're most looking to share at the event this week. >> Most looking forward to just meeting all my friends that I've made throughout the years, but meeting new friends and, of course, there's puppies with therapy dogs, as well. Thursday I'm doing several Channel 9 live interviews, and I've got two sessions-- Well, diversity sessions, which, typically, I do technical sessions, but diversity sessions I feel are very, very important. We talk about stuff nobody really wants to talk about all the time, right? We actually have a parenting in tech session tomorrow. How do we handle being a parent and working full time? And then I'm talking about the career journey, and those two actually kind of go together, in some way, I mean, I'm-- And everyone's been asking me how I'm doing, my son just went off to boot camp, and so as a parent, I felt it was really important to be part of that session and talk about how, how do I handle it? Where I wasn't here yesterday, the first day, I was off sending my son off, starting his life, his new career, and my career, you know, has gone on for several years, but it's a new change now for me, and balancing that, that FOMO, right? Fear of missing out, like everyone's at work and I have to be here with my son. There is an adjustment, and a lot of parents have actually reached out to me and said, how do you handle that? So there's several of us speaking tomorrow; we're going to talk to the attendees, and here are some tips to how we do it. Especially with our traveling schedules.
>> Well, I'm interested to hear, because we had another guest who was talking about stress, I mean, I think it was your best friend, Teresa Miller. >> Yes! >> Talking about stress as endemic to this high-stress, fast-paced industry. Where, as you said, there's a lot of demands on your time, a lot of demands on your travel schedule, and really a push for excellence at all times. How is it to be a hard-driving professional and also want to make time for your family, because your kids matter, of course? >> It is-- There's a balance. So the tech career includes that balance. We always want more in that career, right? We all do, but sometimes we have to step back. We've got to play the game a little bit, you know? You can't always have everything all at once, and I've learned that. So tomorrow's session's about sharing what I've gone through, you know, as a parent, as a woman in tech. It's been a tough journey, but it's been fulfilling. So I work for Microsoft now and here's what I've done. I've made some bad mistakes, I've made some, you know, some good choices, but overall, there's been a balance. There's been a give-and-take I've had to do, and I feel like the journey I've been through could be helpful for others. I've had a lot of people ask me, especially about career journeys now with the cloud, it's very, very scary. And a lot of people are worried, will I still have a job? My job transitions, what do I do? And I'm like, let's talk about this. I went through the same thing, I mean, Exchange, Exchange servers. Most people don't deploy Exchange anymore. It's Office 365. So I went through that several years ago, that transition, where do I go next? 'Cause I know I really don't have that much of a life anymore. Like the AS/400 engineers, right? >> And diversity is another, of course, hot-button issue in the technology industry. There is a dearth of women, a dearth of underrepresented groups and LGBTQ people.
How are you as someone who is a woman of color navigating these thorny issues and helping the next generation come up and to create a different technology industry for the future? >> So it's tough. I navigate through with a lot of candles, a lot of wine. (the ladies laugh) With friends, I've got a great support system, but I strongly believe in paying it forward. There's a lot of stuff I do behind the scenes a lot of people do not know. A lot of forwarding of hey, this person is really good, you know, in this space, you might want to speak with them. I, Tech Field Day, I'm sure you all know the great people over there. I've forwarded a lot of names over there. I feel like I'm-- I've come up the ladder or the elevator, it's time to push that button, send it back down to help others and I've been doing it a lot more, I've always felt it, but now I feel I'm in a position that I can really help others and it just feels really good when someone I've helped Tweets about it. Obviously they're not going to mention my name, but when I see them being so happy, it just makes me feel really, really good, like, wow, you know, you just feel-- Like your heart just fills up like okay. This is good. >> Rebecca: Contributing to their success. >> Yeah and it becomes addictive almost. Like, how can I, you know-- If I see an opportunity to help somebody I will, I'll help 'em, anyway I can. >> So you are an avid blogger and you are considered one of the top 50 tech influencers and thought leaders you should follow. So congratulations on that. >> Phoummala: Thank you. >> I'm interested to hear, how do you keep up on the news and what do you read, who do you talk to, what do you pay attention to? And tell our viewers, too, because they want to know. >> Twitter is probably my source of everything now, 'cause it's quick, but pretty much just keeping up on the internet. Honestly, it's a lot, between my travel schedule, my family, it is almost impossible to stay up to date on everything. 
And I've learned that I can't. I just-- 'Cause I don't want to get burned out. I've been burned out several times, and now I just take one day at a time. Oh, there was something that was announced, I didn't hear about it, and someone said something, I'm like, oh, okay, oh, that's cool. I'll read up on that later. But I don't feel like I need to know everything all at once. I think when you get to a certain place you're just comfortable knowing what you know, and, you know, I'll read the news when I get home. You know, something like that, where you're-- You've got to be at that place where you're comfortable and not always feeling like I have to know everything, 'cause we're humans, we can't know everything all at once. >> And as we've talked about, there has been, talking about not being able to keep up with everything, this conference, Microsoft Ignite. So many new product announcements, new buzzwords, new strategies that are all washing over us. What has been most interesting to you, most exciting? Who have you talked to? What sessions have you seen that have, sort of, sparked your interest the most? >> Azure Arc. Now, I'm just reading into it, I haven't gotten real deep into it, but from what I know, from what I've seen, I like it. I like it a lot. When we think about the cloud, it's multicloud, you know, it is, right? Every organization is dipping their toes into just about everything, and Azure Arc is giving that opportunity to our customers, to be able to say, hey, we know you're in the cloud, in different clouds, here's a view into it. And, you know, you're able to manage these environments and see what's going on. Because that is the future, and... It's a hybrid, multicloud. I think that's going to be my word, you know. Hybrid-multi, because we're in everything. I expect every organization to be in a little bit of everything, because it's... It's like, you know, your personal lives, right? You're in that little bit of everything.
It makes it more dynamic, and I just don't think one thing is going to be, like, you know, all an organization is doing. I truly believe everyone's meant to dip their toes in a little bit of everything. They'll have one defined set of, here we're just going to use this one cloud, or this one's servers, but for the most part, they are going to dabble. And we're-- Azure Arc is giving customers the opportunity to manage those environments where they've decided to dabble a little bit, or because of business needs, they need to be in different environments. >> Exactly, renaissance organizations. >> Phoummala: Yeah. >> I love it. Phoummala, thank you so much for coming on theCUBE. Always a pleasure having you. >> Thank you for having me. >> I'm Rebecca Knight for Stu Miniman, stay tuned for more of theCUBE's live coverage of Microsoft Ignite. (theCUBE theme song)
VMware 2019 Preview & 10 Year Reflection
>> From the Silicon Angle Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. (upbeat music) >> Hello everybody, this is Dave Vellante with Stu Miniman, and we're going to take a look back at ten years of theCUBE at VMworld and look forward at what's coming next. So, as I say, this is theCUBE's 10th year at VMworld, that's VMworld, of course, 2019. And Stu, if you think about the VMware of 2010, when we first started, it's a dramatically different VMware today. Let's look back at 2010. Paul Maritz was running VMware; he set forth the vision of the software mainframe last decade. Well, what does that mean, software mainframe? Highly integrated hardware and software that can run any workload, any application. That is the gauntlet that Tucci and Maritz laid down. A lot of people were skeptical. Fast forward 10 years, they've actually achieved that, I mean, essentially, it is the standard operating system, if you will, in the data center, but there's a lot more to the story. But you remember, at the time, Stu, it was a very complex environment. When something went wrong, you needed guys with lab coats to come in and figure out, you know, what was going on, the I/O blender problem, storage was a real bottleneck. So let's talk about that. >> Yeah, Dave, so much. First of all, hard to believe, 10 years. You know, think back to 2010, it was my first time being at VMworld, even though I started working with VMware back in 2002 when it was, like, you know, a 100-, 150-person company. Remember when vMotion first launched. But that first show that we went to, Dave, was in San Francisco, and most people didn't know theCUBE, heck, we were still figuring out exactly what theCUBE would be, and we brought in a bunch of our friends that were doing the CloudCamps in Silicon Valley, and we were talking about cloud.
And there was this gap that we saw between, as you said, the challenges we were solving with VMware, which was fixing infrastructure, storage and networking had been broken, and how were we going to make sure that that worked in a virtual environment even better? But there were the early thought leaders that were talking about that future of cloud computing, which, today in 2019, looks like we had a good prediction. And, of course, where VMware is today, we're talking all about cloud. So, so many different eras and pieces and research that we did, you know, hundreds and hundreds of interviews that we've done at that show, it's definitely been one of our flagship shows and one of our favorite for guests and ecosystems and so much that we got to dig into at that event. >> So Tod Nielsen, who was the President and probably COO at the time, talked about the ecosystem. For every dollar spent on a VMware license, $15 was spent on the ecosystem. VMware was a very, even though they were owned by EMC, they were very, sort of, neutral to the ecosystem. You had what we called the storage cartel. It was certainly EMC, you know, but NetApp was right there, IBM, HP, you know, Dell had purchased EqualLogic, HDS was kind of there as well. These companies were the first to get the APIs, you remember, the VASA VAAI. So, we pushed VMware at the time, saying, "Look, you guys got a storage problem." And they said, "Well, we don't have a lot of resources, "we're going to let the ecosystem solve the problem, "here's an API, you guys figure it out." Which they largely did, but it took a long time. The other big thing you had in that 2010 timeframe was storage consolidation. You had the bidding war between Dell and HP, which, ultimately, HP, under Donatelli's leadership, won that bidding war and acquired 3PAR >> Bought 3PAR >> for 2.4, 2.5 billion, it forced Dell to buy Compellent. Subsequently, Isilon was acquired, Data Domain was acquired by EMC. 
So you had this consolidation of the early 2000s storage startups and then, still, storage was a major problem back then. But the big sea change was, two things happened in 2012. Pat Gelsinger took over as CEO, and VMware acquired Nicira, beat Cisco to the punch. Why did that change everything? >> Yeah, Dave, we talked a lot about storage, and how, you know, the ecosystem was changing this. Nicira, we knew it was a big deal. When I, you know, I talked to my friends that were deep in networking and I talked with Nicira and was majorly impressed with what they were doing. But this heterogeneous, and what now is the multi-cloud environment, networking needs to play a critical role. You see, you know, Cisco has clearly targeted that environment and Nicira had some really smart people and some really fundamental technology underneath that would allow networking to go just beyond the virtual machine where it was before, the vSwitch. So, you know, that expansion, and actually, it took a little while for, you know, the Nicira acquisition to run into NSX and that product to gain maturity, and to gain adoption, but as Pat Gelsinger has said more recently, it is one of the key drivers for VMware, getting them beyond just the hypervisor itself. So, so much is happening, I mean, Dave, I look at the swings as, you know, you said, VMware didn't have enough resources, they were going to let the ecosystem do it. In the early days, it was, I chose a server provider, and, oh yeah, VMware kind of plays in it. So VMware really grew how much control and how much power they had in buying decisions, and we're going through more of that change now, as to, as they're partnering we're going to talk about AWS and Microsoft and Google as those pieces. And Pat driving that ship. 
The analogy we gave is, could Pat do for VMware what Intel had done for a long time, which is, you have a big ecosystem, and you slowly start eating away at some of that other functionality without alienating that ecosystem. And to Pat's credit, it's actually something that he's done quite well. There's been some ebbs and flows, there's pushback in the community. Those that remember things like the "vTax," when they rolled that out. You know, there's certain features that the rolled into the hypervisor that have had parts of the ecosystem gripe a little bit, but for the most part, VMware is still playing well with the ecosystem, even though, after the Dell acquisition of EMC, you know, we'll talk about this some more, that relationship between Dell and VMware is tighter than it ever was in the EMC days. >> So that led to the Software-Defined Data Center, which was the big, sort of, vision. VMware wanted to do to storage and networking what it had done to compute. And this started to set up the tension between with VMware and Cisco, which, you know, lives on today. The other big mega trend, of course, was flash storage, which was coming into play. In many ways, that whole API gymnastics was a Band-Aid. But the other big piece if it is Pat Gelsinger was much more willing to integrate, you know, some of the EMC technologies, and now Dell technologies, into the VMware sort of stack. >> Right, so Dave, you talked about all of those APIs, Vvols was a huge multi-year initiative that VMware worked on and all of the big storage players were talking about how that would allow them to deeply integrate and make it virtualization-aware storage your so tense we come out on their own and try to do that. 
But if you look at it, vVols was also what enabled VMware to do vSAN, and that is a little bit of how they can try to erode some of the storage piece, because vSAN today has the most customers in the hyperconverged infrastructure space and continues to grow, but they still have those storage partnerships. It didn't eliminate them, but it definitely adds some tension.
Where I look at that, back in 2013, there was a huge gap between what VMware was doing on the infrastructure side and what Cloud Foundry was doing on the application modernization standpoint, they had bought the Pivotal Labs piece to help people understand new programming models and everything along those lines. Today, in 2019, if you look at where VMware is going, the changes happening in containerization, the changes happening from the application down, they need to come together. The Achilles heel that I have seen from VMware for a long time is that VMware doesn't have enough a tie to or help build the applications. Microsoft owns the applications, Oracle owns the applications. You know, there are all the ISVs that own the applications, and Pivotal, if they bring that back into VMware it can help, but it made sense at the time to kind of spin that out because it wasn't synergies between them. >> It was what I called at the time a bunch of misfit toys. And so it was largely David Goulden's engineering of what they called The Federation. And now you're seeing some more engineering, financial engineering, of having VMware essentially buy another, you know, Dell Silver Lake asset, which, you know, drove the stock price up 77% in a day that the Dow dropped 800 points. So I guess that works, kind of funny money. The other big trend sort of in that mid-part of this decade, hyperconverged, you know, really hit. Nutanix, who was at one point a strong partner of both VMware and Dell, was sort of hitting its groove swing. Fast forward to 2019, different situation, Nutanix really doesn't have a presence there. You know, people are looking at going beyond hyperconverged. So there's sort of the VMware ecosystem, sort of friendly posture has changed, they point fingers at each other. VMware says, "Well, it's Nutanix's fault." Nutanix will say it's VMware's fault. 
>> Right, so Dave, I pointed out, the Achilles heel for VMware might be that they don't have the closest tie to the application, but their greatest strength is, really, they are really the data center operating system, if you will. When we wrote out our research on Server SAN was before vSAN had gotten launched. It was where Nutanix, Scale Computing, SimpliVity, you know, Pivot3, and a few others were early in that space, but we stated in our research, if Microsoft and VMware get serious about that space, they can dominate. And we've seen, VMware came in strong, they do work with their partnerships. Of course, Dell, with the VxRail is their largest solution, but all of the other server providers, you know, have offerings and can put those together. And Microsoft, just last year, they kind of rebranded some of the Azure Stack as HCI and they're going strong in that space. So, absolutely, you know, strong presence in the data center platform, and that's what they're extending into their hybrid and multi-cloud offering, the VMware Cloud Solutions. >> So I want to get to some of the trends today, but just real quick, let's go through some of this. So 2015 was the big announcement in the fall where Dell was acquiring EMC, so we entered, really, the Dell era of VMware ownership in 2016. And the other piece that happened, really 2016 in the fall, but it went GA 2017, was the announcement AWS and VMware as the preferred partnership. Yes, AWS had a partnership with IBM, they've subsequently >> VMware had a partnership >> Yeah, sorry, VMware has a partnership with IBM for their cloud, subsequently VMware has done deals with Google and Microsoft, so there's, we now have entered the multi-cloud hybrid world. VMware capitulated on cloud, smart move, cleaned up its cloud strategy, cleaned that AirWatch mess. AWS also capitulated on hybrid. 
It's a term that they would never use, they don't necessarily use it a lot today, but they recognize that On Prem is a viable portion of the marketplace. And so now we've entered this new era of cloud, hybrid cloud; containers is the other big trend. People said, "Containers are going to really hurt VMware." You know, the jury's still out on that, VMware sort of pushes back on that. >> And Dave, just to put a point on that, you know, everybody, including us, spent a lot of time looking at this VMware Cloud on AWS partnership, and what does it mean, especially, to the parent, you know, Dell? How do they make that environment? And you've pointed out, Dave, that while VMware gets in those environments and gives themselves a very strong cloud strategy, AWS is the key partner, but of course, as you said, Microsoft Azure, Google Cloud, and all the server providers, we have a number of them including CenturyLink and Rackspace that they're partnering with, but we had to wait a little while before Amazon, when they announced their Outposts solution, VMware is a critical software piece, and you've got two flavors of the hardware. You can run the full AWS stack, just like what they're running in their data center, but the alternative, of course, is VMware software running on Dell hardware. And we think that if VMware hadn't come in with a strong position with Amazon and their 600,000 customers, we're not sure that Amazon would have said, "Oh yeah, hey, you can run that same software stack "that you're running, but run some different hardware." So that's a good place for Dell to get in the environment, it helps kind of close out that story of VMware, Dell, and AWS and how the pieces fit together. >> Yeah, well so, by the way, earlier this week I privately mentioned to a Dell executive that one of the things I thought they should do was fold Pivotal into VMware. By the way, I think they should go further.
I think they should look at RSA and Dell Boomi and SecureWorks, make VMware the mothership of software, and then really tie in Dell's hardware to VMware. That seems to me, Stu, the direction that they're going to try to gain an advantage on the balance of the ecosystem. I think VMware now is in a position of strength with, what, 5 or 600,000 customers. It feels like it's less ecosystem friendly than it used to be. >> Yeah, Dave, there's no doubt about it. HPE and IBM, who were two of the main companies that helped with VMware's ascendancy, do a lot of other things beyond VMware. Of course, IBM bought Red Hat, it is a key counterbalance to what VMware is doing in the multi-cloud. And Dave, to your point, absolutely, if you look at Dell's cloud strategy, their number one offering is VMware, VMware Cloud on Dell, with the Project Dimension piece. All of these pieces do line up. I'll say, some of those pieces, absolutely, I would say, make sense to kind of pull in and gel together. I know one of the reasons they keep the security pieces at arm's length is just, you know, when something goes wrong in the security space, and it's not a question of if, it's a question of when, they do have that arm's length to be able to keep that out and be able to remediate a little bit when something happens.
So those 600,000 customers, VMware wants to be the group that educates you on containerization, Kubernetes, you know, how to build these new environments. For, you know, a lot of customers, it's attractive for them to just stay. "I have a relationship, "I have an enterprise licensing agreement, "I'm going to stay along with that." The question I would have is, if I want to do something in a modern way, is VMware really the best partner to choose from? Do they have the cost structure? A lot of these environments are set up, you know, open source based, or I can work with my public cloud providers there, so why would I partner with VMware? Sure, they have a lot of smart people and they have expertise and we have a relationship, but what differentiates VMware, and is it worth paying for that licensing that they have, or will I look at alternatives? But as VMware grows their hybrid and multi-cloud deployments they absolutely are on the short list of, you know, strategic partners for most customers. >> The other big thing that we're watching is multi-cloud. I have said over and over that multi-cloud has largely been a symptom of multi-vendor. It's not necessarily, to date anyway, been a strategy of customers. Having said that, issues around security, governance, compliance have forced organizations and boards to say, "You know what, we need IT more involved, "let's make multi-cloud part of our strategy, "not only for governance and compliance "and making sure it adheres to the corporate edicts, "but also to put the right workload on the right cloud." So having some kind of strategy there is important. Who are the players there? Obviously VMware, I would say, right now, is the favorite because it's coming from a position of strength in the data center. Microsoft with its software estate, Cisco coming at it from a standpoint of network strength. Google, with Anthos, that announcement earlier this year, and, of course, Red Hat with IBM.
Who's the company that I didn't mention in that list? >> Well, of course, you can't talk about cloud, Dave, without talking about AWS. So, as you stated before, they don't really want to talk about hybrid, hey, come on, multi-cloud, why would you do this? But any customer that has a multi-cloud environment, they've got AWS. And the VMware-AWS partnership is really interesting to watch. It will be, you know, where will Amazon grow in this environment as they find their customers are using multiple solutions? Amazon has lots of offerings to allow you to leverage Kubernetes, but, for the most part, the messaging is still, "We are the best place for you, "if you do everything on us, "you're going to get better pricing "and all of these environments." But as you've said, Dave, we never get down to that homogeneous, you know, one vendor solution. It tends to be, you know, IT has always been this heterogeneous mess and you have different groups that purchase different things for different reasons, and we have not seen, yet, public cloud solving that for a lot of customers. If anything we often have many more silos in the clouds than we had in the data center before. >> Okay. Another big story that we're following, big trend, is the battle for networking. NSX, the software networking component, and then Cisco, who's got a combination of, obviously, hardware and software with ACI. You know, Stu, I got to say, Cisco is a very impressive company. You know, 60+% market share, being able to hold that share for a long time. I've seen a lot of companies try to go up against Cisco. You know, the industry's littered with failures. It feels, however, like NSX is a disruptive force that's very hard for Cisco to deal with in a number of dimensions. We talked about multi-cloud, but networking in general. Cisco's still a major player, still, you know, owns the hardware infrastructure, obviously layering in its own software-defined strategy.
But that seems to be a source of tension between the two companies. What's the customer perspective? >> Yeah, so first of all, Dave, Cisco, from a hardware perspective, is still going strong. There are some big competitors. Arista has been doing quite well at getting into, especially, high performance, high speed environments, you know, Jayshree Ullal and that team, you know, a very impressive public company that's doing quite well. >> Service providers that do really well there. >> Absolutely, but, absolutely, software is eating the world and it is impacting networking. Even when you look at Cisco's overall strategy, in the future Cisco is not a networking company, they are a software company. The whole DevNet, you know, group that they have there is helping customers modernize, what we were talking about with Pivotal. Cisco is going there and helping customers create those new environments. But from a customer standpoint, they want simplicity. If VMware is a big piece of my environment, I've probably started using NSX, NSX-T, some of these environments. As I go to my service providers, as I go to multi-cloud, that NSX piece inside my VMware Cloud Foundation starts to grow. I remember, Dave, a few years back, you know, Pat Gelsinger got up on a stage and was like, "This is the biggest collection of network administrators that we've ever seen!" And everybody's looking around and they're like, "Where? "We're virtualization people. "Oh, wait, just because we've got vNICs and vSwitches "and things like that." There still is a gap between the kind of hardcore networking people and the software side. But just like we see on storage, Dave, it's not like vSAN, despite its thousands and thousands of customers, is the dominant player in storage. It's a big player, it's a great revenue stream, and it is expanding VMware beyond their core vSphere solutions. >> Back to Cisco real quickly.
One of the things I'm very impressed with at Cisco is the way in which they've developed infrastructure as code with the DevNet group, how CCIEs are learning Python, and that's a very powerful sort of trend to watch. The other thing we're watching is VMware-AWS. How will it affect spending, you know, near-term, mid-term, long-term? Clearly it's been a momentum, you know, tailwind, for VMware today, but the question remains, long-term, where will customers place their bets? Where will the spending be? We know that cloud is growing dramatically faster than On Prem, but it appears, at least in the near- to mid-term, for one, two, maybe three more cycles, maybe indefinitely, that the VMware-AWS relationship has been a real positive for VMware. >> Yeah, Dave, I think you stated it really well. When I talked to customers, they were a bit frozen a couple of years ago. "Ah, I know I need to do more in cloud, "but I have this environment, what do I do? "Do I stay with VMware, do I have to make a big change?" And what VMware did is they really opened things up and said, "Look, no, you can embrace cloud, and we're there for you. "We will be there to help be that bridge to the future, "if you will, so take your VMware environment, "do VMware cloud in lots of places, "and we will enable that." What we know today, the stat that we hear all the time, the old 80/20 we used to talk about was 80% keeping the lights on, now the 80% we hear about is, there's only 20% of workloads that are in public cloud today. It doesn't mean that that other 80% is going to flip overnight, but if you look over the next five to ten years, it could be a flip from 80/20 to 20/80. And as that shift happens, how much of that estate will stay under VMware licenses? Because the day after AWS made the announcement of VMware cloud on AWS, they offered some migration services. So if you just want to go natively on the public cloud, you can do that.
And Microsoft, Google, everybody has migration services, so use VMware for what I need to, but I might go more native cloud for some of those other environments. So we know it is going to continue to be a mix. Multi-cloud is what customers are doing today, and multi- and hybrid-cloud is what customers will be doing five years from now. >> The other big question we're watching is Outposts. Will VMware and Outposts get a larger share of wallet as a result of that partnership at the expense of other vendors? And so, remains to be seen, Outposts grabbed a lot of attention, that whole notion of same control plane, same hardware, same software, same data plane On Prem as in the Data Center, kind of like Oracle's same-same approach, but it's seemingly a logical one. Others are responding. Your thoughts on whether these two companies will dominate, or the industry will respond, or an equilibrium will emerge. >> Right, so first of all, that full same-same full stack has been something we've been talking about now, feels like for 10 years, Dave, with Oracle, IBM had a strategy on that, and you see that, but one of the things where VMware has real strength, what they have over two decades of experience in, is making sure that I can have a software stack that can actually live in heterogeneous environments. So in the future, if we talk about if Kubernetes allows me to live in a multi-cloud environment, VMware might be able to give me some flexibility so that I can move from one hardware stack to another as I move from data centers to service providers to public clouds. So, absolutely, you know, one to watch. And VMware is smart. Amazon might be their number one partner, but they're lining up everywhere. When you see Sanjay Poonen up on stage with Thomas Kurian at Google Cloud talking about how Anthos in your data center very much requires VMware. You see Satya Nadella up on stage talking about these kinds of VMware partnerships.
VMware is going to make sure that they live in all of these environments, just like they lived on all of the servers in the data center in the past. >> The last two pieces that I want to touch on, and they're related, are: as a result of Dell's ownership of VMware, are customers going to spend more with Dell? And it's clear that Dell is architecting a very tight relationship. You can see, first of all, Michael Dell putting Jeff Clarke in charge of everything Dell was brilliant, because, in a way, you know, Pat was kind of elevated as this superstar. And Michael Dell is the founder, and he's the leader of the company. So basically what he's created is this team of rivals. Now, you know, Jeff and Pat, they've worked together for decades, but very interesting. We saw them up on stage together, you know, last year, well I guess at Dell Technologies World, it was kind of awkward, but so, I love that tension. It's very clear to me that Dell wants to integrate more tightly with VMware. It's the clear strategy, and they don't really care at this point if it's at the expense of the ecosystem. Let the ecosystem figure it out themselves. So that's one thing we're watching. Related to that is long-term, are customers going to spend more of their VMware dollars in the public cloud? Come back to Dell for a second. To me, AWS is by far the number one competitor of Dell, you know, that shift to the cloud. Clearly they've got other competitors, you know, NetApp, Huawei, you know, on and on and on, but AWS is the big one. How will cloud spending affect both Dell and AWS long-term? The numbers right now suggest that cloud's going to keep growing, a $35, $40 billion run-rate company growing at 40% a year, whereas On Prem stuff's growing, you know, at best, single digits. So that trend really does favor the cloud guys. I talked to a Gartner analyst who tracks all this stuff. I said, "Can AWS continue to grow? It's so big." He said, "There's no reason they can't keep growing.
"The market's enormous." I tend to agree, what are your thoughts? >> Yeah, first of all, on the AWS point, absolutely, I agree, Dave. If you look at the overall IT spend, AWS is still a small piece. The lever that they have and the influence they have on the marketplace greatly outweigh the, you know, $30, $31 billion that they're at today, and absolutely they can keep growing. The one point, I think, from what we've seen, the best success that Dell is having is Dell and VMware really coming together, product development, go to market, the field is tightly, tightly, tightly aligned. The VxRail was the first real big push, and if they can do the same thing with VMware Cloud Foundation, you know, VMware cloud on Dell hardware, that could be a real tailwind for Dell to try to grow faster as an infrastructure company, to grow more like the software companies or even the cloud companies will. Because we know, when we've run the numbers, Dave, private cloud is going to get a lot of dollars, even as public cloud continues its growth. >> I think the answer comes down to a couple things. Because right now we know that 80% of the spend and install base is On Prem, 20% in the cloud. We're entering now the cloud 2.0, which introduces hybrid-cloud, On Prem, you know, connecting to clouds, multi-cloud, Kubernetes. So what it comes down to, to me Stu, is to what degree can Dell, VMware, and the ecosystem create that cloud experience in a hybrid world, number one? And number two, how will they be able to compete from a cost-structure standpoint? Dell's cost-structure is better than anybody else's in the On Prem world. I would argue that AWS's cost-structure is better, you know, relative to Dell, but that remains to be seen. But really those two things, the cloud experience and the cost-structure, can they hold on, and how long can they hold on to that 80%? >> All right, so Dave here's the question I have for you.
What are we talking about when we're talking about Dell plus VMware and even add in Pivotal? It's primarily hardware plus software. Who's the biggest in that multi-cloud space? It's IBM plus Red Hat, which you've stated emphatically, "This is a services play, and IBM has, you know, "just got, you know, services in their DNA, "and that could help supercharge where Red Hat's going "and the modernization." So is that a danger for Dell? If they bring in Pivotal, do they need to really ramp up that services? How do they do that? >> Yeah, I don't think it's a zero sum game, but I also don't think there are five winners. I think that the leader, VMware right now would be my favorite, I think it's going to do very well. I think Red Hat has got, you know, a lot of good market momentum, I think they've got a captive install base, you know, with IBM and its large outsourcing business, and I think they can do pretty well, and I think number three could do okay. I think the other guys struggle. But it's so early, right now, in the hybrid-cloud world and the multi-cloud world, that if I were any one of those five I'd be going hard after it. We know Google's got the dollars, we know Microsoft has the software estate, so I can see Microsoft actually doing quite well in that business, and could emerge as, maybe they're not a long-shot right now, but they could be a, you know, three to one, four to one leader that comes out as the favorite. So, all right, we got to go. Stu, thanks very much for your insights. And thank you for watching and listening. We will be at VMworld 2019. Three days of coverage on theCUBE. Thanks for watching everybody, we'll see you next time. (upbeat music)
Tad Brockway, Microsoft | VeeamON 2019
(upbeat music) >> Live From Miami Beach, Florida. It's theCUBE! Covering VeeamON 2019. Brought to you by Veeam! >> Welcome back to Miami everybody, this is theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host Peter Burris. Two days of wall to wall coverage of VeeamON 2019. They selected the Fontainebleau Hotel in hip, swanky Miami. Tad Brockway is here, he's the corporate VP of Azure Storage, good to see you! >> Yeah, great to see you, thank you for having me. >> So you're at work for a pretty hip company, Microsoft Azure is where all the growth is, 70 plus percent growth, and you're doing some cool stuff with storage. So let's get into it. Let's start with your role and kind of your swim lane if you will. >> So our team is responsible for our storage platform, that includes our disk service for IaaS virtual machines, our scale-out storage we call Azure Blob storage. We have support for files as well with a product called Azure Files, we support SMB based files, NFS based files, we have a partnership with NetApp, we're bringing what we call Azure NetApp Files, we're bringing NetApp ONTAP into our data centers, delivering that as a first-party service, we're pretty excited about that. And then a number of other services around those core capabilities. >> And that's really grown over the last several years, optionality is really the watch word there right, giving customers as many options, file, block, object, etc. How would you summarize the Azure Storage strategy? >> I like that point, optionality and really flexibility for customers to approach storage in whatever way makes sense. So there may be customers, there are customers who are developing brand new cloud based apps, maybe they'll go straight to object storage or blobs. There are many customers who have data sets and workloads on-prem that are NFS based and SMB based, they can bring those assets to our cloud as well.
We're the only vendor in the industry that has a server side implementation of HDFS. So for analytics workloads we bring file system semantics for those large scale HDFS workloads. We bring them into our storage environment so that the customer can do all of the things that are possible with file system hierarchies for organizing their data, use ACLs to protect their data assets, and that's a pretty revolutionary thing that we've done, but to your point though, optionality is the key and being able to do all of those things for all of those different access types, and then being able to do that for multiple economic tiers as well, from hot storage all the way down to our archive storage tier. >> And I shortchanged you on your title 'cause you're also responsible for media and edge, so that includes Azure Stack is that right? >> Right, so we have Azure Stack as well within our area, and DataBox and DataBox Edge; DataBox Edge and Azure Stack are our edge portfolio platforms. So customers can bring cloud based applications right into their on-prem environments. >> Peter, you were making a point this morning about the cloud and its distributed nature, can you make that point, I'd love to hear Tad's reaction and response. >> So Tad, we've been arguing in our research here at Wikibon SiliconANGLE for quite some time that the common parlance, the common concept of cloud, move everything to the center, was wrong. We've been saying this for probably four or five years, and we believe very strongly that the cloud really is a technology for further distributing data, further distributing computing, so that you can locate data proximate to the activity that it's going to support. But do so in a way that's coherent, comprehensive, and quite frankly confident.
That's what's been missing in the industry for a long time, so if you look at it that way, tell us a little bit about how that approach, that thinking, informs what you're doing with Azure, and specifically, one of the other challenges is how do data services then impact that? So maybe we'll come to that in a second I'm sure. >> Great insight by the way, I agree that the assumption had been that everything is going to move to these large data centers in the cloud, and I think that is happening for sure, but what we're seeing now is that there's a greater understanding of the longer term requirements for compute, and that there are a bunch of workloads that need to be in proximity to where the data is being generated and to where it's being acted upon, and there are tons of scenarios here. Manufacturing is an example, where we have one of our customers who's using our DataBox Edge product to monitor an assembly line. As parts come out of the assembly line, our DataBox Edge device is used with a camera system attached to it, AI inferencing to detect defects in the assembly line, and then stop the assembly line with very low latency, where a round trip to the cloud and back to do all the AI inferencing and then do the command and control to stop the assembly line would just be too much round trip time. So in many different verticals we're seeing this awareness that there are very good reasons to have compute and storage on-prem, and so that's why we're investing in Azure Stack and DataBox Edge in particular. Now you asked, well how does data factor into that, because it turns out in a world of IoT and basically an infinite number of devices over time, more and more data is going to be generated. That data needs to be archived somewhere, so that's where public cloud comes in, and all the elasticity and the scale economies of cloud.
But in terms of processing that data you need to be able to have a nice strong connection between what's going on in the public cloud and what's going on on-prem, so the killer scenario here is AI. Being able to grab data as it's being generated on-prem, write it into a product like DataBox Edge, DataBox Edge is a storage gateway device so you can map your cameras in the use case I mentioned, or for other scenarios you can route the data directly into a file share, an NFS, blob, or SMB file share, drop it into DataBox Edge, then DataBox Edge will automatically copy it over to the cloud, but allow for local processing to local applications as if it were, in fact it is local, running in a hot SSD NVMe tier, and the beautiful thing about DataBox Edge, it includes an FPGA device to do AI inference offloading. So this is a very modern device that intersects a whole bunch of things all in one very simple, self-contained unit. Then the data flows into the cloud where it can be archived permanently in the cloud, and then AI models can be updated using the elastic scale of cloud compute, then those models can be brought back on-prem for enhanced processing over time. So you can sort of see this virtuous cycle happening over time where the edge is getting smarter and smarter and smarter. >> So that's what you mean kind of when you talked about the intelligent cloud and the intelligent edge, I was going to ask you, and you just kind of explained it, and you can automate this, use machine intelligence to actually determine where the data should land and minimize human involvement. You talked about driving marginal cost of storing your data to zero, which we've always talked about doing from the standpoint of reducing or even eliminating labor cost through automation, but you've also got some cool projects to reduce the cost for storing a bit. >> Yeah. >> Maybe you could talk about some of those projects a little bit.
>> That's right, so, and that was mentioned in the keynote this morning, and so our vision is that we want for our customers to be able to keep the artifacts that they store on our cloud platform for thousands of years, and if you think about sort of the history of humanity that's not outside the question at all, in fact wouldn't it be great to have everything that was ever generated by humankind for the thousands of years of human history. We'll be able to do that with technology that we're developing, so we're investing in technology to store data virtually indefinitely on glass, as well as even in DNA, and by investing in those advanced types of storage, that is going to allow us to drive that marginal cost down to zero over time. >> Epigenetic storage systems. I want to come back to this notion of services though, and where the data's located. From our research, what we see is, as you said, data being housed proximate to where it's created and acted upon, but that increasingly businesses want the option to be able to replicate that, replicate's a strong word, it's a loaded word, but to be able to do something similar in some other location if the action is taking place in that location too. That's what Kubernetes is kind of about, and serverless computing and some of these other things are about. But it's more than just the data, it's the data, it's the data services, it's the metadata associated with that. How do you foresee, at Microsoft, what role they might play in this notion of a greater federation of data services that makes possible a policy driven backup, restore, data protection architecture that's really driven by what the business needs and where the action's taking place? Is that something you're seeing, and a direction that you see it going?
Yeah, absolutely, and so I'll talk conceptually about our strategy in that regard and where we see that going for customers, and then maybe we can come back to the Veeam partnership as well, 'cause I think this is all connected up. Our approach to storage, our view, is that you should be able to drop all your data assets into a single storage system like we talked about, that supports all the different protocols that are required, can automatically tier from very hot storage all the way down, over time, to glass and DNA, and we do all of that within one storage system, and then the movement across those different vertical and horizontal slices can all be done programmatically or via policy. So customers can make a choice in the near term about how they drop their data into the cloud, but then they have a lot of flexibility to do all kinds of things with it over time, and then with that we layer on Microsoft's whole set of analytics services. So all of our data and analytics products, they layer on top of this disaggregated storage system, so there can be late binding of the type of processing that's used, including AI to reason over that data, relative to where and how and when the data entered the platform. So there's sort of a modularity there, it really future-proofs the use of data over the long haul, we're really excited about that, and then those data assets can then be replicated, to use your term, to other regions around the globe as well using our backbone. So customers can use our network, our network is a customer's network, and then the way that docks into the partnership with Veeam is that, just as I mentioned in the keynote this morning, data protection is a use case that is just fundamental to enterprise IT.
Together with customers and with Veeam, we can make data protection better today using the cloud, with the work that Veeam has done in integrating with O365 and the integration from there into Azure storage. And then over time customers can start down this path with something that feels sort of mundane, that's just been a part of daily life in enterprise IT, and that becomes an entry point into our broader long-term data strategy in the cloud. >> But following up on this: if we agree that data is not going to be entirely centralized, but it's going to be more broadly distributed, and that there is a need for a common set of capabilities around data protection, which is a very narrowly defined term today and is probably going to evolve over the next few years. >> I agree with that. >> We think you're going to have a federated model for data protection that provides for local, autonomous data protection activities that are consistent with the needs of those local data assets, but under a common policy-based framework that a company like Veeam's going to be able to provide. What do you think? >> So first of all, a core principle of ours is that while we're creating these platforms for large data sets to move into Azure, the most important thing is that customers own their own data. So there's this balance that has to be reached in terms of cloud scale and the federated nature of cloud and these common platforms and ways of approaching data, while simultaneously making sure that customers and users are in charge of their own data assets. So those are the principles that we'll use to guide our innovation moving forward, and then I agree, I think we're going to see a lot of innovation when it comes to taking advantage of cloud scale, cloud flexibility and economics, but also empowering customers to take advantage of these things and do it on their terms. I think the future's pretty bright in that regard. >> And the operative term there is their terms.
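The federated model Dave sketches, a common policy framework evaluated locally wherever the data lives, can be made concrete with a small policy engine. The policy classes, field names, and region names below are all hypothetical, chosen only to illustrate the shape of the idea.

```python
# One shared policy table; each location applies it to the data it holds.
# Policy classes, field names, and regions are hypothetical.
POLICIES = {
    "critical": {"backup_every_hours": 1, "replicas": 3, "retention_days": 365},
    "standard": {"backup_every_hours": 24, "replicas": 2, "retention_days": 90},
}

def protection_plan(dataset, data_class, home_region, peer_regions):
    """Build a per-dataset plan: keep a local copy, replicate to peers."""
    policy = POLICIES[data_class]
    # Replicate to the nearest peers, up to the replica count minus the local copy.
    targets = peer_regions[: policy["replicas"] - 1]
    return {
        "dataset": dataset,
        "backup_every_hours": policy["backup_every_hours"],
        "store_in": [home_region] + targets,
        "retention_days": policy["retention_days"],
    }

plan = protection_plan("orders-db", "critical", "eastus", ["westus", "northeurope"])
print(plan["store_in"])  # ['eastus', 'westus', 'northeurope']
```

The key property is that the policy is defined once but evaluated locally, so each region can act autonomously while staying consistent with the common framework, which is the balance Tad describes between cloud scale and customer control.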
Obviously Microsoft has always had a large on-prem install base and the software estate, and so you've embraced hybrid, to use that term, with your strategies. You never sort of ran away from it, you never said everything's going to go into the cloud, and that's now evolving to the edge. And so my question is, what are the big gaps, not necessarily organizationally or process-wise, but from a technology standpoint, that the industry generally, and Microsoft specifically, have to fill to make that sort of federated vision a reality? >> I mean, we're just at the early stages of all this, for sure. In fact, as we talked about this morning, the notion of hybrid, which started out with use cases like backup, is rapidly evolving toward a more sort of modern, enduring view. I think in a lot of ways hybrid was used as this kind of temporary stop along a path to cloud, and back to our earlier discussion, by some I guess, maybe there's a debate you all are having there. But what we're seeing is the emergence of edge as an enduring location for compute and for data, and that's where the concept of intelligent edge comes in. So the model that I talked about earlier today is about extending on-prem data assets into the cloud, whereas intelligent edge is taking cloud concepts and bringing them back to the edge, in an enduring way. So it's pretty neat stuff. >> And a big part of that is much of the data, if not most of the data, the vast majority even, might stay at the edge permanently, and of course you want to run your models up in the cloud. >> That's right, at least for real-time processing. >> Right, you just don't have the time to do the round trip. Alright Tad, I'll give you the last word on Azure, direction, your relationship with Veeam, the conference, take your pick. >> Yeah, well thank you, it's great to be here.
As I mentioned earlier today, the partnership with Veeam, and then this conference in particular, is great because I really love the idea of solving a very real and urgent problem for customers today, and then helping them along that journey to the cloud, so that's one of the things that makes my job a great one. >> Well, we talk about digital transformation all the time on theCUBE. It's real, it's not just a buzzword, it can't happen without the cloud, but it's not all in the central location, it's extending now to other locations. >> It reflects your data assets. >> And where your data wants to live. So Tad, thanks very much for coming to theCUBE, it was great to have you. >> Thanks guys! >> Alright, keep it right there everybody, we'll be back with our next guest. This is VeeamON 2019 and you're watching theCUBE. (upbeat music)
Ronen Schwartz, Informatica | CUBEConversation, April 2019
>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hi everyone, welcome to this CUBE Conversation here in Palo Alto. I'm John Furrier, host of theCUBE, here in theCUBE studios. I'm joined by Ronen Schwartz, Senior Vice President and General Manager of Data Integration and Cloud Integration at Informatica, a CUBE alumni, been on multiple times, here to do a preview round. Informatica World coming up, as well as just a catch-up. Ronen, great to see you. >> Really happy to see you, you guys have a beautiful place here in Palo Alto. >> I know you live right around the corner, so I'm expecting to see you come on multiple times and come in and share your commentary, but I want to get your thoughts. It's been a couple of months since we last chatted, an interesting turn of events. If you go back just, you know, September of last year, and then you had Amazon re:Invent. They announced Outposts, multi-cloud starts hitting the scene, first it was hybrid, first it was all public cloud. But now the realization from customers is that this is now a full-blown cloud world. It's cloud operations, it's just public cloud for unlimited cloud-native activity, on-premise for existing workloads, and a complete re-architecture of the enterprise. >> Yes, and I think from re:Invent to Google Next just a week before, I agree with you. It's a world of hybrid and a world of multi-cloud. I think a lot of exciting announcements and a lot of changes. I think from my perspective what I see is that the Informatica customers are truly adopting cloud and hybrid, and as data is growing, as data is changing, the cloud is the place where they actually address this opportunity in the best way. >> So I know we've talked in the past. Your title is Data Integration, Cloud Integration. Obviously integration is the key point.
You're starting to see APIs going to a whole other level. With Google, they had acquired Apigee, which is an API marketplace, but with microservices and service meshes and Kubernetes momentum you're starting to see the advent of more programmability. This is a big trend, how is that impacting your world? Because at the end of the day you need the data. >> Yes, it actually means that you can do more things with the data in an easier way, and also it means that you can actually share it with more users within the enterprise. I think that especially the whole ability to use containers, and Kubernetes is a great example of how you can do it, is actually giving you unparalleled scale, as well as simplicity from the abstraction perspective. And it allows more and more developers to build more value from the data that they have. So data is actually at the core. Data is the foundation, and really a lot of this new technology allows you to build up from the data more valuable capabilities. I'm really happy that you're mentioning Apigee, because one of the things that Google and Informatica noticed together is the need for APIs to actually leverage data in a better way, and we struck a very strategic partnership that has gone into the market in the last few months, allowing every user of the Informatica iPaaS to basically publish APIs in a native experience from the Informatica iPaaS directly to Apigee, and vice versa: everything that you build in Informatica Cloud is basically automatically an API inside Apigee, so users get more value from data faster. >> So can you give an example? 'Cause I think this is one of the things we saw at Google as a tell sign, or the canary in the coal mine, whatever the trend parameter: end-to-end CI/CD pipelining, seamless execution in any environment, seems to be the trend.
What you're kind of getting at is this kind of cross-integration. Can you give an example of that Informatica Cloud to Apigee connection, of the benefit to the customer or a use case, and why that's important? >> Yes, definitely. So if I'm a retailer or a manufacturer, I'm actually looking to automate processes. There is nothing better than leveraging the iPaaS from Informatica to actually automate a process, anything from order-to-cash or inventory validation, or even a next-best recommendation coming from some AI in the backend. Once you have created this process, exposing it as an API is actually allowing multiple other services, multiple other capabilities, to very easily leverage that, right? So this is basically what we're doing. What an individual in the retailer is doing is actually defining this process of order-to-cash, and then publishing it as an API in one click. At that stage anybody, anywhere, can very very easily consume that API and basically use this process again and again. >> And that means what? Faster execution of application development? >> It means faster execution of application development. It also means consistency, and basically scale, so now you don't need to redevelop that. It's available as an API, you can reuse it again and again, so you do it in a consistent way. When you need to update, to change, to modernize this process, you modernize it once and use it again and again. >> Sorry to drill down on kind of a unique use case here, but this points to the integration challenges out there and the opportunities. You mentioned Google Next, Google Cloud. You've got a relationship with Amazon. This is part of your strategy for the ecosystem. This is critical: integration is becoming, as Amit Walia was saying, something you can compose.
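The "define a flow once, publish it as an API in one click, reuse it everywhere" pattern Ronen describes can be sketched with a toy registry standing in for the gateway. The path, payload shape, and `order_to_cash` logic are made up for illustration; nothing here is Informatica's or Apigee's actual interface.

```python
# A toy registry stands in for the API gateway; none of this is
# Informatica's or Apigee's real interface.
API_REGISTRY = {}

def publish(path):
    """'One-click' publish: register an integration flow under an API path."""
    def wrap(flow):
        API_REGISTRY[path] = flow
        return flow
    return wrap

@publish("/v1/order-to-cash")
def order_to_cash(order):
    """A stand-in order-to-cash flow: price the items and confirm the order."""
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {"order_id": order["id"], "total": total, "status": "confirmed"}

def call_api(path, payload):
    """Any consumer, anywhere, reuses the same flow through its published path."""
    return API_REGISTRY[path](payload)

result = call_api("/v1/order-to-cash",
                  {"id": 42, "items": [{"qty": 2, "price": 9.5}]})
print(result)  # {'order_id': 42, 'total': 19.0, 'status': 'confirmed'}
```

The consistency point Ronen makes falls out of the structure: because every consumer goes through the published path, modernizing the process means re-registering one function, and every caller picks it up unchanged.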
You have that foundation for the data and you compose your applications, but if you're going to have a lot of composition, you need to have integration points, and that's going to be either APIs or some sort of glue layer. This is huge, this is like the entire thesis of cloud architecture. >> Right, and the reality that our customers are facing is basically a world of multi-cloud: they will use a best-of-breed cloud for CRM, a best-of-breed cloud for ERP, as well as a best-of-breed cloud for their data warehouse, their databases, as well as their analytics, AI, et cetera. In that world, the only thing that is kind of common across these clouds is the data. And if you're actually able to allow the data to reside in the best place, but you keep the metadata managed centrally by software like the one Informatica is giving you, you are getting the best of breed of all of these offerings without actually paying a penalty for that. >> So you guys are in a lot of magic quadrants out there, in terms of categories of leadership, and focused on data from day one. As you talk about your ecosystem, can you explain what that means? Because you're also an ecosystem partner of cloud players, but you also have your own ecosystem. Talk about the ecosystem, how is it laid out? What's the update, what are some of the momentum points, can you share just an overview of how that's all happening? >> Yes, definitely. So when we're looking into our partnership with Microsoft Azure, with AWS, with GCP, we're not talking about just Informatica supporting the technologies that they build, we're talking about Informatica supporting the technologies that they're building as well as their ecosystem of partners. We're talking about an end-to-end solution that supports the entire ecosystem. What that actually translates to is Informatica building services that give a best-of-breed experience for users within this cloud environment, really giving you the full power of data management: integration, data quality.
Master data management, data security, data catalog, across all of these clouds. In a way you're right, we can look at it in the same way, as like we have an ecosystem, and in that ecosystem we're seeing a lot of strategic partners that are very very large, definitely all of these cloud-scale players are key partners for us and for our customers, but we're also seeing a huge amount of smaller, innovative vendors that are joining this ecosystem, and Informatica World on May 20th is a great place to come and actually see these vendors. We're actually showing for the first time our AI and cloud ecosystem in one place, and these vendors are coming and they're showing how they're leveraging Informatica technology to basically bring new value in AI, in machine learning, in analytics to their customers. If you ask me what is Informatica doing to help them, we're basically making the data available in the best way for their offering, and that kind of allows them to focus on their innovation rather than on how they work in the different places. >> Ronen, you got ahead of me on the Informatica World question, but you just brought it up, you're doing an innovation zone. Let's talk about Informatica World. Because again, this data, there's a lot of sessions, so you do the normal thing. We've covered it multiple years there. Integration's the key point. Why should someone come to Informatica World if they're a customer or a prospect? Now, you mentioned the AI zone. What's the core theme that you're going to be seeing there from your group and from the company?
>> Informatica World this year is an amazing place for people to come and see the latest that happens within the cloud and hybrid journey, a great place to actually see next-generation analytics and all the innovation there. It is a great place to see Customer 360 and master data management and how that can change your organization, as well as an amazing place to see data security and data privacy and a lot of other innovations around data. But I would actually say that as great as it is to see everything that Informatica can share with you, it is an even better place to see what our customers and our partners are sharing. And especially from a partnership perspective, at Informatica World 2019 you're actually going to see leaders from Google, you're going to see leaders from Microsoft, you're going to see leaders from AWS, the people that are leading the best data warehouses in the world, the best analytics in the world, as well as innovators like DataRobot and Databricks that are changing the world and are actually advancing technology very very fast. >> And the AI zone, there's a cloud and AI zone. I've seen them, I know it's here from the prep. What does that mean? AI's going to be hot, I think that's a big theme, getting clarity around it, as Amit kind of shared with us on a previous interview. AI's hot because automation kind of handles the blocking and tackling, but the value creation is going to come from using the data. And if it's not integrated, you can't get the data in. If it's not integrated, you can't leverage machine learning, so having access to data makes machine learning great. The machine learning gets great, AI is great. So tell us what's going on with it. Give a little sneak preview. >> It's actually amazing what we can do leveraging AI and machine learning today, right? I wake up in the morning and I say, Alexa, good morning, and I actually get back what's the weather and what's happening.
I'm getting into my car, Google is telling me how fast I'll get to the office or the first meeting. I left to come here and I knew exactly what's the best route to take. A lot of that is actually leveraging AI and machine learning. I think it's not a secret that the better your data is, the better the machine can learn from the data. And if your data is not good, then the learning can actually be really really bad. You know, sometimes I use the example of my kids: if their learning books are bad, there's no way that they can actually get to the right answer. The same with data, data is so critical. What we're seeing is basically data engineers, data operations, becoming a super strategic function to make AI and machine learning even possible. Your ability to collect enough data, to make sure that the data is ready and clean for AI and machine learning, is critical. And then once the AI and machine learning eventually contribute the automation, the decision making, the recommendation, you have to put it back into the data pipes so that you are actually able to leverage them to do the right thing. >> You know, I think you nailed this one. We've talked about this before, but I think it's more important than ever. Data cleansing or data cleaning was always an afterthought in the old data warehouse world, where, well, we're not getting the answers we wanted, so you kind of had to fail to figure out that the data sucks, so you had to get the data to be better. Now it's much more acute, in the sense that people realize that you need quality data, so there's now new capabilities to make sure there's a process for doing that on the front end, not on the back end. Talk about that dynamic, because this is something that is critical in the architecture, and how you think about data pipelining, data management, the things that you guys do, this is an important trend. Take a minute to explain that.
>> Yes, I totally agree with you, and I think that the rise of the importance of data quality is actually coming also as part of the pattern of data governance: making sure that the data we make available for our AI research, for analytics, for our executives and data workers, is really the right data is critical. To actually support that, what we are seeing is people defining data governance processes. What are the steps that the data needs to go through before it is actually available for the next step? And what is nice today is that it is not people that the data needs to go through. These are processes, automation, that can actually drive data quality. It goes from things that are very very basic, let's remove duplicate data, all the way to the fact that you actually identify anomalies in the data and you ask the right questions so that that data doesn't go in. >> Is this the kind of topic that people will hear about at Informatica World? >> Definitely, they will hear about how they can actually help the organization get the data right, so that machine learning, automation, and hypergrowth are actually possible. >> You're excited about this market, aren't you? >> Super excited. I mean, I think each and every one of us is going to see a lot of innovation coming out, and I consider myself lucky that data is actually at the center of all of this innovation and that we're actually able to help the customers and our partners be successful with that. >> Yeah, you and I were talking before you came on camera, I wish I was 23 again right now. This is a great time to be in tech, everything's coming together. You got unlimited compute, machine learning's rocking and rolling, there's all kinds of diverse areas to play in, it's kind of intoxicating to be in this environment, isn't it? >> I totally agree, and I will add one additional thing to the reasons: agility.
Like the fact that it all is available at your fingertips, and you can actually achieve so much with very little effort, is really really amazing. >> This composability really is the new developer modernization renaissance. It's happening. >> Yes, yes, and as we usually say, it all starts from the data. >> Okay, Ronen Schwartz, we're talking Informatica World, but getting an update on what's going on, because data integration, cloud integration, this is the number one activity people are spending their time on. You get it right, there's huge benefits. Ronen, thanks for coming in and sharing your insights, appreciate it. >> Hey, my pleasure. >> Okay, this is theCUBE, here for a CUBE Conversation in Palo Alto, California at theCUBE headquarters. I'm John Furrier, thanks for watching. (jazz music)
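The basic quality steps Ronen described earlier, remove duplicate data and flag anomalies before the data goes in, can be sketched as an automated gate in front of the pipeline. The record shape and the median-based outlier rule below are assumptions made for the sketch, not Informatica's actual data-quality logic.

```python
import statistics

def quality_gate(records, value_key, cutoff=3.5):
    """Deduplicate records, then hold outliers back for review.

    Returns (clean, quarantined): clean rows flow on to analytics and ML,
    quarantined rows are the 'ask the right questions' pile.
    """
    # Step 1: drop exact duplicates, keeping the first occurrence.
    seen, unique = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key not in seen:
            seen.add(key)
            unique.append(rec)

    # Step 2: flag values far from the median as anomalies
    # (a robust modified z-score; 3.5 is a conventional cutoff).
    values = [rec[value_key] for rec in unique]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    clean, quarantined = [], []
    for rec in unique:
        score = 0.0 if mad == 0 else 0.6745 * abs(rec[value_key] - med) / mad
        (quarantined if score > cutoff else clean).append(rec)
    return clean, quarantined

rows = [{"id": 1, "amount": 10}, {"id": 1, "amount": 10},   # duplicate
        {"id": 2, "amount": 12}, {"id": 3, "amount": 11},
        {"id": 4, "amount": 9000}]                          # anomaly
clean, held = quality_gate(rows, "amount")
print(len(clean), len(held))  # 3 1
```

The point of the structure is the one Ronen makes: the gate is a process, not a person, so it runs on every batch before anything reaches the machine learning downstream.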
Arvind Krishna, IBM | IBM Think 2019
>> Live from San Francisco, it's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Okay, and welcome to the live coverage here, theCUBE in San Francisco for IBM Think 2019, Dave Vellante here with Arvind Krishna, senior vice president of cloud and cognitive software at IBM, the man in charge of all the cloud products, cloud everywhere, AI anywhere. Great to see you, thanks for spending the time, I know you're super busy. >> I'm really happy to be here right now. >> So we talked at the Red Hat Summit last year. You essentially laid out the vision for microservices, Kubernetes, how this all was kind of coming together, then the Red Hat acquisition. And now you're seeing big news here at IBM Think, setting the stage here in San Francisco for AI anywhere, which is cognitive kind of all over the clouds, and then really clarity around cloud, multi-cloud strategy, end-to-end workloads all kind of tied together, on-premise and in the clouds. Super important for IBM. Explain and unpack that for us. What does it mean? >> Right, so I'm going to begin unpacking it from where I actually left off last year. So if I take just ten seconds: last year we talked a lot about how containerized platforms are going to become the future, that they'll be the fabric on which every enterprise is going to build their IT and their future. OK, we talked about that last year, and I think with the announced acquisition of Red Hat that gets cemented, and that'll go further once that closes. Now you take that, and you take it to the next level of value. So take Watson. Watson runs as a containerized set of services. If it's a containerized set of services, it can run on what we call Cloud Private. Cloud Private in turn runs on top of OpenShift. So then you say, wherever OpenShift runs, I can run this entire stack. Where does OpenShift run today? It runs on Amazon. It runs on the IBM cloud, and runs on Azure. It runs on your premise. So it's simple, simple.
I always like things that are simple. So Watson runs on Cloud Private, Cloud Private runs on OpenShift, and OpenShift runs on all these infrastructures I just mentioned. That gives you Watson anywhere. You want it close to your data? Run it on-prem. You want to run it on Azure? Run it there. You want to run it on the IBM cloud? You run it there. And hence that's the complete story. >> So it was more important for you to give customers choice than it was to keep Watson to yourself, to try to sell more cloud? >> I think that every company that survives long term learns that choice for a customer is really important, and forcing customers to do things only one way is, generally, in the long term a bad strategy. >> So from a customer standpoint, just to get the facts right on the hard news: Watson Anywhere. Now I can run Watson via containers and OpenShift, the things you mentioned, on AWS, on Microsoft Azure, and on the IBM cloud with Cloud Private. All that... >> And on premise. >> And on premise, all cohesively, end to end. >> Correct, in an identical way. Which means even if you do things in one place, and you build up in more than one place, you could go deploy a model in another place. It gives you that flexibility also. >> So I'm a customer, I say, wow, this sounds too crazy, it's too hard to do. I've tried all this multi-cloud stuff, got all this stuff. Why is it easier? How do you guys make this happen? What's the key secret sauce for pulling that end-to-end AI anywhere on multiple clouds, on premises, and through the workloads? >> Two levels. One, we go to a container infrastructure as that common layer that isolates out the bottom infrastructure from everything that runs on top. So going to the common services on Kubernetes and a container layer that is common across all these environments does the isolation of the bottom infrastructure. That's hard engineering, but we do that engineering. The second piece is we've taken the Watson set of capabilities and put them into just three pieces.
Watson Studio, Watson Machine Learning, and Watson OpenScale. And there you have the complete set that you need to run everywhere. So we have done that engineering as well. >> Congratulations, you get the cloud anywhere. I mean, it's essentially everything anywhere. Now you've got data everywhere, you've got cloud everywhere, cloud operations. Where do multi-cloud and hybrid fit in? Because now, if I can do AI anywhere via containerization, shouldn't I be able to run any workload on premise and in multiple clouds? >> So we fundamentally believe that, when I was here last time, we talked about the container fabrics, and I do believe that we need to get to the point where these can run anywhere. So you take the container fabric and you can go run that anywhere, right? So that's one piece of it. The next part of it is, but I now need to integrate. So I now need to bring in all my pieces. How do I integrate this application with another? It's the old problem of integration back again. So whether you want to use MQ, or you want to use Kafka, or you want to use one of these technologies, how do we get them to couple one workflow to another workflow? How do I get them to be secure? How do I get them to be resilient in the presence of crashes, in the presence of latency, and all that? So that's another big piece of the announcements we're making. You can take that complete set of integration technologies, and those can run anywhere on any cloud, again using the same pattern I described. I'm not going to go into that again. And on premise. So you can knit all of those together. >> Can you talk about the rationale for the Red Hat acquisition, specifically in the context of developers? IBM over the years has made, you know, many efforts to court developers.
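Arvind's point above, that the containerized stack runs "in an identical way" wherever the container platform runs, can be sketched as one workload descriptor rendered unchanged for every target. The manifest shape, service name, and target list are illustrative assumptions, not IBM's actual packaging.

```python
# Sketch of "build once, deploy identically anywhere": the same containerized
# service descriptor is rendered for every environment where the container
# platform runs. The manifest shape is illustrative, not IBM's real packaging.
TARGETS = ["aws", "azure", "ibm-cloud", "on-prem"]

def render_manifest(service, image, replicas):
    """The workload definition carries no environment-specific fields."""
    return {
        "kind": "Deployment",
        "name": service,
        "image": image,
        "replicas": replicas,
    }

# One build, many targets: the only per-target difference is where it lands.
deployments = {t: render_manifest("watson-service", "example/watson-service:1.0", 3)
               for t in TARGETS}

# Every environment receives exactly the same workload definition.
assert all(m == deployments["on-prem"] for m in deployments.values())
print(sorted(deployments))  # ['aws', 'azure', 'ibm-cloud', 'on-prem']
```

The design choice this illustrates is the one Arvind names: because the container layer isolates the bottom infrastructure, the workload definition can stay environment-free, and "deploy a model in another place" is just re-applying the same descriptor against a different cluster.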
Now, with the Red Hat acquisition, it's eight million developers. Talk about specifically the importance of developers and how that's changed your strategy, or enhanced your strategy. >> An enhancement, it's not really a change. I think we all acknowledge developers have always been important and will remain important. I mean, IBM has done a great job, I think, over the last twenty years in helping create the whole developer ecosystem, for example around Java. We were a very big piece of that, not the only participant in there, there were others, but we were a big piece of that. So now you take Red Hat, and Linux and OpenShift and open source and JBoss and all of these technologies, there's a big ecosystem of developers. You mentioned the eight million number. Why did that set of people come along? They come along because they get a lot of value from developing on top of something that in turn has so many other people on top. I think there's half a million pieces of software which use Red Hat as the primary infrastructure on which they develop. So it's the network effect, really, that is the value, and that effect can only come from, you keep it open, you keep it running on the widest possible base, and then they get the value that if they develop on that, they get access to that entire base on which Red Hat runs. >> And we have evidence of that. >> That totally makes sense. But I want to dig one level deeper. We cover a lot of the business side of developers, not so much the ins and outs of developer tools and stuff, there's Stack Overflow and a variety of sources that do that. So developers want two things: they want to be in the right wave, and you're laying out a great platform for that, and then there's monetization. Amazon has seen massive growth on their partner network. You guys have an ecosystem, you mentioned that. How does this anywhere philosophy impact the ecosystem, for people who want to partner with IBM? Where are the white spaces?
What's the opportunity for partners? How should they evolve with IBM? What's your direction on that? >> Okay, so two kinds of partners. One, there's a set of partners who bring a huge amount of value to their clients because they provide the domain knowledge, the application-specific knowledge, the management expertise, the operational expertise, wrapped around the technologies, perhaps, that we provide. That's where a partner is always going to have value. I talked yesterday at a partner conference about, what, Cognizant, who's a big partner. They built a self-service application for the patients of a medical provider to get remote access to doctors when they couldn't get an appointment and it was not immediately life-threatening. Well, that's a huge sort of value that they provide, built on top of our technologies and products. A second kind of partner, you mentioned developers, is people who do open-source packages. I think we've been quite good; we don't tend to cannibalize our partners, unlike some others we can talk about. So for those partners who have that value, we can put our investment in other places, but we can help give access to the enterprise market for those developers, which I think opens up a lot. >> You guys make the market for developers. >> That's right. >> I want to ask you a question. You guys are all in on Kubernetes. Red Hat made a great bet on Kubernetes, and now you're harvesting that with the acquisition. Huge growth there in containers; everyone saw containers, that was kind of a no-brainer in the technical and developer world. What's the importance of Kubernetes, as you see Kubernetes starting to shrink the abstraction, a software overlay, and in this new complexity where Kubernetes is providing great value? What does this trend mean for CIOs, CTOs, and CISOs as enterprises start to think about a cohesive set of services across on-prem and multiple clouds? Kubernetes seems to be a key point.
What is the impact of it? What does it mean? >> I'll go to the business benefit. Kubernetes in the end is an orchestration layer: it takes away management complexity, and it takes away the cost of doing operations on a large cluster of physical resources. I think the value at the CIO level is the following. Today, on average, seventy percent of the total cost and people are tied up in maintaining what you have, and thirty percent is on new work; that's the rough rule of thumb. Technologies like Kubernetes can take that to where we want it to go and flip it to thirty-seventy. You need to spend only thirty percent maintaining what you have, and you can then go spend seventy percent on innovation, which is going to make your clients happier and your business happier. >> Your team had a couple of announcements today. One was Hyper Protect, and the other is a lot of services to facilitate hybrid. Can you bring us up to date on those, a quick one? >> So, Hyper Protect. Where do you put your data in the cloud? Everybody gets worried: well, if it's in the clear, it could get stolen. So you go to encryption. Typically, encryption is then done with a key. Well, who manages that key? The Hyper Protect services are all about that key management. It works across a hybrid world, across both your premises and the cloud, and nobody in the cloud, not even our deepest system administrator in the cloud, can get access to the key. That's pretty remarkable when you think about it, and so that provides a level of safety and encryption that should give you a lot of reassurance that nobody can get hold of that data. That's Hyper Protect. And then if I go to all of the other services we're doing: sometimes you need a lot of help, some advice. Look, in the three client meetings I just had, every one of them was asking: what should I keep? What should I retire? What should I modernize? What should I write new?
That means there's a whole lot of advice that you need on how to assess what you have and what the correct strategy should be. Then once you do that, somebody will say, help me move it; others will say, help me manage it. So all the services to go do that are a big piece of what we're announcing: end to end, but also such that you can pick it up piecemeal. Not only give me advice; now that I've got my strategy laid out, help me move it, or operate the workloads for me, or help me manage it after I move it. >> Arvind, when you sit in customer meetings with big clients, and they say, we want to modernize, what does that mean to you? And how do you respond to that? >> Well, modernize normally today means that you've got to bring cloud technologies, you've got to bring AI technologies, you've got to bring what is called digital transformation, all to bear. It's got to be in the service of either client intimacy, or it's got to be in terms of doing straight-through processing, as opposed to the old way of doing all the business processes that you have. And then you always have to begin with some easy wins. So I always say, begin with the easy stuff, not the hardest stuff, but start with an architecture that lets you do the hardest stuff later; it's not throwaway. And those are all the discussions we have, which are always a mixture of people, process, and technology. That world has not changed; we need to worry about all three. >> Arvind, thanks for spending your valuable time coming on theCUBE; we appreciate the insight. I know you're super busy. Final question: take a minute to explain this year's Think. What's the core theme? What's the most important story people should pay attention to this year at IBM Think in San Francisco? >> I think it's two things, and they're both related. It is the evolution that is giving greater business value; to use a phrase, it is Chapter Two of the cloud journey, and it's Chapter Two of the cognitive enterprise. Chapter Two means that you're now getting into solving really mission-critical workloads, and that's what is happening here. And that's enabled through the mixture of what we're calling hybrid and multicloud strategies, and then the cognitive enterprise is all around how you can bring AI to power every workflow. It's not a little shiny thing on the side; it's at the very heart of every transformation. >> The word of the day here is anywhere: cloud anywhere, data anywhere, AI anywhere. That's theCUBE; we're everywhere, and anywhere we can go to get the signal from the noise. Arvind Krishna, senior vice president of cloud and cognitive software, a new title, and an architect of the Red Hat acquisition and the cloud and multicloud DNA. Congratulations on your success; looking forward to following your journey. Thanks for coming on. >> Thanks. >> Okay, more live coverage after this short break. Stay with theCUBE; thecube.net is where you find the videos. We're in San Francisco, live here in Moscone North and South, bringing you IBM Think 2019. Stay with us.