Day One Wrap | Red Hat Summit 2018
From San Francisco, it's Red Hat Summit 2018, brought to you by Red Hat.

Okay, welcome back everyone, this is theCUBE, live in San Francisco for Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE, and this week, for three days of wall-to-wall coverage, my co-host analyst is John Troyer, the co-founder of TechReckoning, an advisory and community development services firm. Industry legend, formerly of VMware, he was on theCUBE in 2010, our first-ever Cube, nine years ago. John, day one wrap-up: let's analyze what we heard, dissect it, and put Red Hat's day one in the books. Clearly it's a red-letter day for Red Hat, so to speak. Your thoughts?

Big day for OpenShift, I think, and hybrid cloud, right? We just saw a lot of signs here, which we'll talk about, that it's real: there are real enterprises here, real deployments in the cloud, multi-cloud, on-site, hybrid cloud, and I think there's really no doubt about that. They really brought the team out, and Red Hat has become a bellwether relative to the tech industry, because if you look at what they do, there are so many irons in the fire. But the most important thing is that they have a huge customer base in the enterprise, which they've earned over decades of work, going from the open source renegade to the open source darling and Tier One citizen. They've got a huge install base and they've got to manage it, so they can't just throw spaghetti at the wall; they've got to have big solutions. They're a very technical company, very humble, but they do make some good tech bets.

Absolutely. We'll be talking with the folks from CoreOS tomorrow — they have a couple of other things in action — and we'll be talking about a lot of interesting partnerships. The thing here is that Linux is real, it's the 20-year growth story, and it's real in the enterprise. And the top-line thought, John, is: is Kubernetes the GNU/Linux for the cloud? I've got to say, there's some reality there.

Yeah, there's no doubt about it. I've got my notes here, and my summary for the day is on that point: the new wave is here. The glue layer that Kubernetes and containers provide on top of, say, Linux — in this case OpenShift, an alternative PaaS layer just a few years ago — becomes the centerpiece of Red Hat's architecture, really providing some amazing benefits. So I think what's clear is that this new shift, this new wave, is massive, and we've heard on theCUBE multiple references to TCP/IP and HTTP. These are seminal moments, massive inflection points where the game just radically changes for the better: wealth creation happens, startups boom, new brands emerge that we've never heard of, that just come out of the woodwork, entrepreneurial activity hits an all-time high — all these things are coming.

Yeah, and John, I was really impressed — we talked to a number of folks who are involved with technologies that some people might call legacy, right? The Java programmers, the IBM WebSphere folks. You look at these technologies: solid, proven, tested, but still sitting over here, and now adapted for today. They talked about how they're fitting into OpenShift, how they're fitting into modern application development. You're not leaving those people behind; they're really here. And you know the old joke, going back to, say, Microsoft when Steve Ballmer was the CEO: hell would freeze over before Linux was in the Microsoft ecosystem. Look no further today than what's going on at their developer conference,
Microsoft Build, where Linux is the centerpiece of their open-source strategy, and Microsoft has transformed themselves into a total open-source world. So now you've got Oracle giving up Java EE, calling it Jakarta, essentially bringing Java EE into the Eclipse community. Huge move. It's kind of a nuanced point, but that's another signal of the shifts going on out in the open, where communities aren't just following yesterday's open source model; a new generation of open source actors is coming in with a new model. I think the CNCF is showing it, and the Linux Foundation proves that you can have commercialization downstream with open source projects as that catalyst point. It's a big deal, and I think that's happening at a new level, and it's super exciting to see.

Yeah, open source is the new normal. Sure, it works, it's in the enterprise, but that doesn't mean open source disappears. It actually means that open source — communities and companies coming together to drive innovation — gets more and more important. I kind of thought, well, it's open source, everybody does open source — but actually the dynamics we're seeing, of large companies partnering with small companies, and foundations like you talked about — the Linux Foundation and its various parts, the Cloud Foundry Foundation, and so on — are really making a big impact.

Well, we had the assistant general counsel, David Levine, on earlier talking about open source, and I think one key thing that's notable, as this next generation of the open source wave comes, is the business model of open source and operationalizing it — not just in the software development lifecycle but in the business operation. So for example, spending resources on managing proprietary products that have open source components, separate from the community, is a cost you don't have to carry anymore; if you just contribute everything to open source, that energy can go away. So I think open source projects and the product monetization component — not new concepts — are now highlighted as a bona fide competitive advantage across the company. Not just proven, but operationally sound, legally verified, certified.

And I think you also have to look at the distribution of open source versus the operation and management of open source. We see a lot of managed Kubernetes coming out, and in fact — we didn't talk about it today — Microsoft's big announcement here at the show: Microsoft, on Azure, is running a managed OpenShift. Not Kubernetes — they already have Kubernetes — they're running a managed OpenShift. Another way of adding value to an open source platform is to take it directly to the IT operator. Honestly, do you think these kinds of deals would have happened if you go back four years, three years ago? Oh, no way. Azure running OpenShift? Were you crazy? The kingdom has turned upside down.

Absolutely, and this is a notable point I want to get your reaction on, because I see this absolutely as validation that the new wave is here, with Kubernetes and containers as a de facto rallying point, an inflection point. Big deals are happening: IBM and Red Hat, a big deal — we just talked about them, with the players here — two bellwethers saying we're getting behind containers, and in a big way. That relationship essentially changes the game literally overnight for IBM, and changes the game for Red Hat — I think a little bit more for IBM than for Red Hat; Red Hat already gets a ton of benefit, but IBM instantly gets a cloud strategy that has a real, scalable product market to it.
Arvind, the head of research, laid that out, and IBM now can go and compete with major players on deals in the private cloud. More deals are coming, absolutely. This is the beginning; now that everyone has snapped into place and is saying, okay, Kubernetes and containers, we now understand this, it's the rallying cry, a de facto standard, I think a formation of major, major players is going to happen in the next six to twelve months.

Now, we are in a world where one size does not fit all, John, so we will continue to see healthy ecosystems. Mesosphere and DC/OS are still out there, Docker's still out there, and you will see very functional communities and functioning application platforms and cloud platforms. But you've got to say the momentum is here.

I mean, look at Docker, look at Mesosphere: when things like this happen — this is my opinion, so I'm just going to say it out there — when you have de facto standards emerge like this, it's an opportunity to differentiate. So I think what's going to happen is that Docker, Mesosphere, and others, including the legacy guys like IBM, have to differentiate their products; they have to compete as software companies. I think Docker — it'll probably come tonight at DockerCon, but my opinion, looking at it from the outside, is that Docker has realized: look, we can't make money from containers alone, Kubernetes is happening, we're a great standard in that, so let's be a software company and differentiate around Kubernetes. So this is just more pressure, more of a call to action, to deliver good software.

Hey, somebody said it's never been a better time to be in IT and IT infrastructure, right? You think about the tools we have available to us — super powerful. Another key point I want to get your reaction on: with Kubernetes and containers, this kind of de facto standardization is breathing new life into good initiatives and legacy projects. So you think about OpenStack: OpenStack gets a nice segmented approach, and it's now clear where the swim lanes are — you're an app developer, you go over here; if you're a network and infrastructure guy, you go here. But middleware — from talking to the Red Hat guys here, and we talked to IBM — those legacy apps can have a container put around them and don't have to be thrown away; they can take their natural course. So I think there's going to be a through line on this whole thing: a second life for legacy, and then cloud is in its second inning, because now you have the enablement for cloud. Your reaction?

The enablement of cloud — IBM has a cloud, and then the market share, depending on whom you believe: they're in the top three but they're not at double digits according to Synergy Research, and some put them a little bit higher, but still, if you compare public clouds, they're small. You look at IBM's base — it's a smaller base — and say, if they have a specialty cloud that can be assembled quickly and scaled, maybe it's instantly successful overnight.

Yeah, I think a few years ago it was a lot different — a few years back it always looked confusing, right? A few years back we were still arguing public cloud versus private cloud: is private cloud dead, what is a true private cloud, is that even valuable? I still see people on Twitter making fun of anybody who's not 100% into the full public cloud, which means they must not have talked to a lot of IT folks who have a business to run today. So I think, as you're saying, it's a multivalent world, multi-cloud: there are going to be
differentiated clouds, there are going to be operational clouds, there are going to be financial clouds, and it seems clear, from the perspective of right now, here in San Francisco in 2018, that the purpose of public, private, and hybrid seems pretty clear — just like, as I said, in two weeks we'll be at OpenStack Summit, and the purpose of that seems pretty clear too.

It's funny — I had this argument with someone who thinks everything should go to the public cloud; he's at one of the public clouds, but he's kind of right, and we talked about this back and forth. I said, if everything is run as a cloud operation — we're talking about cloud ops, we're talking about how it's managed, how it's deployed, code bases across the board — if everything is cloudified from an operations standpoint, then the difference between on-prem, cloud, and IoT edge disappears; there's no difference, stuff's just moving around, so you almost treat a data center as an edge network. So now it's actually all cloud in my mind.

And you also have to keep time horizons in mind, right? Anybody who has to do work today, this quarter, has to keep in mind what portfolio of business needs and tools they have right now versus what it's going to look like in a few years.

All right, so I want to get your thoughts on your takeaway from today. I'll start: my takeaway from day one was talking to some of the practitioners, Macquarie Bank and Amadeus. To me they're a telltale sign, the canary in the coal mine, of what's happening: horizontally scalable, asynchronous infrastructure — the new model is here. We're seeing them say things like it's a streaming world, not just Kafka for streaming data but streaming services, levels of granularity that are orchestrated with containers and Kubernetes up and down the stack. To me, architects who think that way will have a preferred advantage over everybody else. That, to me, was like: okay, we're seeing it play out.

I totally agree. The future isn't evenly distributed, right? My takeaway, though, is that there's certainly a future here, and the people we talked to today are doing real-world, enterprise-scale, multi-cloud microservices and modern architectures, incorporating their legacy applications and components — and they're just doing it, and they're not even breaking a sweat. So I think IT has really changed.

Okay, day one coverage continues. Day two is tomorrow — we have three days of wall-to-wall coverage — and then finally day three, Thursday, here in San Francisco. This is theCUBE's live coverage. Go to theCUBE.net to check out all the videos — they're going up as soon as they're done live here — and check out all the Cube alumni, and check out SiliconANGLE.com for all the news coverage. And of course you've got TechReckoning, John's company — he's the co-founder. For John Furrier and John Troyer, that's day one in the books. Thanks for watching, see you tomorrow.
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] [Music] [Music] [Music] ladies and gentlemen please welcome Red Hat senior vice president of engineering Matt Hicks [Music] welcome back I hope you're enjoying your first day of summit you know for us it is a lot of work throughout the year to get ready to get here but I love the energy walking into someone on that first opening day now this morning we kick off with Paul's keynote and you saw this morning just how evolved every aspect of open hybrid cloud has become based on an open source innovation model that opens source the power and potential of open source so we really brought me to Red Hat but at the end of the day the real value comes when were able to make customers like yourself successful with open source and as much passion and pride as we put into the open source community that requires more than just Red Hat given the complexity of your various businesses the solution set you're building that requires an entire technology ecosystem from system integrators that can provide the skills your domain expertise to software vendors that are going to provide the capabilities for your solutions even to the public cloud providers whether it's on the hosting side or consuming their services you need an entire technological ecosystem to be able to support you and your goals and that is exactly what we are gonna talk about this afternoon the technology ecosystem we work with that's ready to help you on your journey now you know this year's summit we talked about earlier it is about ideas worth exploring and we want to make sure you have all of the expertise you need to make those ideas a reality so with that let's talk about our first partner we have him today and that first partner is IBM when I talk about IBM I have a little bit of a nostalgia and that's because 16 years ago I was at IBM it was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer it's actually where I did my first professional Linux development as well you and that work on Linux it really was the spark that I had that showed me the potential that open source could have for enterprise customers now iBM has always been a steadfast supporter of Linux and a great Red Hat partner in fact this year we are celebrating 20 years of partnership with IBM but even after 20 years two decades I think we're working on some of the most innovative work that we ever have before so please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on Arvind [Applause] hey my pleasure to be here thank you so two decades huh that's uh you know I think anything in this industry to going for two decades is special what would you say that that link is made right Hatton IBM so successful look I got to begin by first seeing something that I've been waiting to say for years it's a long strange trip it's been and for the San Francisco folks they'll get they'll get the connection you know what I was just thinking you said 16 it is strange because I probably met RedHat 20 years ago and so that's a little bit longer than you but that was out in Raleigh it was a much smaller company and when I think about the connection I think look IBM's had a long long investment and a long being a long fan of open source and when I think of Linux Linux really lights up our hardware and I think of the power box that you were showing this morning as well as the mainframe as well as all other hardware Linux really brings that to life and I think that's been at the root of our relationship 
yeah absolutely now I alluded to a little bit earlier we're working on some new stuff and this time it's a little bit higher in the software stack and we have before so what do you what would you say spearheaded that right so we think of software many people know about some people don't realize a lot of the words are called critical systems you know like reservation systems ATM systems retail banking a lot of the systems run on IBM software and when I say IBM software names such as WebSphere and MQ and db2 all sort of come to mind as being some of that software stack and really when I combine that with some of what you were talking about this morning along hybrid and I think this thing called containers you guys know a little about combining the two we think is going to make magic yeah and I certainly know containers and I think for myself seeing the rise of containers from just the introduction of the technology to customers consuming at mission-critical capacities it's been probably one of the fastest technology cycles I've ever seen before look we completely agree with that when you think back to what Paul talks about this morning on hybrid and we think about it we are made of firm commitment to containers all of our software will run on containers and all of our software runs Rell and you put those two together and this belief on hybrid and containers giving you their hybrid motion so that you can pick where you want to run all the software is really I think what has brought us together now even more than before yeah and the best part I think I've liked we haven't just done the product in downstream alignment we've been so tied in our technology approach we've been aligned all the way to the upstream communities absolutely look participating upstream participating in these projects really bringing all the innovation to bear you know when I hear all of you talk about you can't just be in a single company you got to tap into the world of innovation and everybody should contribute we firmly believe that instead of helping to do that is kind of why we're here yeah absolutely now the best part we're not just going to tell you about what we're doing together we're actually going to show you so how every once you tell the audience a little bit more about what we're doing I will go get the demo team ready in the back so you good okay so look we're doing a lot here together we're taking our software and we are begging to put it on top of Red Hat and openshift and really that's what I'm here to talk about for a few minutes and then we go to show it to you live and the demo guard should be with us so it'll hopefully go go well so when we look at extending our partnership it's really based on three fundamental principles and those principles are the following one it's a hybrid world every enterprise wants the ability to span across public private and their own premise world and we got to go there number two containers are strategic to both of us enterprise needs the agility you need a way to easily port things from place to place to place and containers is more than just wrapping something up containers give you all of the security the automation the deploy ability and we really firmly believe that and innovation is the path forward I mean you got to bring all the innovation to bear whether it's around security whether it's around all of the things we heard this morning around going across multiple infrastructures right the public or private and those are three firm beliefs that both of us have 
together so then explicitly what I'll be doing here number one all the IBM middleware is going to be certified on top of openshift and rel and through cloud private from IBM so that's number one all the middleware is going to run in rental containers on OpenShift on rail with all the cloud private automation and deployability in there number two we are going to make it so that this is the complete stack when you think about from hardware to hypervisor to os/2 the container platform to all of the middleware it's going to be certified up and down all the way so that you can get comfort that this is certified against all the cyber security attacks that come your way three because we do the certification that means a complete stack can be deployed wherever OpenShift runs so that way you give the complete flexibility and you no longer have to worry about that the development lifecycle is extended all the way from inception to production and the management plane then gives you all of the delivery and operation support needed to lower that cost and lastly professional services through the IBM garages as well as the Red Hat innovation labs and I think that this combination is really speaks to the power of both companies coming together and both of us working together to give all of you that flexibility and deployment capabilities across one can't can't help it one architecture chart and that's the only architecture chart I promise you so if you look at it right from the bottom this speaks to what I'm talking about you begin at the bottom and you have a choice of infrastructure the IBM cloud as well as other infrastructure as a service virtual machines as well as IBM power and IBM mainframe as is the infrastructure choices underneath so you choose what what is best suited for the workload well with the container service with the open shift platform managing all of that environment as well as giving the orchestration that kubernetes gives you up to the platform services from IBM cloud private so it contains the catalog of all middle we're both IBM's as well as open-source it contains all the deployment capability to go deploy that and it contains all the operational management so things like come back up if things go down worry about auto scaling all those features that you want come to you from there and that is why that combination is so so powerful but rather than just hear me talk about it I'm also going to now bring up a couple of people to talk about it and what all are they going to show you they're going to show you how you can deploy an application on this environment so you can think of that as either a cloud native application but you can also think about it as how do you modernize an application using micro services but you don't want to just keep your application always within its walls you also many times want to access different cloud services from this and how do you do that and I'm not going to tell you which ones they're going to come and tell you and how do you tackle the complexity of both hybrid data data that crosses both from the private world to the public world and as well as target the extra workloads that you want so that's kind of the sense of what you're going to see through through the demonstrations but with that I'm going to invite Chris and Michael to come up I'm not going to tell you which one's from IBM which runs from Red Hat hopefully you'll be able to make the right guess so with that Chris and Michael [Music] so so thank you Arvind hopefully people can guess 
which one's from Red Hat based on the shoes. You know, it's some really exciting stuff that we just heard there. What I'm most excited about, when I look out at the audience and the opportunity for customers, is that with this announcement there are quite literally millions of applications that can now be modernized and made available on any cloud, anywhere, with the combination of IBM Cloud Private and OpenShift. And I'm most thrilled to have Mr. Michael Elder, a Distinguished Engineer from IBM, here with us today. Michael, would you maybe describe for the folks what we're actually going to go over today?

Absolutely. So when you think about how you carry forward existing applications, and how you build new applications as well, you're creating microservices that always need a mixture of data, messaging, and caching. This example application shows Java-based microservices running on WebSphere Liberty, each of which is leveraging things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of which is fully containerized and running on top of the Red Hat OpenShift Container Platform. And in fact, we're even going to enhance Stock Trader to help it understand how you feel.

Okay, hang on — I'm a little slow to the draw sometimes — you said we're going to have an application tell me how I feel? Exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. Okay, well, I'd like to see that in action. All right, let's do it.

So the first thing we'll do is take a look at the catalog. Here in the IBM Cloud Private catalog is all of the content that's available to deploy into this hybrid solution. We see workloads from IBM, we see workloads for other open source packages, and so on. Each of these is packaged up as a Helm chart that deploys a set of images certified for Red Hat Enterprise Linux. In this case we're going to start with a simple example with Node.js. We'll click a few actions here, we'll give it a name — now, do you have your console up over there? I certainly do. All right, perfect. So we'll deploy this into our namespace, and we'll deploy Node.js. Okay — anything happening? Of course, it's come right up.

And you know what I really like about this: regardless of whether I'm used to using IBM Cloud Private or used to working with OpenShift, the experience fits the tooling I'm used to dealing with on a daily basis. But I've got to tell you, we deploy Node.js ourselves all the time. What about — when was the last time you deployed MQ on OpenShift? Never? Maybe never. All right, let's fix that.

So MQ obviously is a critical component for messaging in lots of highly transactional systems. Here we'll deploy it as a container on the platform. I'm going to deploy this one again into the same namespace, I'm going to disable persistence, and for my application I'm going to need a queue manager, so I'm going to have it automatically set up my queue manager as well. Now this will deploy a couple of things — what do you see? I see IBM MQ. All right, so there's your StatefulSet running MQ, and of course there are a couple of other components that get stood up as needed here, including things like credentials, secrets, the service, and so on — but all of this is there, out of the box.
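For readers who want to try the same flow outside the catalog UI, here is a minimal sketch of equivalent CLI deploys with Helm. The chart repository URL, chart names, release names, namespace, and set values are illustrative assumptions, not values taken from the demo, and the `--tls` flag assumes an IBM Cloud Private Tiller configured for TLS; substitute whatever your own catalog actually exposes.

```shell
# Assumed chart repo and chart names -- adjust to match your catalog.
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable

# Deploy a simple Node.js sample into a shared demo namespace.
helm install ibm-charts/ibm-nodejs-sample --name nodejs-demo --namespace demo --tls

# Deploy IBM MQ with persistence disabled and a queue manager created automatically.
helm install ibm-charts/ibm-mqadvanced-server-dev --name mq-demo --namespace demo \
  --set persistence.enabled=false \
  --set queueManager.name=DEMOQM \
  --tls

# Verify the StatefulSet, pods, service, and generated secrets came up.
oc get statefulset,pods,svc,secrets -n demo
```

The same charts show up whether you drive them from the IBM Cloud Private catalog, from Helm on the command line, or from the OpenShift console, which is the point being made about a common experience.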
Okay, so, impressive, right? But what I'm really looking at is: how well is this running? What else does this partnership bring when I look at IBM Cloud Private? Well, that's a key reason why it's not just about IBM middleware running on OpenShift, but also IBM Cloud Private — because ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is: how do I get its logs, how do I manage its health, how do I manage license consumption, how do I have a common security plan? So Cloud Private is that enveloping wrapper around IBM middleware that provides those capabilities in a common way.

So here we'll switch over to our dashboard. This is our Grafana and Prometheus stack, also deployed now on Cloud Private running on OpenShift, and we're looking at a different namespace — the stock-trader namespace. We'll go back to this app here momentarily, and we can see all the different pieces. What if you switch over to the stock-trader project on OpenShift? Yeah, I think we can do that — hey, there it is. And so what you're going to see here are all the different pieces of this app: there's Db2 over here, I see the portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ — all of these are the components we saw in the architecture picture a minute ago.

So this is really great. Maybe let's take a look at the actual application. I see we have a fine Stock Trader app here — now, we mentioned understanding how I feel? Exactly. Well, I feel good that this is a brand-new Stock Trader app, versus the one from ten years ago that it feels like we've used forever. So the key thing is that this app is actually all of those microservices, in addition to things like business rules to handle the loyalty program. And one of the things we can do here is actually enhance it with an AI service from Watson — this is Tone Analyzer. It helps me understand how that user actually feels, and we'll be able to go through and submit some feedback to understand that user.

Okay, well, let's see if we can take a look at that. So I try to click on you — clearly you're not very happy right now. Here, I'll do one quick thing over here. Go for it — we'll clear the cache for our sample app. So look, you guys don't actually know this: Michael and I just wrote this Node.js front end backstage while Arvind was talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery that we have available with OpenShift. Well, the great thing is it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling — so if we look at it, right there it looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. Well, that would make me angry, but I should be happy, because I have a lot of money — well, it's more than I get today, for sure. But again, I don't want to remain angry. So does Watson actually understand Southern? I know it speaks like eighty different languages, but — well, you know, I'm from South Carolina, so it had better understand South Carolina Southern; I don't know about your North Carolina Southern. All right, well, let's give it a go here: "Y'all done a real" — no, no profanity now, this is live — "I've done a real real nice job on this here fancy demo." All right — hey, all right, it likes me now. All right, cool.
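Under the covers, that sentiment check is just a REST call to the Watson Tone Analyzer service. Below is a rough sketch of that kind of request; the endpoint, API version date, and credentials are placeholders for illustration, not details taken from the demo.

```shell
# Placeholder credentials and endpoint; the service returns document and sentence
# tones (joy, anger, confidence, etc.) as JSON.
curl -u "apikey:$TONE_ANALYZER_APIKEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "You all did a real nice job on this fancy demo"}' \
  "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2017-09-21"
```

The front end submits the user's feedback text in the same way and then reads the dominant tone out of the JSON response to drive what the app does next.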
And the key thing — just a quick note — is that it's showing you've got a free trade. So we can integrate those business rules and then decide: do I give you one free trade if you're angry, or give you more? It's all brought together into one platform, all running on OpenShift. Yeah, and I can see the possibilities: we've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used and whether people are really happy with what they have. Hey, listen, Michael, this was amazing — I appreciate you joining us today. I hope you guys enjoyed this demo as well.

So, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. Sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Wittig about what they're doing with Red Hat technology.

[Video] New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. Everybody going this way — [Music] — the wrong way. [Music]

Please welcome Nike vice president of infrastructure engineering, Mike Wittig. [Music]

Hi everybody. Over the last five years at Nike, we have transformed our technology landscape to allow us to connect more directly to our consumers — through our retail stores, through Nike.com, and through our mobile apps. The first step in doing that was redesigning our global network to give us direct connectivity into both Azure and AWS, in Europe, in Asia, and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic — things like our sneakers app, for example, that needs to burst out during certain hours of the week and at certain moments of the year when we have our high-heat product launches. For those types of workloads, we write that code ourselves and we use native cloud services. But being hybrid has allowed us to not have to write everything that goes into that app, but rather just the parts that are in the consumer-facing experience. And there are other back-end systems, certain core functionalities like order management, warehouse management, finance, ERP — those are workloads that are third-party applications that we host on RHEL.

Over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments, and what we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small. That's because the tools that we use for monitoring, for security, for performance tuning didn't change, even though we moved those core applications into Azure and AWS, because of RHEL under the covers. Getting to the point where we have that flexibility is a real enabler for an infrastructure team; it allows us to just be in the yes business. It really doesn't matter where we want to deploy a given workload — either cloud provider, or on-prem, anywhere on the planet — it allows us to move much more quickly and stay much more connected
to our consumers and so having rel at the core of our strategy is a huge enabler for that flexibility and allowing us to operate in this hybrid model thanks very much [Applause] what a great example it's really nice to hear an IQ story of using sort of relish that foundation to enable their hybrid clout enable their infrastructure and there's a lot that's the story we spent over ten years making that possible for rel to be that foundation and we've learned a lot in that but let's circle back for a minute to the software vendors and what kicked off the day today with IBM IBM s one of the largest software portfolios on the planet but we learned through our journey on rel that you need thousands of vendors to be able to sport you across all of your different industries solve any challenge that you might have and you need those vendors aligned with your technology direction this is doubly important when the technology direction is changing like with containers we saw that two years ago bread had introduced our container certification program now this program was focused on allowing you to identify vendors that had those shared technology goals but identification by itself wasn't enough in this fast-paced world so last year we introduced trusted content we introduced our container health index publicly grading red hats images that form the foundation for those vendor images and that was great because those of you that are familiar with containers know that you're taking software from vendors you're combining that with software from companies like Red Hat and you are putting those into a single container and for you to run those in a mission-critical capacity you have to know that we can both stand by and support those deployments but even trusted content wasn't enough so this year I'm excited that we are extending once again to introduce trusted operations now last week we announced that cube con kubernetes conference the kubernetes operator SDK the goal of the kubernetes operators is to allow any software provider on kubernetes to encode how that software should run this is a critical part of a container ecosystem not just being able to find the vendors that you want to work with not just knowing that you can trust what's inside the container but knowing that you can efficiently run that software now the exciting part is because this is so closely aligned with the upstream technology that today we already have four partners that have functioning operators specifically Couchbase dynaTrace crunchy and black dot so right out of the gate you have security monitoring data store options available to you these partners are really leading the charge in terms of what it means to run their software on OpenShift but behind these four we have many more in fact this morning we announced over 60 partners that are committed to building operators they're taking their domain expertise and the software that they wrote that they know and extending that into how you are going to run that on containers in environments like OpenShift this really brings the power of being able to find the vendors being able to trust what's inside and know that you can run their software as efficiently as anyone else on the planet but instead of just telling you about this we actually want to show you this in action so why don't we bring back up the demo team to give you a little tour of what's possible with it guys thanks Matt so Matt talked about the concept of operators and when when I think about operators and what they do it's 
taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an operator for the Node.js service that I was running earlier, it would have detected the problem and fixed it itself. But when I look at what operators really do from an ecosystem perspective, for ISVs it's going to be a catalyst that allows them to make their services as manageable, as flexible, and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this, I've got my buddy Rob here. Rob, are we ready on the demo front? We're ready. Awesome.

Now, I notice this screen looks really familiar to me, but I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then the other thing is that we're going to dive a little bit deeper into operators and their usefulness. So, Rob? Yeah, so what we're looking at here is the Service Catalog that you know and love in OpenShift, and we've got a few new things in here. We've actually integrated operators into the Service Catalog, and I'm going to take this filter and give you a look at some of the ones we have today. You can see we've got a list of operators exposed, and this is the same way that your developers are already used to integrating with products — they're right in your catalog — and now these are actually smarter services.

But how can we look at that? I mentioned that there's maybe a new view — I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. Yeah, so we've got a whole new side of the console for cluster administrators, focused on the infrastructure, versus this dev-focused view that we're looking at today. So let's go take a look at it. The first thing you see here is that we've got a really rich set of monitoring and health status: we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning — anything you need to do to maintain your cluster.

Okay, so it's not only for the services in the cluster, doing things that I would normally have to do as a human operator — this console view also gives me insight into the infrastructure itself, like the nodes, and maybe handling the security context. Is that true? Yes. These are new capabilities we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes for day-to-day maintenance, as well as handling security constraints and things like role bindings, for example. And the exciting thing is that this is a view you've never been able to see before — it cuts across namespaces. So here we've got a number of admin bindings, and we can see that they're connected to a number of namespaces, and these would represent our engineering teams, all the groups that are using the cluster. We've never had this view before; it's a perfect way to audit your security. You know, it actually is pretty exciting — I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something we've strived for, so it's really exciting to see that we can offer that now.
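The same node-maintenance and access-audit tasks shown in the new admin console can also be done from the CLI today; here is a minimal sketch, where the node name is a placeholder rather than one from the demo cluster.

```shell
# Cordon a node so no new pods land on it, then drain it for maintenance.
oc adm cordon node-2.example.com
oc adm drain node-2.example.com --ignore-daemonsets --delete-local-data

# When maintenance is done, allow scheduling on the node again.
oc adm uncordon node-2.example.com

# Audit role bindings across every namespace -- the same cross-cutting view the console shows.
oc get rolebindings --all-namespaces
oc get clusterrolebindings
```

The console view adds the cross-namespace visualization on top of exactly this kind of data, which is what makes it useful for security audits.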
But really, we want to get into what operators do and what they can do for us — so maybe show us what the operator console looks like? Yeah, so let's jump over and see all the operators that we have installed on the cluster. You can see that these mirror what we saw in the Service Catalog earlier. What we care about, though, is this Couchbase operator, and we're going to jump into the demo namespace — as I said, you can share a cluster across a number of different teams, so we're going to jump into this namespace. Okay, cool.

So what we want to show you, when we think about operators, is a scenario where there are multiple replicas of a Couchbase service running in the cluster, backed by a StatefulSet. What's interesting is that those two things alone are not enough if I'm really trying to run this as a true service that's highly available and persistent — there are things that, as a DBA, I would normally have to do if there's some sort of node failure. So what we want to demonstrate is how operators, combined with the power that was already in OpenShift, come together to keep this particular database service highly available and something we can keep using. So, Rob, what have you got there?

Yeah, as you can see, we've got our Couchbase demo cluster running here, and we can see that it's up and running: we've got three members, and we've got an auth secret, which is what controls access to a UI that we're going to look at in a second. But what really shows the power of the operator is looking at this view of the resources it's managing: you can see that we've got a service doing load balancing into the cluster, and then, like you said, we've got our pods that are actually running the software itself.

Okay, that's cool. Maybe, for everyone's benefit, so we can show this happening live, could we bring up the Couchbase console, please, and keep the OpenShift console up — both sides? There we go. What we see on the right-hand side is obviously the same console Rob was working in on the left-hand side, as you can see by the actual names of the pods that are there and the Couchbase services that are available. And so, Rob, maybe let's kill something — that's always fun to do on stage. Yeah, this is the power of the operator: it's going to recover it. So let's browse over here and kill node number two; we're going to forcefully kill this and kick off the recovery. And I see right away that, because of the integration we have with operators, the Couchbase console immediately picked up that something has changed in the environment. Now, why is that important? Normally a human being would have to get that alert, right? With operators, we've taken that capability and recognized that there has been a new event within the environment — this is not something that Kubernetes or OpenShift by itself would be able to understand.

Now, I'm presuming we're going to end up doing something more than just seeing that it failed — and sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available is just as important as ensuring that the disk is attached. So, Rob, thank you so much for driving this for us today and for being here. And not only Couchbase — as Matt mentioned, we also have Crunchy, Dynatrace, and Black Duck. I would encourage you all to go visit their booths out on the floor today and understand what they have available.
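For a sense of what the operator is actually watching, here is a rough sketch of the kind of CouchbaseCluster custom resource that would sit behind a three-member deployment like the one in the demo. The API version and field names follow my recollection of the Couchbase operator's CRD from that era and should be treated as an approximation, not a verified manifest; names and sizes are placeholders.

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-demo
  namespace: demo
spec:
  baseImage: couchbase/server
  version: enterprise-5.5.0
  authSecret: cb-demo-auth        # the "auth secret" that controls console access
  cluster:
    dataServiceMemoryQuota: 256
    indexServiceMemoryQuota: 256
  servers:
    - name: all_services
      size: 3                     # three members, as shown in the demo
      services:
        - data
        - index
        - query
```

With a resource like that in place, the recovery scenario in the demo amounts to force-deleting one of the member pods (something like `oc delete pod cb-demo-0002 --now`, with the pod name a placeholder) and watching the operator notice the lost member, bring up a replacement, and trigger the rebalance.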
Couchbase, Crunchy, Dynatrace, and Black Duck are all here with a dev preview, and then talk to the many other partners we have that are also looking at operators. So again, Rob, thank you for joining us today. Matt, come on out.

Okay, this is going to make for an exciting year of what it means to consume container-based content. I think containers change how customers can get that content; I believe operators are going to change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and their work on core Linux itself. In fact, I think they've become so ubiquitous with computing that we often overlook the technological marvels they've been able to overcome.

Now, for myself, I studied computer engineering, so in the late '90s I had the chance to study processor design. I actually got to build one of my own processors. In my case, it was the most trivial processor you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process I learned the sheer complexity that goes into processor design — things like wire placements so close that electrons can cut through the insulation and short, and then doing those wire placements across three dimensions, in multiple layers, jamming in as many logic components as you possibly can. And again, in my case, this was to make a processor that could subtract two numbers. But once I was done with that, the second part of the course was studying the Pentium processor. I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here — let's hear a little bit more from Intel.

[Video] For business, wide open skies, an open mind — no matter the context, the idea of being open almost always suggests the potential of infinite possibilities. And that's exactly the power of open source, whether it's expanding what's possible in business, in science and technology, or for the greater good — which is why open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, what we do with it. [Music]

You know, at Intel one of our core values is risk-taking, and I'm going to go just a bit off script for a second and say I was just backstage and I saw a gentleman who looked a lot like Scott Guthrie — who runs all of Microsoft's cloud and enterprise efforts — wearing a red shirt, talking to Cormier. I'm just saying — I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago: don't be encumbered by history — go off and do something wonderful. This is about breaking boundaries in technology; it's about innovation and driving innovation in our industry, and at Intel we're constantly looking to break boundaries to advance our technology. In the cloud and enterprise space, that is no different. So I'm going to talk a bit about some of the boundaries we've been breaking and the innovations we've been driving at Intel, starting with our Intel Xeon platform — our Intel Xeon Scalable platform, which we launched several months ago, and which was the biggest and marked the most advanced movement in this technology in over a decade. We were
able to drive critical performance capabilities unmatched agility and added necessary and sufficient security to that platform I couldn't be happier with the work we do with Red Hat and ensuring that those hero features that we drive into our platform they fully expose to all of you to drive that innovation to go off and do something wonderful well there's taking advantage of the performance features or agility features like our advanced vector extensions or avx-512 or Intel quick exist those technologies are fully embraced by Red Hat Enterprise Linux or whether it's security technologies like txt or trusted execution technology are fully incorporated and we look forward to working with Red Hat on their next release to ensure that our advancements continue to be exposed and their platform and all these workloads that are driving the need for us to break boundaries and our technology are driving more and more need for flexibility and computing and that's why we're excited about Intel's family of FPGAs to help deliver that additional flexibility for you to build those capabilities in your environment we have a broad set of FPGA capabilities from our power fish at Mac's product line all the way to our performance product line on the 6/10 strat exten we have a broad set of bets FPGAs what i've been talking to customers what's really exciting is to see the combination of using our Intel Xeon scalable platform in combination with FPGAs in addition to the acceleration development capabilities we've given to software developers combining all that together to deliver better and better solutions whether it's helping to accelerate data compression well there's pattern recognition or data encryption and decryption one of the things I saw in a data center recently was taking our Intel Xeon scalable platform utilizing the capabilities of FPGA to do data encryption between servers behind the firewall all the while using the FPGA to do that they preserve those precious CPU cycles to ensure they delivered the SLA to the customer yet provided more security for their data in the data center one of the edges in cyber security is innovation and route of trust starts at the hardware we recently renewed our commitment to security with our security first pledge has really three elements to our security first pledge first is customer first urgency we have now completed the release of the micro code updates for protection on our Intel platforms nine plus years since launch to protect against things like the side channel exploits transparent and timely communication we are going to communicate timely and openly on our Intel comm website whether it's about our patches performance or other relevant information and then ongoing security assurance we drive security into every one of our products we redesigned a portion of our processor to add these partition capability which is adding additional walls between applications and user level privileges to further secure that environment from bad actors I want to pause for a second and think everyone in this room involved in helping us work through our security first pledge this isn't something we do on our own it takes everyone in this room to help us do that the partnership and collaboration was next to none it's the most amazing thing I've seen since I've been in this industry so thank you we don't stop there we continue to advance our security capabilities cross-platform solutions we recently had a conference discussion at RSA where we talked about Intel Security 
Essentials where we deliver a framework of capabilities and the end that are in our silicon available for those to innovate our customers and the security ecosystem to innovate on a platform in a consistent way delivering that assurance that those capabilities will be on that platform we also talked about things like our security threat technology threat detection technology is something that we believe in and we launched that at RSA incorporates several elements one is ability to utilize our internal graphics to accelerate some of the memory scanning capabilities we call this an accelerated memory scanning it allows you to use the integrated graphics to scan memory again preserving those precious cycles on the core processor Microsoft adopted this and are now incorporated into their defender product and are shipping it today we also launched our threat SDK which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads so we'll continue to drive differential experiences into our platform for our ecosystem to innovate and deliver more and more capabilities one of the key aspects you have to protect is data by 2020 the projection is 44 zettabytes of data will be available 44 zettabytes of data by 2025 they project that will grow to a hundred and eighty s data bytes of data massive amount of data and what all you want to do is you want to drive value from that data drive and value from that data is absolutely critical and to do that you need to have that data closer and closer to your computation this is why we've been working Intel to break the boundaries in memory technology with our investment in 3d NAND we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing we're also innovating on form factors we have here what we call our ruler form factor this ruler form factor is designed to drive as much dense as you can in a 1u rack we're going to continue to advance the capabilities to drive one petabyte of data at low power consumption into this ruler form factor SSD form factor so our innovation continues the biggest breakthrough and memory technology in the last 25 years in memory media technology was done by Intel we call this our 3d crosspoint technology and our 3d crosspoint technology is now going to be driven into SSDs as well as in a persistent memory form factor to be on the memory bus giving you the speed of memory characteristics of memory as well as the characteristics of storage given a new tier of memory for developers to take full advantage of and as you can see Red Hat is fully committed to integrating this capability into their platform to take full advantage of that new capability so I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on and so we're breaking boundaries and technology across a broad set of elements that we deliver that's what we're about we're going to continue to do that not be encumbered by the past your role is to go off and doing something wonderful with that technology all ecosystems are embracing this and driving it including open source technology open source is a hub of innovation it's been that way for many many years that innovation that's being driven an open source is starting to transform many many businesses it's driving business transformation we're seeing this coming to light in the transformation of 5g driving 5g into the networked environment is a transformational 
moment, and open source is playing a pivotal role in that. With OpenStack, ONAP, OPNFV and other open source projects we're contributing to and participating in, we are helping drive that transformation in 5G as you do software-defined networks on our barrier-breaking technology. We're also seeing this transformation rapidly occurring in the cloud; enterprise clouds are growing rapidly and innovation continues. Our work with virtualization and KVM continues to be aggressive, adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms so that we can have them managed in an open platform environment — and KubeVirt provides that — so between Intel and Red Hat and the community, we're investing resources to make certain that comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid, rapid pace. One of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed and the footprint of a container. With our Clear Containers efforts, along with Hyper's runV, we were able to combine those and create what we call Kata Containers. We launched this at the end of last year; Kata Containers is designed to have that container element available while adding elements like isolation. Both of these efforts need to have an orchestration and management capability, and Red Hat's OpenShift provides that capability for these workloads, whether containerized or KubeVirt capabilities with virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads: as we see growth in a workload, we'll work with a partner to optimize a solution on Intel technology to deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat, so now we're going to be deploying an Intel Select Solution designed and optimized around Red Hat OpenShift. We expect the industry to start deploying this capability very rapidly, and I'm excited to announce today that Lenovo has committed to be the first platform company to deliver this solution to market — the Intel Select Solution will be delivered to market by Lenovo.
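KubeVirt, mentioned above, is what lets a virtual machine be declared and managed as a Kubernetes object alongside containers, so a platform like OpenShift can orchestrate both. As a rough sketch only — the manifest fields below approximate the general shape of the KubeVirt VirtualMachine resource of that era rather than quoting anything shown in this keynote, and the resource names and disk image are placeholders — creating one through the standard Kubernetes custom-objects API could look something like this:

```python
from kubernetes import client, config

# Illustrative VirtualMachine manifest; the layout follows the general shape of the
# kubevirt.io API but is an approximation, not an authoritative spec.
vm = {
    "apiVersion": "kubevirt.io/v1alpha3",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,  # ask KubeVirt to start the VM as soon as it is created
        "template": {
            "spec": {
                "domain": {
                    "resources": {"requests": {"memory": "1Gi"}},
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                },
                # placeholder image holding the VM's root disk
                "volumes": [{"name": "rootdisk",
                             "containerDisk": {"image": "quay.io/example/demo-vm-disk:latest"}}],
            }
        },
    },
}

config.load_kube_config()                 # same kubeconfig that drives ordinary pods
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(      # the VM is just another API object to the cluster
    group="kubevirt.io", version="v1alpha3",
    namespace="default", plural="virtualmachines", body=vm,
)
```

The design point the speakers are making is visible even in this toy form: once a VM is just another API object, the same OpenShift tooling that schedules, scales and monitors containers can do the same for virtualized workloads.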
Now, I talked about what we're doing in industry and how we're transforming businesses; our technology is also utilized for greater good, and there's no better example of this than the work done by Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and driving those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow — which is the highest technical achievement you can reach at Intel — got to spend 10 years with Dr. Hawking looking at innovative things they could do together with our technology and his breakthrough innovative thinking. So I thought it'd be great to bring up our Intel Fellow, Lama Nachman, to talk about her work with Dr. Hawking and what she learned in that experience. Come on up, Lama. [Music] Great to see you. Thanks. So we've been going on about breakthroughs, about breaking boundaries with Intel technology — talk about how you used that in your work with Dr. Hawking. Absolutely. So the most important part was to really make that technology contextually aware, because for people with disability every single interaction takes a long time. So whether it was adapting, for example, the language model of his word predictor to understand whether he's going to talk to people or whether he's writing a book on black holes, or to even understand what specific application he might be using, and then making sure that we're surfacing only enough actions that were relevant to reduce that amount of interaction — the tricky part is really to make all of that contextual awareness happen without totally confusing the user, because it's constantly changing underneath it. So how does that work involve any open source? So, you know, the problem with assistive technology in general is that it needs to be tailored to the specific disability, which really makes it very hard and very expensive because it can't utilize the economies of scale. So basically, with the system that we built, what we wanted to do is really enable unleashing innovation in the world, right? So you could take that framework, you could tailor it to a specific sensor, for example a brain-computer interface or something like that, where you could actually then support a different set of users — so that makes open source a perfect fit, because you could actually build and tailor it. And when you spoke with Dr. Hawking, what was his view of open source — was it relevant to him? So yeah, Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself, so he spent a lot of time with us to actually build this system, and he was adamant from day one that he would only engage with us if we were committed to actually open-sourcing the technology. That's fantastic — and you had the privilege of working with him for 10 years; I know you have some amazing stories to share, so thank you so much for being here. Thank you so much. In order for us to scale — and that's what we're about at Intel, really scaling our capabilities — it takes this community; it takes this community of diverse capabilities; it takes diverse thought, the diverse thought of Dr.
Hawking couldn't be more relevant but we also are proud at Intel about leading efforts of diverse thought like women and Linux women in big data other areas like that where Intel feels that that diversity of thinking and engagement is critical for our success so as we look at Intel not to be encumbered by the past but break boundaries to deliver the technology that you all will go off and do something wonderful with we're going to remain committed to that and I look forward to continue working with you thank you and have a great conference [Applause] thank God now we have one more customer story for you today when you think about customers challenges in the technology landscape it is hard to ignore the public cloud these days public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry and our next customer they actually had that same challenge they wanted to tap into that innovation but they were also making bets for the long term they wanted flexibility and providers and they had to integrate to the systems that they already have and they have done a phenomenal job in executing to this so please give a warm welcome to Kerry Pierce from Cathay Pacific Kerry come on thanks very much Matt hi everyone thank you for giving me the opportunity to share a little bit about our our cloud journey let me start by telling you a little bit about Cathay Pacific we're an international airline based in Hong Kong and we serve a passenger and a cargo network to over 200 destinations in 52 countries and territories in the last seventy years and years seventy years we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs we invest in what matters most to our customers to you focusing on our exemplary service and our great product and it's both on the ground and in the air we're also investing and expanding our network beyond our multiple frequencies to the financial districts such as Tokyo New York and London and we're connecting Asia and Hong Kong with key tech hubs like San Francisco where we have multiple flights daily we're also connecting Asia in Hong Kong to places like Tel Aviv and our upcoming destination of Dublin in fact 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth and we will be launching in September our longest flight from Hong Kong direct to Washington DC and that'll be using a state-of-the-art Airbus a350 1000 aircraft so that's a little bit about Cathay Pacific let me tell you about our journey through the cloud I'm not going to go into technical details there's far smarter people out in the audience who will be able to do that for you just focus a little bit about what we were trying to achieve and the people side of it that helped us get there we had a couple of years ago no doubt the same issues that many of you do I don't think we're unique we had a traditional on-premise non-standardized fragile infrastructure it didn't meet our infrastructure needs and it didn't meet our development needs it was costly to maintain it was costly to grow and it really inhibited innovation most importantly it slowed the delivery of value to our customers at the same time you had the hype of cloud over the last few years cloud this cloud that clouds going to fix the world we were really keen on making sure we didn't get wound up and that so we focused on what we needed we started bottom up with a strategy we knew we wanted to be clouded 
Gnostic we wanted to have active active on-premise data centers with a single network and fabric and we wanted public clouds that were trusted and acted as an extension of that environment not independently we wanted to avoid single points of failure and we wanted to reduce inter dependencies by having loosely coupled designs and finally we wanted to be scalable we wanted to be able to cater for sudden surges of demand in a nutshell we kind of just wanted to make everything easier and a management level we wanted to be a broker of services so not one size fits all because that doesn't work but also not one of everything we want to standardize but a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud not against it so we started on a journey with red hat we implemented Red Hat cloud forms and ansible to manage our hybrid cloud we also met implemented Red Hat satellite to maintain a manager environment we built a Red Hat OpenStack on crimson vironment to give us an alternative and at the same time we migrated a number of customer applications to a production public cloud open shift environment but it wasn't all Red Hat you love heard today that the Red Hat fits within an overall ecosystem we looked at a number of third-party tools and services and looked at developing those into our core solution I think at last count we had tried and tested somewhere past eight different tools and at the moment we still have around 62 in our environment that help us through that journey but let me put the technical solution aside a little bit because it doesn't matter how good your technical solution is if you don't have the culture and the people to get it right as a group we needed to be aligned for delivery and we focused on three core behaviors we focused on accountability agility and collaboration now I was really lucky we've got a pretty fantastic team for whom that was actually pretty easy but but again don't underestimate the importance of getting the culture and the people right because all the technology in the world doesn't matter if you don't have that right I asked the team what did we do differently because in our situation we didn't go out and hire a bunch of new people we didn't go out and hire a bunch of consultants we had the staff that had been with us for 10 20 and in some cases 30 years so what did we do differently it was really simple we just empowered and supported our staff we knew they were the smart ones they were the ones that were dealing with a legacy environment and they had the passion to make the change so as a team we encouraged suggestions and contributions from our overall IT community from the bottom up we started small we proved the case we told the story and then we got by him and only did did we implement wider the benefits the benefit through our staff were a huge increase in staff satisfaction reduction and application and platform outage support incidents risk free and failsafe application releases work-life balance no more midnight deployments and our application and infrastructure people could really focus on delivering customer value not on firefighting and for our end customers the people that travel with us it was really really simple we could provide a stable service that allowed for faster releases which meant we could deliver value faster in terms of stats we migrated 16 production b2c applications to a public cloud OpenShift environment in 12 months we decreased provisioning time from weeks or 
occasionally months we were waiting for hardware two minutes and we had a hundred percent availability of our key customer facing systems but most importantly it was about people we'd built a culture a culture of innovation that was built on a foundation of collaboration agility and accountability and that permeated throughout the IT organization not those just those people that were involved in the project everyone with an IT could see what good looked like and to see what it worked what it looked like in terms of working together and that was a key foundation for us the future for us you will have heard today everything's changing so we're going to continue to develop our open hybrid cloud onboard more public cloud service providers continue to build more modern applications and leverage the emerging technology integrate and automate everything we possibly can and leverage more open source products with the great support from the open source community so there you have it that's our journey I think we succeeded by not being over awed and by starting with the basics the technology was key obviously it's a cool component but most importantly it was a way we approached our transition we had a clear strategy that was actually developed bottom-up by the people that were involved day to day and we empowered those people to deliver and that provided benefits to both our staff and to our customers so thank you for giving the opportunity to share and I hope you enjoy the rest of the summer [Applause] I got one thanks what a great story would a great customer story to close on and we have one more partner to come up and this is a partner that all of you know that's Microsoft Microsoft has gone through an amazing transformation they've we've built an incredibly meaningful partnership with them all the way from our open source collaboration to what we do in the business side we started with support for Red Hat Enterprise Linux on hyper-v and that was truly just the beginning today we're announcing one of the most exciting joint product offerings on the market today let's please give a warm welcome to Paul correr and Scott Scott Guthrie to tell us about it guys come on out you know Scot welcome welcome to the Red Hat summer thanks for coming really appreciate it great to be here you know many surprises a lot of people when we you know published a list of speakers and then you rock you were on it and you and I are on stage here it's really really important and exciting to us exciting new partnership we've worked together a long time from the hypervisor up to common support and now around hybrid hybrid cloud maybe from your perspective a little bit of of what led us here well you know I think the thing that's really led us here is customers and you know Microsoft we've been on kind of a transformation journey the last several years where you know we really try to put customers at the center of everything that we do and you know as part of that you quickly learned from customers in terms of I'm including everyone here just you know you've got a hybrid of state you know both in terms of what you run on premises where it has a lot of Red Hat software a lot of Microsoft software and then really is they take the journey to the cloud looking at a hybrid of state in terms of how do you run that now between on-premises and a public cloud provider and so I think the thing that both of us are recognized and certainly you know our focus here at Microsoft has been you know how do we really meet customers with 
where they're at and where they want to go and make them successful in that journey and you know it's been fantastic working with Paul and the Red Hat team over the last two years in particular we spend a lot of time together and you know really excited about the journey ahead so um maybe you can share a bit more about the announcement where we're about to make today yeah so it's it's it's a really exciting announcement it's and really kind of I think first of its kind in that we're delivering a Red Hat openshift on Azure service that we're jointly developing and jointly managing together so this is different than sort of traditional offering where it's just running inside VMs and it's sort of two vendors working this is really a jointly managed service that we're providing with full enterprise support with a full SLA where the you know single throat to choke if you will although it's collectively both are choke the throats in terms of making sure that it works well and it's really uniquely designed around this hybrid world and in that it supports will support both Windows and Linux containers and it role you know it's the same open ship that runs both in the public cloud on Azure and on-premises and you know it's something that we hear a lot from customers I know there's a lot of people here that have asked both of us for this and super excited to be able to talk about it today and we're gonna show off the first demo of it just a bit okay well I'm gonna ask you to elaborate a bit more about this how this fits into the bigger Microsoft picture and I'll get out of your way and so thanks again thank you for coming here we go thanks Paul so I thought I'd spend just a few minutes talking about wouldn't you know that some of the work that we're doing with Microsoft Asher and the overall Microsoft cloud I didn't go deeper in terms of the new offering that we're announcing today together with red hat and show demo of it actually in action in a few minutes you know the high level in terms of you know some of the work that we've been doing at Microsoft the last couple years you know it's really been around this this journey to the cloud that we see every organization going on today and specifically the Microsoft Azure we've been providing really a cloud platform that delivers the infrastructure the application and kind of the core computing needs that organizations have as they want to be able to take advantage of what the cloud has to offer and in terms of our focus with Azure you know we've really focused we deliver lots and lots of different services and features but we focused really in particular on kind of four key themes and we see these four key themes aligning very well with the journey Red Hat it's been on and it's partly why you know we think the partnership between the two companies makes so much sense and you know for us the thing that we've been really focused on has been with a or in terms of how do we deliver a really productive cloud meaning how do we enable you to take advantage of cutting-edge technology and how do we kind of accelerate the successful adoption of it whether it's around the integration of managed services that we provide both in terms of the application space in the data space the analytic and AI space but also in terms of just the end-to-end management and development tools and how all those services work together so that teams can basically adopt them and be super successful yeah we deeply believe in hybrid and believe that the world is going to be a multi cloud 
and a multi distributed world and how do we enable organizations to be able to take the existing investments that they already have and be able to easily integrate them in a public cloud and with a public cloud environment and get immediate ROI on day one without how to rip and replace tons of solutions you know we're moving very aggressively in the AI space and are looking to provide a rich set of AI services both finished AI models things like speech detection vision detection object motion etc that any developer even at non data scientists can integrate to make application smarter and then we provide a rich set of AI tooling that enables organizations to build custom models and be able to integrate them also as part of their applications and with their data and then we invest very very heavily on trust Trust is sort of at the core of a sure and we now have more compliant certifications than any other cloud provider we run in more countries than any other cloud provider and we really focus around unique promises around data residency data sovereignty and privacy that are really differentiated across the industry and terms of where Iser runs today we're in 50 regions around the world so our region for us is typically a cluster of multiple data centers that are grouped together and you can see we're pretty much on every continent with the exception of Antarctica today and the beauty is you're going to be able to take the Red Hat open shift service and run it on ashore in each of these different locations and really have a truly global footprint as you look to build and deploy solutions and you know we've seen kind of this focus on productivity hybrid intelligence and Trust really resonate in the market and about 90 percent of Fortune 500 companies today are deployed on Azure and you heard Nike talked a little bit earlier this afternoon about some of their journeys as they've moved to a dot public cloud this is a small logo of just a couple of the companies that are on ashore today and what I do is actually even before we dive into the open ship demo is actually just show a quick video you know one of the companies thing there are actually several people from that organization here today Deutsche Bank who have been working with both Microsoft and Red Hat for many years Microsoft on the other side Red Hat both on the rel side and then on the OpenShift side and it's just one of these customers that have helped bring the two companies together to deliver this managed openshift service on Azure and so I'm just going to play a quick video of some of the folks that Deutsche Bank talking about their experiences and what they're trying to get out of it so we could roll the video that'd be great technology is at the absolute heart of Deutsche Bank we've recognized that the cost of running our infrastructure was particularly high there was a enormous amount of under utilization we needed a platform which was open to polyglot architecture supporting any kind of application workload across the various business lines of the third we analyzed over 60 different vendor products and we ended up with Red Hat openshift I'm super excited Microsoft or supporting Linux so strongly to adopting a hybrid approach we chose as here because Microsoft was the ideal partner to work with on constructs around security compliance business continuity as you as in all the places geographically that we need to be we have applications now able to go from a proof of concept to production in three weeks that is already breaking 
records. OpenShift, with Kubernetes and containers, allows us to apply the same sets of processes and automation across a wide range of our application landscape. On any given day we run between seven and twelve thousand containers across three regions, and we see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers. OpenShift gives us an abstraction layer which allows us to move our applications between providers without having to reconfigure or recode those applications. What's really exciting for me about this journey is the way that both Red Hat and Microsoft have embraced not just what we're doing but what each other are doing, and have worked together to build OpenShift as a first-class citizen with Microsoft. [Applause] So in terms of what we're announcing today: it is a new, fully managed OpenShift service on Azure, and it's really the first fully managed service provided end-to-end across any of the cloud providers. It's jointly engineered, operated and supported by both Microsoft and Red Hat, and that means, again, sort of one service, one SLA, and both companies standing firmly behind it — really, again, focusing around how do we make customers successful. And as part of that, really providing the enterprise grade — not just SLAs but also support and integration testing — so you can take advantage of all your RHEL and Linux-based containers and all of your Windows Server-based containers, and run them in a joint way with a common management stack, taking advantage of one service, getting maximum density, getting maximum code reuse, and being able to take advantage of a containerized world in a better way than ever before. And that customer focus is very much at the center of what both companies are really centered around. So what I thought would be fun is, rather than just talk about OpenShift, to actually kind of show off a little bit of a journey in terms of what this move to take advantage of it looks like. And so I'd like to invite Brendan and Chris on stage, who are actually going to show off a live demo of OpenShift on Azure in action and really walk through how to provision the service and basically how to start taking advantage of it using the full OpenShift ecosystem — so please welcome Brendan and Chris, who are going to join us on stage for a demo. Thanks, Scott; thanks, man. It's been a good afternoon. So, you know, what we want to get into right now — first I'd like to thank Brendan Burns for joining us from Microsoft Build; it's a busy week for you, and I'm sure you're on stage there a few times as well. You know, what I like most about what we just announced is not only the business and technical aspects, but that operational aspect: the uniqueness, the expertise that Red Hat has for running OpenShift combined with the expertise that Microsoft has within Azure, and customers are going to get this joint offering, if you will, with, you know, Red Hat OpenShift on Microsoft Azure. And so, you know, kind of with that — again, Brendan, I really appreciate you being here — maybe talk to the folks about what we're going to show. Yeah, so we're going to take a look at what it looks like to deploy OpenShift onto Azure via the new OpenShift service, and the real selling point, the really great part of this, is the deep integration with the cloud-native Azure APIs — so the same tooling that you would use to create virtual machines, to create disks, to create databases is now the tooling that you're going to use to create an OpenShift cluster.
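The narration that follows walks through exactly this flow with the az command line: create a resource group, then ask the new az openshift command to provision a cluster into it. As a rough sketch only — az group create is standard Azure CLI, but the openshift subcommand's flags are an assumption pieced together from the demo narration, and the group and cluster names are placeholders — scripting those two steps might look like this:

```python
import subprocess

def run(cmd):
    """Run an Azure CLI command and fail loudly if it returns a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: a resource group is the "folder" on Azure that will hold the cluster's resources.
run(["az", "group", "create", "--name", "openshift-demo-rg", "--location", "eastus"])

# Step 2: ask the managed service to provision an OpenShift cluster into that group.
# Flag names are approximate; the demo simply shows "az openshift" targeting the group in East US.
run(["az", "openshift", "create", "--resource-group", "openshift-demo-rg", "--name", "demo-cluster"])
```

Behind that second call, as the demo goes on to note, the service provisions the virtual machines, disks and credentials needed to talk to the other Azure APIs, which is why the cluster shows up as ordinary resources in the resource group a few minutes later.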
So to show you this, first we're going to create a resource group here. We're going to create that resource group in East US using the az tool — that's the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff, so that's going to come back in a second. I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have az openshift — that's our new command-line tool — putting it into that resource group, and I'm going to put it into East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster; it's doing a bunch of work behind the scenes, provisioning all kinds of resources as well as credentials to access a bunch of different Azure APIs. So are we actually able to see this? Yeah, so we can cut over — in just a second we can cut over to that resource group and reload. So Brendan, while we're waiting: the beauty of what, you know, the teams have been doing together already is the fact that now OpenShift is a first-class citizen, as it were. Yeah, absolutely, within Azure. So I presume not only can I do a deployment, but I can do things like scale and check my credentials and pretty much everything that I could do with any other service? That's exactly right, so we can — anything that you were used to doing via the — my computer has locked up — there we go, the demo gods are totally with me — oh, there we go — oh no, I hit reload — yeah, that was just evil timing on the house. This is another use for operators, as we talked about earlier today. That's right. My dashboard should be coming up. Do I dare click on something? That's awesome — it was there, there we go, good job. So what's really interesting about this: I've also heard that it deploys, you know, in as little as five to six minutes, which is really good for customers; they want to get up and running with it. But — all right, there we go, there it is, we managed to make it. See, that shows that it's real, right? You see the sweat coming off of me there. But there you can see — I feel it — you can see the various resources that are being created in order to create this OpenShift cluster: virtual machines, disks, all of the pieces provisioned for you automatically via that one single command-line call. Now, of course it takes a few minutes to create the cluster, so in order to show the other side of that integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I already have created. All right, so here you can see my OpenShift cluster that's running on Microsoft Azure. I'm going to actually log in over here, and the first sign you're going to see of the integration is that it's actually using my credentials, my login, and going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate myself to that OpenShift cluster. So I'll accept that it can access my account, and now we're going to load up the OpenShift web console. So now this looks familiar to me. Oh yeah — so if anybody's used OpenShift out there, this is the exact same console, and what we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. All right, so we can go down here and we can actually see — I want to deploy a database; I'm going to deploy Mongo as my key-value store that I'm going to use. But, you know, as we talk about management and having an
OpenShift cluster that's managed for you I don't really want to have to manage my database either so I'm actually going to use cosmos DB it's a native Azure service it's a multilingual database that offers me the ability to access my data in a variety of different formats including MongoDB fully managed replicated around the world a pretty incredible service so I'm going to go ahead and create that so now Brendan what's interesting I think to me is you know we talked about the operational aspects and clearly it's not you and I running the clusters but you do need that way to interface with it and so when customers are able to deploy this all of this is out of the box there's no additional contemporary like this is what you get when you create when you use that tool to create that open chef cluster this is what you get with all of that integration ok great step through here and go ahead don't have any IP ranges there we go all right and we create that binding all right and so now behind the scenes openshift is integrated with the azure api's with all of my credentials to go ahead and create that distributed database once it's done provisioning actually all of the credentials necessary to access the database are going to be automatically populated into kubernetes available for me inside of OpenShift via service discovery to access from my application without any further work so I think that really shows not only the power of integrating openshift with an azure based API but actually the power of integrating a Druze API is inside of OpenShift to make a truly seamless experience for managing and deploying your containers across a variety of different platforms yeah hey you know Brendan this is great I know you've got a flight to catch because I think you're back onstage in a few hours but you know really appreciate you joining us today absolutely I look forward to seeing what else we do yeah absolutely thank you so much thanks guys Matt you want to come back on up thanks a lot guys if you have never had the opportunity to do a live demo in front of 8,000 people it'll give you a new appreciation for standing up there and doing it and that was really good you know every time I get the chance just to take a step back and think about the technology that we have at our command today I'm in awe just the progress over the last 10 or 20 years is incredible on to think about what might come in the next 10 or 20 years really is unthinkable you even forget 10 years what might come in the next five years even the next two years but this can create a lot of uncertainty in the environment of what's going to be to come but I believe I am certain about one thing and that is if ever there was a time when any idea is achievable it is now just think about what you've seen today every aspect of open hybrid cloud you have the world's infrastructure at your fingertips and it's not stopping you've heard about this the innovation of open source how fast that's evolving and improving this capability you've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey and you've heard from customer after customer that's already started their journey in the successes that they've had you're one of the neat parts about this afternoon you will aren't later this week you will actually get to put your hands on all of this technology together in our live audience demo you know this is what some it's all about for us it's a chance to bring together the technology experts that you can work 
with to help formulate how to pull off those ideas we have the chance to bring together technology experts our customers and our partners and really create an environment where everyone can experience the power of open source that same spark that I talked about when I was at IBM where I understood the but intial that open-source had for enterprise customers we want to create the environment where you can have your own spark you can have that same inspiration let's make this you know in tomorrow's keynote actually you will hear a story about how open-source is changing medicine as we know it and literally saving lives it is a great example of expanding the ideas it might be possible that we came into this event with so let's make this the best summit ever thank you very much for being here let's kick things off right head down to the Welcome Reception in the expo hall and please enjoy the summit thank you all so much [Music] [Music]
Day One Morning Keynote | Red Hat Summit 2018
[Music] [Applause] You are now welcome to Red Hat Summit 2018. [Music] Wow, that is truly the coolest introduction I've ever had, thank you. Wow. I don't think I feel cool enough to follow an introduction like that. Wow. Well, welcome to the Red Hat Summit. This is our 14th annual event, and I have to say, looking out over this audience — wow — it's great to see so many people here joining us. This is by far our largest Summit to date; not only did we blow through the numbers we've had in the past, we blew through our own expectations this year. So I know we have a pretty packed house, and I know people are still coming in, so it's great to see so many people here, it's great to see so many familiar faces when I had a chance to walk around earlier, and it's great to see so many new people here joining us for the first time. I think the record attendance is an indication that more and more enterprises around the world are seeing the power of open source to help them with the challenges that they're facing due to the digital transformation that all enterprises around the world are going through. The theme for the Summit this year is ideas worth exploring, and we intentionally chose that because, as much as we are all going through this digital disruption and the challenges associated with it, one thing I think is becoming clear: no one person, and certainly no one company, has the answers to these challenges, right? This isn't a problem where you can go buy a solution; this is a set of capabilities that we all need to build, it's a set of cultural changes that we all need to go through, and that's going to require the best ideas coming from so many different places. So we're not here saying we have the answers; we're trying to convene the conversation, right? We want to serve as a catalyst, bringing great minds together to share ideas, so we all walk out of here at the end of the week a little wiser than when we first came here. We do have an amazing agenda for you. We have over 7,000 attendees — we may be pushing 8,000 by the time we get through this morning — we have 36 keynote speakers, and we have three hundred and twenty-five breakout sessions. And I have to throw in one plug: scheduling 325 breakout sessions is actually pretty difficult, and so we used the Red Hat business optimizer, which is an AI constraint solver that's new in Red Hat Decision Manager, to help us plan the Summit. Because we have individuals who have a clustered set of interests, and we want to make sure that when we schedule two breakout sessions we do it in a way that we don't have overlapping sessions that are really important to the same individual. So we tried to use this tool, and what we understand about people's interests and history of what they wanted to do, to try to make sure that we spaced out different times for things of similar interest for similar people, as well
as for people who stood in the back of breakouts before and I know I've done that too we've also used it to try to optimize room size so hopefully we will do our best to make sure that we've appropriately sized the spaces for those as well so it's really a phenomenal tool and I know it's helped us a lot this year in addition to the 325 breakouts we have a lot of our customers on stage during the main sessions and so you'll see demos you'll hear from partners you'll hear stories from so many of our customers not on our point of view of how to use these technologies but their point of views of how they actually are using these technologies to solve their problems and you'll hear over and over again from those keynotes that it's not just about the technology it's about how people are changing how people are working to innovate to solve those problems and while we're on the subject of people I'd like to take a moment to recognize the Red Hat certified professional of the year this is known award we do every year I love this award because it truly recognizes an individual for outstanding innovation for outstanding ideas for truly standing out in how they're able to help their organization with Red Hat technologies Red Hat certifications help system administrators application developers IT architects to further their careers and help their organizations by being able to advance their skills and knowledge of Red Hat products and this year's winner really truly is a great example about how their curiosity is helped push the limits of what's possible with technology let's hear a little more about this year's winner when I was studying at the University I had computer science as one of my subjects and that's what created the passion from the very beginning they were quite a few institutions around my University who were offering Red Hat Enterprise Linux as a course and a certification paths through to become an administrator Red Hat Learning subscription has offered me a lot more than any other trainings that have done so far that gave me exposure to so many products under red hair technologies that I wasn't even aware of I started to think about the better ways of how these learnings can be put into the real life use cases and we started off with a discussion with my manager saying I have to try this product and I really want to see how it really fits in our environment and that product was Red Hat virtualization we went from deploying rave and then OpenStack and then the open shift environment we wanted to overcome some of the things that we saw as challenges to the speed and rapidity of release and code etc so it made perfect sense and we were able to do it in a really short space of time so you know we truly did use it as an Innovation Lab I think idea is everything ideas can change the way you see things an Innovation Lab was such an idea that popped into my mind one fine day and it has transformed the way we think as a team and it's given that playpen to pretty much everyone to go and test their things investigate evaluate do whatever they like in a non-critical non production environment I recruited Neha almost 10 years ago now I could see there was a spark a potential with it and you know she had a real Drive a real passion and you know here we are nearly ten years later I'm Neha Sandow I am a Red Hat certified engineer all right well everyone please walk into the states to the stage Neha [Music] [Applause] congratulations thank you [Applause] I think that - well welcome to the red has some 
of this is your first summit yes it is thanks so much well fantastic sure well it's great to have you here I hope you have a chance to engage and share some of your ideas and enjoy the week thank you thank you congratulations [Applause] neha mentioned that she first got interest in open source at university and it made me think red hats recently started our Red Hat Academy program that looks to programmatically infuse Red Hat technologies in universities around the world it's exploded in a way we had no idea it's grown just incredibly rapidly which i think shows the interest that there really is an open source and working in an open way at university so it's really a phenomenal program I'm also excited to announce that we're launching our newest open source story this year at Summit it's called the science of collective discovery and it looks at what happens when communities use open hardware to monitor the environment around them and really how they can make impactful change based on that technologies the rural premier that will be at 5:15 on Wednesday at McMaster Oni West and so please join us for a drink and we'll also have a number of the experts featured in that and you can have a conversation with them as well so with that let's officially start the show please welcome red hat president of products and technology Paul Cormier [Music] Wow morning you know I say it every year I'm gonna say it again I know I repeat myself it's just amazing we are so proud here to be here today too while you all week on how far we've come with opens with open source and with the products that we that we provide at Red Hat so so welcome and I hope the pride shows through so you know I told you Seven Summits ago on this stage that the future would be open and here we are just seven years later this is the 14th summit but just seven years later after that and much has happened and I think you'll see today and this week that that prediction that the world would be open was a pretty safe predict prediction but I want to take you just back a little bit to see how we started here and it's not just how Red Hat started here this is an open source in Linux based computing is now in an industry norm and I think that's what you'll you'll see in here this week you know we talked back then seven years ago when we put on our prediction about the UNIX error and how Hardware innovation with x86 was it was really the first step in a new era of open innovation you know companies like Sun Deck IBM and HP they really changed the world the computing industry with their UNIX models it was that was really the rise of computing but I think what we we really saw then was that single company innovation could only scale so far could really get so far with that these companies were very very innovative but they coupled hardware innovation with software innovation and as one company they could only solve so many problems and even which comp which even complicated things more they could only hire so many people in each of their companies Intel came on the scene back then as the new independent hardware player and you know that was really the beginning of the drive for horizontal computing power and computing this opened up a brand new vehicle for hardware innovation a new hardware ecosystem was built around this around this common hardware base shortly after that Stallman and leanness they had a vision of his of an open model that was created and they created Linux but it was built around Intel this was really the beginning of having 
a software based platform that could also drive innovation this kind of was the beginning of the changing of the world here that system-level innovation now having a hardware platform that was ubiquitous and a software platform that was open and ubiquitous it really changed this system level innovation and that continues to thrive today it was only possible because it was open this could not have happened in a closed environment it allowed the best ideas from anywhere from all over to come in in win only because it was the best idea that's what drove the rate of innovation at the pace you're seeing today and it which has never been seen before we at Red Hat we saw the need to bring this innovation to solve real-world problems in the enterprise and I think that's going to be the theme of the show today you're going to see us with our customers and partners talking about and showing you some of those real-world problems that we are sought solving with this open innovation we created rel back then for this for the enterprise it started it's it it wasn't successful because it's scaled it was secure and it was enterprise ready it once again changed the industry but this time through open innovation this gave the hardware ecosystem a software platform this open software platform gave the hardware ecosystem a software platform to build around it Unleashed them the hardware side to compete and thrive it enabled innovation from the OEMs new players building cheaper faster servers even new architectures from armed to power sprung up with this change we have seen an incredible amount of hardware innovation over the last 15 years that same innovation happened on the software side we saw powerful implementations of bare metal Linux distributions out in the market in fact at one point there were 300 there are over 300 distributions out in the market on the foundation of Linux powerful open-source equivalents were even developed in every area of Technology databases middleware messaging containers anything you could imagine innovation just exploded around the Linux platform in innovation it's at the core also drove virtualization both Linux and virtualization led to another area of innovation which you're hearing a lot about now public cloud innovation this innovation started to proceed at a rate that we had never seen before we had never experienced this in the past in this unprecedented speed of innovation and software was now possible because you didn't need a chip foundry in order to innovate you just needed great ideas in the open platform that was out there customers seeing this innovation in the public cloud sparked it sparked their desire to build their own linux based cloud platforms and customers are now are now bringing that cloud efficiency on-premise in their own data centers public clouds demonstrated so much efficiency the data centers and architects wanted to take advantage of it off premise on premise I'm sorry within their own we don't within their own controlled environments this really allowed companies to make the most of existing investments from data centers to hardware they also gained many new advantages from data sovereignty to new flexible agile approaches I want to bring Burr and his team up here to take a look at what building out an on-premise cloud can look like today Bure take it away I am super excited to be with all of you here at Red Hat summit I know we have some amazing things to show you throughout the week but before we dive into this demonstration I want you to 
take just a few seconds just a quick moment to think about that really important event your life that moment you turned on your first computer maybe it was a trs-80 listen Claire and Atari I even had an 83 b2 at one point but in my specific case I was sitting in a classroom in Hawaii and I could see all the way from Diamond Head to Pearl Harbor so just keep that in mind and I turn on an IBM PC with dual floppies I don't remember issuing my first commands writing my first level of code and I was totally hooked it was like a magical moment and I've been hooked on computers for the last 30 years so I want you to hold that image in your mind for just a moment just a second while we show you the computers we have here on stage let me turn this over to Jay fair and Dini here's our worldwide DevOps manager and he was going to show us his hardware what do you got Jay thank you BER good morning everyone and welcome to Red Hat summit we have so many cool things to show you this week I am so happy to be here and you know my favorite thing about red hat summit is our allowed to kind of share all of our stories much like bird just did we also love to you know talk about the hardware and the technology that we brought with us in fact it's become a bit of a competition so this year we said you know let's win this thing and we actually I think we might have won we brought a cloud with us so right now this is a private cloud for throughout the course of the week we're going to turn this into a very very interesting open hybrid cloud right before your eyes so everything you see here will be real and happening right on this thing right behind me here so thanks for our four incredible partners IBM Dell HP and super micro we've built a very vendor heterogeneous cloud here extra special thanks to IBM because they loaned us a power nine machine so now we actually have multiple architectures in this cloud so as you know one of the greatest benefits to running Red Hat technology is that we run on just about everything and you know I can't stress enough how powerful that is how cost-effective that is and it just makes my life easier to be honest so if you're interested the people that built this actual rack right here gonna be hanging out in the customer success zone this whole week it's on the second floor the lobby there and they'd be glad to show you exactly how they built this thing so let me show you what we actually have in this rack so contained in this rack we have 1056 physical chorus right here we have five and a half terabytes of RAM and just in case we threw 50 terabytes of storage in this thing so burr that's about two million times more powerful than that first machine you boot it up thanks to a PC we're actually capable of putting all the power needs and cooling right in this rack so there's your data center right there you know it occurred to me last night that I can actually pull the power cord on this thing and kick it up a notch we could have the world's first mobile portable hybrid cloud so I'm gonna go ahead and unplug no no no no no seriously it's not unplug the thing we got it working now well Berg gets a little nervous but next year we're rolling this thing around okay okay so to recap multiple vendors check multiple architectures check multiple public clouds plug right into this thing check and everything everywhere is running the same software from Red Hat so that is a giant check so burn Angus why don't we get the demos rolling awesome so we have totally we have some amazing hardware 
amazing computers on this stage, but now we need to light it up, and we have Angus Thomas, who represents our OpenStack engineering team, and he's going to show us what we can do with this awesome hardware. Angus. Thank you, Burr. So this is an impressive rack of hardware that Jay has brought up on stage, and what I want to talk about today is putting it to work with OpenStack Platform director. We're going to turn it from a lot of potential into a flexible, scalable private cloud. We've been using director for a while now to take care of managing hardware and orchestrating the deployment of OpenStack; what's new is that we're bringing the same capabilities to on-premise management of the deployment of OpenShift. Deploying OpenShift in this way is the best of both worlds: it's bare-metal performance, but with an underlying infrastructure-as-a-service that can take care of deploying new instances and scaling out, and a lot of the things that we expect from a cloud provider. Director is running on a virtual machine on Red Hat Virtualization at the top of the rack, and it's going to bring everything else under control. What you can see on the screen right now is the director UI, and as you see, some of the hardware in the rack is already being managed. At the top level we have information about the number of cores and the amount of RAM and the disks that each machine has; if we dig in a bit, there's information about MAC addresses and IPs and the management interface, the BIOS, the kernel version; dig a little deeper and there is information about the hard disks. All of this is important because we want to be able to make sure that we put workloads exactly where we want them. Jay, could you please power on the two new machines at the top of the rack? Sure. All right, thank you. So when those two machines come up on the network, director is going to see them, see that they're new and not already under management, and it's immediately going to go into the hardware inspection that populates this database and gets them ready for use. We also have profiles, as you can see here. Profiles are the way that we match the hardware in a machine to the kind of workload that it's suited to; this is how we make sure that machines that have all the disks run Ceph and machines that have all the RAM run our application workloads, for example. There's two ways these can be set: when you're dealing with a rack like this, you could go in and individually tag each machine, but director scales up to data centers, so we have a rules-matching engine which will automatically take the hardware profile of a new machine and make sure it gets tagged in exactly the right way. So we can automatically discover new machines on the network and we can automatically match them to a profile — that's how we streamline and scale up operations. Now I want to talk about deploying the software. We have a set of validations; we've learned over time about the misconfigurations in the underlying infrastructure which can cause the deployment of a multi-node distributed application like OpenStack or OpenShift to fail. If you have the wrong VLAN tags on a switch port, or DHCP isn't running where it should be, for example, you can get into a situation which is really hard to debug. A lot of our validations actually run before the deployment: they look at what you're intending to deploy and they check that the environment is the way that it should be, and they'll preempt problems — and obviously preemption is a lot better than debugging.
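The rules-matching engine Angus describes — inspect a new machine's hardware and tag it with the profile that fits — is easy to picture in miniature. The sketch below is illustrative only: it is not director's actual rule syntax, and the profile names and thresholds are made up to mirror the disks-go-to-Ceph, RAM-goes-to-applications example from the talk.

```python
# Toy hardware-to-profile matching: the first rule whose predicate matches wins.
PROFILE_RULES = [
    ("ceph-storage",    lambda hw: hw["disk_count"] >= 8),   # lots of disks -> Ceph nodes
    ("compute-highmem", lambda hw: hw["ram_gb"] >= 512),     # lots of RAM -> application workloads
    ("control-plane",   lambda hw: hw["cpu_cores"] >= 32),
]

def match_profile(hw_facts):
    """Return the first profile whose rule matches the machine's inspection data."""
    for profile, rule in PROFILE_RULES:
        if rule(hw_facts):
            return profile
    return "baremetal-default"  # fall back when nothing specific matches

# Example: two newly powered-on machines as they might look after hardware inspection.
new_nodes = [
    {"name": "node-11", "cpu_cores": 24, "ram_gb": 768, "disk_count": 2},
    {"name": "node-12", "cpu_cores": 16, "ram_gb": 64,  "disk_count": 12},
]
for node in new_nodes:
    print(node["name"], "->", match_profile(node))   # node-11 -> compute-highmem, node-12 -> ceph-storage
```

The scaling argument from the talk is the point: tagging by hand works for one rack, but a rule engine lets the same discover-inspect-tag flow run unattended across a data center.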
Something new, which you probably have not seen before, is director managing multiple deployments of different things side by side. Before we came out on stage we also deployed OpenStack on this rack, just to keep me honest, so let me jump over to OpenStack very quickly. A lot of our OpenStack customers will be familiar with this UI, and the bare-metal deployment of OpenStack on our rack is actually running a set of virtual machines which are running Gluster; you're going to see that put to work later on during the summit. Jay has gone to an awful lot of effort to get this hardware up on the stage, so we're going to use it in as many different ways as we can.

Okay, let's deploy OpenShift. If I switch over to the deployment plan view, there are a few steps. The first thing you need to do is make sure we have the hardware. I already talked about how director manages hardware: it's smart enough to make sure it's not going to attempt to deploy onto machines that are already in use, and it will only deploy onto machines that have the right profile, but I think with the rack we have here we've got enough. The next thing is the deployment configuration. This is where you get to customize exactly what's going to be deployed, to make sure it really matches your environment. If there are external IPs for additional services, you can set them here; whatever it takes to make sure the deployment is going to work for you. As you can see on the screen, we have a set of options around enabling TLS for encrypting network traffic, and if I dig a little deeper there are options around enabling IPv6 and network isolation, so that different classes of traffic run over different physical NICs.

Then we have roles. Roles are essentially about the software that's going to be put on each machine. Director comes with a set of roles for a lot of the software that Red Hat supports, and you can just use those, or you can modify them a little bit if you need to add a monitoring agent or whatever it might be, or you can create your own custom roles. Director has quite a rich syntax for custom role definition and custom network topologies, whatever you need in order to make it work in your environment. The roles we have right now are going to give us a working instance of OpenShift.

If I go ahead and click through, the validations are all looking green, so right now I can click the button to start the deploy, and you will see things lighting up on the rack. Director is going to use IPMI to reboot the machines, provision them with a RHEL image, put the containers on them, and start up the application stack. One last thing: once the deployment is done, you're going to want to keep director around. Director has a lot of capabilities around what we call day-two operational management: bringing in new hardware, scaling out deployments, dealing with updates, and, critically, doing upgrades as well. So, having said all of that, it is time for me to switch over to an instance of OpenShift deployed by director, running on bare metal on our rack, and hand it over to our developer team so they can show what they can do with it. Thank you.

That is so awesome, Angus. What you've seen now is going from bare metal to the ultimate private cloud, with OpenStack director making OpenShift ready for our developers to build their next-generation applications. Thank you so much, guys, that was totally awesome; I love what you guys showed there.
Now I have the honor of introducing a very special guest, one of our earliest OpenShift customers, an organization that understands the necessity of the private cloud inside their organization and, more importantly, is fundamentally redefining its industry. Please extend a warm welcome to Dietmar Fauser from Amadeus.

Well, good morning everyone, and a big thank you for having Amadeus here, and myself. As was just said, I'm from Amadeus. First of all, we are a large IT provider in the travel industry, serving essentially airlines, hotel chains, and distributors like Expedia and others. We started very early with OpenShift, a bit more than three years ago, and we jumped on it when Red Hat teamed with Google to bring Kubernetes into it.

Let me quickly share a few figures about Amadeus to give you a sense of what we are doing and the scale of our operations. One of our key metrics is what we call passenger boardings, the number of customers who physically board a plane over the year through our systems: roughly 1.6 billion people checking in and taking aircraft under the Amadeus systems. There are close to 600 million travel agency bookings, and virtually all airlines are on the system. One figure I want to stress a little bit is one trillion availability requests per day. When I read this figure my mind boggles a little: it means, in continuous throughput, more than 10 million hits per second. Of course these are not traditional database transactions; the data is highly cached in memory, and these applications are running over more than 100,000 cores, so it's really big stuff.

Today I want to give some concrete feedback on what we are doing, so I have chosen two applications, products of Amadeus that are currently running in production in different hosting environments, since the theme of this talk is hybrid cloud. I want to give some concrete feedback on how we architect the applications, and of course it stays relatively high level.

Here I have taken one of our applications that is used in the hospitality environment. We built this for a very large US hotel chain, and it is currently being brought into full production, so something like 30 percent of the globe, or 5,000-plus hotels, are on this platform. You can see that we use OpenShift as the PaaS; that's the most central piece of our hybrid cloud strategy. On the database side we use Oracle and Couchbase. Couchbase is used for the heavy-duty, fast-access, more key-value-store workload, but also to replicate data across two data centers; in this case it's running over two US-based data centers in an east coast and west coast topology, run by Amadeus, fitted with VMware for the virtualization, OpenStack on top of it, and then OpenShift to host the applications.

On the right-hand side you see the kinds of tools, if you want to call them tools, that we use. These are the principal ones; of course the real picture is much more complex. In essence we use Terraform to map to the APIs of the underlying infrastructure, because there are obviously differences when you run on OpenStack or Google Compute Engine or AWS or Azure, so some tweaking is needed. We use Red Hat Ansible a lot, and we also use Puppet, so you can see these are really the big pieces of this kind of installation. And if we look at the topology, again at a very high level, these two locations basically map to the data centers of our customers, so they are in close proximity, because the response-time SLAs of this application are very tight.
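As a quick sanity check on the one-trillion-availability-requests-per-day figure quoted above, a couple of lines of Python reproduce the claimed throughput; the numbers are simply those from the talk, not independent measurements.

```python
# Back-of-the-envelope check of the throughput quoted above.
requests_per_day = 1_000_000_000_000   # one trillion availability requests
seconds_per_day = 24 * 60 * 60          # 86,400 seconds

print(requests_per_day / seconds_per_day)  # ~11.6 million requests per second
```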
So that's an example of an application architected mostly with high availability in mind, not necessarily full global worldwide scaling, although of course it could be scaled. Here the idea is that we can swing from one data center to the other in a matter of minutes; both take traffic, data is fully synchronized across the data centers, and the switch back and forth is very fast.

The second example I have taken is what we call the shopping box. This is when people go to Kayak or Expedia and get inspired about where they want to travel; this is really the piece that drives most of the transactions into Amadeus. Here we architect more for high scalability. Of course availability is also key, but here scaling and geographical spread are very important. In short, it runs partially on premises in our Amadeus data center, again on OpenStack, and we deployed it mostly, in a first step, on Google Compute Engine, and currently, as we speak, on Amazon AWS, and we are also working with Red Hat to qualify the whole show on Microsoft Azure. In this application it's the same building blocks, but there is a large streaming aspect to it, so we bring Kafka into this, working with Red Hat and another partner to bring Kafka onto OpenShift, because in the end we want to use OpenShift to administer the whole show, and over time the databases as well (a small illustrative producer sketch follows below). The physical deployment topology is very classical: we use the regions and availability zone concepts, so this application is spread over three principal continental regions, and in each of those availability zones we take a hit of several tens of thousands of transactions.

That was it, really, in very short, just to give you a glimpse of how we implement hybrid clouds. I think that's the way forward. It gives us a lot of freedom, and it allows us to discuss in a much more educated way with our customers, who sometimes already have deals in place with one cloud provider or another, so for us there is a lot of value in leaving them the choice. That was a very quick overview of what we are doing together with Red Hat, based essentially on OpenShift, with more and more OpenStack coming into the picture. I hope you found this interesting. Thanks a lot, and have a nice summit. [Applause]

Thank you so much, Dietmar, great solution. We've worked with Dietmar and his team for a long time; great solution. So I want to take us back a little bit, I want to circle back. I ended earlier talking a little bit about the public cloud, so let's circle back there. Even though some applications need to run in various footprints on premises, there are still great gains to be had from running certain applications in the public cloud. Public cloud will be as impactful to the industry as the UNIX era of computing was, but by itself it will have some of the same limitations and challenges that that model had. Today there's tremendous cloud innovation happening in the public cloud, driven by a handful of massive companies, and much like the innovation that Sun, DEC, HP, and others drove in the UNIX era of computing, many customers want to take advantage of the best innovation no matter where it comes from. But as they eventually saw in the UNIX era, they can't afford the best innovation at the cost of a siloed operating environment.
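Looping back to the Amadeus shopping engine described above, which streams availability-search traffic through Kafka running on OpenShift, here is a minimal, hypothetical producer sketch using the kafka-python library. The broker address, topic name, and event fields are invented for illustration; they are not details from the talk.

```python
# Hypothetical Kafka producer for the kind of availability-search events the
# shopping flow described above streams; broker address, topic name, and
# message fields are all made up for illustration (kafka-python library).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka.example.internal:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"origin": "SFO", "destination": "NCE", "date": "2018-06-01"}
producer.send("availability-searches", value=event)
producer.flush()
```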
With the open community, we are building a hybrid application platform that can give you access to the best innovation no matter which vendor or which cloud it comes from, letting public cloud providers innovate in services beyond what customers, or any one provider, could do on their own, such as large-scale machine learning or artificial intelligence built on data that's probably unique to that one cloud, but consumed in a common way by the end customer across all applications, in any environment, on any footprint in their overall IT infrastructure. This is exactly what RHEL brought to our customers in the UNIX era of computing: consistency across any of those footprints. Obviously enterprises will have applications for all different uses; some will live on premises, some in the cloud. Hybrid cloud is the only practical way forward. I think you've been hearing that from us for a long time; it is the only practical way forward, and it'll be as impactful as anything we've ever seen before. I want to bring Burr and his team back to see a hybrid cloud deployment in action. [Music]

All right, earlier you saw what we did with taking bare metal and lighting it up with OpenStack director, making it OpenShift-ready for developers to build their next-generation applications. Now we want to show you one of those next-generation applications. What we've done is take OpenShift and spread it out, installing it across Azure and Amazon: a true hybrid cloud. With me on stage today is Ted, who's going to walk us through an application, and Brent Midwood, our DevOps engineer, who's going to be monitoring on the back side to make sure we do a good job. So at this point, Ted, what have you got for us?

Thank you, Burr, and good morning, everybody. This morning we are running, on the stage in our private cloud, an application that's providing fraud detection services for financial transactions. Our customer base is rather large, and we occasionally take extended bursts of heavy traffic, so in order to keep our latency down and keep our customers happy, we've deployed extra service capacity in the public cloud: we have capacity with Microsoft Azure in Texas and with Amazon Web Services in Ohio. We use OpenShift Container Platform in all three locations, because OpenShift makes it easy for us to deploy our containerized services wherever we want to put them. But the question still remains: how do we establish seamless communication across our entire enterprise, and, more importantly, how do we balance the workload across these three locations in such a way that we use our resources efficiently and give our customers the best possible experience?

This is where Red Hat AMQ Interconnect comes in. As you can see, we've deployed AMQ Interconnect alongside our fraud detection applications in all three locations, and if I switch to the AMQ console, we'll see the topology of the network we've created. The router on stage here has made connections outbound to the public routers in AWS and Azure; these connections are secured using mutual TLS authentication and encryption, and once the connections are established, AMQ automatically figures out the best way to route traffic to where it needs to go. So what we have right now is a distributed, reliable, brokerless message bus that spans our entire enterprise.
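AMQ Interconnect routers speak AMQP 1.0, so a client can attach to whichever router is nearest and let the router network decide which site handles the request. Here is a minimal, illustrative Python sender using the Qpid Proton library; the router URL and the "fraud/check" address are placeholders, not the names actually used in the demo.

```python
# Minimal, illustrative AMQP sender using Qpid Proton (python-qpid-proton).
# The router URL and the "fraud/check" address are placeholders.
from proton import Message
from proton.handlers import MessagingHandler
from proton.reactor import Container

class FraudCheckSender(MessagingHandler):
    def __init__(self, url, address, payload):
        super().__init__()
        self.url, self.address, self.payload = url, address, payload

    def on_start(self, event):
        # Connect to the nearest interconnect router; the router network
        # routes the message to a fraud-detection service with capacity.
        conn = event.container.connect(self.url)
        event.container.create_sender(conn, self.address)

    def on_sendable(self, event):
        event.sender.send(Message(body=self.payload))
        event.sender.close()
        event.connection.close()

Container(FraudCheckSender("amqps://router.example.com:5671",
                           "fraud/check",
                           {"txn_id": "12345", "amount": 250.00})).run()
```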
If you want to learn more about this, make sure you catch the AMQ breakout tomorrow at 11:45 with Jack Britton and David Ingham. Let's have a look at the message flow. We'll dive in and isolate the fraud detection API that we're interested in, and what we see is that all the traffic is being handled in the private cloud. That's what we expect, because our latencies are low and acceptable. But now, if we take a little burst of increased traffic, we're going to see that AMQ pushes a little bit of traffic out to the public cloud, so Azure picks up some of the load to keep the latencies down. When that subsides, Azure finishes up what it's doing and goes back offline. Now, if we take a much bigger load increase, you'll see two things: first of all, Azure is going to take a bigger proportion than it did before, and Amazon Web Services is going to get thrown into the fray as well.

Now, AWS is actually doing less work than I expected it to do; I expected a bit of a bigger slice there, but this is an interesting illustration of what's going on with load balancing. AMQ load balancing sends requests to the services that have the lowest backlog, in order to keep the latencies as steady as possible, so AWS is probably running slowly for some reason, and that's causing AMQ to push less traffic its way. The other thing you'll notice, if you look carefully, is that this graph fluctuates slightly. Those fluctuations are caused by all the variances in the network: we have the cloud on stage and we have clouds in various places across the country, with a lot of equipment and layers of virtualization and networking in between, and we're reacting in real time to the reality on the digital street. So, Burr, what's the story with AWS? I notice there's a problem right here, right now; we seem to have a bit of a performance issue.

Guys, I noticed that as well, and a little bit ago I actually got an alert from Red Hat Insights letting us know that there might be some potential optimizations we could make to our environment, so let's take a look at Insights. Here's the Red Hat Insights interface. You can see our three OpenShift deployments: the setup here on stage in San Francisco, our Azure deployment in Texas, and our AWS deployment in Ohio, and Insights is highlighting that the deployment in Ohio may have some issues that need attention. Red Hat Insights collects anonymized data from managed systems across our customer environment, and that gives us visibility into things like vulnerabilities, compliance, configuration assessment, and, of course, Red Hat subscription consumption. All of this is presented as a SaaS offering, so it's really easy to use, it requires minimal infrastructure up front, and it provides an immediate return on investment.

What Insights is showing us here is that we have some potential issues on the configuration side that may need attention. From this view I get a look at all the systems in our inventory, including instances and containers, and you can see here on the left that Insights is highlighting one of those instances as needing some attention; it might be a candidate for optimization, and this might be related to the issues you were seeing just a minute ago. Insights uses machine learning and AI techniques to analyze all collected data, so we combine data from not only this system's configuration but also other systems from across the Red Hat customer base. This allows us to compare ourselves to
how we're doing across the entire set of industries including our own vertical in this case the financial services industry and we can compare ourselves to other customers we also get access to tailored recommendations that let us know what we can do to optimize our systems so in this particular case we're actually detecting an issue here where we are an outlier so our configuration has been compared to other configurations across the customer base and in this particular instance in this security group were misconfigured and so insights actually gives us the steps that we need to use to remediate the situation and the really neat thing here is that we actually get access to a custom ansible playbook so if we want to automate that type of a remediation we can use this inside of Red Hat ansible tower Red Hat satellite Red Hat cloud forms it's really really powerful the other thing here is that we can actually apply these recommendations right from within the Red Hat insights interface so with just a few clicks I can select all the recommendations that insights is making and using that built-in ansible automation I can apply those recommendations really really quickly across a variety of systems this type of intelligent automation is really cool it's really fast and powerful so really quickly here we're going to see the impact of those changes and so we can tell that we're doing a little better than we were a few minutes ago when compared across the customer base as well as within the financial industry and if we go back and look at the map we should see that our AWS employment in Ohio is in a much better state than it was just a few minutes ago so I'm wondering Ted if this had any effect and might be helping with some of the issues that you were seeing let's take a look looks like went green now let's see what it looks like over here yeah doesn't look like the configuration is taking effect quite yet maybe there's some delay awesome fantastic the man yeah so now we're load balancing across the three clouds very much fantastic well I have two minute Ted I truly love how we can route requests and dynamically load transactions across these three clouds a truly hybrid cloud native application you guys saw here on on stage for the first time and it's a fully portable application if you build your applications with openshift you can mover from cloud to cloud to cloud on stage private all the way out to the public said it's totally awesome we also have the application being fully managed by Red Hat insights I love having that intelligence watching over us and ensuring that we're doing everything correctly that is fundamentally awesome thank you so much for that well we actually have more to show you but you're going to wait a few minutes longer right now we'd like to welcome Paul back to the stage and we have a very special early Red Hat customer an Innovation Award winner from 2010 who's been going boldly forward with their open hybrid cloud strategy please give a warm welcome to Monty Finkelstein from Citigroup [Music] [Music] hi Marty hey Paul nice to see you thank you very much for coming so thank you for having me Oh our pleasure if you if you wanted to we sort of wanted to pick your brain a little bit about your experiences and sort of leading leading the charge in computing here so we're all talking about hybrid cloud how has the hybrid cloud strategy influenced where you are today in your computing environment so you know when we see the variable the various types of workload that we had an 
hour on from cloud we see the peaks we see the valleys we see the demand on the environment that we have we really determined that we have to have a much more elastic more scalable capability so we can burst and stretch our environments to multiple cloud providers these capabilities have now been proven at City and of course we consider what the data risk is as well as any regulatory requirement so how do you how do you tackle the complexity of multiple cloud environments so every cloud provider has its own unique set of capabilities they have they're own api's distributions value-added services we wanted to make sure that we could arbitrate between the different cloud providers maintain all source code and orchestration capabilities on Prem to drive those capabilities from within our platforms this requires controlling the entitlements in a cohesive fashion across our on Prem and Wolfram both for security services automation telemetry as one seamless unit can you talk a bit about how you decide when you to use your own on-premise infrastructure versus cloud resources sure so there are multiple dimensions that we take into account right so the first dimension we talk about the risk so low risk - high risk and and really that's about the data classification of the environment we're talking about so whether it's public or internal which would be considered low - ooh confidential PII restricted sensitive and so on and above which is really what would be considered a high-risk the second dimension would be would focus on demand volatility and responsiveness sensitivity so this would range from low response sensitivity and low variability of the type of workload that we have to the high response sensitivity and high variability of the workload the first combination that we focused on is the low risk and high variability and high sensitivity for response type workload of course any of the workloads we ensure that we're regulatory compliant as well as we achieve customer benefits with within this environment so how can we give developers greater control of their their infrastructure environments and still help operations maintain that consistency in compliance so the main driver is really to use the public cloud is scale speed and increased developer efficiencies as well as reducing cost as well as risk this would mean providing develop workspaces and multiple environments for our developers to quickly create products for our customers all this is done of course in a DevOps model while maintaining the source and artifacts registry on-prem this would allow our developers to test and select various middleware products another product but also ensure all the compliance activities in a centrally controlled repository so we really really appreciate you coming by and sharing that with us today Monte thank you so much for coming to the red echo thanks a lot thanks again tamati I mean you know there's these real world insight into how our products and technologies are really running the businesses today that's that's just the most exciting part so thank thanks thanks again mati no even it with as much progress as you've seen demonstrated here and you're going to continue to see all week long we're far from done so I want to just take us a little bit into the path forward and where we we go today we've talked about this a lot innovation today is driven by open source development I don't think there's any question about that certainly not in this room and even across the industry as a whole that's a long 
way that we've come from when we started our first summit 14 years ago with over a million open source projects out there this unit this innovation aggregates into various community platforms and it finally culminates in commercial open source based open source developed products these products run many of the mission-critical applications in business today you've heard just a couple of those today here on stage but it's everywhere it's running the world today but to make customers successful with that interact innovation to run their real-world business applications these open source products have to be able to leverage increase increasingly complex infrastructure footprints we must also ensure a common base for the developer and ultimately the application no matter which footprint they choose as you heard mati say the developers want choice here no matter which no matter which footprint they are ultimately going to run their those applications on they want that flexibility from the data center to possibly any public cloud out there in regardless of whether that application was built yesterday or has been running the business for the last 10 years and was built on 10-year old technology this is the flexibility that developers require today but what does different infrastructure we may require different pieces of the technical stack in that deployment one example of this that Effects of many things as KVM which provides the foundation for many of those use cases that require virtualization KVM offers a level of consistency from a technical perspective but rel extends that consistency to add a level of commercial and ecosystem consistency for the application across all those footprints this is very important in the enterprise but while rel and KVM formed the foundation other technologies are needed to really satisfy the functions on these different footprints traditional virtualization has requirements that are satisfied by projects like overt and products like Rev traditional traditional private cloud implementations has requirements that are satisfied on projects like OpenStack and products like Red Hat OpenStack platform and as applications begin to become more container based we are seeing many requirements driven driven natively into containers the same Linux in different forms provides this common base across these four footprints this level of compatible compatibility is critical to operators who must best utilize the infinite must better utilize secure and deploy the infrastructure that they have and they're responsible for developers on the other hand they care most about having a platform that can creates that consistency for their applications they care about their services and the services that they need to consume within those applications and they don't want limitations on where they run they want service but they want it anywhere not necessarily just from Amazon they want integration between applications no matter where they run they still want to run their Java EE now named Jakarta EE apps and bring those applications forward into containers and micro services they need able to orchestrate these frameworks and many more across all these different footprints in a consistent secure fashion this creates natural tension between development and operations frankly customers amplify this tension with organizational boundaries that are holdover from the UNIX era of computing it's really the job of our platforms to seamlessly remove these boundaries and it's the it's the goal of 
Red Hat to seamlessly get you from the old world to the new world. We're going to show you a really cool demonstration now of how you can automate this transition. First we're going to take a Windows virtual machine from a traditional VMware deployment and convert it into a KVM-based virtual machine running in a container, all under the Kubernetes umbrella. This makes virtual machines more accessible to the developer, and it will accelerate the transformation of those virtual machines into cloud-native, container-based form. We will work this capability into the product line over the coming releases, so we can strike the balance of enabling our developers to move in this direction while enabling mission-critical operations to still do their job. So let's bring Burr and his team back up to show you this in action one more time.

Thanks. All right, at Red Hat we recognize that large organizations, large enterprises, have a substantial investment in legacy virtualization technology, and this is holding you back: you have thousands of virtual machines that need to be modernized. So what you're about to see next is something very special. With me here on stage we have James Labocki, who represents our operations folks and is going to walk us through a mass migration, and also Itamar Heim, the lead developer of a very special application, who is going to be modernizing, containerizing, and optimizing our application. All right, let's get started. James?

Thanks, Burr. As you can see, I have a typical VMware environment here. I'm in the vSphere client, and I've got a number of virtual machines, a handful of which make up one of my applications, in this case for my development environment. What I want to do is migrate those over to a KVM-based Red Hat Virtualization environment. So I'm going to go to CloudForms, our cloud management platform; that's our first step. CloudForms has actually already discovered both my RHV environment and my vSphere environment and understands the compute, network, and storage there. You'll notice one of the capabilities we built is this new capability called migrations, and underneath here there are two steps. The first thing I need to do is create my infrastructure mappings. This allows me to map my compute, networking, and storage between vSphere and RHV so CloudForms understands how they relate. Let's go ahead and create an infrastructure mapping; I'll call it "summit infrastructure mapping", and then I'll begin to map my two environments. First the compute, so the clusters here; next the datastores, so those virtual machines happen to live on datastore 2 in vSphere, and I'll target datastore 2 inside of my RHV environment; and finally my networks, which live on network 100, so I'll map those from vSphere to RHV.

Once my infrastructure is mapped, the next step is to create a plan to migrate those virtual machines. I'll continue to the plan wizard here, I'll select the infrastructure mapping I just created, I'll select migrating my development environment from those virtual machines to RHV, and then I need to import a CSV file. The CSV file contains a list of all the virtual machines that I want to migrate, and that's it.
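The CSV import is just a flat list of the virtual machines to move, so a few lines of Python are enough to generate one. The column header and VM names below are assumptions for illustration; the demo does not show the file's exact format, so check the CloudForms documentation for the columns the import actually expects.

```python
# Illustrative only: build a CSV listing the VMs to migrate.
# The "Name" header and VM names are assumptions, not the demo's real values.
import csv

vms_to_migrate = ["dev-app-db01", "dev-app-web01", "dev-app-web02"]

with open("migration_plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name"])
    for vm in vms_to_migrate:
        writer.writerow([vm])
```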
Once I hit create, CloudForms is going to begin, in an automated fashion, shutting down those virtual machines, converting them, and taking care of all the minutiae that you'd otherwise have to handle manually. It's going to do all of that automatically for me, so I don't have to worry about all those manual interactions; no longer do I have to go shut them down by hand. You can see the migrations have kicked off here, my VMs are migrating, and if I go back to the screen you can see that we're going to start seeing those shut down.

Okay, awesome, but if people want to know more about this, how would they dive deeper into this technology later this week? Great question: we have a workload portability session in the hybrid cloud track on Wednesday, if you want to see a presentation that deep-dives into this topic and some of the methodologies to migrate, and then on Thursday we actually have a hands-on lab, the IT optimization VM migration lab, that you can check out. And as you can see, those are shutting down here. Yes, we see them powering off right now; that's fantastic.

So if I go back now, that's going to take a while; you've got to convert all the disks and move them over. But notice that previously I had already run one migration of a single application, a Windows virtual machine. If I browse over to Red Hat Virtualization, I can see on the dashboard here, under virtual machines, that I have migrated that Windows virtual machine, and if I open up a tab I can now browse to my Windows virtual machine, which is running our Wingtip Toys store application, our sample application here. My VM has been moved over from VMware to RHV and is available for Itamar. All right, great, available to our developers. Itamar, what are you going to do for us here?

Well, James, it's great that you can save cost by moving from VMware to Red Hat Virtualization, but I want to containerize our application, and with container-native virtualization I can run my virtual machine on OpenShift like any other container, using KubeVirt, a Kubernetes operator to run and manage virtual machines. Let's look at the OpenShift service catalog. You can see we have a new virtualization section here; we can import KVM or VMware virtual machines, or, if they are already loaded, we can create new instances of them for the developer to work with. We just need to give a name, CPU, and memory, we can set other virtualization parameters, and we create our virtual machine.

Now let's see how this looks in the OpenShift console. The cool thing about KVM is that virtual machines are just Linux processes, so they can act and behave like other OpenShift applications. We built on more than a decade of virtualization experience with KVM, Red Hat Virtualization, and OpenStack, and can now benefit from Kubernetes and OpenShift to manage and orchestrate our virtual machines. Since we know this container is actually a virtual machine, we can do virtual machine things with it, like shutting it down, rebooting it, or opening a remote desktop session to it, but we can also see that it is just a container like any other container in OpenShift, and even though the web application is running inside a Windows virtual machine, the developer can still use OpenShift mechanisms like services and routes. Let's browse our web application using the OpenShift service: it's the same Wingtip Toys application, but this time the virtual machine is running on OpenShift. But we're not done; we want to containerize our application.
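Because KubeVirt exposes virtual machines as Kubernetes custom resources, ordinary Kubernetes tooling can see them next to the pods in a project. Here is a minimal sketch using the official Kubernetes Python client; the namespace and the KubeVirt API version are assumptions, since both depend on the release installed.

```python
# Minimal sketch: list KubeVirt VirtualMachine objects alongside pods.
# Assumes a working kubeconfig; the namespace and KubeVirt API version
# ("v1alpha3" here) are assumptions that vary by release.
from kubernetes import client, config

config.load_kube_config()
namespace = "wingtip-toys"  # hypothetical project name

# Ordinary containers in the project
for pod in client.CoreV1Api().list_namespaced_pod(namespace).items:
    print("pod:", pod.metadata.name)

# Virtual machines, exposed as custom resources by the KubeVirt operator
vms = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace=namespace, plural="virtualmachines")
for vm in vms.get("items", []):
    print("virtualmachine:", vm["metadata"]["name"])
```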
Since it's a Windows virtual machine, we can open a remote desktop session to it, and we see we have Visual Studio and an ASP.NET application. Let's start containerizing by moving the Microsoft SQL Server database from running inside the Windows virtual machine to running on Red Hat Enterprise Linux as an OpenShift container. We'll go back to the OpenShift service catalog, this time to the database section, and just as easily we'll create a SQL Server container: we just need to accept the EULA, provide a password, choose the edition we want, and create the database. Again, we can see the SQL Server is just another container running on OpenShift.

Now let's find the connection details for our database. To keep this simple, we'll take the IP address of our database service, go back to the web application in Visual Studio, update the IP address in the connection string, publish our application, and go back to browse it through OpenShift. Fortunately for us, the user experience team heard we're modernizing our application, so they pitched in and pushed new icons to use with our containerized database, to modernize the look and feel as well. It's still the same Wingtip Toys application, running in a virtual machine on OpenShift, but it's now using a containerized database. To recap: we saw that we can run virtual machines natively on OpenShift like any other container-based application, modernize them, and mesh them together. We containerized the database, but we can use the same approach to containerize any part of our application.

Some items here deserve repeating. One thing you saw is Red Hat Enterprise Linux running SQL Server in a container on OpenShift, and you also saw a Windows VM, with its .NET application, also running inside of OpenShift. So tell us, what's special about that? That seems pretty crazy, what you did there. Exactly, Burr. If we take a look under the hood, we can use the Kubernetes commands to see the list of our containers, in this case the SQL Server and the virtual machine containers, but since KubeVirt is a Kubernetes operator, we can actually use Kubernetes commands like kubectl to list our virtual machines and manage them like any other entity in Kubernetes. I love that; there's your kubectl get vm output, and we can see the kind says VirtualMachine. That is totally awesome.

Now, people here are going to be very excited about what they just saw. When will this be coming, and what can they do to dive in? This will be available as part of Red Hat Cloud Suite in tech preview later this year, but we are looking for early adopters now, so give us a call, and also come check out our deep-dive session introducing container-native virtualization, Thursday at 2:00 p.m. Awesome, that is so incredible. So we went from the old to the new, from the closed to the open, the Red Hat way. You're going to be seeing more from our demonstration team; that's coming Thursday at 8 a.m.,
do not be late if you like what you saw this today you're gonna see a lot more of that going forward so we got some really special things in store for you so at this point thank you so much in tomorrow thank you so much you guys are awesome yeah now we have one more special guest a very early adopter of Red Hat Enterprise Linux we've had over a 12-year partnership and relationship with this organization they've been a steadfast Linux and middleware customer for many many years now please extend a warm welcome to Raj China from the Royal Bank of Canada thank you thank you it's great to be here RBC is a large global full-service is back we have the largest bank in Canada top 10 global operate in 30 countries and run five key business segments personal commercial banking investor in Treasury services capital markets wealth management and insurance but honestly unless you're in the banking segment those five business segments that I just mentioned may not mean a lot to you but what you might appreciate is the fact that we've been around in business for over 150 years we started our digital transformation journey about four years ago and we are focused on new and innovative technologies that will help deliver the capabilities and lifestyle our clients are looking for we have a very simple vision and we often refer to it as the digitally enabled bank of the future but as you can appreciate transforming a hundred fifty year old Bank is not easy it certainly does not happen overnight to that end we had a clear unwavering vision a very strong innovation agenda and most importantly a focus towards a flawless execution today in banking business strategy and IT strategy are one in the same they are not two separate things we believe that in order to be the number one bank we have to have the number one tactic there is no question that most of today's innovations happens in the open source community RBC relies on RedHat as a key partner to help us consume these open source innovations in a manner that it meets our enterprise needs RBC was an early adopter of Linux we operate one of the largest footprints of rel in Canada same with tables we had tremendous success in driving cost out of infrastructure by partnering with rahat while at the same time delivering a world-class hosting service to your business over our 12 year partnership Red Hat has proven that they have mastered the art of working closely with the upstream open source community understanding the needs of an enterprise like us in delivering these open source innovations in a manner that we can consume and build upon we are working with red hat to help increase our agility and better leverage public and private cloud offerings we adopted virtualization ansible and containers and are excited about continuing our partnership with Red Hat in this journey throughout this journey we simply cannot replace everything we've had from the past we have to bring forward these investments of the past and improve upon them with new and emerging technologies it is about utilizing emerging technologies but at the same time focusing on the business outcome the business outcome for us is serving our clients and delivering the information that they are looking for whenever they need it and in whatever form factor they're looking for but technology improvements alone are simply not sufficient to do a digital transformation creating the right culture of change and adopting new methodologies is key we introduced agile and DevOps which has boosted the number of 
adult projects at RBC and increase the frequency at which we do new releases to our mobile app as a matter of fact these methodologies have enabled us to deliver apps over 20x faster than before the other point about around culture that I wanted to mention was we wanted to build an engineering culture an engineering culture is one which rewards curiosity trying new things investing in new technologies and being a leader not necessarily a follower Red Hat has been a critical partner in our journey to date as we adopt elements of open source culture in engineering culture what you seen today about red hearts focus on new technology innovations while never losing sight of helping you bring forward the investments you've already made in the past is something that makes Red Hat unique we are excited to see red arts investment in leadership in open source technologies to help bring the potential of these amazing things together thank you that's great the thing you know seeing going from the old world to the new with automation so you know the things you've seen demonstrated today they're they're they're more sophisticated than any one company could ever have done on their own certainly not by using a proprietary development model because of this it's really easy to see why open source has become the center of gravity for enterprise computing today with all the progress open-source has made we're constantly looking for new ways of accelerating that into our products so we can take that into the enterprise with customers like these that you've met what you've met today now we recently made in addition to the Red Hat family we brought in core OS to the Red Hat family and you know adding core OS has really been our latest move to accelerate that innovation into our products this will help the adoption of open shift container platform even deeper into the enterprise and as we did with the Linux core platform in 2002 this is just exactly what we did with with Linux back then today we're announcing some exciting new technology directions first we'll integrate the benefits of automated operations so for example you'll see dramatic improvements in the automated intelligence about the state of your clusters in OpenShift with the core OS additions also as part of open shift will include a new variant of rel called Red Hat core OS maintaining the consistency of rel farhat for the operation side of the house while allowing for a consumption of over-the-air updates from the kernel to kubernetes later today you'll hear how we are extending automated operations beyond customers and even out to partners all of this starting with the next release of open shift in July now all of this of course will continue in an upstream open source innovation model that includes continuing container linux for the community users today while also evolving the commercial products to bring that innovation out to the enterprise this this combination is really defining the platform of the future everything we've done for the last 16 years since we first brought rel to the commercial market because get has been to get us just to this point hybrid cloud computing is now being deployed multiple times in enterprises every single day all powered by the open source model and powered by the open source model we will continue to redefine the software industry forever no in 2002 with all of you we made Linux the choice for enterprise computing this changed the innovation model forever and I started the session today talking about our 
prediction of seven years ago on the future being open we've all seen so much happen in those in those seven years we at Red Hat have celebrated our 25th anniversary including 16 years of rel and the enterprise it's now 2018 open hybrid cloud is not only a reality but it is the driving model in enterprise computing today and this hybrid cloud world would not even be possible without Linux as a platform in the open source development model a build around it and while we have think we may have accomplished a lot in that time and we may think we have changed the world a lot we have but I'm telling you the best is yet to come now that Linux and open source software is firmly driving that innovation in the enterprise what we've accomplished today and up till now has just set the stage for us together to change the world once again and just as we did with rel more than 15 years ago with our partners we will make hybrid cloud the default in the enterprise and I will take that bet every single day have a great show and have fun watching the future of computing unfold right in front of your eyes see you later [Applause] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] anytime [Music]
Day One Kickoff - Red Hat Summit 2017 - #RHSummit - #theCUBE
>> Announcer: Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2017, brought to you by Red Hat. >> In 1993, two years before the height of Microsoft's dominance and amidst a sea of Unix competitors, Red Hat was founded. The company baked over the course of about 20 years and became a dominant open source company and is leading the trend towards cloud and hybrid cloud and containers. Welcome to Boston, everybody. Welcome to Red Hat Summit. This is theCUBE, the worldwide leader in live tech coverage. I'm here with Stu Miniman and Rebecca Knight, my co-hosts for the week, folks. Great to see you guys. Stu, this is your hundredth Red Hat Summit. >> Stu: It's only my fourth because it's the fourth of theCUBE, 13th year of the show itself, Dave, but great to be back here in Boston, you know, our home stadium for Rebecca, you, and me. Glad to have, a little gloomy today, but it's supposed to be nice weather by the time they take 4,000 of the 6,000 attendees here to Fenway on Wednesday, it's supposed to be some nice weather. Beautiful in New England, Red Hat Summit this week, OpenStack Summit next week, so great to be in the hub. >> Dave: And Rebecca, I felt like, well, first of all, great to be working with you. First time for us together. I thought the open was right in your wheelhouse. They opened with a video and the theme was can machines think. What did you make of that? >> So, what really strikes me about this conference is that it's about the technology, it's about the new, the digital transformation that Red Hat is helping facilitate all these companies making, but it's also about really reimagining the workplace of the future. The theme this year is about the individual and powering the individual. So much of what we're going to hear is about how do we engage developers to, to make this digital transformation for these companies? How do we give them the tools they need, not only just the technology, but also the change in mindset and the change in behaviors that they need, to collaborate with others, not only within their own teams, but within different parts of the organization to make these changes? >> So Red Hat's been on a tier, for anybody who follows the company, they do about 2.4 billion dollars a year in revenue, but more importantly, 3 billion dollars in bookings. Unlike many companies who are doing a shift from legacy, you know, trying to keep alive their old business and bring up the new business, Red Hat has a number of tailwinds and one of those is subscription business. Take a company like Oracle for instance, or IBM, that's shifting from a model of upfront, perpetual license into a subscription model. Red Hat, Stu, has always been there and you're seeing it in the numbers, a billion dollars plus on the balance sheet, just really great momentum. The stock price is up. What's your take on all of it? >> Dave, we've watched so many companies in technologies, where you have this huge wave of hype and then how does revenue go? Does it follow, does it peak, and then does it crash? Linux is one of those kind of slow-burn growths. I mean, I remember back, I started working with Red Hat back in 2000, and when I talked to enterprises back then, it was like, "Hey, are you using Linux?" They were like, "No." And they were like, "Wait, Bob in the back corner, "he's been using Linux stuff, "and he's doing some cool stuff." I watched over the next, you know, five to 10 years. It was a slow growth. It just kind of permeated every corner of what we did. 
I've mentioned, when we do this show, it's like, you know, Red Hat, a 15 billion dollar market cap or whatever, but we wouldn't have Google if it wasn't for the Linux adoption in the world today. So much of the Internet is based on that. You commented during the keynote, Dave, you look at the developer wave, the cloud wave, containers, you know, the shifting to kind of a subscription model rather than kind of the capping. All of those are things that kind of help lift Red Hat. It's where they're growing. It's why they've had 60 consecutive quarters of revenue growth. Now, it's not the 50% revenue growth like some of the cloud guys today or not explosive, but steady, solid, they're customers love them, great excitement here, great geek show, lots of hoodies and backpacks at the show here and exciting to watch. We've got lots of new technologies and announcements and things to dig into the next three days. >> It's interesting, you know, Rebecca, Stu and I had the pleasure of-- We were handing out with some big MIT brains last year in London talking about the second Machine Age and how humans have always replaced machines or machines have always replaced humans. Now, it's in the cognitive world. You see, again, the theme of this morning, a lot of it was AI related. Of course, the controversy there is that as machines replace humans, it hollows out the core of the middle class, the middle working class. But, the reality is that everything is getting digitized and those types of skills are going to be fundamental for growth in personal vocations, the economy. What do you think? >> I agree completely. I think that really the future is going to be humans and machines working side by side together. Last year, Jim Whitehurst was up here at Red Hat talking about how so much of what we still need to see from human workers is creativity, is judgment, is thought, is insight. Right now, machines still aren't quite there yet. The question is teaching machines to think and really having these two beings working together, collaborating together, and that really is where we're seeing things change. >> We talk all the time on theCUBE about companies are essentially, all companies are becoming software companies. Marc Andreessen said software's leading the world. Marc Benioff said they'll be more SAS companies coming from non-tech firms than tech firms. Behind all that, Stu, we heard a bunch of sort of geeky technologies today, but what are the things that are powering Red Hat's momentum? We talked about hybrid cloud, open source, containers. Help us unpack all that stuff. >> Yeah, so first of all, right, what is that next kind of billion dollar opportunity? One of the main pieces for Red Hat is OpenShift. Now, when we first started covering this show, it was like, ah, we know about infrastructures as a service and software as a service, but maybe platform as a service is where it's going. That's kind of where OpenShift was. Today, Paths, we said it a year or two ago, Paths is kind of passe, where OpenShift is a solution that creates a platform, that allows Red Hat to deliver newer technologies as a service. Containers and Kubernetes, I didn't hear Kubernetes mentioned in the keynote, but Red Hat is the largest enterprise contributor. It's basically Google, a bunch of independent people, and then Red Hat is a major contributor to Kubernetes, helping to drive that adoption, that whole next generation application development is where Red Hat is key, that migration to microservices. 
As we see that transition, it was interesting to see kind of the application discussion. It was how can we take, how can we help you build those new apps, but then how do we take our existing apps? At the Google show, at this show, and some other shows, it's been kind of the lift if shift movement, it's kind of cool again and not cool because we're doing, it's helping to take those legacy applications, move them into a more modern era and that's where OpenShift, there was like the announcement of the OpenShift.io, all the tools they have from Ansible and Jboss, all of these open source projects that Red Hat is very much a core part of that are going to help drive that next wave and help drive them-- There was an announcement, it was mentioned briefly today. I know they're going to talk more about it tomorrow, but the press release went out about a deeper partnership with Amazon Web Services. I think this is likely going to be the number one thing we talk about leaving the show, which is deeper partnership to say my application can live in AWS on OpenShift or can live in my data center on premises and still using AWS services with OpenShift. That whole hybrid or multicloud story that we built out, Red Hat's trying to make a good place why they should be there and extend for AWS because we know that that's the place that they need to compete against Microsoft with all their entire Azure play, Vmware trying to play that, so multifaceted, really interesting dynamic from a competitive standpoint. The opportunity would be billions of dollars opportunity for a company like Red Hat. >> Great, alright, we've got to wrap, but we will be covering those announcements and others. That AWS announcement knocks down all the major clouds now: Azure, Google, AWS, IBM. I guess Oracle's left., but in China. >> Stu: Support Oracle in application, but, you know. >> In terms of clouds. Alright, so keep it right there everybody. We'll be back. Wall-to-wall coverage here from Boston at the Red Hat Summit. This is theCUBE. We'll be right back.
Day One Wrap - Red Hat Summit 2017
>> Announcer: Live from Boston, Massachusetts, it's The Cube covering Red Hat Summit 2017. Brought to you by Red Hat. >> I'm joined by my co-host, Stu Miniman. Stu, this is day one of the conference: 20 keynotes, six general sessions, people from 70 countries gathered here in Boston, Massachusetts. You are a Red Hat Summit veteran. Thoughts, impressions of the first day. What has struck you really? >> So first of all, it's like Red Hat itself. The company just keeps growing. It's just one of those, you know, strong progress. We talked a little bit over the intro this morning with Dave Vellante as, what is it, 60 quarters consecutively that the company has had revenue growth. It's like, I've worked for a lot of tech companies. It's like, I remember when I worked for (mumbles) when they were doing it (mumbles). They have a miss and the stock kind of drops. IBM, you know, has had quarter after quarter and things like this, but with all of these waves and look, Red Hat's not the biggest company out there, but they are an important player in many changes in the ecosystem. This is one of my favorite developer shows that we cover. Of course, Open Source, we used to say, okay, software is eating the world and Open Source is eating software. Red Hat's right in the middle of this. I think most people agree. There is really only one Red Hat. There's not going to be a Red Hat of something else. There's no one else to really capture that. They got involved at a certain point in time where they could have that model, but they've extended it. They understand what they're doing. They're getting involved in a lot of interesting technologies and there's a lot of people, like most conferences that we go to, there's a lot of passionate people that are really interested, a very tech savvy group here, going into all of these breakouts. Many came yesterday for some things. They're coming for a whole week to just dig in, do demos. Down on the show floor, they've got little coding challenges and VR things. I mean there's just a lot of pieces of the show and we only get to see a part of it, but I've enjoyed the customers, the executives, and only one day of three that we're covering so far. >> It is early days in the summit, but where would you say that we are in terms of the maturity of the cloud? We heard from Jim Whitehurst, the CEO, he's going to be on the program tomorrow. He talked about how cloud strategy really is the #1 thing on customers' minds. The cloud is not new and it is really evolving and maturing, so where are we? >> Right, a couple of stats from the keynote this morning. It was 84% of customers have a cloud strategy. Now those of us in the analyst world, we might say, "Well, let's see whether they really have a strategy they understand," and 59% have a multi-cloud environment, which doesn't surprise us. Most people, the joke we used to have was, you had two types of customers, those that were using Amazon and those that didn't realize that some group was using Amazon, which reminds me of a comment I made earlier, about like Linux itself. There was always, 15 years ago, big companies would be like, "oh, no, we're a Unix shop," or "we're looking at Windows." No, no, no, there's the guy in the corner. He's been using Linux for a while and that's been a big driver, so cloud absolutely is maturing. I loved, it was an interesting discussion we had with Paul Cormier towards the end of the day. 
We were seeing Ramji from Google talking about how we've got the infrastructure and we've got the applications. And I'm an infrastructure guy, but I knew from day one, the reason you build infrastructure is because of your application. If I can just buy SaaS, I don't care about the infrastructure underneath it. The SaaS provider sure does. We talked a lot to SaaS providers as to how they're building their solution. If I'm using infrastructure as a service, you know, there's some of it where I need to understand the infrastructure, and there's plenty of infrastructure here, everything from, there's the storage and networking teams, Open Source is permeating every corner of the environment, so it's maturing, but in many ways it's gotten more complex. Cloud was supposed to, many of us thought, simplify the environment, but boy, it seems that many of the things that we had in previous ways, as it gets more mature, get a little bit more complex. Red Hat tries to take those pieces together, build them into solutions. We've talked about there's Red Hat Linux. Enterprise Linux is the platform that can live in many environments. OpenShift is something that allows you to encapsulate all of those services, things like containers, we're working with our cloud-native applications, and how I want to build them, OpenShift's going to help and, you know, Kubernetes goes into the mix, so Red Hat is placing strategic bets and, you know, has a strong position in a number of places and has big partners. It's really interesting to see. We've had a couple on already, and we'll have many on through the week from key providers in the infrastructure and cloud players out there. >> I think the theme of this year's conference is the power of the individual, and it really is. I mean, we heard from Sam Ramji who said, "This is the age of the developer." Developers have more respect, more veneration, than ever before and yet we also heard from Sandra Rivera, it is also harder than it has ever been before to be a developer because there is just so much data and it's hard to know the difference between the good data and the bad data and where you find the right insights to make decisions that drive the business on that data and if you're a developer, you might not have the business savvy to do that, so it's a real balance here that the companies and developers themselves are trying to strike. Are they doing a good job? I mean, is it still too early? >> It's funny. When you say that it makes me think of, in the machine-learning space, it's how do we get the data to train the machine to understand what is good or not, and you know, I wish they'd done that for us when we all went to college because in my job, it's always like, okay, what data can we trust? Well, if you remember from Princess Bride, it was like, with Vizzini, it was like, well, I know a vendor told me information, so therefore, I know I can't trust that data, but if I take someone else's data, you know, it gets very confusing, and what I'm saying is any single piece of data a lot of times you know you can throw that out because maybe it's good, maybe it's not, but how do I get, understand the trends, understand what's going on. I love talking to practitioners here when they're talking about their business and the impact it's had. We had one of the customers on today who was like, "Look, I deployed this, and I have like $6 million worth of savings in my business every year." I mean, that's hard information, hard to argue with it. 
Now are there other solutions that might do that? Sure, but yeah, it's challenging to understand what's good data, what's not good data. As an industry, you know, whether that's the kind of the people or the machines themselves. >> I think the other question that we're all grappling with here is that, and you talked about this earlier, just talking about the evolution of Red Hat that you've seen in coming to this summit all these years. This is a company founded in 1993. Today it has a market cap of $15 billion, 2.4 billion in revenue, nearly 8,000 employees. Can a big company, and it's a big company now, can it innovate, can it truly innovate, and we heard in the keynote one of the things that Jim Whitehurst was trying to do was to cultivate a startup mindset. Is that possible? >> Yeah, it's a great question, and I know, Rebecca, you and I've been talking about this throughout the week so far as to big companies have challenges because there are the structure and the organization and what drives the business. What's interesting about Red Hat, of course, is that sure they have products, but underneath it, it's all Open Source, so community is in their DNA. As Paul Cormier said, he's like "We couldn't buy a company and do it closed-source again." They did that a couple years ago, it didn't go well. They were going to transition it, but it's been a case study that's been written up. (talking over each other) >> Me and Jim in the room alone, yes. >> Absolutely, so what's interesting is Red Hat is more like a community in many ways. As Jim Whitehurst spoke about with the open organization, they act more like an Open Source community than they do a company, of course, that being said, they're profitable, they have employees, they have benefits, they have locations all around the world, so it's been interesting to see how Red Hat adopts certain technologies, contributes to them. You know, it would be interesting to ask Jim Whitehurst tomorrow, okay, you know, what is a product that was developed by Red Hat versus a project that was taken in by Red Hat, something I've seen over the last three or four years, a lot of acquisitions they made, it was, let's take Open Stack for example. There is a big survey that's done twice a year that said what are people using and what are they interested in with Open Stack, and it felt like that was the buying guide for Red Hat because it was like, "Oh, okay, here's the CentOS stuff," that was pretty interesting. "Well, we can't buy Canonical, we'll buy CentOS," and that comes under the umbrella. "Oh, there's this storage management piece that actually is open source that people are using for Open Stack, well let me buy that one, too." So Red Hat has become acquisitive, but it's to get deeper engagement in the community. They are all Open Source so always there is that balance in big companies of what do I do with R & D and what do I do with M & A? And Red Hat has done both. I think they've done a good job of moving the industry forward. Innovation is a lot of times a buzz word, but they do some good stuff. They contribute a lot. People here are very positive about what's going on. Just because they haven't created the next flying car or things like that. >> But they're on that. We heard here that they're thinking about it. 
I mean, I think that's also, I didn't mean to ask the question insinuating that they're not innovating, but I do think that particularly at a time where we are seeing Microsoft years of no growth, Intel, stalled growth, you know, what is Red Hat's secret sauce, and also what is going to be the breaking point for these other lagging enterprise companies? When will we see some new ideas and fresh perspective? >> Yeah, it's interesting 'cause with this whole shift of what's happening with cloud, the wave of the machine-learning, the augmented intelligence or artificial intelligence, how much is that going to ding the traditional companies, especially the infrastructure companies? Red Hat touches it, but they're much broader. Their growth, they're an Open Source company. It's interesting. I've seen a lot of other companies, the Open Source-based ones, "Oh, we're not an Open Source company. We're an enterprise software company," or "software company." I'm sure if we asked Red Hat if they were a software company, they will say well, of course, like everything we deliver is software, but at their DNA, they are Open Source, and that kind of sets them apart from the pack even though there are other examples Dave Vellante went through this morning of other companies that are heavily involved in Open Source, struggling with that how do we monetize Open Source. >> Well, is it a problem with the business model? Why is it so challenging? >> It's a great question. The first time I interviewed Jim Whitehurst, it's like "Jim, why aren't there more billion dollar Open Source companies," and his answer was, you know, not to be flip, he's like, "Look, selling free is hard." >> Yeah, that's a great point, but I think that we should, we need to dig a little deeper and hopefully we can get to the bottom of that by day three. >> Absolutely, and I tell ya, I'm sitting here listening to, you know, we'll be doing the Cloud Foundry Summit in June, which is, Pivotal is making a lot of money with that, but most of the other companies not doing so much. We were just at DockerCon a couple weeks ago. Docker, the company, seems to be growing, doing well. They just changed their CEO today, so hot news out on SiliconANGLE.com. Ben Golub, the CEO, I just interviewed him a couple weeks ago and now he's moving to the board, but they're bringing the Chairman of the Board in to be CEO, so we look at all these companies: Cloudera just IPO'd. Hortonworks is a public company. These companies that have Open Core or Open Source as a major piece of what they're doing, none have had just the measured growth and success that Red Hat does, so you know, Red Hat has a case study. It still seems to be one that stands alone, in a category by itself, but you know, partnering and growing and doing great, and it's exciting to cover. >> Day two, anything you're particularly excited about? >> Yeah, so I got a taste of the AWS-enhanced partnership talking about how OpenShift is going to have deeper integration and we talked a little bit with Paul Cormier, so I suspect Jim Whitehurst will be talking to him about it. We have one of the main guys involved in that from the Red Hat side who will be on our program tomorrow. So the keynote tomorrow, I'll be watching here. Maybe there'll be a special guest during the keynote to talk about that announcement some, but you know, obviously a space we watch real closely. 
We had Optum, one of the customers on today, he said, "I use OpenShift and I'm using Amazon and want to do it more, and this is a game-changer for me," so we think this is really interesting to watch. Really, you talked about maturity early in this segment here, the maturity of hybrid cloud. If Amazon starts to get deeper into the data centers, partnering with companies like Red Hat and like VMware, that will help them to stave off some of the competition that's coming at them, (mumbles) to Microsoft and Google who are getting Kubernetes everywhere. Lots more to dig in with. There's some announcements today but a lot more to come and you know, more customers, more partners, more Red Hatters. >> That's great, great. Well, we are looking forward to being back here tomorrow bright and early. Thank you for joining us. I'm Rebecca Knight for Stu Miniman. We'll see you back here tomorrow. (innovative tones)
Tony Jeffries, Dell Technologies & Honoré LaBourdette, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies: "Creating technologies that drive human progress." >> Good late afternoon from Barcelona, Spain at the Theater of Barcelona. It's Lisa Martin and Dave Nicholson of "theCUBE" covering MWC23. This is our third day of continuous wall-to-wall coverage on theCUBE. And you know we're going to be here tomorrow as well. We've been having some amazing conversations about the ecosystem. And we're going to continue those conversations next. Honore Labourdette is here, the VP global partner, Ecosystem Success Team, Telco Media and Entertainment at Red Hat. And Tony Jeffries joins us as well, a Senior Director of Product Management, Telecom Systems Business at Dell. Welcome to theCUBE. >> Thank you. >> Thank you. >> Great to have both of you here. So we're going to be talking about the evolution of the telecom stack. We've been talking a lot about disaggregation the last couple of days. Honore, starting with you, talk about the evolution of the telecom stack. You were saying before we went live this is at least your 15th MWC. So you've seen a lot of evolution, but what are some of the things you're seeing right now? >> Well, I think the interesting thing about disaggregation, which is a key topic, right? 'Cause it's so relevant to 5G and the 5G core and the benefits and the features of 5G core around disaggregation. But one thing we have to remember, when you disaggregate, you separate things. You have to bring those things back together again in a different way. And that's predominantly what we're doing in our partnership with Dell, is we're bringing those disaggregated components back together in a cohesive way that takes advantage of the new technology, at the same time taking out the complexity and making it easier for our Telco customers to deploy and to scale and to get much more, accelerate the time to revenue. So the trend now is, what we're seeing is two things I would say. One is how do we solve for the complexity with the disaggregation? And how do we leverage the ecosystem as a partner in order to help solve for some of those challenges? >> Tony, jump on in, talk about what you guys announced last week, Dell and Red Hat, and how it's addressing the complexities that Honore was saying, "Hey, they're there." >> Yeah. You know, our customers, our operators are saying, "Hey, I want disaggregation." "I want competition in the market." But at the same time, who's going to support all this disaggregation, right? And so at the end of the day, there's going to be an operator that's going to have to figure this out. They're going to have an SLA that they're going to have to meet. And so they're going to want to go with a best-in-class partner with Red Hat and Dell, in terms of our infrastructure and their software together as one combined engineered system. And that's what we call a Dell Telecom infrastructure block for Red Hat. And so at the end of the day, things may go wrong, and if they do, who are they going to call for that support? And that's also really a key element of an engineered system, is this experience that they get both with Red Hat and with Dell together supporting the customer as one. Which is really important to solve this disaggregated problem that can arise from a disaggregated open network situation, yeah. >> So what does the go-to-market motion look like? People have loyalties in the IT space to technologies that they've embraced and been successful with for years and years. 
So you have folks in the marketplace who are diehard, you know, dyed red, Red Hat folks. Is it primarily a pull from them? How does that work? How do you approach that to your, what are your end user joint customers? What does that look like from your perspective? >> Sure, well, interestingly enough both Red Hat and Dell have been in the marketplace for a very long time, right? So we do have the brand with those Telco customers for these solutions. What we're seeing with this solution is, it's an emerging market. It's an emerging market for a new technology. So there's an opportunity for both Red Hat and Dell together to leverage our brands with those customers with no friction in the marketplace as we go to market together. So our field sales teams will be motivated to, you know, take advantage of the solution for their customers, as will the Dell team. And I'll let Tony speak to the Dell, go to market. >> Yeah. You know, so we really co-sell together, right? We're the key partners. Dell will end up fulfilling that order, right? We send these engineered systems through our factories and we send that out either directly to a customer or to a OTEL lab, like an intermediate lab where we can further refine and customize that offer for that particular customer. And so we got a lot of options there, but we're essentially co-selling. And Dell is fulfilling that from an infrastructure perspective, putting Red Hat software on top and the licensing for that support. So it's a really good mix. >> And I think, if I may, one of the key differentiators is the actual capabilities that we're bringing together inside of this pre-integrated solution. So it includes the Red Hat OpenShift which is the container software, but we also add our advanced cluster management as well as our Ansible automation. And then Dell adds their orchestration capability along with the features and functionalities of the platform. And we put that together and we offer capability, remote automation orchestration and management capabilities that again reduces the operating expense, reduces the complexity, allows for easy scale. So it's, you know, certainly it's all about the partnership but it's also the capabilities of the combined technology. >> I was just going to ask about some of the numbers, and you mentioned some of them. Reduction of TCO I imagine is also a big capability that this solution enables besides reducing OpEx. Talk about the TCO reduction. 'Cause I know there's some numbers there that Dell and Red Hat have already delivered to the market. >> Yeah. You know, so these infrastructure blocks are designed specifically for Core, or for RAN, or for the Edge. We're starting out initially in the Core, but we've done some market research with a company called ACG. And ACG has looked at day zero, day one and day two TCO, FTE hours saved. And we're looking at over 40 to 50% TCO savings over you know, five year period, which is quite significant in terms of cost savings at a TCO level. But also we have a lot of numbers around power consumption and savings around power consumption. But also just that experience for our operator that says, hey, I'm going to go to one company to get the best in class from Red Hat and Dell together. That saves a lot of time in procurement and that entire ordering process as well. So you get a lot of savings that aren't exactly seen in the FTE hours around TCO, but just in that overall experience by talking to one company to get the best of both from both Red Hat and Dell together. 
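As a rough illustration of how a study like ACG's arrives at a percentage figure, here is a back-of-the-envelope sketch that rolls day-zero design hours, day-one deployment hours, and day-two operations hours into a five-year total alongside hardware and power. Every number in it is a made-up placeholder, not a Dell, Red Hat, or ACG figure; with these particular assumptions the sketch happens to land in the same general ballpark as the savings quoted above, which is purely illustrative.

    # Illustrative only: a back-of-the-envelope five-year TCO comparison of the kind
    # ACG's study describes. Every input value below is a hypothetical placeholder.
    def five_year_tco(design_hours, deploy_hours, annual_ops_hours,
                      hourly_rate, hardware_cost, annual_power_cost, years=5):
        labor = (design_hours + deploy_hours + annual_ops_hours * years) * hourly_rate
        return hardware_cost + annual_power_cost * years + labor

    # Hypothetical "integrate it yourself" build vs. a pre-integrated engineered system.
    diy = five_year_tco(design_hours=2000, deploy_hours=4000, annual_ops_hours=2500,
                        hourly_rate=120, hardware_cost=1_500_000, annual_power_cost=90_000)
    block = five_year_tco(design_hours=400, deploy_hours=800, annual_ops_hours=800,
                          hourly_rate=120, hardware_cost=1_500_000, annual_power_cost=75_000)

    savings = (diy - block) / diy
    print(f"DIY build: ${diy:,.0f}  engineered system: ${block:,.0f}  savings: {savings:.0%}")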
>> I think the comic book character Charlie Brown once said, "The most discouraging thing in the world is having a lot of potential." (laughing) >> Right. >> And so when we talk about disaggregating and then reaggregating or reintegrating, that means choice. >> Tony: Yeah. >> How does an operator approach making that choice? Because, yeah, it sounds great. We have this integration lab and you have all these choices. Well, how do I decide, how does a person decide? This is a question for Honore from a Red Hat perspective, what's the secret sauce that you believe differentiates the Red Hat-infused stack versus some other assemblage of gear? >> Well, there's a couple of key characteristics, and the one that I think is most prevalent is that we're open, right? So "open" is in Red Hat's DNA because we're an open source technology company, and with that open source technology and that open platform, our customers can now add workloads. They have options to choose the workloads that they want to run on that open source platform. As they choose those workloads, they can be confident that those workloads have been certified and validated on our platform because we have a very robust ecosystem of ISVs that have already completed that process with open source, with Red Hat OpenShift. So then we take the Red Hat OpenShift and we put it on the Dell platform, which is market leader platform, right? Combine those two things, the customers can be confident that they can put those workloads on the combined platform that we're offering and that those workloads would run. So again, it goes back to making it simpler, making it easy to procure, easy to run workloads, easy to deploy, easy to operate. And all of that of course equates to saving time always equates to saving money. >> Yeah. Absolutely. >> Oh, I thought you wanted to continue. >> No, I think Honore sort of, she nailed it. You know, Red Hat is so dominant in 5G, and what they're doing in the market, especially in the Core and where we're going into the RAN, you know, next steps are to validate those workloads, those workload vendors on top of a stack. And the Red Hat leader in the Core is key, right? It's instant credibility in the core market. And so that's one of the reasons why we, Dell, want to partner with with Red Hat for the core market and beyond. We're going to be looking at not only Core but moving into RAN very soon. But then we do, we take that validated workload on top of that to optimize that workload and then be able to instantiate that in the core and the RAN. It's just a really streamlined, good experience for our operators. At the end of the day, we want happy customers in between our mutual customer base. And that's what you get whenever you do that combined stack together. >> Were operators, any operators, and you don't have to mention them by name, involved in the evolution of the infra blocks? I'm just curious how involved they were in helping to co-develop this. I imagine they were to some degree. >> Yeah, I could take that one. So, in doing so, yeah, we can't be myopic and just assume that we nailed it the first time, right? So yeah, we do work with partners all the way up and down the stack. A lot of our engineering work with Red Hat also brings in customer experience that is key to ensure that you're building and designing the right architecture for the Core. I would like to use the names, I don't know if I should, but a lot of those names are big names that are leaders in our industry. 
But yeah, their footprints, their fingerprints are all over those design best practices, those architectural designs that we build together. And then we further that by doing those validated workloads on top of that. So just to really prove the point that it's optimized for the Core, RAN, Edge kind of workload. >> And it's a huge added value for Red Hat to have a partner like Dell who can take all of those components, take the workload, take the Red Hat software, put it on the platform, and deliver that out to the customers. That's really, you know, a key part of the partnership and the value of the partnership because nobody really does that better than Dell. That center of excellence around delivery and support. >> Can you share any feedback from any of those nameless operators in terms of... I'm even kind of wondering what the catalyst was for the infra block. Was it operators saying, "Ah, we have these challenges here"? Was it the evolution of the Telco stack and Dell said, "We can come in with Red Hat and solve this problem"? And what's been some of their feedback? >> Yeah, it really comes down to what Honore said about, okay, you know, when we are looking at day zero, which is primarily your design, how much time savings can we do by creating that stack for them, right? We have industry experts designing that Core stack that's optimized for different levels of spectrum. When we do that we save a lot of time in terms of FTE hours for our architects, our operators, and then it goes into day one, right? Which is the deployment aspect for saving tons of hours for our operators by being able to deploy this. Speed to market is key. That ultimately ends up in, you know, faster time to revenue for our customers, right? So it's, when they see that we've already done the pre-work that they don't have to, that's what really resonates for them in terms of that, yeah. >> Honore, Lisa and I happen to be veterans of the Cloud native space, and what we heard from a lot of the folks in that ecosystem is that there is a massive hunger for developers to be able to deploy and manage and orchestrate environments that consist of Cloud native application infrastructure, microservices. >> Right. >> What we've heard here is that 5G equals Cloud native application stacks. Is that a fair assessment of the environment? And what are you seeing from a supply and demand for that kind of labor perspective? Is there still a hunger for those folks who develop in that space? >> Well, there is, because the very nature of an open source, Kubernetes-based container platform, which is what OpenShift is, the very nature of it is to open up that code so that developers can have access to the code to develop the workloads to the platform, right? And so, again, the combination of bringing together the Dell infrastructure with the Red Hat software, it doesn't change anything. The developer, the development community still has access to that same container platform to develop to, you know, Cloud native types of application. And you know, OpenShift is Red Hat's hybrid Cloud platform. So it runs on-prem, it runs in the public Cloud, it runs at the edge, it runs at the far edge. So any of the development community that's trying to develop Cloud native applications can develop it on this platform as they would if they were developing on an OpenShift platform in the public Cloud. >> So in "The Graduate", the advice to the graduate was, "Plastics." Plastics. 
As someone who has more children than I can remember, I forget how many kids I have. >> Four. >> That's right, I have four. That's right. (laughing) Three in college and grad school already at this point. Cloud native, I don't know. Kubernetes definitely a field that's going to, it's got some legs? >> Yes. >> Okay. So I can get 'em off my payroll quickly. >> Honore: Yes, yes. (laughing) >> Okay, good to know. Good to know. Any thoughts on that open Cloud native world? >> You know, there's so many changes that's going to happen in Kubernetes and services that you got to be able to update quickly. CICD, obviously the topic is huge. How quickly can we keep these systems up to date with new releases, changes? That's a great thing about an engineered system is that we do provide that lifecycle management for three to five years through this engagement with our customers. So we're constantly keeping them up with the latest and the greatest. >> David: Well do those customers have that expertise in-house, though? Do they have that now? Or is this a seismic cultural shift in those environments? >> Well, you know, they do have a lot of that experience, but it takes a lot of that time, and we're taking that off of their plate and putting that within us on our system, within our engineered system, and doing that automatically for them. And so they don't have to check in and try to understand what the release certification matrix is. Every quarter we're providing that to them. We're communicating out to the operator, telling them what's coming up latest and greatest, not only in terms of the software but the hardware and how to optimize it all together. That's the beauty of these systems. These are five year relationships with our operators that we're providing that lifecycle management end to end, for years to come. >> Lisa: So last question. You talked about joint GTM availability. When can operators get their hands on this? >> Yes. Yes. It's currently slated for early September release. >> Lisa: Awesome. So sometime this year? >> Yes. >> Well guys, thank you so much for talking with us today about Dell, Red Hat, what you're doing to really help evolve the telecom stack. We appreciate it. Next time come back with a customer, we can dig into it. That'd be fun. >> We sure will, absolutely. That may happen today actually, a little bit later. Not to let the cat out the bag, but good news. >> All right, well, geez, you're going to want to stick around. Thank you so much for your time. For our guests and for Dave Nicholson. This is Lisa Martin of theCUBE at MWC23 from Barcelona, Spain. We'll be back after a short break. (calm music)
Dell Technologies MWC 2023 Exclusive Booth Tour with David Nicholson
>> And I'm here at Dell's Presence at MWC with vice president of marketing for telecom and Edge Computing, Aaron Chaisson. Aaron, how's it going? >> Doing great. How's it going today, Dave? >> It's going pretty well. Pretty excited about what you've got going here and I'm looking forward to getting the tour. You ready to take a closer look? >> Ready to do it. Let's go take a look! For us in the telecom ecosystem, it's really all about how we bring together the different players that are innovating across the industry to drive value for our CSP customers. So, it starts really, for us, at the ecosystem layer, bringing partners, bringing telecommunication providers, bringing (stutters) a bunch of different technologies together to innovate together to drive new value. So Paul, take us a little bit through what we're doing to- to develop and bring in these partnerships and develop our ecosystem. >> Uh, sure. Thank you Aaron. Uh, you know, one of the things that we've been focusing on, you know, Dell is really working with many players in the open telecom ecosystem. Network equipment providers, independent software vendors, and the communication service providers. And, you know, through our lines of business or open telecom ecosystem labs, what we want to do is bring 'em together into a community with the goal of really being able to accelerate open innovation and, uh, open solutions into the market. And that's what this community is really about, is being able to, you know, have those communications, develop those collaborations whether it's through, you know, sharing information online, having webinars dedicated to sharing Dell information, whether it's our next generation hardware portfolio we announced here at the show, our use case directory, our- how we're dealing with new service opportunities, but as well as the community to share, too, which I think is an exciting way for us to be able to, you know- what is the knowledge thing? As well as activities at other events that we have coming up. So really the key thing I think about, the- the open telecom ecosystem community, it's collaboration and accelerating the open industry forward. >> So- So Aaron, if I'm hearing this correctly you're saying that you can't just say, "Hey, we're open", and throw a bunch of parts in a box and have it work? >> No, we've got to work together to integrate these pieces to be able to deliver value, and, you know, we opened up a- (stutters) in our open ecosystem labs, we started a- a self-certification process a couple of months back. We've already had 13 partners go through that, we've got 16 more in the pipeline. Everything you see in this entire booth has been innovated and worked with partnerships from Intel to Microsoft to, uh, to (stutters) Wind River and Red Hat and others. You go all the way around the booth, everything here has partnerships at its core. And why don't we go to the next section here where we're going to be showing how we're pulling that all together in our open ecosystems labs to drive that innovation? >> So Aaron, you talked about the kinds of validation and testing that goes on, so that you can prove out an open stack to deliver the same kinds of reliability and performance and availability that we expect from a wireless network. But in the opens- in the open world, uh, what are we looking at here? >> Yeah absolutely. So one of the- one of the challenges to a very big, broad open ecosystem is the complexity of integrating, deploying, and managing these, especially at telecom scale. 
You're not talking about thousands of servers in one site, you're talking about one server in thousands of sites. So how do you deploy that predictable stack and then also manage that at scale? I'm going to show you two places where we're talkin' about that. So, this is actually representing an area that we've been innovating in recently around creating an integrated infrastructure and virtualization stack for the telecom industry. We've been doing this for years in IT with VxBlocks and VxRails and others. Here what you see is we got, uh, Dell hardware infrastructure, we've got, uh, an open platform for virtualization providers, in this case we've created an infrastructure block for Red Hat to be able to supply an infrastructure for core operations and Packet Cores for telecoms. On the other side of this, you can actually see what we're doing with Wind River to drive innovation around RAN and being able to simplify RAN- vRAN and O-RAN deployments. >> What does that virtualization look like? Are we talking about, uh, traditional virtual machines with OSs, or is this containerized cloud native? What does it look like? >> Yeah, it's actually both, so it can support, uh, virtual, uh-uh, software as well as containerized software, so we leverage the (indistinct) distributions for these to be able to deploy, you know, cloud native applications, be able to modernize how they're deploying these applications across the telecom network. So in this case with Red Hat, uh, (stutters) leveraging OpenShift in order to support containerized apps in your Packet Core environments. >> So what are- what are some of the kinds of things that you can do once you have infrastructure like this deployed? >> Yeah, I mean by- by partnering broadly across the ecosystem with VMware, with Red Hat, uh, with- with Wind River and with others, it gives them the ability to be able to deploy the right virtualization software in their network for the types of applications they're deploying. They might want to use Red Hat in their core, they may want to use Wind River in their RAM, they may want to use, uh, Microsoft or VMware for their- for their Edge workloads, and we allow them to be able to deploy all those, but centrally manage those with a common user interface and a common set of APIs. >> Okay, well I'm dying to understand the link between this and the Lego city that the viewers can't see, yet, but it's behind me. Let's take a look. >> So let's take a look at the Lego city that shows how we not deploy just one of these, but dozens or hundreds of these at scale across a cityscape. >> So Aaron, I know we're not in Copenhagen. What's all the Lego about? >> Yeah, so the Lego city here is to show- and, uh, really there's multiple points of Presence across an entire Metro area that we want to be able to manage if we're a telecom provider. We just talked about one infrastructure block. What if I wanted to deploy dozens of these across the city to be able to manage my network, to be able to manage, uh, uh- to be able to deploy private mobility potentially out into a customer enterprise environment, and be able to manage all of these, uh, very simply and easily from a common interface? >> So it's interesting. Now I think I understand why you are VP of marketing for both telecom and Edge. Just heard- just heard a lot about Edge and I can imagine a lot of internet of things, things, hooked up at that Edge. >> Yeah, so why don't we actually go over to another area? 
We're actually going to show you how one small microbrewery (stutters) in one of our cities nearby, uh, (stutters) my hometown in Massachusetts is actually using this technology to go from more of an analog world to digitizing their business to be able to brew better beer. >> So Aaron, you bring me to a brewery. What do we have- what do we have going on here? >> Yeah, so, actually (stutters) about- about a year ago or so, I- I was able to get my team to come together finally after COVID to be able to meet each other and have a nice team event. One of those nights, we went out to dinner at a- at a brewery called "Exhibit 'A'" in Massachusetts, and they actually gave us a tour of their facilities and showed us how they actually go through the process of brewing beer. What we saw as we were going through it, interestingly, was that everything was analog. They literally had people with pen and paper walking around checking time and temperature and the process of brewing the beer, and they weren't asking for help, but we actually saw an opportunity where what we're doing to help businesses digitize what they're doing on their manufacturing floor can actually help them optimize how they build whatever product they're building, in this case it was beer. >> Hey Warren, good to meet you! What do we have goin' on? >> Yeah, it's all right. So yeah, basically what we did is we took some of their assets in the, uh, brewery that were completely manually monitored. People were literally walking around the floor with clipboards, writing down values. And we sensorized the asset, in this case fermentation tanks, and we measured the, uh, pressure and the temperature, which in fermentation are very key to monitor, because if they get out of range the entire batch of beer can go bad, or you don't get the consistency from batch to batch if you don't tightly monitor those. So we sensorized the fermentation tank, brought that into an industrial I/O network, and then brought that into a Dell gateway which is connected 5G up to the cloud, and then that data comes to a tablet or a phone, so they, rather than being out on the floor to monitor it, can look at this data remotely at any time. >> So I'm not sure the exact date, the first time we have evidence of beer being brewed by humanity... >> Yep. >> But I know it's thousands of years ago. So it's taken that long to get to the point where someone had to come along, namely Dell, to actually digitally transform the beer business. Is this sort of proof that if you can digitally transform this, you can digitally transform anything? >> Absolutely. You name it, anything that's being manufactured, sold, uh, uh, taken care of, (stutters) any business out there that's looking to be able to modernize and deliver better service to their customers can benefit from technologies like this. >> So we've taken a look at the ecosystem, the way that you validate architectures, we've seen an example of that kind of open architecture. Now we've seen a real world use case. Do you want to take a look a little deeper under the covers and see what's powering all of this? >> We just this week announced a new line of servers that power Edge and RAN use cases, and I want to introduce Mike to kind of take us through what we've been working on and really the power of what this is providing. >> Hey Mike, welcome to theCube. 
So, what I'd really like to talk about are the three new XR series servers that we just announced last week and we're showing here at Mobile World Congress. They are all short depth, ruggedized, uh, very environmentally tolerant, and able to withstand, you know, high temperatures, high humidities, and really be deployed to places where traditional data center servers just can't handle, you know, due to one fact or another, whether it's depth or the temperature. And so, the first one I'd like to show you is the XR7620. This is, uh, 450 millimeters deep, it's designed for, uh, high levels of acceleration so it can support up to 2-300 watt, uh, GPUs. But what I really want to show you over here, especially for Mobile World Congress, is our new XR8000. The XR8000 is based on Intel's latest Sapphire Rapids technology, and this is- happens to be one of the first, uh, EE boost processors that is out, and basically what it is (stutters) an embedded accelerator that makes, uh, the- the processing of vRAN loads very, uh, very efficient. And so they're actually projecting a, uh, 3x improvement, uh, of processing per watt over the previous generation of processors. This particular unit is also sledded. It's very much like, uh, today's traditional baseband unit, so it's something that is designed for low TCO and easy maintenance in the field. This is the frew. When anything fails, you'll pull one out, you pop a new one in, it comes back into service, and the- the, uh, you know, your radio is- is, uh, minimally disrupted. >> Yeah, would you describe this as quantitative and qualitative in terms of the kinds of performance gains that these underlying units are delivering to us? I mean, this really kind of changes the game, doesn't it? It's not just about more, is it about different also in terms of what we can do? >> Well we are (stutters) to his point, we are able to bring in new accelerator technologies. Not only are we doing it with the Intel, uh, uh, uh, of the vRAN boost technologies, but also (stutters) we can bring it, too, but there's another booth here where we're actually working with our own accelerator cards and other accelerator cards from our partners across the industry to be able to deliver the price and performance capabilities required by a vRAN or an O-RAN deployment in the network. So it's not- it's not just the chip technology, it's the integration and the innovation we're doing with others, as well as, of course, the unique power cooling capabilities that Dell provides in our servers that really makes these the most efficient way of being able to power a network. >> Any final thoughts recapping the whole picture here? >> Yeah, I mean I would just say if anybody's, uh, i- is still here in Mobile World Congress, wants to come and learn what we're doing, I only showed you a small section of the demos we've got here. We've got 13 demos across on 8th floor here. Uh, for those of you who want to talk to us (stutters) and have meetings with us, we've got 13 meeting rooms back there, over 500 costumer partner meetings this week, we've got some whisper suites for those of you who want to come and talk to us but we're innovating on going forward. So, you know, there's a lot that we're doing, we're really excited, there's a ton of passion at this event, and, uh, we're really excited about where the industry is going and our role in it. >> 'Preciate the tour, Aaron. Thanks Mike. >> Mike: Thank you! >> Well, for theCube... Again, Dave Nicholson here. 
Thanks for joining us on this tour of Dell's Presence here at MWC 2023.
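For readers curious what the fermentation-tank monitoring described in the brewery segment might look like in software, here is a minimal sketch of the threshold check a gateway or cloud service could apply to incoming telemetry. The temperature and pressure limits, tank names, and sample readings are all hypothetical; the real data path (industrial I/O into a Dell gateway, over 5G to the cloud, and out to a tablet) is only summarized in the comments.

    # Illustrative only: the kind of threshold check a gateway or cloud service might
    # run on fermentation-tank telemetry. The limits and sample readings are made up.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        tank_id: str
        temperature_c: float
        pressure_psi: float

    # Hypothetical acceptable fermentation ranges for this beer style.
    TEMP_RANGE_C = (18.0, 22.0)
    PRESSURE_RANGE_PSI = (10.0, 15.0)

    def out_of_range(value: float, low: float, high: float) -> bool:
        return not (low <= value <= high)

    def check(reading: Reading) -> list[str]:
        """Return alert messages for any out-of-range values on one tank."""
        alerts = []
        if out_of_range(reading.temperature_c, *TEMP_RANGE_C):
            alerts.append(f"{reading.tank_id}: temperature {reading.temperature_c} C out of range")
        if out_of_range(reading.pressure_psi, *PRESSURE_RANGE_PSI):
            alerts.append(f"{reading.tank_id}: pressure {reading.pressure_psi} psi out of range")
        return alerts

    if __name__ == "__main__":
        # In the deployment described above, readings would arrive over 5G from the
        # gateway; here they are hard-coded samples for illustration.
        samples = [Reading("FV-01", 19.5, 12.2), Reading("FV-02", 23.1, 16.4)]
        for sample in samples:
            for message in check(sample) or [f"{sample.tank_id}: within range"]:
                print(message)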
Peter Fetterolf, ACG Business Analytics & Charles Tsai, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (light airy music) >> Hi, everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante. I'm here with my co-host Dave Nicholson. Lisa Martin is in the house. John Furrier is pounding the news from our Palo Alto studio. We are super excited to be talking about cloud at the edge, what that means. Charles Tsai is here. He's the Senior Director of product management at Dell Technologies and Peter Fetterolf is the Chief Technology Officer at ACG Business Analytics, a firm that goes deep into the TCO and the telco space, among other things. Gents, welcome to theCUBE. Thanks for coming on. Thank you. >> Good to be here. >> Yeah, good to be here. >> So I've been in search all week of the elusive next wave of monetization for the telcos. We know they make great money on connectivity, they're really good at that. But they're all talking about how they can't let this happen again. Meaning we can't let the over the top vendors yet again, basically steal our cookies. So we're going to not mess it up this time. We're going to win in the monetization. Charles, where are those monetization opportunities? Obviously at the edge, the telco cloud at the edge. What is that all about and where's the money? >> Well, Dave, I think from a Dell's perspective, what we want to be able to enable operators is a solution that enable them to roll out services much quicker, right? We know there's a lot of innovation around IoT, MEG and so on and so forth, but they continue to rely on traditional technology and way of operations is going to take them years to enable new services. So what Dell is doing is now, creating the entire vertical stack from the hardware through CAST and automation that enable them, not only to push out services very quickly, but operating them using cloud principles. >> So it's when you say the entire vertical stack, it's the integrated hardware components with like, for example, Red Hat on top- >> Right. >> Or a Wind River? >> That's correct. >> Okay, and then open API, so the developers can create workloads, I presume data companies. We just had a data conversation 'cause that was part of the original stack- >> That's correct. >> So through an open ecosystem, you can actually sort of recreate that value, correct? >> That's correct. >> Okay. >> So one thing Dell is doing, is we are offering an infrastructure block where we are taking over the overhead of certifying every release coming from the Red Hat or the Wind River of the world, right? We want telcos to spend their resources on what is going to generate them revenue. Not the overhead of creating this cloud stack. >> Dave, I remember when we went through this in the enterprise and you had companies like, you know, IBM with the AS400 and the mainframe saying it's easier to manage, which it was, but it's still, you know, it was subsumed by the open systems trend. >> Yeah, yeah. And I think that's an important thing to probe on, is this idea of what is, what exactly does it mean to be cloud at the edge in the telecom space? Because it's a much used term. >> Yeah. >> When we talk about cloud and edge, in sort of generalized IT, but what specifically does it mean? >> Yeah, so when we talk about telco cloud, first of all it's kind of different from what you're thinking about public cloud today. And there's a couple differences. 
One, if you look at the big hyperscaler public cloud today, they tend to be centralized in huge data centers. Okay, telco cloud, there are big data centers, but then there's also regional data centers. There are edge data centers, which are your typical like access central offices that have turned data centers, and then now even cell sites are becoming mini data centers. So it's distributed. I mean like you could have like, even in a country like say Germany, you'd have 30,000 cell sites, each one of them being a data center. So it's a very different model. Now the other thing I want to go back to the question of monetization, okay? So how do you do monetization? The only way to do that, is to be able to offer new services, like Charles said. How do you offer new services? You have to have an open ecosystem that's going to be very, very flexible. And if we look at where telcos are coming from today, they tend to be very inflexible 'cause they're all kind of single vendor solutions. And even as we've moved to virtualization, you know, if you look at packet core for instance, a lot of them are these vertical stacks of say a Nokia or Ericsson or Huawei where you know, you can't really put any other vendors or any other solutions into that. So basically the idea is this kind of horizontal architecture, right? Where now across, not just my central data centers, but across my edge data centers, which would be traditionally my access COs, as well as my cell sites. I have an open environment. And we're kind of starting with, you know, packet core obviously with, and UPFs being distributed, but now open ran or virtual ran, where I can have CUs and DUs and I can split CUs, they could be at the cell site, they could be in edge data centers. But then moving forward, we're going to have like MEG, which are, you know, which are new kinds of services, you know, could be, you know, remote cars it could be gaming, it could be the Metaverse. And these are going to be a multi-vendor environment. So one of the things you need to do is you need to have you know, this cloud layer, and that's what Charles was talking about with the infrastructure blocks is helping the service providers do that, but they still own their infrastructure. >> Yeah, so it's still not clear to me how the service providers win that game but we can maybe come back to that because I want to dig into TCO a little bit. >> Sure. >> Because I have a lot of friends at Dell. I don't have a lot of friends at HPE. I've always been critical when they take an X86 server put a name on it that implies edge and they throw it over the fence to the edge, that's not going to work, okay? We're now seeing, you know we were just at the Dell booth yesterday, you did the booth crawl, which was awesome. Purpose-built servers for this environment. >> Charles: That's right. >> So there's two factors here that I want to explore in TCO. One is, how those next gen servers compare to the previous gen, especially in terms of power consumption but other factors and then how these sort of open ran, open ecosystem stacks compared to proprietary stacks. Peter, can you help us understand those? >> Yeah, sure. And Charles can comment on this as well. But I mean there, there's a couple areas. One is just moving the next generation. So especially on the Intel side, moving from Ice Lake to the Sapphire Rapids is a big deal, especially when it comes to the DU. And you know, with the radios, right?
There's the radio unit, the RU, and then there's the DU the distributed unit, and the CU. The DU is really like part of the radio, but it's virtualized. When we moved from Ice lake to Sapphire Rapids, which is third generation intel to fourth generation intel, we're literally almost doubling the performance in the DU. And that's really important 'cause it means like almost half the number of servers and we're talking like 30, 40, 50,000 servers in some cases. So, you know, being able to divide that by two, that's really big, right? In terms of not only the the cost but all the TCO and the OpEx. Now another area that's really important, when I was talking moving from these vertical silos to the horizontal, the issue with the vertical silos is, you can't place any other workloads into those silos. So it's kind of inefficient, right? Whereas when we have the horizontal architecture, now you can place workloads wherever you want, which basically also means less servers but also more flexibility, more service agility. And then, you know, I think Charles can comment more, specifically on the XR8000, some things Dell's doing, 'cause it's really exciting relative to- >> Sure. >> What's happening in there. >> So, you know, when we start looking at putting compute at the edge, right? We recognize the first thing we have to do is understand the environment we are going into. So we spend with a lot of time with telcos going to the south side, going to the edge data center, looking at operation, how do the engineer today deal with maintenance replacement at those locations? Then based on understanding the operation constraints at those sites, we create innovation and take a traditional server, remodel it to make sure that we minimize the disruption to the operations, right? Just because we are helping them going from appliances to open compute, we do not want to disrupt what is have been a very efficient operation on the remote sites. So we created a lot of new ideas and develop them on general compute, where we believe we can save a lot of headache and disruptions and still provide the same level of availability, resiliency, and redundancy on an open compute platform. >> So when we talk about open, we don't mean generic? Fair? See what I mean? >> Open is more from the software workload perspective, right? A Dell server can run any type of workload that customer intend. >> But it's engineered for this? >> Environment. >> Environment. >> That's correct. >> And so what are some of the environmental issues that are dealt with in the telecom space that are different than the average data center? >> The most basic one, is in most of the traditional cell tower, they are deployed within cabinets instead of racks. So they are depth constraints that you just have no access to the rear of the chassis. So that means on a server, is everything you need to access, need to be in the front, nothing should be in the back. Then you need to consider how labor union come into play, right? There's a lot of constraint on who can go to a cell tower and touch power, who can go there and touch compute, right? So we minimize all that disruption through a modular design and make it very efficient. >> So when we took a look at XR8000, literally right here, sitting on the desk. >> Uh-huh. >> Took it apart, don't panic, just pulled out some sleds and things. >> Right, right. >> One of the interesting demonstrations was how it compared to the size of a shoe. 
Now apparently you hired someone at Dell specifically because they wear a size 14 shoe, (Charles laughs) so it was even more dramatic. >> That's right. >> But when you see it, and I would suggest that viewers go back and take a look at that segment, specifically on the hardware. You can see exactly what you just referenced. This idea that everything is accessible from the front. Yeah. >> So I want to dig in a couple things. So I want to push back a little bit on what you were saying about the horizontal 'cause there's the benefit, if you've got the horizontal infrastructure, you can run a lot more workloads. But I compare it to the enterprise 'cause I, that was the argument, I've made that argument with converged infrastructure versus say an Oracle vertical stack, but it turned out that actually Oracle ran Oracle better, okay? Is there an analog in telco or is this new open architecture going to be able to not only service the wide range of emerging apps but also be as resilient as the proprietary infrastructure? >> Yeah and you know, before I answer that, I also want to say that we've been writing a number of white papers. So we have actually three white papers we've just done with Dell looking at infrastructure blocks and looking at vertical versus horizontal and also looking at moving from the previous generation hardware to the next generation hardware. So all those details, you can find the white papers, and you can find them either in the Dell website or at the ACG research website >> ACGresearch.com? >> ACG research. Yeah, if you just search ACG research, you'll find- >> Yeah. >> Lots of white papers on TCO. So you know, what I want to say, relative to the vertical versus horizontal. Yeah, obviously in the vertical side, some of those things will run well, I mean it won't have issues. However, that being said, as we move to cloud native, you know, it's very high performance, okay? In terms of the stack, whether it be a Red Hat or a VMware or other cloud layers, that's really become much more mature. It now it's all CNF base, which is really containerized, very high performance. And so I don't think really performance is an issue. However, my feeling is that, if you want to offer new services and generate new revenue, you're not going to do it in vertical stacks, period. You're going to be able to do a packet core, you'll be able to do a ran over here. But now what if I want to offer a gaming service? What if I want to do metaverse? What if I want to do, you have to have an environment that's a multi-vendor environment that supports an ecosystem. Even in the RAN, when we look at the RIC, and the xApps and the rApps, these are multi-vendor environments that's going to create a lot of flexibility and you can't do that if you're restricted to, I can only have one vendor running on this hardware. >> Yeah, we're seeing these vendors work together and create RICs. That's obviously a key point, but what I'm hearing is that there may be trade offs, but the incremental value is going to overwhelm that. Second question I have, Peter is, TCO, I've been hearing a lot about 30%, you know, where's that 30% come from? Is it Op, is it from an OpEx standpoint? Is it labor, is it power? Is it, you mentioned, you know, cutting the number of servers in half. If I can unpack the granularity of that TCO, where's the benefit coming from? >> Yeah, the answer is yes. (Peter and Charles laugh) >> Okay, we'll do. >> Yeah, so- >> One side that, in terms of, where is the big bang for the bucks? 
>> So I mean, so you really need to look at the white paper to see details, but definitely power, definitely labor, definitely reducing the number of servers, you know, reducing the CapEx. The other thing is, is as you move to this really next generation horizontal telco cloud, there's the whole automation and orchestration, that is a key component as well. And it's enabled by what Dell is doing. It's enabled by the, because the thing is you're not going to have end-to-end automation if you have all this legacy stuff there or if you have these vertical stacks where you can't integrate. I mean you can automate that part and then you have separate automation here, you separate. you need to have integrated automation and orchestration across the whole thing. >> One other point I would add also, right, on the hardware perspective, right? With the customized hardware, what we allow operator to do is, take out the existing appliance and push a edge optimized server without reworking the entire infrastructure. There is a significant saving where you don't have to rethink about what is my power infrastructure, right? What is my security infrastructure? The server is designed to leverage the existing, what is already there. >> How should telco, Charles, plan for this transformation? Are there specific best practices that you would recommend in terms of the operational model? >> Great question. I think first thing is do an inventory of what you have. Understand what your constraints are and then come to Dell, we will love to consult with you, based on our experience on the best practices. We know how to minimize additional changes. We know how to help your support engineer, understand how to shift appliance based operation to a cloud-based operation. >> Is that a service you offer? Is that a pre-sales freebie? What is maybe both? >> It's both. >> Yeah. >> It's both. >> Yeah. >> Guys- >> Just really quickly. >> We're going to wrap. >> The, yeah. Dave loves the TCO discussion. I'm always thinking in terms of, well how do you measure TCO when you're comparing something where you can't do something to an environment where you're going to be able to do something new? And I know that that's always the challenge in any kind of emerging market where things are changing, any? >> Well, I mean we also look at, not only TCO, but we look at overall business case. So there's basically service at GLD and revenue and then there's faster time to revenues. Well, and actually ACG, we actually have a platform called the BAE or Business Analytics Engine that's a very sophisticated simulation cloud-based platform, where we can actually look at revenue month by month. And we look at what's the impact of accelerating revenue by three months. By four months. >> So you're looking into- >> By six months- >> So you're forward looking. You're just not consistently- >> So we're not just looking at TCO, we're looking at the overall business case benefit. >> Yeah, exactly right. There's the TCO, which is the hard dollars. >> Right. >> CFO wants to see that, he or she needs to see that. But you got to, you can convince that individual, that there's a business case around it. >> Peter: Yeah. >> And then you're going to sign up for that number. >> Peter: Yeah. >> And they're going to be held to it. That's the story the world wants. >> At the end of the day, telcos have to be offered new services 'cause look at all the money that's been spent. >> Dave: Yeah, that's right. >> On investment on 5G and everything else. 
>> 0.5 trillion over the next seven years. All right, guys, we got to go. Sorry to cut you off. >> Okay, thank you very much. >> But we're wall to wall here. All right, thanks so much for coming on. >> Dave: Fantastic. >> All right, Dave Vellante, for Dave Nicholson. Lisa Martin's in the house. John Furrier in Palo Alto Studios. Keep it right there. MWC 23 live from the Fira in Barcelona. (light airy music)
SUMMARY :
Dave Vellante and Dave Nicholson talk with Charles Tsai of Dell Technologies and Peter Fetterolf of ACG Business Analytics about the telco cloud at the edge. They cover how telco cloud differs from centralized public cloud, the shift from vertical, single-vendor stacks to an open horizontal architecture, purpose-built edge servers like the XR8000 designed for cell-site constraints, and where the TCO benefit comes from: roughly half the servers per performance generation, lower power and labor, integrated automation, and, beyond TCO, a faster path to new revenue-generating services.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Charles | PERSON | 0.99+ |
Charles Tsai | PERSON | 0.99+ |
Peter Fetterolf | PERSON | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
30 | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
ACG Business Analytics | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
ACG | ORGANIZATION | 0.99+ |
TCO | ORGANIZATION | 0.99+ |
four months | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
0.5 trillion | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two factors | QUANTITY | 0.99+ |
six months | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
MWC 23 | EVENT | 0.98+ |
Germany | LOCATION | 0.98+ |
Red Hat | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
XR8000 | COMMERCIAL_ITEM | 0.98+ |
Ice Lake | COMMERCIAL_ITEM | 0.98+ |
One | QUANTITY | 0.97+ |
one vendor | QUANTITY | 0.97+ |
Palo Alto Studios | LOCATION | 0.97+ |
third generation | QUANTITY | 0.97+ |
fourth generation | QUANTITY | 0.96+ |
40, 50,000 servers | QUANTITY | 0.96+ |
theCUBE | ORGANIZATION | 0.96+ |
telcos | ORGANIZATION | 0.95+ |
telco cloud | ORGANIZATION | 0.95+ |
each one | QUANTITY | 0.95+ |
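The TCO mechanics Peter describes in the interview above, roughly double the DU performance per server moving from Ice Lake to Sapphire Rapids, hence roughly half the servers, plus power and labor, can be made concrete with a rough model. The sketch below is illustrative only; every input figure is an assumption chosen to show the mechanics, not a number from ACG's white papers or its Business Analytics Engine.

```python
# Illustrative-only TCO comparison for a virtualized DU footprint.
# Every input below is an assumed figure for demonstration; real numbers
# come from vendor benchmarks and the ACG white papers mentioned above.

def annual_cost(servers, watts_per_server, power_cost_per_kwh,
                labor_per_server_per_year, capex_per_server, amortize_years=5):
    """Rough annual cost: amortized CapEx + power + labor."""
    capex = servers * capex_per_server / amortize_years
    power = servers * watts_per_server * 24 * 365 / 1000 * power_cost_per_kwh
    labor = servers * labor_per_server_per_year
    return capex + power + labor

# Previous-generation deployment: assume 40,000 DU servers nationwide.
prev_gen = annual_cost(servers=40_000, watts_per_server=400,
                       power_cost_per_kwh=0.15,
                       labor_per_server_per_year=500,
                       capex_per_server=12_000)

# Next generation: ~2x DU performance per server, so roughly half the servers,
# assumed to draw a bit more power and cost a bit more each.
next_gen = annual_cost(servers=20_000, watts_per_server=450,
                       power_cost_per_kwh=0.15,
                       labor_per_server_per_year=500,
                       capex_per_server=14_000)

print(f"previous generation: ${prev_gen / 1e6:,.1f}M per year")
print(f"next generation:     ${next_gen / 1e6:,.1f}M per year")
print(f"estimated saving:    {100 * (1 - next_gen / prev_gen):.0f}%")
```

The takeaway is only that halving the server count flows through CapEx, power, and labor at once, which is why the generational performance jump dominates the TCO conversation above.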
John Kreisa, Couchbase | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music intro) (logo background tingles) >> Hi everybody, welcome back to day three of MWC23, my name is Dave Vellante and we're here live at the Theater of Barcelona, Lisa Martin, David Nicholson, John Furrier's in our studio in Palo Alto. Lot of buzz at the show, the Mobile World Daily Today, front page, Netflix chief hits back in fair share row, Greg Peters, the co-CEO of Netflix, talking about how, "Hey, you guys want to tax us, the telcos want to tax us, well, maybe you should help us pay for some of the content. Your margins are higher, you have a monopoly, you know, we're delivering all this value, you're bundling Netflix in, from a lot of ISPs so hold on, you know, pump the brakes on that tax," so that's the big news. Lockheed Martin, FOSS issues, AI guidelines, says, "AI's not going to take over your job anytime soon." Although I would say, your job's going to be AI-powered for the next five years. We're going to talk about data, we've been talking about the disaggregation of the telco stack, part of that stack is a data layer. John Kreisa is here, the CMO of Couchbase, John, you know, we've talked about all week, the disaggregation of the telco stacks, they got, you know, Silicon and operating systems that are, you know, real time OS, highly reliable, you know, compute infrastructure all the way up through a telemetry stack, et cetera. And that's a proprietary block that's really exploding, it's like the big bang, like we saw in the enterprise 20 years ago and we haven't had much discussion about that data layer, sort of that horizontal data layer, that's the market you play in. You know, Couchbase obviously has a lot of telco customers- >> John: That's right. >> We've seen, you know, Snowflake and others launch telco businesses. What are you seeing when you talk to customers at the show? What are they doing with that data layer? >> Yeah, so they're building applications to drive and power unique experiences for their users, but of course, it all starts with where the data is. So they're building mobile applications where they're stretching it out to the edge and you have to move the data to the edge, you have to have that capability to deliver that highly interactive experience to their customers or for their own internal use cases out to that edge, so seeing a lot of that with Couchbase and with our customers in telco. >> So what do the telcos want to do with data? I mean, they've got the telemetry data- >> John: Yeah. >> Now they frequently complain about the over-the-top providers that have used that data, again like Netflix, to identify customer demand for content and they're mopping that up in a big way, you know, certainly Amazon and shopping Google and ads, you know, they're all using that network. But what do the telcos do today and what do they want to do in the future? They're all talking about monetization, how do they monetize that data? >> Yeah, well, by taking that data, there's insight to be had, right? So by usage patterns and what's happening, just as you said, so they can deliver a better experience. It's all about getting that edge, if you will, on their competition and so taking that data, using it in a smart way, gives them that edge to deliver a better service and then grow their business. 
>> We're seeing a lot of action at the edge and, you know, the edge can be a Home Depot or a Lowe's store, but it also could be the far edge, could be a, you know, an oil drilling, an oil rig, it could be a racetrack, you know, certainly hospitals and certain, you know, situations. So let's think about that edge, where there's maybe not a lot of connectivity, there might be private networks going in, in the future- >> John: That's right. >> Private 5G networks. What's the data flow look like there? Do you guys have any customers doing those types of use cases? >> Yeah, absolutely. >> And what are they doing with the data? >> Yeah, absolutely, we've got customers all across, so telco and transportation, all kinds of service delivery and healthcare, for example, we've got customers who are delivering healthcare out at the edge where they have a remote location, they're able to deliver healthcare, but as you said, there's not always connectivity, so they need to have the applications, need to continue to run and then sync back once they have that connectivity. So it's really having the ability to deliver a service, reliably and then know that that will be synced back to some central server when they have connectivity- >> So the processing might occur where the data- >> Compute at the edge. >> How do you sync back? What is that technology? >> Yeah, so there's, so within, so Couchbase and Couchbase's case, we have an autonomous sync capability that brings it back to the cloud once they get back to whether it's a private network that they want to run over, or if they're doing it over a public, you know, wifi network, once it determines that there's connectivity and, it can be peer-to-peer sync, so different edge apps communicating with each other and then ultimately communicating back to a central server. >> I mean, the other theme here, of course, I call it the software-defined telco, right? But you got to have, you got to run on something, got to have hardware. So you see companies like AWS putting Outposts, out to the edge, Outposts, you know, doesn't really run a lot of database to mind, I mean, it runs RDS, you know, maybe they're going to eventually work with companies like... I mean, you're a partner of AWS- >> John: We are. >> Right? So do you see that kind of cloud infrastructure that's moving to the edge? Do you see that as an opportunity for companies like Couchbase? >> Yeah, we do. We see customers wanting to push more and more of that compute out to the edge and so partnering with AWS gives us that opportunity and we are certified on Outpost and- >> Oh, you are? >> We are, yeah. >> Okay. >> Absolutely. >> When did that, go down? >> That was last year, but probably early last year- >> So I can run Couchbase at the edge, on Outpost? >> Yeah, that's right. >> I mean, you know, Outpost adoption has been slow, we've reported on that, but are you seeing any traction there? Are you seeing any nibbles? >> Starting to see some interest, yeah, absolutely. And again, it has to be for the right use case, but again, for service delivery, things like healthcare and in transportation, you know, they're starting to see where they want to have that compute, be very close to where the actions happen. >> And you can run on, in the data center, right? >> That's right. >> You can run in the cloud, you know, you see HPE with GreenLake, you see Dell with Apex, that's essentially their Outposts. >> Yeah. >> They're saying, "Hey, we're going to take our whole infrastructure and make it as a service." 
>> Yeah, yeah. >> Right? And so you can participate in those environments- >> We do. >> And then so you've got now, you know, we call it supercloud, you've got the on-prem, you've got the, you can run in the public cloud, you can run at the edge and you want that consistent experience- >> That's right. >> You know, from a data layer- >> That's right. >> So is that really the strategy for a data company is taking or should be taking, that horizontal layer across all those use cases? >> You do need to think holistically about it, because you need to be able to deliver as a, you know, as a provider, wherever the customer wants to be able to consume that application. So you do have to think about any of the public clouds or private networks and all the way to the edge. >> What's different John, about the telco business versus the traditional enterprise? >> Well, I mean, there's scale, I mean, one thing they're dealing with, particularly for end user-facing apps, you're dealing at a very very high scale and the expectation that you're going to deliver a very interactive experience. So I'd say one thing in particular that we are focusing on, is making sure we deliver that highly interactive experience but it's the scale of the number of users and customers that they have, and the expectation that your application's always going to work. >> Speaking of applications, I mean, it seems like that's where the innovation is going to come from. We saw yesterday, GSMA announced, I think eight APIs telco APIs, you know, we were talking on theCUBE, one of the analysts was like, "Eight, that's nothing," you know, "What do these guys know about developers?" But you know, as Daniel Royston said, "Eight's better than zero." >> Right? >> So okay, so we're starting there, but the point being, it's all about the apps, that's where the innovation's going to come from- >> That's right. >> So what are you seeing there, in terms of building on top of the data app? >> Right, well you have to provide, I mean, have to provide the APIs and the access because it is really, the rubber meets the road, with the developers and giving them the ability to create those really rich applications where they want and create the experiences and innovate and change the way that they're giving those experiences. >> Yeah, so what's your relationship with developers at Couchbase? >> John: Yeah. >> I mean, talk about that a little bit- >> Yeah, yeah, so we have a great relationship with developers, something we've been investing more and more in, in terms of things like developer relations teams and community, Couchbase started in open source, continue to be based on open source projects and of course, those are very developer centric. So we provide all the consistent APIs for developers to create those applications, whether it's something on Couchbase Lite, which is our kind of edge-based database, or how they can sync that data back and we actually automate a lot of that syncing which is a very difficult developer task which lends them to one of the developer- >> What I'm trying to figure out is, what's the telco developer look like? Is that a developer that comes from the enterprise and somebody comes from the blockchain world, or AI or, you know, there really doesn't seem to be a lot of developer talk here, but there's a huge opportunity. >> Yeah, yeah. 
>> And, you know, I feel like, the telcos kind of remind me of, you know, a traditional legacy company trying to get into the developer world, you know, even Oracle, okay, they bought Sun, they got Java, so I guess they have developers, but you know, IBM for years tried with Bluemix, they had to end up buying Red Hat, really, and that gave them the developer community. >> Yep. >> EMC used to have a thing called EMC Code, which was a, you know, good effort, but eh. And then, you know, VMware always trying to do that, but, so as you move up the stack obviously, you have greater developer affinity. Where do you think the telco developer's going to come from? How's that going to evolve? >> Yeah, it's interesting, and I think they're... To kind of get to your first question, I think they're fairly traditional enterprise developers and when we break that down, we look at it in terms of what the developer persona is, are they a front-end developer? Like they're writing that front-end app, they don't care so much about the infrastructure behind or are they a full stack developer and they're really involved in the entire application development lifecycle? Or are they living at the backend and they're really wanting to just focus in on that data layer? So we lend towards all of those different personas and we think about them in terms of the APIs that we create, so that's really what the developers are for telcos is, there's a combination of those front-end and full stack developers and so for them to continue to innovate they need to appeal to those developers and that's technology, like Couchbase, is what helps them do that. >> Yeah and you think about the Apples, you know, the app store model or Apple sort of says, "Okay, here's a developer kit, go create." >> John: Yeah. >> "And then if it's successful, you're going to be successful and we're going to take a vig," okay, good model. >> John: Yeah. >> I think I'm hearing, and maybe I misunderstood this, but I think it was the CEO or chairman of Ericsson on the day one keynotes, was saying, "We are going to monetize the, essentially the telemetry data, you know, through APIs, we're going to charge for that," you know, maybe that's not the best approach, I don't know, I think there's got to be some innovation on top. >> John: Yeah. >> Now maybe some of these greenfield telcos are going to do like, you take like a dish networks, what they're doing, they're really trying to drive development layers. So I think it's like this wild west open, you know, community that's got to be formed and right now it's very unclear to me, do you have any insights there? >> I think it is more, like you said, Wild West, I think there's no emerging standard per se for across those different company types and sort of different pieces of the industry. So consequently, it does need to form some more standards in order to really help it grow and I think you're right, you have to have the right APIs and the right access in order to properly monetize, you have to attract those developers or you're not going to be able to monetize properly. >> Do you think that if, in thinking about your business and you know, you've always sold to telcos, but now it's like there's this transformation going on in telcos, will that become an increasingly larger piece of your business or maybe even a more important piece of your business? Or it's kind of be steady state because it's such a slow moving industry? 
>> No, it is a big and increasing piece of our business, I think telcos like other enterprises, want to continue to innovate and so they look to, you know, technologies like, Couchbase document database that allows them to have more flexibility and deliver the speed that they need to deliver those kinds of applications. So we see a lot of migration off of traditional legacy infrastructure in order to build that new age interface and new age experience that they want to deliver. >> A lot of buzz in Silicon Valley about open AI and Chat GPT- >> Yeah. >> You know, what's your take on all that? >> Yeah, we're looking at it, I think it's exciting technology, I think there's a lot of applications that are kind of, a little, sort of innovate traditional interfaces, so for example, you can train Chat GPT to create code, sample code for Couchbase, right? You can go and get it to give you that sample app which gets you a headstart or you can actually get it to do a better job of, you know, sorting through your documentation, like Chat GPT can do a better job of helping you get access. So it improves the experience overall for developers, so we're excited about, you know, what the prospect of that is. >> So you're playing around with it, like everybody is- >> Yeah. >> And potentially- >> Looking at use cases- >> Ways tO integrate, yeah. >> Hundred percent. >> So are we. John, thanks for coming on theCUBE. Always great to see you, my friend. >> Great, thanks very much. >> All right, you're welcome. All right, keep it right there, theCUBE will be back live from Barcelona at the theater. SiliconANGLE's continuous coverage of MWC23. Go to siliconangle.com for all the news, theCUBE.net is where all the videos are, keep it right there. (cheerful upbeat music outro)
SUMMARY :
Dave Vellante talks with John Kreisa, CMO of Couchbase, about the data layer in the disaggregating telco stack. They discuss pushing data and compute out to the edge for highly interactive applications, offline operation with autonomous sync back to central servers, running Couchbase on AWS Outposts and similar on-prem cloud infrastructure, who the telco developer actually is and how network APIs get monetized, and early experimentation with ChatGPT for developer productivity.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Greg Peters | PERSON | 0.99+ |
Daniel Royston | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
John Kreisa | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
GSMA | ORGANIZATION | 0.99+ |
Java | TITLE | 0.99+ |
Lowe | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
Lockheed Martin | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Oracle | ORGANIZATION | 0.99+ |
telcos | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Eight | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Chat GPT | TITLE | 0.99+ |
Hundred percent | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
telco | ORGANIZATION | 0.98+ |
Couchbase | ORGANIZATION | 0.98+ |
John Furrier | PERSON | 0.98+ |
siliconangle.com | OTHER | 0.98+ |
Apex | ORGANIZATION | 0.98+ |
Home Depot | ORGANIZATION | 0.98+ |
early last year | DATE | 0.98+ |
Barcelona | LOCATION | 0.98+ |
20 years ago | DATE | 0.98+ |
MWC23 | EVENT | 0.97+ |
Bluemix | ORGANIZATION | 0.96+ |
Sun | ORGANIZATION | 0.96+ |
SiliconANGLE | ORGANIZATION | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
GreenLake | ORGANIZATION | 0.94+ |
Apples | ORGANIZATION | 0.94+ |
Snowflake | ORGANIZATION | 0.93+ |
Outpost | ORGANIZATION | 0.93+ |
VMware | ORGANIZATION | 0.93+ |
zero | QUANTITY | 0.93+ |
EMC | ORGANIZATION | 0.91+ |
day three | QUANTITY | 0.9+ |
today | DATE | 0.89+ |
Mobile World Daily Today | TITLE | 0.88+ |
Wild West | ORGANIZATION | 0.88+ |
theCUBE.net | OTHER | 0.87+ |
app store | TITLE | 0.86+ |
one thing | QUANTITY | 0.86+ |
EMC Code | TITLE | 0.86+ |
Couchbase | TITLE | 0.85+ |
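The "autonomous sync" capability John describes above, where edge applications keep writing locally when connectivity drops and reconcile with a central server once a link returns, follows a familiar store-and-forward pattern. The sketch below illustrates that pattern generically in Python; it is not the Couchbase Lite or Sync Gateway API, and the class, endpoint, and table names are invented for the example.

```python
# Generic store-and-forward sketch of the offline-first edge pattern described
# in the interview. This is NOT the Couchbase Lite / Sync Gateway API; the
# local store, queue, and push endpoint are all illustrative stand-ins.
import json
import sqlite3
import time
import urllib.request


class EdgeStore:
    """Local write-ahead store that keeps working with no connectivity."""

    def __init__(self, path="edge.db", central_url="https://central.example.com/sync"):
        self.central_url = central_url  # assumed endpoint, for illustration only
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, doc TEXT)")

    def write(self, doc: dict):
        # Every write lands locally first, so the edge app never blocks on the WAN.
        self.db.execute("INSERT INTO pending (doc) VALUES (?)", (json.dumps(doc),))
        self.db.commit()

    def sync(self) -> bool:
        # Push queued documents when a link is available; keep them on failure.
        rows = self.db.execute("SELECT id, doc FROM pending").fetchall()
        for row_id, doc in rows:
            try:
                req = urllib.request.Request(self.central_url, data=doc.encode(),
                                             headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req, timeout=5)
                self.db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
                self.db.commit()
            except OSError:
                return False  # still offline; try again on the next pass
        return True


if __name__ == "__main__":
    store = EdgeStore()
    store.write({"site": "clinic-17", "reading": 98.6, "ts": time.time()})
    store.sync()  # a no-op until connectivity comes back
```

A product layer like Couchbase adds the hard parts on top of this skeleton, conflict resolution, peer-to-peer sync between edge apps, and security, which is exactly what the interview points at when it calls the syncing "a very difficult developer task."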
SiliconANGLE News | Red Hat Collaborates with Nvidia, Samsung and Arm on Efficient, Open Networks
(upbeat music) >> Hello, everyone; I'm John Furrier with SiliconANGLE NEWS and host of theCUBE, and welcome to our SiliconANGLE NEWS MWC NEWS UPDATE in Barcelona where MWC is the premier event for the cloud telecommunication industry, and in the news here is Red Hat, Red Hat announcing a collaboration with NVIDIA, Samsung and Arm on efficient, open networks. Red Hat announced updates across various fields including advanced 5G telecommunications cloud, industrial edge, artificial intelligence, radio access networks (RAN), and efficiency. Red Hat's enterprise Kubernetes platform, OpenShift, has added support for NVIDIA's converged accelerators and Aerial SDK, facilitating RAN deployments on industry-standard servers across hybrid and multicloud platforms. This composable infrastructure enables telecom firms to support heavier compute demands for edge computing, AI, private 5G, and more, and also helps network operators adopt open architectures, allowing them to choose non-proprietary components from multiple suppliers. In addition to the NVIDIA collaboration, Red Hat is working with Samsung to offer a new vRAN solution for service providers to better manage their open RAN networks. They're also working with UK chip designer Arm to create new networking solutions for energy efficiency. Red Hat's open source, Kubernetes-based Efficient Power Level Exporter project, or Kepler, has been donated to the Cloud Native Computing Foundation, allowing enterprises to better understand the power consumption of their cloud native workloads. Kepler can also help in the development of sustainable software by creating less power-hungry applications. Again, Red Hat continuing to provide open source and open RAN, and contributing an open source project to the CNCF, continuing to create innovation for developers, and, of course, Red Hat knows a lot about operating systems, and the telco could be the next frontier. That's SiliconANGLE NEWS. I'm John Furrier; thanks for watching. (monotone music)
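Since Kepler's value is that power estimates show up as ordinary metrics, here is a rough illustration of how a platform team might query them. It assumes Kepler is installed in the cluster and scraped by a Prometheus server, and it assumes a counter named kepler_container_joules_total; metric and label names can differ between Kepler versions, so treat this as a sketch rather than a reference.

```python
# Illustrative query against a Prometheus server that scrapes Kepler.
# Assumptions: Prometheus reachable at PROM_URL, and a Kepler counter named
# kepler_container_joules_total (metric/label names vary by Kepler version).
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.monitoring.svc:9090"  # assumed address
QUERY = "topk(5, sum by (container_namespace, pod_name) (rate(kepler_container_joules_total[5m])))"

url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url, timeout=10) as resp:
    result = json.load(resp)

for series in result["data"]["result"]:
    labels = series["metric"]
    watts = float(series["value"][1])  # joules per second, i.e. watts
    print(f'{labels.get("container_namespace")}/{labels.get("pod_name")}: {watts:.1f} W')
```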
SUMMARY :
John Furrier reports on Red Hat's MWC announcements: a collaboration with NVIDIA, Samsung and Arm on efficient, open networks, OpenShift support for NVIDIA converged accelerators and the Aerial SDK for RAN deployments, a new vRAN solution with Samsung, energy-efficient networking work with Arm, and the donation of the Kepler power-measurement project to the Cloud Native Computing Foundation.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
NVIDIA | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Cloud Native Compute Foundation | ORGANIZATION | 0.99+ |
CNCF | ORGANIZATION | 0.98+ |
UK | LOCATION | 0.95+ |
OpenRAN | TITLE | 0.93+ |
telco | ORGANIZATION | 0.93+ |
Kubernetes | TITLE | 0.92+ |
Kepler | ORGANIZATION | 0.9+ |
SiliconANGLE NEWS | ORGANIZATION | 0.88+ |
vRAN | TITLE | 0.88+ |
SiliconANGLE | ORGANIZATION | 0.87+ |
Arm | ORGANIZATION | 0.87+ |
MWC | EVENT | 0.86+ |
Arm on Efficient Open Networks | ORGANIZATION | 0.86+ |
theCUBE | ORGANIZATION | 0.84+ |
OpenShift | TITLE | 0.78+ |
Hat | TITLE | 0.73+ |
SiliconANGLE News | ORGANIZATION | 0.65+ |
OpenSource | TITLE | 0.61+ |
NEWS | ORGANIZATION | 0.51+ |
Red | ORGANIZATION | 0.5+ |
SiliconANGLE | TITLE | 0.43+ |
Danielle Royston, TelcoDR | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hi everybody. Welcome back to Barcelona. We're here at the Fira Live, theCUBE's ongoing coverage of day two of MWC 23. Back in 2021 was my first Mobile World Congress. And you know what? It was actually quite an experience because there was nobody there. I talked to my friend, who's now my co-host, Chris Lewis about what to expect. He said, Dave, I don't think a lot of people are going to be there, but Danielle Royston is here and she's the CEO of Totogi. And that year when Ericsson tapped out of its space she took out 60,000 square feet and built out Cloud City. If it weren't for Cloud City, there would've been no Mobile World Congress in June and July of 2021. DR is back. Great to see you. Thanks for coming on. >> It's great to see you. >> Chris. Awesome to see you. >> Yeah, Chris. Yep. >> Good to be back. Yep. >> You guys remember the narrative back then. There was this lady running around, this crazy lady that I met at Google Cloud Next, saying >> Yeah. Yeah. >> the cloud's going to take over Telco. And everybody's like, well, this lady's nuts. The cloud's been leaning in, you know? >> Yeah. >> So what do you think, I mean, what's changed since you first caused all those ripples? >> I mean, I have to say that I think that I caused a lot of change in the industry. I was talking to leaders over at AWS yesterday and they were like, we've never seen someone push like you have and change so much in a short period of time. And Telco moves slow. It's known for that. And they're like, you are pushing buttons and you're getting people to change and thank you and keep going. And so it's been great. It's awesome. >> Yeah. I mean, it was interesting, Chris, we heard on the keynotes we had Microsoft, Satya came in, Thomas Kurian came in. There was no AWS. And now I asked the CMO of GSMA about that. She goes, hey, we got a great relationship with them, AWS. >> Danielle: Yeah. >> But why do you think they weren't here?
Are you working with just the disruptors or how's that? >> No I think they're finding it right. So my talk at MWC 21 was all about the cloud is a double-edged sword, right? There's two sides to it, and you definitely need to proceed through it with caution, but also I don't know that you have a choice, right? I mean, the multicloud, you know is there another industry that spends more on CapEx than Telco? >> No. >> Right. The hyperscalers are doing it right. They spend, you know, easily approaching over a $100 billion in CapEx that rivals this industry. And so when you have a player like that an industry driving, you know and investing so much Telco, you're always complaining how everyone's riding your coattails. This is the opportunity to write someone else's coattails. So jump on, right? I think you don't have a choice especially if other Telco competitors are using hyperscalers and you don't, they're going to be left behind. >> So you advise these companies all the time, but >> I mean, the issue is they're all they're all using all the hyperscalers, right? So they're the multi, the multiple relationships. And as Danielle said, the multi-layer of relationship they're using the hyperscalers to change their own internal operational environments to become more IT-centric to move to that software centric Telco. And they're also then with the hyperscalers going to market in different ways sometimes with them, sometimes competing with them. What what it means from an analyst point of view is you're suddenly changing the dynamic of a market where we used to have nicely well defined markets previously. Now they're, everyone's in it together, you know, it's great. And, and it's making people change the way they think about services. What I, what I really hope it changes more than anything else is the way the customers at the end of the, at the end of the supply, the value chain think this is what we can get hold of this stuff. Now we can go into the network through the cloud and we can get those APIs. We can draw on the mechanisms we need to to run our personal lives, to run our business lives. And frankly, society as a whole. It's really exciting. >> Then your premise is basically you were saying they should ride on the top over the top of the cloud vendor. >> Yeah. Right? >> No. Okay. But don't they lose the, all the data if they do that? >> I don't know. I mean, I think the hyperscalers are not going to take their data, right? I mean, that would be a really really bad business move if Google Cloud and Azure and and AWS start to take over that, that data. >> But they can't take it. >> They can't. >> From regulate, from sovereignty and regulation. >> They can't because of regulation, but also just like business, right? If they started taking their data and like no enterprises would use them. So I think, I think the data is safe. I think you, obviously every country is different. You got to understand the different rules and regulations for data privacy and, and how you keep it. But I think as we look at the long term, right and we always talk about 10 and 20 years there's going to be a hyperscaler region in every country right? And there will be a way for every Telco to use it. I think their data will be safe. And I think it just, you're going to be able to stand on on the shoulders of someone else for once and use the building blocks of software that these guys provide to make better experiences for subscribers. 
>> You guys got to explain this to me because when I say data I'm not talking about, you know, personal information. I'm talking about all the telemetry, you know, all the all the, you know the plumbing. >> Danielle: Yeah. >> Data, which is- >> It will increasingly be shared because you need to share it in order to deliver the services in the streamlined, efficient way that it needs to be delivered. >> Did I hear the CEO of Ericsson right, where basically he said, we're going to charge developers for access to that data through APIs. >> What Ericsson have done, obviously with the Vonage acquisition is they want to get into APIs. So the idea is you're exposing features, quality, policy on demand type features for example, or even pulling, we still use a lot of SMS, right? So pulling those out using those APIs. So it will be charged in some way. Whether- >> Man: Like Twitter's charging me for APIs now, my API calls, you >> Know what it is? I think it's Twilio. >> Man: Oh, okay. >> Right. >> Man: No, no, that's sure. >> There's no reason why telcos couldn't provide a Twilio like service itself. >> It's a horizontal play though right? >> Danielle: Correct because developers need to be charged by the API. >> But doesn't there need to be an industry standard to do that as- >> Well. I think that's what they just announced. >> Industry standard. >> Danielle: I think they just announced that. Yeah. Right now I haven't looked at that API set, right? >> There's like eight of them. >> There's eight of them. Twilio has, it's a start you got to start somewhere Dave. (crosstalk) >> And there's all, the TM forum is all the other standard >> Right? Eight is better than zero- >> Right? >> Haven't got plenty. >> I mean for an industry that didn't really understand APIs as a feature, as a product as a service, right? For Mats Granryd, the Director General of GSMA to stand on the keynote stage and say we partnered and we're unveiling, right. Pay by the use APIs. I was for it. I was like, that is insane.
>> Yeah our own CEO, I mean she basically said, Hey, I'm for net neutrality, but I want to be able to charge the people that are using it more and more >> To make a return on, on a capital. >> I mean it costs billions of dollars to build these networks, right? And they're valuable. We use them and we talked about this in Cloud City 21, right? The ability to start building better metaverses. And I know that's a buzzword and everyone hates it, but it's true. Like we're working from home. We need- there's got to be a better experience in Zoom in 2D, right? And you need a great network for that metaverse to be awesome. >> You do. But Danielle, you don't need cellular for doing that, do you? So the fixed network is as important. >> Sure. >> And we're at mobile worlds. But actually what we beginning to hear and Crystal Bren did say this exactly, it's about the comp the access is sort of irrelevant. Fixed is better because it's more the cost the return on investment is better from fiber. Mobile we're going to change every so many years because we're a new generation. But we need to get the mechanism in place to deliver that. I actually don't agree that we should everyone should pay differently for what they use. It's a universal service. We need it as individuals. We need to make it sustainable for every user. Let's just not go for the biggest user. It's not, it's not the way to build it. It won't work if you do that you'll crash the system if you do that. And, and the other thing which I disagree on it's not about standing on the shoulders and benefiting from what- It's about cooperating across all levels. The hyperscalers want to work with the telcos as much as the telcos want to work with the hyperscalers. There's a lot of synergy there. There's a lot of ways they can work together. It's not one or the other. >> But I think you're saying let the cloud guys do the heavy lifting and I'm - >> Yeah. >> Not at all. >> And so you don't think so because I feel like the telcos are really good at pipes. They've always been good at pipes. They're engineers. >> Danielle: Yeah. >> Are they hanging on to the to the connectivity or should they let that go and well and go toward the developer. >> I mean AWS had two announcements on the 21st a week before MWC. And one was that telco network builder. This is literally being able to deploy a network capability at AWS with keystrokes. >> As a managed service. >> Danielle: Correct. >> Yeah. >> And so I don't know how the telco world I felt the shock waves, right? I was like, whoa, that seems really big. Because they're taking something that previously was like bread and butter. This is what differentiates each telco and now they've standardized it and made it super easy so anyone can do it. Now do I think the five nines of super crazy hardcore network criteria will be built on AWS this way? Probably not, but no >> It's not, it's not end twin. So you can't, no. >> Right. But private networks could be built with this pretty easily, right? And so telcos that don't have as much funding, right. Smaller, more experiments. I think it's going to change the way we think about building networks in telcos >> And those smaller telcos I think are going to be more developer friendly. >> Danielle: Yeah. >> They're going to have business models that invite those developers in. And that's, it's the disruption's going to come from the ISVs and the workloads that are on top of that. >> Well certainly what Dish is trying to do, right? 
Dish is trying to build a- they launched it reinvent a developer experience. >> Dave: Yeah. >> Right. Built around their network and you know, again I don't know, they were not part of this group that designed these eight APIs but I'm sure they're looking with great intent on what does this mean for them. They'll probably adopt them because they want people to consume the network as APIs. That's their whole thing that Mark Roanne is trying to do. >> Okay, and then they're doing open ran. But is it- they're not really cons- They're not as concerned as Rakuten with the reliability and is that the right play? >> In this discussion? Open RAN is not an issue. It really is irrelevant. It's relevant for the longer term future of the industry by dis aggregating and being able to share, especially ran sharing, for example, in the short term in rural environments. But we'll see some of that happening and it will change, but it will also influence the way the other, the existing ran providers build their services and offer their value. Look you got to remember in the relationship between the equipment providers and the telcos are very dramatically. Whether it's Ericson, NOKIA, Samsung, Huawei, whoever. So those relations really, and the managed services element to that depends on what skills people have in-house within the telco and what service they're trying to deliver. So there's never one size fits all in this industry. >> You're very balanced in your analysis and I appreciate that. >> I try to be. >> But I am not. (chuckles) >> So when Dr went off, this is my question. When Dr went off a couple years ago on the cloud's going to take over the world, you were skeptical. You gave a approach. Have you? >> I still am. >> Have you moderated your thoughts on that or- >> I believe the telecom industry is is a very strong industry. It's my industry of course I love it. But the relationship it is developing much different relationships with the ecosystem players around it. You mentioned developers, you mentioned the cloud players the equipment guys are changing there's so many moving parts to build the telco of the future that every country needs a very strong telco environment to be able to support the site as a whole. People individuals so- >> Well I think two years ago we were talking about should they or shouldn't they, and now it's an inevitability. >> I don't think we were Danielle. >> All using the hyperscalers. >> We were always going to need to transform the telcos from the conservative environments in which they developed. And they've had control of everything in order to reduce if they get no extra revenue at all, reducing the cost they've got to go on a cloud migration path to do that. >> Amenable. >> Has it been harder than you thought? >> It's been easier than I thought. >> You think it's gone faster than >> It's gone way faster than I thought. I mean pushing on this flywheel I thought for sure it would take five to 10 years it is moving. I mean the maths comp thing the AWS announcements last week they're putting in hyperscalers in Saudi Arabia which is probably one of the most sort of data private places in the world. It's happening really fast. >> What Azure's doing? >> I feel like I can't even go to sleep. Because I got to keep up with it. It's crazy. >> Guys. >> This is awesome. >> So awesome having you back on. >> Yeah. >> Chris, thanks for co-hosting. Appreciate you stay here. >> Yep. >> Danielle, amazing. We'll see you. >> See you soon. >> A lot of action here. 
We're going to come out >> Great. >> Check out your venue. >> Yeah, the Totogi buses that are outside. >> The big buses. You got a great setup there. We're going to see you on Wednesday. Thanks again. >> Awesome. Thanks. >> All right. Keep it right there. We'll be back to wrap up day two from MWC 23 on theCUBE. (upbeat music)
SUMMARY :
Dave Vellante and Chris Lewis catch up with Danielle Royston of TelcoDR about how the industry has moved since her Cloud City takeover at MWC 2021. They debate how telcos should use the hyperscalers without giving up their data, the new GSMA network APIs and usage-based monetization, AWS's Telco Network Builder and the pace of cloud adoption, and whether open, horizontal architectures can match the resilience of proprietary vertical stacks.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Danielle | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Chris | PERSON | 0.99+ |
Chris Lewis | PERSON | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Mark Roanne | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Wednesday | DATE | 0.99+ |
Thomas Curian | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Danielle Royston | PERSON | 0.99+ |
Saudi Arabia | LOCATION | 0.99+ |
eight | QUANTITY | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
$35 billion | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
GSMA | ORGANIZATION | 0.99+ |
Ericson | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
60,000 square feet | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
June | DATE | 0.99+ |
Mats Granryd | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
NOKIA | ORGANIZATION | 0.99+ |
Eight | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Barcelona | LOCATION | 0.99+ |
2021 | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
two years ago | DATE | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
Totoge | ORGANIZATION | 0.99+ |
two sides | QUANTITY | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
MWC 23 | EVENT | 0.99+ |
Crystal Bren | PERSON | 0.99+ |
10 years | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
Satya | PERSON | 0.98+ |
two announcements | QUANTITY | 0.98+ |
Ericsson Wright | ORGANIZATION | 0.98+ |
Dish | ORGANIZATION | 0.98+ |
billions of dollars | QUANTITY | 0.98+ |
Mats | PERSON | 0.98+ |
20 years | QUANTITY | 0.98+ |
day two | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Twilio | ORGANIZATION | 0.97+ |
telcos | ORGANIZATION | 0.97+ |
Red Hat | TITLE | 0.97+ |
theCUBE | ORGANIZATION | 0.96+ |
Dave Duggal, EnterpriseWeb & Azhar Sayeed, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (ambient music) >> Lisa: Hey everyone, welcome back to Barcelona, Spain. It's theCUBE Live at MWC 23. Lisa Martin with Dave Vellante. This is day two of four days of cube coverage but you know that, because you've already been watching yesterday and today. We're going to have a great conversation next with EnterpriseWeb and Red Hat. We've had great conversations the last day and a half about the Telco industry, the challenges, the opportunities. We're going to unpack that from this lens. Please welcome Dave Duggal, founder and CEO of EnterpriseWeb and Azhar Sayeed is here, Senior Director Solution Architecture at Red Hat. >> Guys, it's great to have you on the program. >> Yes. >> Thank you Lisa, >> Great being here with you. >> Dave let's go ahead and start with you. Give the audience an overview of EnterpriseWeb. What kind of business is it? What's the business model? What do you guys do? >> Okay so, EnterpriseWeb is reinventing middleware, right? So the historic middleware was to build vertically integrated stacks, right? And those stacks are now, as such, becoming the rate limiters for interoperability, for the end-to-end solutions that everybody's looking for, right? Red Hat's talking about the unified platform. You guys are talking about Supercloud. EnterpriseWeb addresses that: we've built middleware based on serverless architecture, so lightweight, low latency, high performance middleware. And we're working with the world's biggest, we sell through channels and we work through partners like Red Hat, Intel, Fortinet, Keysight, Tech Mahindra. So working with some of the biggest players that have recognized the value of our innovation, to deliver transformation to the Telecom industry. >> So what are you guys doing together? Is this, is this an OpenShift play? >> Is it? >> Yeah. >> Yeah, so we've got two projects right here on the floor at MWC throughout the various partners, where EnterpriseWeb is actually providing an application layer, sorry, application middleware over Red Hat's OpenShift, and we're essentially generating operators, so Red Hat operators, so that all our vendors, and, sorry, vendors that we onboard into our catalog can be deployed easily through the OpenShift platform. And we allow those, those vendors to be flexibly composed into network services. So the real challenge for operators historically is that they, they have challenges onboarding the vendors. It takes a long time. Each one of them is a snowflake. They, you know, even though there's standards they don't all observe or follow the same standards. So we make it easier using models, right? For, in a model driven process to onboard, or streamline, that onboarding process, compose functions into services, deploy those services seamlessly through Red Hat's OpenShift, and then manage the, the lifecycle, like the quality of service and the SLAs for those services. >> So Red Hat obviously has had a pretty prominent Telco business for a while. Red Hat OpenStack actually is pretty popular within the Telco business. People thought, "Oh, OpenStack, that's dead." Actually, no, it's actually doing quite well. We see it all over the place where for whatever reason people want to build their own cloud.
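Dave Duggal's description of onboarding, modeling a vendor once and generating Red Hat operators so OpenShift can deploy and manage it, can be sketched in rough code. This is only an illustration: the "catalog.example.com" CRD, its fields, and the function name are invented for this example and are not a published EnterpriseWeb or Red Hat interface; it simply shows what registering a vendor network function as a Kubernetes custom resource might look like once such an operator exists.

```python
# Illustrative sketch only: the CRD group/kind ("catalog.example.com/VendorFunction"),
# its spec fields, and the assumption that onboarding works this way are invented for
# this example; they are not a published EnterpriseWeb or Red Hat API.
from kubernetes import client, config

def onboard_vendor_function(namespace: str = "telco-catalog") -> dict:
    """Register a vendor network function as a custom resource for an operator to manage."""
    config.load_kube_config()  # or config.load_incluster_config() when run on the cluster

    vendor_function = {
        "apiVersion": "catalog.example.com/v1alpha1",  # hypothetical CRD group/version
        "kind": "VendorFunction",
        "metadata": {"name": "acme-upf", "namespace": namespace},
        "spec": {
            "vendor": "AcmeNetworks",       # who supplied the function
            "functionType": "5g-core-upf",  # its role in the composed service
            "image": "registry.example.com/acme/upf:1.4.2",
            "intents": {"latency": "low", "energy": "balanced"},  # high-level hints
        },
    }

    api = client.CustomObjectsApi()
    return api.create_namespaced_custom_object(
        group="catalog.example.com",
        version="v1alpha1",
        namespace=namespace,
        plural="vendorfunctions",
        body=vendor_function,
    )
```

The point of the operator pattern here is that once a vendor is described this way, the lifecycle work (deploy, upgrade, SLA checks) moves into reconciliation logic instead of per-vendor manual integration.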
And, and so, so what's happening in the industry because you have the traditional Telcos we heard in the keynotes that kind of typical narrative about, you know, we can't let the over the top vendors do this again. We're, we're going to be Apifi everything, we're going to monetize this time around, not just with connectivity but the, but the fact is they really don't have a developer community. >> Yes. >> Yet anyway. >> Then you have these disruptors over here that are saying "Yeah, we're going to enable ISVs." How do you see it? What's the landscape look like? Help us understand, you know, what the horses on the track are doing. >> Sure. I think what has happened, Dave, is that the conversation has moved a little bit from where they were just looking at IS infrastructure service with virtual machines and OpenStack, as you mentioned, to how do we move up the value chain and look at different applications. And therein comes the rub, right? You have applications with different requirements, IT network that have various different requirements that are there. So as you start to build those cloud platform, as you start to modernize those set of applications, you then start to look at microservices and how you build them. You need the ability to orchestrate them. So some of those problem statements have moved from not just refactoring those applications, but actually now to how do you reliably deploy, manage in a multicloud multi cluster way. So this conversation around Supercloud or this conversation around multicloud is very >> You could say Supercloud. That's okay >> (Dave Duggal and Azhar laughs) >> It's absolutely very real though. The reason why it's very real is, if you look at transformations around Telco, there are two things that are happening. One, Telco IT, they're looking at partnerships with hybrid cloud, I mean with public cloud players to build a hybrid environment. They're also building their own Telco Cloud environment for their network functions. Now, in both of those spaces, they end up operating two to three different environments themselves. Now how do you create a level of abstraction across those? How do you manage that particular infrastructure? And then how do you orchestrate all of those different workloads? Those are the type of problems that they're actually beginning to solve. So they've moved on from really just putting that virtualizing their application, putting it on OpenStack to now really seriously looking at "How do I build a service?" "How do I leverage the catalog that's available both in my private and public and build an overall service process?" >> And by the way what you just described as hybrid cloud and multicloud is, you know Supercloud is what multicloud should have been. And what, what it originally became is "I run on this cloud and I run on this cloud" and "I run on this cloud and I have a hybrid." And, and Supercloud is meant to create a common experience across those clouds. >> Dave Duggal: Right? >> Thanks to, you know, Supercloud middleware. >> Yeah. >> Right? And, and so that's what you guys do. >> Yeah, exactly. Exactly. Dave, I mean, even the name EnterpriseWeb, you know we started from looking from the application layer down. If you look at it, the last 10 years we've looked from the infrastructure up, right? And now everybody's looking northbound saying "You know what, actually, if I look from the infrastructure up the only thing I'll ever build is silos, right?" 
And those silos get in the way of the interoperability and the agility the businesses want. So we take the perspective as high level abstractions, common tools, so that if I'm a CXO, I can look down on my environments, right? When I'm really not, I honestly, if I'm an, if I'm a CEO I don't really care or CXO, I don't really care so much about my infrastructure to be honest. I care about my applications and their behavior. I care about my SLAs and my quality of service, right? Those are the things I care about. So I really want an EnterpriseWeb, right? Something that helps me connect all my distributed applications all across all of the environments. So I can have one place a consistency layer that speaks a common language. We know that there's a lot of heterogeneity down all those layers and a lot of complexity down those layers. But the business doesn't care. They don't want to care, right? They want to actually take their applications deploy them where they're the most performant where they're getting the best cost, right? The lowest and maybe sustainability concerns, all those. They want to address those problems, meet their SLAs meet their quality service. And you know what, if it's running on Amazon, great. If it's running on Google Cloud platform, great. If it, you know, we're doing one project right here that we're demonstrating here is with with Amazon Tech Mahindra and OpenShift, where we took a disaggregated 5G core, right? So this is like sort of latest telecom, you know net networking software, right? We're deploying pulling elements of that network across core, across Amazon EKS, OpenShift on Red Hat ROSA, as well as just OpenShift for cloud. And we, through a single pane of deployment and management, we deployed the elements of the 5G core across them and then connected them in an end-to-end process. That's Telco Supercloud. >> Dave Vellante: So that's an O-RAN deployment. >> Yeah that's >> So, the big advantage of that, pardon me, Dave but the big advantage of that is the customer really doesn't care where the components are being served from for them. It's a 5G capability. It happens to sit in different locations. And that's, it's, it's about how do you abstract and how do you manage all those different workloads in a cohesive way? And that's exactly what EnterpriseWeb is bringing to the table. And what we do is we abstract the underlying infrastructure which is the cloud layer. So if, because AWS operating environment is different then private cloud operating environment then Azure environment, you have the networking is set up is different in each one of them. If there is a way you can abstract all of that and present it in a common operating model it becomes a lot easier than for anybody to be able to consume. >> And what a lot of customers tell me is the way they deal with multicloud complexity is they go with mono cloud, right? And so they'll lose out on some of the best services >> Absolutely >> If best of, so that's not >> that's not ideal, but at the end of the day, agree, developers don't want to muck with all the plumbing >> Dave Duggal: Yep. >> They want to write code. >> Azhar: Correct. >> So like I come back to are the traditional Telcos leaning in on a way that they're going to enable ISVs and developers to write on top of those platforms? Or are there sort of new entrance and disruptors? And I know, I know the answer is both >> Dave Duggal: Yep. 
>> but I feel as though the Telcos still haven't, traditional Telcos haven't tuned in to that developer affinity, but you guys sell to them. >> What, what are you seeing? >> Yeah, so >> What we have seen is the Telcos fall into several categories there. If you look at the most mature ones, you know they are very eager to move up the value chain. There are some smaller, very nimble ones that are actually doing something really interesting. For example, they've provided sandbox environments to developers to say "Go develop your applications in the sandbox environment. We'll use that to build a network service with you." I can give you some interesting examples across the globe where that is happening, right? In AsiaPac, particularly in Australia, ANZ region. There are a couple of providers who have done this, but in, in a very interesting way. But the challenge to them, why it's not completely open or public yet, is primarily because they haven't figured out how to exactly monetize that. And, and that's the reason why. So in the absence of that, what will happen is they have to rely on the ISV ecosystem to be able to build those capabilities which they can then bring on as part of the catalog. But in Latin America, I was talking to one of the providers and they said, "Well look, we have a public cloud, we have our own public cloud, right? What we want to do is use that to offer localized services, not just bring everything in from the top." >> But, but we heard from Ericsson's CEO they're basically going to monetize it by what I call "gouging" the developers >> (Azhar laughs) >> for access to the network telemetry as opposed to saying, "Hey, here's an open platform, develop on top of it and maybe we'll create something like an app store and we'll take a piece of the action." >> So ours, >> to me is a better model. >> Yeah. So that's perfect. Our second project that we're showing here is with Intel, right? So Intel came to us cause of our reputation for doing advanced automation solutions. They gave us carte blanche in their labs. So this is Intel Network Builders; they said pick your partners. And we went with Red Hat, Fortinet, Keysight, this company KX doing AI/ML. But to address your DevX point, here Intel explicitly wants to get closer to the developers by exposing their APIs, open APIs over their infrastructure. Just like Red Hat has APIs, right? And so they can expose them northbound to developers so developers can leverage and tune their applications, right? But the challenge there is what Intel is doing at the low level network infrastructure, right? Is fundamentally complex, right? What you want is an abstraction layer where, and this gets to, to your point Dave where you just said like "The developers just want to get their job done," or really they want to focus on the business logic and accelerate that service delivery, right? So the idea here is in EnterpriseWeb they can literally declaratively compose their services, express their intent. "I want this to run optimized for low latency. I want this to run optimized for energy consumption." Right? And that's all they say, right? That's a very high level statement. And then the runtime translates it between all the elements that are participating in that service to realize the developer's intent, right? No hands, right? Zero touch, right? So that's now a movement in telecom. So you're right, it's taking a while because these are pretty fundamental shifts, right?
But it's intent based networking, right? So it's almost two parts, right? One is you have to have the open APIs, right? So that the infrastructure has to expose its capabilities. Then you need abstractions over the top that make it simple for developers to take, you know, make use of them. >> See, one of the demonstrations we are doing is around AIOps. And I've had, literally here on this floor, two conversations around what I call network as a platform. Although it sounds like a cliche term, that's exactly what Dave was describing in terms of exposing APIs from the infrastructure and utilizing them. So once you get that data, then now you can do analytics and do machine learning to be able to build models and figure out how you can orchestrate better, how you can monetize better, how you can utilize better, right? So all of those things become important. It's not just about internal optimization but it's also about how do you expose it to the third party ecosystem to translate that into better delivery mechanisms or IOT capability and so on. >> But if they're going to charge me for every API call in the network I'm going to go broke (team laughs) >> And I'm going to get really pissed. I mean, I feel like, I'm just running down the list, Oracle. IBM tried it. Oracle, okay, they got Java, but they don't have developer jobs. VMware, okay? They got Aria. EMC used to have a thing called code. IBM had to buy Red Hat to get to the developer community. (Lisa laughs) >> So I feel like the telcos don't today have those developer shops. So, so they have to partner. [Azhar] Yes. >> With guys like you and then be more open and let a zillion flowers bloom or else they're going to get disrupted in a big way and it's going to be a repeat of the over the top, in a different model that I can't predict. >> Yeah. >> Absolutely true. I mean, look, they cannot be in the connectivity business. Telcos cannot be just in the connectivity business. It's, I think so, you know, >> Dave Vellante: You had to pry a frozen hand (Dave Duggal laughs) >> off that, you know. >> Well, you know, think about, they almost have to go become over the top on themselves, right? That's what the cloud guys are doing, right? >> Yeah. >> They're riding over their backbone, and by creating a high level abstraction, they in turn abstract away the infrastructure underneath them, right? And that's really the end game >> Right? >> Dave Vellante: Yeah. >> Is because now, >> they're over the top, it's their network, it's their infrastructure, right? They don't want to become bit pipes. >> Yep. >> Now, they can take OpenShift, run that in any cloud. >> Yep. >> Right? >> You can run that in hybrid cloud, EnterpriseWeb can do the application layer configuration and management. And together we're running, you know, OSI layers one through seven, east to west, north to south. We're running across the RAN, the core and the transport. And that is telco super cloud, my friend. >> Yeah. Well, >> (Dave Duggal laughs) >> I'm dominating the conversation cause I love talking super cloud. >> I knew you would. >> So speaking of superpowers, when you're in customer or prospective customer conversations with providers and they've got, obviously they're in this transformative state right now. How, what do you describe as the superpower between Red Hat and EnterpriseWeb in terms of really helping these Telcos transform.
But at the end of the day, the connectivity's there the end user gets what they want, which is I want this to work wherever I am. >> Yeah, yeah. That's a great question, Lisa. So I think the way you could look at it is most software has, has been evolved to be specialized, right? So in Telcos' no different, right? We have this in the enterprise, right? All these specialized stacks, all these components that they wire together in the, in you think of Telco as a sort of a super set of enterprise problems, right? They have all those problems like magnified manyfold, right? And so you have specialized, let's say orchestrators and other tools for every Telco domain for every Telco layer. Now you have a zoo of orchestrators, right? None of them were designed to work together, right? They all speak a specific language, let's say quote unquote for doing a specific purpose. But everything that's interesting in the 21st century is across layers and across domains, right? If a siloed static application, those are dead, right? Nobody's doing those anymore. Even developers don't do those developers are doing composition today. They're not doing, nobody wants to hear about a 6 million lines of code, right? They want to hear, "How did you take these five things and bring 'em together for productive use?" >> Lisa: Right. How did you deliver faster for my enterprise? How did you save me money? How did you create business value? And that's what we're doing together. >> I mean, just to add on to Dave, I was talking to one of the providers, they have more than 30,000 nodes in their infrastructure. When I say no to your servers running, you know, Kubernetes,running open stack, running different components. If try managing that in one single entity, if you will. Not possible. You got to fragment, you got to segment in some way. Now the question is, if you are not exposing that particular infrastructure and the appropriate KPIs and appropriate things, you will not be able to efficiently utilize that across the board. So you need almost a construct that creates like a manager of managers, a hierarchical structure, which would allow you to be more intelligent in terms of how you place those, how you manage that. And so when you ask the question about what's the secret sauce between the two, well this is exactly where EnterpriseWeb brings in that capability to analyze information, be more intelligent about it. And what we do is provide an abstraction of the cloud layer so that they can, you know, then do the right job in terms of making sure that it's appropriate and it's consistent. >> Consistency is key. Guys, thank you so much. It's been a pleasure really digging through EnterpriseWeb. >> Thank you. >> What you're doing >> with Red Hat. How you're helping the organization transform and Supercloud, we can't forget Supercloud. (Dave Vellante laughs) >> Fight Supercloud. Guys, thank you so much for your time. >> Thank you so much Lisa. >> Thank you. >> Thank you guys. >> Very nice. >> Lisa: We really appreciate it. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage coming to you live from MWC 23. We'll be back after a short break.
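Dave Duggal's "express the intent, let the runtime translate it" point lends itself to a toy illustration. The sketch below is not how EnterpriseWeb's runtime actually works; the sites, their metrics, and the scoring rule are made up purely to show the shape of intent-driven placement across clouds.

```python
# Toy model only: sites, metrics, and the selection rule are invented to illustrate
# "declare an intent, let the runtime choose placement"; not EnterpriseWeb's real logic.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # measured latency from the service edge
    watts_per_unit: float  # energy cost per unit of work
    cost_per_hour: float

SITES = [
    Site("aws-eks-us-east", latency_ms=38.0, watts_per_unit=1.0, cost_per_hour=2.1),
    Site("openshift-rosa-edge", latency_ms=9.0, watts_per_unit=1.4, cost_per_hour=3.0),
    Site("onprem-core-dc", latency_ms=22.0, watts_per_unit=0.8, cost_per_hour=1.5),
]

def place(intent: dict) -> Site:
    """Pick the site that best satisfies a high-level intent, e.g. {'optimize': 'latency'}."""
    key = {
        "latency": lambda s: s.latency_ms,
        "energy": lambda s: s.watts_per_unit,
        "cost": lambda s: s.cost_per_hour,
    }[intent["optimize"]]
    return min(SITES, key=key)

print(place({"optimize": "latency"}).name)  # -> openshift-rosa-edge
print(place({"optimize": "energy"}).name)   # -> onprem-core-dc
```

A real runtime would also have to honor SLAs, data sovereignty, and capacity, but the developer-facing contract stays the same: a one-line intent instead of per-cloud plumbing.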
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Dave Duggal | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Fortnet | ORGANIZATION | 0.99+ |
Keysight | ORGANIZATION | 0.99+ |
EnterpriseWeb | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
21st century | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
two projects | QUANTITY | 0.99+ |
Telcos' | ORGANIZATION | 0.99+ |
Latin America | LOCATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Dave Daggul | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
second project | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Fort Net | ORGANIZATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
telco | ORGANIZATION | 0.99+ |
more than 30,000 nodes | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
three | QUANTITY | 0.99+ |
KX | ORGANIZATION | 0.99+ |
Azhar Sayeed | PERSON | 0.98+ |
One | QUANTITY | 0.98+ |
Tech Mahindra | ORGANIZATION | 0.98+ |
two conversations | QUANTITY | 0.98+ |
yesterday | DATE | 0.98+ |
five things | QUANTITY | 0.98+ |
telcos | ORGANIZATION | 0.97+ |
four days | QUANTITY | 0.97+ |
Azhar | PERSON | 0.97+ |
Udayan Mukherjee, Intel & Manish Singh, Dell Technologies | MWC Barcelona 2023
(soft corporate jingle) >> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat jingle intro) >> Welcome back to Barcelona. We're here live at the Fira. (laughs) Just amazing day two of MWC23. It's packed today. It was packed yesterday. It's even more packed today. All the news is flowing. Check out siliconangle.com. John Furrier is in the studio in Palo Alto breaking all the news. And, we are here live. Really excited to have Udayan Mukherjee who's the Senior Fellow and Chief Architect of wireless product at Network and Edge for Intel. And, Manish Singh is back. He's the CTO of Telecom Systems Business at Dell. Welcome. >> Thank you. >> Thank you >> We're going to talk about greening the network. I wonder, Udayan, if you could just set up why that's so important. I mean, it's obvious that it's an important thing, great for the environment, but why is it extra important in Telco? >> Yeah, thank you. Actually, I'll tell you, this morning I had a discussion with an operator. The first thing he said was that the electricity consumption is more expensive nowadays than the total real estate that he's spending money on. So, it's like that is the number one thing, that if you can change that, bring that power consumption down. And, if you talk about sustainability, look what is happening in Europe, what's happening in all the electricity areas. That's the critical element that we need to address. Whether we are defining chips, platforms, storage systems, that's the number one mantra right now. You know, reduce the power. Electricity consumption, because it's a sustainable planet that we are living in. >> So, you got CapEx and OpEx. We're talking about the big piece of OpEx is now power consumption? >> Power Consumption >> That's the point. Okay, so in my experience, servers are the big culprit for power consumption, which is powered by core semiconductors and microprocessors. So, what's the strategy to reduce the power consumption? You're probably not going to reduce the bill overall. You maybe just can keep pace, but from a technical standpoint, how do you attack that? >> Yeah, there are multiple different ways of addressing it. Obviously the process technology, that micro (indistinct) itself is evolving to make these more low-power systems. But, even within the silicon, the server that we develop, if you look in a CPU, there are a lot of power states. So, if you have a 32-core platform, as an example, for every core you can vary the frequency and the C-states, power states. So, if you look into any traffic, whether it's a radio access network, a packet core. At any given time the load is not peak. So, your power consumption, actually what we are drawing from the wall, it also needs to vary with that. So, that's how, if you look into this, there's a huge savings. If you go to the Intel booth or Ericsson booth or anyone, you will see right now every possible, the packet core, radio access network, every network. They're talking about energy consumption, how they're lowering this. These states, as we call it power states, C-states, P-states, they've been built into Intel chips for a long time. The cloud providers are taking advantage of it. But Telcos, even two generations before, they used to actually switch it off in the BIOS. I say no, we need peak. Now, that thing is changing. Now, it's all like, how do I take advantage of the built-in technologies? >> I remember the enterprise virtualization, Manish, was a big play.
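Udayan's point about per-core C-states and P-states can be made concrete with a small sketch. On a typical Linux server the cpufreq sysfs interface exposes a governor per core; the code below switches lightly loaded cores to a power-saving governor. The 20% threshold, the governor choice, and the idea of driving this from an application-supplied load profile are illustrative assumptions, not Intel's or any vendor's actual power-management implementation, and writing these files requires root.

```python
# Minimal sketch, not production power management. Assumes the standard Linux cpufreq
# sysfs layout and root privileges; the 20% threshold and governor names are
# illustrative choices, not a vendor recommendation.
from pathlib import Path

GOVERNOR_PATH = "/sys/devices/system/cpu/cpu{idx}/cpufreq/scaling_governor"

def set_governor(core: int, governor: str) -> None:
    """Write the cpufreq governor for one core (e.g. 'powersave' or 'performance')."""
    Path(GOVERNOR_PATH.format(idx=core)).write_text(governor)

def apply_power_policy(per_core_load: list[float]) -> None:
    """Let lightly loaded cores clock down (and reach deeper C-states sooner)."""
    for core, load in enumerate(per_core_load):
        # Below ~20% utilization the core is mostly idle during this traffic window,
        # so hand it to the powersave governor; otherwise keep it at full performance.
        set_governor(core, "powersave" if load < 0.20 else "performance")

if __name__ == "__main__":
    # Example: an off-peak traffic profile on a 4-core slice of a larger system.
    apply_power_policy([0.05, 0.10, 0.62, 0.04])
```

This is the shift Udayan describes: instead of pinning everything to peak frequency in the BIOS, the workload (or an agent watching it) feeds the built-in power states with real traffic information.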
I remember PG&E used to give rebates to customers that would install virtualized software, VMware. >> And SSDs. >> Yeah. And SSDs, you know, yes. Because, the spinning disc was, but, nowhere near with a server consumption. So, how virtualized is the telco network? And then, what I'm saying is there other things, other knobs, you can of course turn. So, what's your perspective on this as a server player? >> Yeah, absolutely. Let me just back up a little bit and start at the big picture to share what Udayan said. Here, day two, every conversation I've had yesterday and today morning with every operator, every CTO, they're coming in and first topic they're talking about is energy. And, the reason is, A, it's the right thing to do, sustainability, but, it's also becoming a P&L issue. And, the reason it's becoming a P&L issue is because we are in this energy inflationary environment where the energy costs are constantly going up. So, it's becoming really important for the service providers to really drive more efficiency onto their networks, onto their infrastructure. Number one. Two, then to your question on what all knobs need to be turned on, and what are the knobs? So, Udayan talked about within the intel, silicon, the C-states, P-states and all these capabilities that are being brought up, absolutely important. But again, if we take a macro view of it. First of all, there are opportunities to do infrastructure audit. What's on, why is it on, does it need to be on right now? Number two, there are opportunities to do infrastructure upgrade. And, what I mean by that is as you go from previous generation servers to next generation servers, better cooling, better performance. And through all of that you start to gain power usage efficiency inside a data center. And, you take that out more into the networks you start to achieve same outcomes on the network site. Think about from a cooling perspective, air cooling but for that matter, even liquid cooling, especially inside the data centers. All opportunities around PUE, because PUE, power usage efficiency and improvement on PUE is an opportunity. But, I'll take it even further. Workloads that are coming onto it, core, RAN, these workloads based on the dynamic traffic. Look, if you look at the traffic inside a network, it's not constant, it's varied. As the traffic patterns change, can you reduce the amount of infrastructure you're using? I.e. reduce the amount of power that you're using and when the traffic loads are going up. So, the workloads themselves need to become more smarter about that. And last, but not the least. From an orchestration layer if you think about it, where you are placing these workloads, and depending on what's available, you can start to again, drive better energy outcomes. And, not to forget acceleration. Where you need acceleration, can you have the right hardware infrastructures delivering the right kind of accelerations to again, improve those energy efficiency outcomes. So, it's a complex problem. But, there are a lot of levers, lot of tools that are in place that the service providers, the technology builders like us, are building the infrastructure, and then the workload providers all come together to really solve this problem. >> Yeah, Udayan, Manish mentioned this idea of moving from one generation to a new generation and gaining benefits. Out there on the street, if you will. Most of the time it's an N plus 2 migration. 
It's not just moving from last generation to this next generation, but it's really a generation ago. So, those significant changes in the dynamics around power density and cooling are meaningful? You talk about where performance should be? We start talking about the edge. It's hard to have a full-blown raised data center floor edge everywhere. Do these advances fundamentally change the kinds of things that you can do at the base of a tower? >> Yeah, absolutely. Manish talked about that, the dynamic nature of the workload. So, we are using a lot of this AIML to actually predict. Like for example, your multiple force in a systems. So, why is the 32 core as a system, why is all running? So, your traffic profile in the night times. So, you are in the office areas, in the night has gone home and nowadays everybody's working from remote anyway. So, why is this thing a full blown, spending the TDP, the total power and extreme powers. You bring it down, different power states, C-states. We talked about it. Deeper C-states or P-states, you bring the frequency down. So, lot of those automation, even at the base of the tower. Lot of our deployment right now, we are doing a whole bunch of massive MIMO deployment. Virtual RAN in Verizon network. All actually cell-site deployment. Those eight centers are very close to the cell-site. And, they're doing aggressive power management. So, you don't have to go to a huge data centers, even there's a small rack of systems, four to five, 10 systems, you can do aggressive power management. And, you built it up that way. >> Okay. >> If I may just build on what Udayan said. I mean if you look at the radio access network, right? And, let's start at the bottom of the tower itself. The infrastructure that's going in there, especially with Open RAN, if you think about it, there are opportunities now to do a centralized RAN where you could do more BBU pooling. And, with that, not only on a given tower but across a given given coverage area, depending on what the traffics are, you can again get the infrastructure to become more efficient in terms of what traffic, what needs are, and really start to benefit. The pooling gains which is obviously going to give you benefit on the CapEx side, but from an energy standpoint going to give you benefits on the OpEx side of things. So that's important. The second thing I will say is we cannot forget, especially on the radio access side of things, that it's not just the bottom of the tower what's happening there. What's happening on the top of the tower especially with the radio, that's super important. And, that goes into how do you drive better PA efficiency, how do you drive better DPD in there? This is where again, applying AI machine learning there is a significant amount of opportunity there to improve the PA performance itself. But then, not only that, looking at traffic patterns. Can you do sleep modes, micro sleep modes to deep sleep modes. Turning down the cells itself, depending on the traffic patterns. So, these are all areas that are now becoming more and more important. And, clearly with our ecosystem of partners we are continuing to work on these. >> So we hear from the operators, it's an OpEx issue. It's hitting the P&L. They're in search of PUE of one. And, they've historically been wasteful, they go full throttle. And now, you're saying with intelligence you can optimize that consumption. So, where does the intelligence live? Is it in the rig. Where is it all throughout the network? Is it in the silicon? 
Maybe you could paint a picture as to where those smarts exist. >> I can start. It's across the stack. It starts, we talked about the C-states, P-states. If you want to take advantage of that, that intelligence is in the workload, which has to understand when can I really start to clock things down or turn off the cores. If you really look at it from a traffic pattern perspective you start to really look at a RIC level where you can save power. And, we are working with the ecosystem partners who are looking at applying machine learning on that to see what can we really start to turn on, turn off, throttle things down, depending on what the, so yes, it's across the stack. And lastly, again, I'll go back to, cannot forget orchestration, where you again have the ability to move some of these workloads and look at where your workload placements are happening depending on what the infrastructure is and what the traffic needs are at that point in time. So it's, again, there's no silver bullet. It has to be looked at across the stack. >> And, this is where actually, if I may, in the last two years a sea change has happened. People used to say, okay, there are C-states and P-states in the silicon, in every core. The OS, the operating system, has a governor built in. We rely on that. So, that used to be the way. Now that applications are getting smarter, if you look at a radio access network or the packet core on the control plane signaling application, they're more aware of what underlying silicon power states and sleep states are available. So, every time they find some of these areas where there's not enough traffic, they immediately go to a transition. So, the workload has become more intelligent. The RIC applications we talked about. Every possible RIC application right now, rApps and xApps. Most of them are on energy efficiency. How are they using it? So, I think a lot more has happened even in the last two years. >> Can I just say one more thing there right? >> Yeah. >> We cannot forget the infrastructure as well, right? I mean, that's the most important thing. That's where the energy is really getting drawn in. And, constant improvement on the infrastructure. And, I'll give you some data points, right? If you really look at the power of servers, right? From 2013 to 2023, like a decade. 85% energy intensity improvement, right? So, these gains are coming from performance with better cooling, better technology applications. So, that's super critical, that's important. And, also to just give you another data point. Apart from the infrastructure, what CaaS layers we are running and how much CPU and compute requirements are there, that's also important. So, looking at it from a CaaS perspective, are we optimizing the required infrastructure blocks for radio access versus core? And again, really taking that back to energy efficiency outcomes. So, some of the work we've been doing with Wind River and Red Hat and some of our ecosystem partners around that for radio access network versus core. Really again, optimizing for those different use cases and the outcomes of those start to come in from an energy utilization perspective >> So, 85% improvement in power consumption. Of course you're doing, I don't know, 2, 300% more work, right? So, let's say, and I'm just sort of spitballing numbers but, let's say that historically power on the P&L has been, I don't know, single digits, maybe 10%. Now, it's popping up much higher. >> Udayan: Huge >> Right? >> I mean, I don't know what the number is.
Is it over 20% in some cases or is it, do you have a sense of that? Or let's say it is. The objective I presume is you're probably not going to lower the power bill overall, but you're going to be able to lower the percent of cost on the OpEx as you grow, right? I mean, we're talking about 5G networks. So much more data >> Capacity increasing. >> Yeah, and so is it, am I right about that, the carriers, the best they can hope for is to sort of stay even on that percentage or maybe somewhat lower that percentage? Or, do you think they can actually cut the bill? What's the goal? What are they trying to do? >> The goal is to cut the bill. >> It is! >> And the way you get started to cut the bill is, as I said, first of all on the radio side. Start to see where the improvements are and look, there's not a whole lot there to be done. I mean, the PAs are as efficient as they can be, but as I said, there are things in DPD and all that still can be improved. But then, sleep modes and all, yes there are efficiencies in there. But, I'll give you one important, another interesting data point. We did work with ACG Research on our 16G platform. The PowerEdge servers that we have recently launched based on Intel's Sapphire Rapids. And, if you look at the study there. 30% TCO reduction, 10% in CapEx gains, 30% in OpEx gains from moving away from these legacy monolithic architectures to cloud native architectures. And, a large part of that OpEx gain really starts to come from energy, to the point of 800 metric tonnes of carbon reduction, and if you really translate that, it's around 160 homes' electric use per year, right? So yes, I mean the opportunity there is to reduce the bill. >> Wow, that's a big, big goal, guys. We got to run. But, thank you for informing the audience on the importance and how you get there. So, appreciate that. >> One thing that bears mentioning really quickly before we wrap, a lot of these things we're talking about are happening in remote locations. >> Oh, back to that point of distributed nature of telecom. >> Yes, we talked about a BBU being at the base of a tower that could be up on a mountain somewhere. >> No, you made the point. You can't just say, oh, hey we're going to go find ambient air or going to go... >> They don't necessarily... >> Go next to a waterfall. >> We don't necessarily have the greatest hydro tower. >> All right, we got to go. Thanks, you guys. Alright, keep it right there. Wall to wall coverage of day two of theCUBE's coverage of MWC 23. Stay right there, we'll be right back. (corporate outro jingle)
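Two of the numbers in this conversation are easy to pin down with a little arithmetic. PUE is the standard ratio of total facility energy to IT equipment energy, and the "85% energy intensity improvement over a decade" quoted by Manish implies a compounded yearly rate. The wattages below are made-up sample figures; only the formula and the 85%-per-decade conversion are doing any work.

```python
# PUE is a standard metric (total facility energy / IT equipment energy); the sample
# wattages are invented. The last lines just convert "85% better energy intensity over
# ten years" into an implied annual improvement rate.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is ideal, higher means more overhead (cooling, etc.)."""
    return total_facility_kw / it_equipment_kw

print(round(pue(total_facility_kw=130.0, it_equipment_kw=100.0), 2))  # 1.3

# 85% less energy per unit of work over 10 years ~= 17% less per year, compounded.
annual_improvement = 1 - (1 - 0.85) ** (1 / 10)
print(f"implied annual improvement: {annual_improvement:.0%}")  # ~17%
```

That tension is exactly what Dave probes above: per-unit efficiency is improving quickly while traffic keeps growing, so whether the absolute bill falls depends on which curve wins.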
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Manish Singh | PERSON | 0.99+ |
PG&E | ORGANIZATION | 0.99+ |
Wind River | ORGANIZATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Udayan Mukherjee | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
85% | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
10% | QUANTITY | 0.99+ |
2, 300% | QUANTITY | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
CapEx | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
Barcelona | LOCATION | 0.99+ |
32 code | QUANTITY | 0.99+ |
Udayan | PERSON | 0.99+ |
eight centers | QUANTITY | 0.99+ |
one generation | QUANTITY | 0.99+ |
Manish | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
OpEx | ORGANIZATION | 0.99+ |
two generation | QUANTITY | 0.99+ |
today morning | DATE | 0.99+ |
10 systems | QUANTITY | 0.99+ |
32 core | QUANTITY | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
today | DATE | 0.99+ |
800 metric tonnes | QUANTITY | 0.98+ |
2023 | DATE | 0.98+ |
ACG Research | ORGANIZATION | 0.98+ |
five | QUANTITY | 0.98+ |
Sapphire Rapids | COMMERCIAL_ITEM | 0.98+ |
over 20% | QUANTITY | 0.98+ |
first topic | QUANTITY | 0.98+ |
around 160 homes | QUANTITY | 0.97+ |
First | QUANTITY | 0.97+ |
xApps | TITLE | 0.97+ |
intel | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.96+ |
second thing | QUANTITY | 0.96+ |
Dell Jets | ORGANIZATION | 0.95+ |
Two | QUANTITY | 0.94+ |
last two years | DATE | 0.94+ |
first thing | QUANTITY | 0.93+ |
Dell Techhnologies | ORGANIZATION | 0.9+ |
P&L | ORGANIZATION | 0.9+ |
day two | QUANTITY | 0.89+ |
Ericson | ORGANIZATION | 0.89+ |
this morning | DATE | 0.88+ |
one more thing | QUANTITY | 0.88+ |
Edge | ORGANIZATION | 0.88+ |
MWC 23 | EVENT | 0.87+ |
MWC | EVENT | 0.86+ |
Telecom Systems Business | ORGANIZATION | 0.84+ |
Number two | QUANTITY | 0.8+ |
MWC23 | EVENT | 0.8+ |
first | QUANTITY | 0.78+ |
Network | ORGANIZATION | 0.78+ |
5G | QUANTITY | 0.76+ |
One thing | QUANTITY | 0.76+ |
OpEx | TITLE | 0.7+ |
single digits | QUANTITY | 0.69+ |
RAN | TITLE | 0.68+ |
theCUBE | ORGANIZATION | 0.63+ |
two | QUANTITY | 0.62+ |
16G | OTHER | 0.61+ |
Verizon | ORGANIZATION | 0.57+ |
Udayan | ORGANIZATION | 0.56+ |
OpEx | OTHER | 0.53+ |
Tibor Fabry Asztalos, Dell Technologies & Gautam Bhagra, Dell Technologies | MWC Barcelona 2023
>> Announcer: "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Good evening, everyone. Live from Barcelona, Spain, it's "theCUBE". We are at Mobile World, MWC, excuse me, '23. New name this year. I'm Lisa Martin with Dave Vellante. Dave, we have had some great conversations. This is only day one of four days of coverage from "theCUBE" but one of the things that we've been talking about is disaggregation. You've wrote about it in your breaking analysis. We've been talking about it. Today is a big thing that's happening. We're going to be talking about that next. >> Yeah, open ecosystems require integration. Integration requires certification. And so, you got to have labs. We're going to talk about that and what value that brings to the community. >> Right. Please welcome Tibor Fabry-Asztalos, senior vice president of telecom systems and product engineering at Dell. >> Hi. >> And back to "theCUBE" after a couple of hours, Gautam Bhagra, vice president of partnerships at Dell. Guys, great to have you here. >> I love to be here. Thank you. >> Great to be here. >> So, day one, I'm sure lots of conversations, lots of meetings, lots of jet lag that we're all trying to get over. Talk about, Gautam, we'll start with you. Talk about the disaggregation era. What it is intended to support? What is it intended to enable? >> Yeah, so I mean, I think to be honest with you, Lisa, we spoke about this earlier also, like the whole vision with the disaggregation is to make sure our telco providers can take the benefits of having the innovation that comes along with it, right? So currently, we all know they're tied into like lock systems, which kind of constricts them in going after this whole innovative space. So, our hope is by working with our operators and our partners, we can help make that disaggregation journey a lot easier and work on some of these challenges, and make it easier for the telcos to innovate and consolidate going forward. So, we're working very closely and we talked about the community this morning. We're working very closely with Tibor and his team from an engineering perspective to help build those solutions with our partners and we're excited about the announcements we made this morning. >> When you hear challenges from this ecosystem, can you stack rank 'em? What are you hearing? Kind of what's top of mind? And so, the top three, if you would. >> Some of the challenges are just to define moving from a closed system and open system, just to making sure that the acceptance of that to see what's the value proposition is for an open system and then for the carriers to see the path going from a closed system to an open system. Of course, at the end, people realize the value at the end and speed of innovation that you're going to get all the new technologies and new features, functionality you get in an open system. But then the challenge comes with it, how you actually integrate those and then validate them, and you are to deploy them. So in a sense, that's the opportunity and also some of the challenge along the way. And that's where, as Gautam said, that's where we are also looking at playing the key role with the OTEL lab, the Open Telecom Ecosystem Lab, where we take these pieces of the open ecosystem and have combined them, validate them, and provide the pipeline to the customer. Pre-integration and then full integration into the production network. 
>> Those challenges, I presume, vary whether you're talking to a greenfield network operator versus somebody who's got a 40, 50 year history, a hundred-year history in the business, right? I mean migration is a big issue for them, right? Whereas the greenfield, we heard from DISH earlier, they want to drive innovation so they might be willing to sacrifice some other areas. So, is that a fair summarization and what are you hearing? >> [Tibor and Gautam] Yeah. >> Absolutely it is. I mean, that's where you see that DISH being kind of a leader in the space, as they were deploying in greenfield, they defined what the open ecosystem should look like, defined all the components of it, how you integrate them, validate them, and they were able to, well, go through it and deploy it. To your point, for an open, closed systems, as how you actually start transforming the existing network into the open one, that's going to go to a different process, right? You need to figure out how these new open systems can interrupt and work together with existing networks. So, that's one likely some of those carriers will start in an isolated area and grow from there. Deploy an open system in a rural area, for example, and then build from there. >> So, what a bank would do is they say, "Okay, we're going to write in our own abstraction layer." >> Gautam: Yeah. >> Right? "Using microservices, we're going to connect to the cloud. And we're going to, you know, put maybe some lower risk applications in the cloud first and then we're going to create our own cloud." Is there a similar dynamic here? >> Yeah, I mean, so I think you're spot on, right? Like, I think one of the things that we are seeing with the telco operators that we've spoken to is they're very risk averse. >> Yep. >> Right, they have very strong SLA requirements. They cannot go down even for a second. So, what that basically means is the innovation aspect is constrained by the risks that they perceive on any changes that you want to make on the architecture. So, the question that comes up is how do we make it easier for them to not worry about the bare minimum requirements of making sure the network's running and working while thinking about the new innovative technologies and solutions you want to build on the start. So, back to your bank example, nine years ago, no one in a bank even was thinking about like applications that will run on the cloud. Like for them, it was like a side project. They'll try and test something, see if it works, and then they'll think about cloud in the future, right? But now, core applications on banks are actually being built on public cloud. I think we see the same happening with the telco operators as well. Right now, they're understanding the move from a closed ecosystem to an open ecosystem. They understand the value proposition. On the core side, it's already happening a lot. And I think they are slowly moving there and that's where I think Tibor and team have been doing a great job working with our customers to make the transition happen. >> But there are so many permutations. >> Right. >> And integration points. How is Dell addressing that across the ecosystem? >> So, to give you an example, we talked about OTEL, which is our brand new, kind of 13,000 square feet lab that we kind of inaugurated last year based in Round Rock, Texas. >> Dave: Open Telecom. >> Dave and Tibor: Ecosystem Lab. >> Correct, great. 
And so, as part of that, that's a physical lab but more importantly, that's kind of a community where partners, customers come together to actually, and collaborate and work on these solutions. And as part of this, we also develop what we call the SIP, or Solution Integration Platform, to enable exactly what you just said. Making sure that we have a platform that actually can take all these various components, validate them individually, combine them, and then provide a DevOps and GitOps model, how you actually combine them, provide the BOM or SBOM, and then push that to pre-production and deployments for our customers. So, that's part of the challenge as we talked earlier. And that's how Dell and we are looking at actually enabling this basically, the validation of this disaggregated wall. >> Oh. >> Sorry, I just wanted to- >> Go ahead. >> just going to add one more point, right? So, when we look at the partners that we are working with as well in the OTEL and there are three ways we are working with them. At the bare minimum, we want to make sure that solution will run on the Dell infrastructure and the hardware, right? So, we have the self-certification process. We had a lot of good uptake on it and we are seeing a lot more come in. In fact, I had a check-in with "theCUBE" this morning in our side and it's more than a hundred plus partners already interested in going through that. Awesome. Then we have other places where we work on with partners to build reference architectures together, right? So, we want some sort of validated solution that will work together that we can take to the market. And then we also have engineered solutions that we are building with partners like the infrastructure block offering that we have taken where it's all pre-packaged, pre-built by Dell, working very closely with our partners. So, the telcos don't have to worry about deployment, integration, and everything else that comes along. >> And I presume the security supply chain is part of that- >> Yes. >> bill of materials- >> Absolutely. >> you just described. >> Yeah. >> Exactly. >> And that would include all those levels, the engineered systems, the reference architectures as well? And how do you decide like candidates, we can't do it all, right? So, it's the big markets get the engineered system, is that right? How do you adjudicate there? >> Yeah, so I mean, I think there are a couple of angles to look at it, right? I think the first and foremost is where we see the biggest demand is coming from the customers in terms of the stack they already have and where they have the pain points. >> Dave: Okay. >> Right, so this is why we are working with Red Hat and Wind River, as an example, because they are in most of the deployments that we are aware of with the customers and where we see an opportunity for Dell to partner with these partners. I think we are seeing a lot of new players also coming up the stack. And as they come up the stack and we find opportunities to co-build and co-innovate, absolutely we'll be building joint solutions with them as well. >> Where are you on, from a partnership perspective, on the strategic vision? You mentioned a number of things that have already been accomplished, quite a few. But from your journey perspective on that strategy, where are you? >> Yeah, so it's a really good question. I think we really want to be the partner of choice for all technology and services company within the telecom space. We're looking to drive the transformation in the network area, right? 
So, that's the vision that we have in the telecom system business from a partnership side. We have created some really good strategic partnerships with key providers, with independent software vendors, the network equipment providers. We're having some really good, strategic conversations with them. You've heard some of the announcement come out today, the work we are doing with Nokia, with Samsung, the Red Hat announcement, the Wind River, and so on and so forth. And there's a lot more in the pipeline. But more importantly, we want to grow the impact of the ecosystem. So, that's why we are launching the partner community today as well to make that happen. >> How does the lab work? Who has access to it? Can I self-certify? If I can self-certify, how do you make sure that I'm following the rules, all of the stuff- >> Sure. >> that you would- >> Absolutely. >> expect. >> So yes, you can self-certify, that's Gautam just mentioned. We already had quite a few ISVs go through that self-certification. And then there's also, there's reference architecture that's being done and other engineered solutions that we talked about earlier. And the lab is set up in a way that when needed, test lines can be isolated. So, only certain set of partners have access to it. So, it's made up in a way that enables collaborations. At the same times, it kind of enables a certain set of customers and partners working together without having challenges of having a completely open system. >> Okay, but so, if I want to do something with you guys and let's say, I am a candidate for an engineered system, so how does it work? Somebody's got to buy the equipment, right? He's got to ship it, right? There's a lot of Dell equipment involved. >> Tibor: That's correct. >> There's other third-party CapEx software, et cetera. So, you fund that, the partners fund that, it's a hybrid funding model, how does that all get done? >> So today, for obviously, we work closely with those partners. The engineered solutions we've developed so far, we've been funding it largely and as you said, is Dell infrastructure plus the cast layers and the cloud players we work with. So, we actually put those in place. We funded them, of course, with participation from them. And that's being done through those labs. >> Okay, great. So, you guys are providing that benefit to the ecosystem. Writing checks, bringing engineering talent to the table. >> Gautam: Yeah. >> Okay. >> And at the same time, I mean, it's a partnership at the end of the day, right? So, depending on the kind of partnership we are. So, if you're an ISV, it's fairly simple. Come into our labs. You don't have to worry about the infrastructure. >> Sure. >> Run it all in our labs and you're good. If you're a hardware vendor or a NEP, network equipment provider, that's where it gets interesting where they need to send us stuff, we need to send them stuff. And usually, like Tibor mentioned, it's a joint collaboration. We all put in our chips on the table and we work together. >> So, when you're having conversations with prospective partners, obviously different types of partners, Gautam, that you just talked about, what's in it for them? What's the value proposition? What does this community- >> Gautam: Yeah. >> give them from a competitive advantage standpoint? >> Yeah, so I mean there are, so the way I think about it, right? There are three things that Dell is bringing to the table. 
The first one is our experience and expertise on doing this transformation within the enterprise space and the learnings we have from there that we're bringing to telco now, right? So, Dell's been working with enterprises for many, many years. We are one of the big providers there. We all know what transformation enterprise went through. >> Tibor: Telco transformation, IT transformation. >> Exactly. And that's the experience we have, which we're bringing to telco. The second one is our investment, both from a go-to market side as well as the way we are working with our sales and marketing, and so on and so forth, with the engineering side. And finally, I think, and this for me is the best one, is Dell is a very partner-centric organization. >> Lisa: Yes. >> Our strategy is built around partnerships. So, that's the other piece that we bring to the table. >> Where are the labs? Oh, go ahead. >> And what's one more note on that, and also, we are talking about the engineered solutions. There's also the supply chain then because that's a basically appliance and then that goes to Dell's supply chain, which is best in class. >> Dave: And where are the labs? How many are there? >> So Round Rock, Texas is the biggest one, the 13,000 square feet. We also have extension to it. We just announced opening one in Cork for the EME market to making sure that we can cover any regulatory challenges. But also, basically any test lines that we need to cover that have latency challenges. That's why we want to make sure that we have labs in other areas as well. >> And the go-to market, is it an overlay organization, a dedicated organization? >> Yeah, so it's a bit of both as you know. But yeah, in the telecom business unit, we have a dedicated sales organization as well as an alliance organization working very closely with product and engineering to take it to market. >> Given the strength and the breadth of the partner program in the community, based on this is only day one of MWC but is there anything that you've heard today that excites you where telecom is going and where Dell and its ecosystem is going and really burgeoning? >> Oh, I've had I don't know how many meetings since 6:00 AM this morning. So, it's been an amazing event and we're just having so many great conversations with partners, our customers. And I think a lot of today is all about figuring out what our strategy and our vision is, where is each side going and what the overlap is. I think the end result's going to be follow up conversations with a lot of these partners that we are working with or will be working with soon. And then thinking about, do we build engineered solutions together? Do we go validated route? Like we going to figure that out. But I mean, for me, this is like the perfect place to come and share your vision and strategy and understand what we are trying to solve for. >> To me, what's been interesting that all the interactions and discussions are about how to get to or render open ecosystem. That's great to see that the focus is on how to make it work versus still questioning it and I think that's pretty good. >> Well, you guys launched this business I think during the pandemic, right? >> Yes. >> Yeah, that's right. >> So I mean, you could do a lot over Zoom, but as we were talking about earlier, having the face-to-face interaction, there's no replacement for it. The 6:00 AM meetings versus the 30 minute zoom calls and your body language, I mean, you learn so much that you can take away from these events. >> Absolutely. 
Seeing someone in 3D is so different and it's good to build that relationship and rapport as well with the folks. >> I agree. >> It is. There's so much value in the hallway conversations that you can't have over Zoom. So, I guess last question for you as we head into to day two, what are some of the things that we can be on the lookout for from Dell and its ecosystem? >> Hmm. >> Interesting. (Tibor chuckling) >> I mean, all our announcements are out. I think what you can look at for us to really be leading in this segment, taking a leadership role, and continuously looking at how we can really enable the open ecosystem and how we can provide more value there, and how we can see how we can lead in this space. >> How you can lead in this space. >> Yeah, I mean for me, I mean, day two is like, I have a lot more meetings in day two than day one so I don't know if it's like people flying in today or what, but it's amazing to just meet the partners and customers. >> So, that theme of velocity for you is going to keep going. >> Oh, it's not stopping. (Lisa laughing) That's for sure. We are excited about it. >> Well, thank you for carving out some time to talk to with us on "theCUBE" about the partner program, the open ecosystem and the commitment to growing that and enabling partners to really differentiate their services with Dell. We appreciate it. >> We appreciate it as well. >> Thank you very much. >> Thank you for having us. >> Thanks. >> Our pleasure. For our guests and for Dave Vellante, I'm Lisa Martin. You're watching "theCUBE" live in Barcelona, Spain at MWC '23. Day one of our coverage. Be right back with our final guest of the day so stick around. (upbeat music continues)
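The Solution Integration Platform flow Tibor describes in this segment — validate components individually, combine them, emit a BOM/SBOM, then push to pre-production — is easiest to picture as a promotion gate in a GitOps-style pipeline. The sketch below is only a rough illustration of that idea, not Dell's actual SIP: the file name, the component names, and the hard-coded validated set are hypothetical stand-ins for what a real platform would pull from its test-result store.

```python
import json

# Hypothetical record of components that have already passed individual
# validation in the lab; in practice this would come from the platform's
# test-result store, not a hard-coded set.
VALIDATED = {
    ("example-cnf-core", "2.4.1"),
    ("example-caas-layer", "7.0.3"),
    ("example-ran-du", "1.9.0"),
}

def load_sbom(path):
    """Load a CycloneDX-style SBOM (JSON) and return its component list."""
    with open(path) as f:
        doc = json.load(f)
    return doc.get("components", [])

def promotion_gate(sbom_path):
    """Return (ok, report): ok is True only if every component listed in the
    combined solution's SBOM was individually validated beforehand."""
    report = []
    ok = True
    for comp in load_sbom(sbom_path):
        key = (comp.get("name"), comp.get("version"))
        passed = key in VALIDATED
        ok = ok and passed
        report.append((key, "validated" if passed else "NOT validated"))
    return ok, report

if __name__ == "__main__":
    ok, report = promotion_gate("solution-sbom.json")  # hypothetical file
    for key, status in report:
        print(f"{key[0]} {key[1]}: {status}")
    print("promote to pre-production" if ok else "hold: validation gaps")
```

In a real pipeline a gate like this would run as a pre-deploy check, so a combined solution only reaches a customer's pre-production environment once every entry in its bill of materials has cleared individual validation.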
Manish Singh, Dell Technologies & Doug Wolff, Dell Technologies | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Welcome to the Fira in Barcelona, everybody. This is theCUBE's coverage of MWC 23, day one of that coverage. We have four days of wall-to-wall action going on, the place is going crazy. I'm here with Dave Nicholson, Lisa Martin is also in the house. Today's ecosystem day, and we're really excited to have Manish Singh who's the CTO of the Telecom Systems Business unit at Dell Technologies. He's joined by Doug Wolf who's the head of strategy for the Telecom Systems Business unit at Dell. Gents, welcome. What a show. I mean really the first major MWC or used to be Mobile World Congress since you guys have launched your telecom business, you kind of did that sort of in the Covid transition, but really exciting, obviously a huge, huge venue to match the huge market. So Manish, how did you guys get into this? What did you see? What was the overall thinking to get Dell into this business? >> Manish: Yeah, well, I mean just to start with you know, if you look at the telecom ecosystem today, the service providers in particular, they are looking for network transformation, driving more disaggregation into their network so that they can get better utilization of the infrastructure, but then also get more agility, more cloud native characteristics onto their, for their networks in particular. And then further on, it's important for them to really start to accelerate the pace of innovation on the networks itself, to start more supply chain diversity, that's one of the challenges that they've been having. And so there've been all these market forces that have been really getting these service providers to really start to transform the way they have built the infrastructure in the past, which was legacy monolithic architectures to more cloud native disaggregated. And from a Dell perspective, you know, that really gives us the permission to play, to really, given all the expertise on the work we have done in the IT with all the IT transformations to leverage all that expertise and bring that to the service providers and really help them in accelerating their network transformation. So that's where the journey started. We've been obviously ever since then working on expanding the product portfolio on our compute platforms to bring Teleco great compute platforms with more capabilities than we can talk about that. But then working with partners and building the ecosystem to again create this disaggregated and open ecosystem that will be more cloud native and really meet the objective that the service providers are after. >> Dave Vellante: Great, thank you. So, Doug the strategy obviously is to attack this market, as Manish said, from an open standpoint, that's sort of new territory. It's like a little bit like the wild, wild west. So maybe you could double click on what Manish was saying from a, from a strategy standpoint, yes, the Telecos need to be more flexible, they need to be more open, but they also need this reliability piece. So talk about that from a strategy standpoint of what you guys saw. >> Doug: Yeah, absolutely. As Manish mentioned, you know, Dell getting into open systems isn't something new. You know, Dell has been kind of playing in that world for years and years, but the opportunity in Telecom that came was opening of the RAN, the core network, the edge, all of these with 5G really created a wide opening for us. 
So we started developing products and solutions, you know, built our first Telecom grade servers for open RAN over the last year, we'll talk about those at the show. But you know, as, as Manish mentioned, an open ecosystem is new to Telecom. I've been in the Telecom business along with Manish for, you know, 25 plus years and this is a new thing that they're embarking on. So started with virtualization about five, six years ago, and now moving to cloud native architectures on the core, suddenly there's this need to have multiple parties partner really well, share specifications, and put that together for an operator to consume. And I think that's just the start of really where all the challenges are and the opportunities that we see. >> Where are we in this transition cycle? When the average consumer hears 5G, feels like it's been around for a long time because it was hyped beforehand. >> Doug: Yeah. >> If you're talking about moving to an open infrastructure model from a proprietary closed model, when is the opportunity for Dell to become part of that? Is it, are there specific sites that have already transitioned to 5G, therefore they've either made the decision to be open or not? Or are there places where the 5G transition has taken place, and they might then make a transition to open brand with 5G? Where, where are we in that cycle? What does the opportunity look like? >> I'll kind of take it from the typology of the operator, and I'm sure Manish will build on this, but if I look back on the core, started to get virtualized you know, back around 2015-16 with some of the lead operators like AT&T et cetera. So Dell has been partnering with those operators for some years. So it really, it's happening on the core, but it's moving with 5G to more of a cloud-like architecture, number one. And number two, they're going beyond just virtualizing the network. You know, they previously had used OpenStack and most of them are migrating to more of a cloud native architecture that Manish mentioned. And that is a bit different in terms of there's more software vendors in that ecosystem because the software is disaggregated also. So Dell's been playing in the core for a number of years, but we brought out new solutions we've announced at the show for the core. And the parts that are really starting that transition of maybe where the core was back in 2015 is on the RAN and on the edge in particular. >> Because NFV kind of predated the ascendancy of cloud. >> Exactly, yeah. >> Right, so it really didn't have the impact that people had hoped. And there's some, when you look back, 'cause it's not same wine, new bottle as the open systems movement, there are a lot of similarities but you know, you mentioned cloud, and cloud native, you really didn't have, back in the nineties, true engineered systems. You didn't really have AI that, you know, to speak of at the sort of volume of the data that we have. So Manish, from a CTO's perspective, how are you attacking some of those differences in bringing that to market? >> Manish: Yeah, I mean, I think you touched on some very important points there. So first of all, the duck's point, a lot of this transformation started in the core, right? And as the technology evolution progress, the opportunities opened up. It has now come into the edge and the radio access network as well, in particular with open RAN. 
And so when we talk about the disaggregation of the infrastructure from the software itself and an open ecosystem, this now starts to create the opportunity to accelerate innovation. And I really want to pick up on the point that you'd said on AI, for example. AI and machine learning bring a whole new set of capabilities and opportunities for these service providers to drive better optimization, better performance, better sustainability and energy efficiency on their infrastructure, on and on and on. But to really tap into these technologies, they really need to open that up to third parties implementation solutions that are coming up. And again, the end objective remains to accelerate that innovation. Now that said, all these things need to be brought together, right? And delivered and deployed in the network without any degradation in the KPIs and actually improving the performance on different vectors, right? So this is what the current state of play is. And with this aggregation I'm definitely a believer that all these new technologies, including AI, machine learning, and there's a whole area, host area of problems that can be solved and attacked and are actually getting attacked by applying AI and machine learning onto these networks. >> Open obviously is good. Nobody's ever going to, you know, argue that open is a bad thing. It's like democracy is a good thing, right? At least amongst us. And so, but, the RAN, the open RAN, has to be as reliable and performant, right, as these, closed networks. Or maybe not, maybe it doesn't have to be identical. Just has to be close enough in order for that tipping point to occur. Is that a fair summarization? What are you guys hearing from carriers in terms of their willingness to sort of put their toe in the water and, and what could we expect in terms of the maturity model of, of open RAN and adoption? >> Right, so I mean I think on, on performance that, that's a tough one. I think the operators will demand performance and you've seen experiments, you've really seen more of the Greenfield operators kind of launch. >> Okay. >> Doug: Open RAN or vRAN type solutions. >> So they're going to disrupt. >> Doug: Yeah, they're going to disrupt. >> Yeah. >> Doug: And there's flexibility in an open RAN architecture also for 5G that they, that they're interested in and I think the Brownfield operators are too, but let's say maybe the Greenfield jump first in terms of doing that from a mass deployment perspective. But I still think that it's going to be critical to meet very similar SLAs and end user performance. And, you know, I think that's where, you know, maturity of that model is what's required. I think Brownfield operators are conservative in terms of, you know, going with something they know, but the opportunities and the benefits of that architecture and building new flexible, potentially cost advantaged over time solutions, that's what the, where the real interest is going forward. >> And new services that you can introduce much more quickly. You know, the interesting thing about Dell to me, you don't compete with the carriers, the public cloud vendors though, the carriers are concerned about them sort of doing an end run on them. So you provide a potential partnership for the carriers that's non-threatening, right? 'Cause you're, you're an arms dealer, you're selling hardware and software, right? But, but how do you see that? 
Because we heard in the keynote today, one of the Teleco, I think it was the chairman of Telefonica said, you know, cloud guys can't do this alone. You know, they need, you know, this massive, you know, build out. And so, what do you think about that in terms of your relationship with the carriers not being threatening? I mean versus say potentially the cloud guys, who are also your partners, I understand, it's a really interesting dynamic, isn't it? >> Manish: Yeah, I mean I think, you know, I mean, the way I look at it, the carriers actually need someone like Dell who really come in who can bring in the right capabilities, the right infrastructure, but also bring in the ecosystem together and deliver a performance solution that they can deploy and that they can trust, number one. Number two, to your point on cloud, I mean, from a Dell perspective, you know, we announced our Dell Telecom Multicloud Foundation and as part of that last year in September, we announced what we call is the Dell Telecom Infrastructure Blocks. The first one we announced with Wind River, and this is, think of it as the, you know, hardware and the cashier all pre-integrated with lot of automation around it, factory integrated, you know, delivered to customers in an integrated model with all the licenses, everything. And so it starts to solve the day zero, day one, day two integration deployment and then lifecycle management for them. So to broaden the discussion, our view is it's a multicloud world, the future is multicloud where you can have different clouds which can be optimized for different workloads. So for example, while our work with Wind River initially was very focused on virtualization of the radio access network, we just announced our infrastructure block with Red Hat, which is very much targeted and optimized for core network and edge, right? So, you know, there are different workflows which will require different capabilities also. And so, you know, again, we are bringing those things to these service providers to again, bring those cloud characteristics and cloud native architecture for their network. >> And It's going to be hybrid, to your point. >> David N.: And you, just hit on something, you said cloud characteristics. >> Yeah. >> If you look at this through the lens of kind of the general world of IT, sometimes when people hear the word cloud, they immediately leap to the idea that it's a hyperscale cloud provider. In this scenario we're talking about radio towers that have intelligence living on them and physically at the base. And so the cloud characteristics that you're delivering might be living physically in these remote locations all over the place, is that correct? >> Yeah, I mean that, that's true. That will definitely happen over time. But I think, I think we've seen the hyperscalers enter, you know, public cloud providers, enter at the edge and they're dabbling maybe with private, but I think the public RAN is another further challenge. I think that maybe a little bit down the road for them. So I think that is a different characteristic that you're talking about managing the macro RAN environment. >> Manish: If I may just add one more perspective of this cloud, and I mean, again, the hyperscale cloud, right? I mean that world's been great when you can centralize a lot of compute capability and you can then start to, you know, do workload aggregation and use the infrastructure more efficient. 
When it comes to Telecom, it is inherently it distributed architecture where you have access, you talked about radio access, your port, and it is inherently distributed because it has to provide the coverage and capacity. And so, you know, it does require different kind of capabilities when you're going out and about, and this is where I was talking about things like, you know, we just talked, we just have been working on our bare metal orchestration, right? This is what we are bringing is a capability where you can actually have distributed infrastructure, you can deploy, you can actually manage, do lifecycle management, in a distributed multicloud form. So it does require, you know, different set of capabilities that need to be enabled. >> Some, when talking about cloud, would argue that it's always been information technology, it always will be information technology, and especially as what we might refer to as public cloud or hyperscale cloud providers, are delivering things essentially on premises. It's like, well, is that cloud? Because it feels like some of those players are going to be delivering physical infrastructure outside of their own data centers in order to address this. It seems the nature, the nature of the beast is that some of these things need to be distributed. So it seems perfectly situated for Dell. That's why you guys are both at Dell now and not working for other Telecom places, right? >> Exactly. Exactly, yes. >> It's definitely an exciting space. It's transformed, the networks are under transformation and I do think that Dell's very well positioned to, to really help the customers, the service providers in accelerating their transformation journey with an open ecosystem. >> Dave V.: You've got the brand, and the breadth, and the resources to actually attract an ecosystem. But I wonder if you could sort of take us through your strategy of ecosystem, the challenges that you've seen in developing that ecosystem and what the vision is that ultimately, what's the outcome going to be of that open ecosystem? >> Yeah, I can start. So maybe just to give you the big picture, right? I mean the big picture, is disaggregation with performance, right, TCO models to the service providers, right? And it starts at the infrastructure layer, builds on bringing these cloud capabilities, the cast layer, right? Bringing the right accelerators. All of this requires to pull the ecosystem. So give you an example on the infrastructure in a Teleco grade servers like XR8000 with Sapphire, the new intel processors that we've just announced, and an extended array of servers. These are Teleco grade, short depth, et cetera. You know, the Teleco great characteristic. Working with the partners like Marvel for bringing in the accelerators in there, that's important to again, drive the performance and optimize for the TCO. Working then with partners like Wind River, Red Hat, et cetera, to bring in the cast capabilities so you can start to see how this ecosystem starts to build up. And then very recently we announced our private 5G solution with AirSpan and Expeto on the core site. So bringing those workloads together. Similarly, we have an open RAN solution we announce with Fujitsu. So it's, it's open, it's disaggregated, but bringing all these together. 
And one of the last things I would say is, you know, to make all this happen and make all of these, we've also been putting together our OTEL, our open Telecom ecosystem lab, which is very much geared, really gives this open ecosystem a playground where they can come in and do all that heavy lifting, which is anyways required, to do the integration, optimization, and board. So put all these capabilities in place, but the end goal, the end vision again, is that cloud native disaggregated infrastructure that starts to innovate at the speed of software and scales at the speed of cloud. >> And this is different than the nineties. You didn't have something like OTEL back then, you know, you didn't have the developer ecosystem that you have today because on top of everything that you just said, Manish, are new workloads and new applications that are going to be developed. Doug, anything you'd add to what Manish said? >> Doug: Yeah, I mean, as Manish said, I think adding to the infrastructure layers, which are, you know, critical for us to, to help integrate, right? Because we kind of took a vertical Teleco stack and we've disaggregated it, and it's gotten a little bit more complex. So our Solutions Dell Technology infrastructure block, and our lab infrastructure with OTEL, helps put those pieces together. But without the software players in this, you know, that's what we really do, I think in OTEL. And that's just starting to grow. So integrating with those software providers with that integration is something that the operators need. So we fill a gap there in terms of either providing engineered solutions so they can readily build on or actually bringing in that software provider. And I think that's what you're going to see more from us going forward is just extending that ecosystem even further. More software players effectively. >> In thinking about O-RAN, are they, is it possible to have the low latency, the high performance, the reliability capabilities that carriers are used to and the flexibility? Or can you sort of prioritize one over the other from a go to market and rollout standpoint and optimize one, maybe get a foothold in the market? How do you see that balance? >> Manish: Oh the answer is absolutely yes you can have both We are on that journey, we are on that journey. This is where all these things I was talking about in terms of the right kind of accelerators, right kind of capabilities on the infrastructure, obviously retargeting the software, there are certain changes, et cetera that need to be done on the software itself to make it more cloud native. And then building all the surrounding capabilities around the CICD pipeline and all where it's not just day zero or day one that you're doing the cloud-like lifecycle management of this infrastructure. But the answer to your point, yes, absolutely. It's possible, the technology is there, and the ecosystem is coming together, and that's the direction. Now, are there challenges? Absolutely there are challenges, but directionally that's the direction the industry is moving to. >> Dave V.: I guess my question, Manish, is do they have to go in lockstep? Because I would argue that the public cloud when it first came out wasn't nearly as functional as what I could get from my own data center in terms of recovery, you know, backup and recovery is a perfect example and it took, you know, a decade plus to get there. But it was the flexibility, and the openness, and the developer affinity, the programmability, that attracted people. 
Do you see O-RAN following a similar path? Or does it, my question is does it have to have that carrier class reliability today? >> David N.: Everything on day one, does it have to have everything on day one? >> Yeah, I mean, I would say, you know, like again, the Greenfield operators I think we're, we're willing do a little bit more experimentation. I think the operators, Brownfield operators that have existing, you know, deployments, they're going to want to be closer. But I think there's room for innovation here. And clearly, you know, Manish came from, from Meta and we're, we've been very involved with TIP, we're very involved with the O-RAN alliance, and as Manish mentioned, with all those accelerators that we're working with on our infrastructure, that is a space that we're trying to help move the ball forward. So I think you're seeing deployments from mainstream operators, but it's maybe not in, you know, downtown New York deployment, they're more rural deployments. I think that's getting at, you know, kind of your question is there's maybe a little bit more flexibility there, they get to experiment with the technology and the flexibility and then I think it will start to evolve >> Dave V.: And that's where the disruption's going to come from, I think. >> David N.: Well, where was the first place you could get reliable 4K streaming of video content? It wasn't ABC, CBS, NBC. It was YouTube. >> Right. >> So is it possible that when you say Greenfield, are a lot of those going to be what we refer to as private 5G networks where someone may set up a private 5G network that has more functions and capabilities than the public network? >> That's exactly where I was going is that, you know, that that's why you're seeing us getting very active in 5G solutions that Manish mentioned with, you know, Expeto and AirSpan. There's more of those that we haven't publicly announced. So I think you'll be seeing more announcements from us, but that is really, you know, a new opportunity. And there's spectrum there also, right? I mean, there's public and private spectrum. We plan to work directly with the operators and do it in their spectrum when needed. But we also have solutions that will do it, you know, on non-public spectrum. >> So let's close out, oh go ahead. You you have something to add there? >> I'm just going to add one more point to Doug's point, right? Is if you look on the private 5G and the end customer, it's the enterprise, right? And they're, they're not a service provider. They're not a carrier. They're more used to deploying, you know, enterprise infrastructure, maintaining, managing that. So, you know, private 5G, especially with this open ecosystem and with all the open run capabilities, it naturally tends to, you know, blend itself very well to meet those requirements that the enterprise would have. >> And people should not think of private 5G as a sort of a replacement for wifi, right? It's to to deal with those, you know, intense situations that can afford the additional cost, but absolutely require the reliability and the performance and, you know, never go down type of scenario. Is that right? >> Doug: And low latencies usually, the primary characteristics, you know, for things like Industry 4.0 manufacturing requirements, those are tough SLAs. They're just, they're different than the operator SLAs for coverage and, you know, cell performance. They're now, you know, Five9 type characteristics, but on a manufacturing floor. 
>> That's why we don't use wifi on theCUBE to broadcast, we need a hard line. >> Yeah, but why wouldn't it replace wifi over time? I mean, you know, I still have a home phone number that's hardwired to align, but it goes to a voicemail. We don't even have handset anymore for it, yeah. >> I think, well, unless the cost can come down, but I think that wifi is flexible, it's cheap. It's, it's kind of perfect for that. >> Manish: And it's good technology. >> Dave V.: And it works great. >> David N.: For now, for now. >> Dave V.: But you wouldn't want it in those situations, and you're arguing that maybe. >> I'm saying eventually, what, put a sim in a device, I don't know, you know, but why not? >> Yeah, I mean, you know, and Dell offers, you know, from our laptop, you know, our client side, we do offer wifi, we do offer 4G and 5G solutions. And I think those, you know, it's a volume and scale issue, I think for the cost structure you're talking about. >> Manish: Come to our booth and see the connected laptop. >> Dave V.: Well let's, let's close on that. Why don't you guys talk a little bit about what you're going on at the show, I did go by the booth, you got a whole big lineup of servers. You got some, you know, cool devices going on. So give us the rundown and you know, let's end with the takeaways here. >> The simple rundown, a broad range of new powered servers, broad range addressing core, edge, RAN, optimized for those with all the different kind of acceleration capabilities. You can see that, you can see infrastructure blocks. These are with Wind River, with Red Hat. You can see OTEL, the open telecom ecosystem lab where all that playground, the integration, the real work, the real sausage makings happening. And then you will see some interesting solutions in terms of co-creation that we are doing, right? So you, you will see all of that and not to forget the connected laptops. >> Dave V.: Yeah, yeah, cool. >> Doug: Yeah and, we mentioned it before, but just to add on, I think, you know, for private 5G, you know, we've announced a few offers here at the show with partners. So with Expeto and AirSpan in particular, and I think, you know, I just want to emphasize the partnerships that we're doing. You know, we're doing some, you know, fundamental integration on infrastructure, bare metal and different options for the operators to get engineered systems. But building on that ecosystem is really, the move to cloud native is where Dell is trying to get in front of. And we're offering solutions and a much larger ecosystem to go after it. >> Dave V.: Great. Manish and Doug, thanks for coming on the program. It was great to have you, awesome discussion. >> Thank you for having us. >> Thanks for having us. >> All right, Dave Vellante for Dave Nicholson and Lisa Martin. We're seeing the disaggregation of the Teleco network into open ecosystems with integration from companies like Dell and others. Keep it right there for theCUBE's coverage of MWC 23. We'll be right back. (upbeat tech music)
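Manish's point about bare-metal orchestration across an inherently distributed telecom footprint comes down to declarative, desired-state lifecycle management at day zero, day one, and day two. A minimal sketch of that reconciliation idea follows; the lifecycle states, site names, and actions are illustrative assumptions, not Dell's orchestrator or its APIs.

```python
from dataclasses import dataclass, field

# Illustrative lifecycle states for a distributed cell site / far-edge node.
STATES = ["discovered", "provisioned", "caas_installed", "workload_deployed"]

@dataclass
class Site:
    name: str
    actual: str = "discovered"
    desired: str = "workload_deployed"
    history: list = field(default_factory=list)

def next_action(site):
    """Pick the next day-0/1/2 step needed to move actual toward desired."""
    if site.actual == site.desired:
        return None
    return {
        "discovered": "run firmware/BIOS baseline and enroll the bare metal",
        "provisioned": "install the CaaS layer on the node",
        "caas_installed": "deploy and verify the network workload",
    }[site.actual]

def reconcile(fleet):
    """One pass of a reconciliation loop over a distributed fleet."""
    for site in fleet:
        action = next_action(site)
        if action is None:
            continue
        site.history.append(action)  # in reality: call the site's management APIs
        site.actual = STATES[STATES.index(site.actual) + 1]

if __name__ == "__main__":
    fleet = [Site("cell-site-001"), Site("regional-edge-17", actual="provisioned")]
    for _ in STATES:  # a few passes converge the whole fleet
        reconcile(fleet)
    for s in fleet:
        print(s.name, "->", s.actual, "|", len(s.history), "steps applied")
```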
Breaking Analysis: MWC 2023 goes beyond consumer & deep into enterprise tech
>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While never really meant to be a consumer tech event, the rapid ascendancy of smartphones sucked much of the air out of Mobile World Congress over the years, now MWC. And while the device manufacturers continue to have a major presence at the show, the maturity of intelligent devices, longer life cycles, and the disaggregation of the network stack, have put enterprise technologies front and center in the telco business. Semiconductor manufacturers, network equipment players, infrastructure companies, cloud vendors, software providers, and a spate of startups are eyeing the trillion dollar plus communications industry as one of the next big things to watch this decade. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we bring you part two of our ongoing coverage of MWC '23, with some new data on enterprise players specifically in large telco environments, a brief glimpse at some of the pre-announcement news and corresponding themes ahead of MWC, and some of the key announcement areas we'll be watching at the show on theCUBE. Now, last week we shared some ETR data that showed how traditional enterprise tech players were performing, specifically within the telecoms vertical. Here's a new look at that data from ETR, which isolates the same companies, but cuts the data for what ETR calls large telco. The N in this cut is 196, down from 288 last week when we included all company sizes in the dataset. Now remember the two dimensions here, on the y-axis is net score, or spending momentum, and on the x-axis is pervasiveness in the data set. The table insert in the upper left informs how the dots and companies are plotted, and that red dotted line, the horizontal line at 40%, that indicates a highly elevated net score. Now while the data are not dramatically different in terms of relative positioning, there are a couple of changes at the margin. So just going down the list and focusing on net score. Azure is comparable, but slightly lower in this sector in the large telco than it was overall. Google Cloud comes in at number two, and basically swapped places with AWS, which drops slightly in the large telco relative to overall telco. Snowflake is also slightly down by one percentage point, but maintains its position. Remember Snowflake, overall, its net score is much, much higher when measuring across all verticals. Snowflake comes down in telco, and relative to overall, a little bit down in large telco, but it's making some moves to attack this market that we'll talk about in a moment. Next are Red Hat OpenStack and Databricks. About the same in large tech telco as they were an overall telco. Then there's Dell next that has a big presence at MWC and is getting serious about driving 16G adoption, and new servers, and edge servers, and other partnerships. Cisco and Red Hat OpenShift basically swapped spots when moving from all telco to large telco, as Cisco drops and Red Hat bumps up a bit. And VMware dropped about four percentage points in large telco. Accenture moved up dramatically, about nine percentage points in big telco, large telco relative to all telco. HPE dropped a couple of percentage points. Oracle stayed about the same. And IBM surprisingly dropped by about five points. 
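The mechanics behind the chart being walked through here are simple: net score is commonly described as, roughly, the share of accounts spending more on a platform minus the share spending less, and pervasiveness is how often the vendor appears in the survey cut (196 respondents in this large telco cut). The snippet below illustrates that arithmetic with made-up tallies — it is not ETR's data or its exact methodology — and flags anything at or above the 40% "highly elevated" line.

```python
# Illustrative tallies per vendor in a hypothetical "large telco" survey cut:
# counts of accounts adding, increasing, flat, decreasing, or replacing the
# platform. These numbers are made up for the example.
responses = {
    "Vendor A": {"adding": 10, "increase": 25, "flat": 15, "decrease": 4, "replacing": 1, "n": 55},
    "Vendor B": {"adding": 5,  "increase": 18, "flat": 30, "decrease": 9, "replacing": 3, "n": 65},
}

def net_score(r):
    """Spending momentum: share of accounts expanding minus share contracting.
    (A common reading of ETR-style net score; treated here as an assumption.)"""
    expanding = r["adding"] + r["increase"]
    contracting = r["decrease"] + r["replacing"]
    return 100.0 * (expanding - contracting) / r["n"]

def pervasiveness(r, total_respondents):
    """How often the vendor shows up in this cut of the survey."""
    return 100.0 * r["n"] / total_respondents

TOTAL_N = 196  # size of the large telco cut discussed above
for vendor, r in responses.items():
    ns = net_score(r)
    flag = "highly elevated" if ns >= 40 else ""
    print(f"{vendor}: net score {ns:.1f}%  "
          f"pervasiveness {pervasiveness(r, TOTAL_N):.1f}%  {flag}")
```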
So look, I understand not a ton of change in terms of spending momentum in the large sector versus telco overall, but some deltas. The bottom line for enterprise players is one, they're just getting started in this new disruption journey that they're on as the stack disaggregates. Two, all these players have experience in delivering horizontal solutions, but now working with partners and identifying big problems to be solved, and three, many of these companies are generally not the fastest moving firms relative to smaller disruptive disruptors. Now, cloud has been an exception in fairness. But the good news for the legacy infrastructure and IT companies is that the telco transformation and the 5G buildout is going to take years. So it's moving at a pace that is very favorable to many of these companies. Okay, so looking at just some of the pre-announcement highlights that have hit the wire this week, I want to give you a glimpse of the diversity of innovation that is occurring in the telecommunication space. You got semiconductor manufacturers, device makers, network equipment players, carriers, cloud vendors, enterprise tech companies, software companies, startups. Now we've included, you'll see in this list, we've included OpeRAN, that logo, because there's so much buzz around the topic and we're going to come back to that. But suffice it to say, there's no way we can cover all the announcements from the 2000 plus exhibitors at the show. So we're going to cherry pick here and make a few call outs. Hewlett Packard Enterprise announced an acquisition of an Italian private cellular network company called AthoNet. Zeus Kerravala wrote about it on SiliconANGLE if you want more details. Now interestingly, HPE has a partnership with Solana, which also does private 5G. But according to Zeus, Solona is more of an out-of-the-box solution, whereas AthoNet is designed for the core and requires more integration. And as you'll see in a moment, there's going to be a lot of talk at the show about private network. There's going to be a lot of news there from other competitors, and we're going to be watching that closely. And while many are concerned about the P5G, private 5G, encroaching on wifi, Kerravala doesn't see it that way. Rather, he feels that these private networks are really designed for more industrial, and you know mission critical environments, like factories, and warehouses that are run by robots, et cetera. 'Cause these can justify the increased expense of private networks. Whereas wifi remains a very low cost and flexible option for, you know, whatever offices and homes. Now, over to Dell. Dell announced its intent to go hard after opening up the telco network with the announcement that in the second half of this year it's going to begin shipping its infrastructure blocks for Red Hat. Remember it's like kind of the converged infrastructure for telco with a more open ecosystem and sort of more flexible, you know, more mature engineered system. Dell has also announced a range of PowerEdge servers for a variety of use cases. A big wide line bringing forth its 16G portfolio and aiming squarely at the telco space. Dell also announced, here we go, a private wireless offering with airspan, and Expedo, and a solution with AthoNet, the company HPE announced it was purchasing. So I guess Dell and HPE are now partnering up in the private wireless space, and yes, hell is freezing over folks. We'll see where that relationship goes in the mid- to long-term. 
Dell also announced new lab and certification capabilities, which we said last week was going to be critical for the further adoption of open ecosystem technology. So props to Dell for, you know, putting real emphasis and investment in that. AWS also made a number of announcements in this space including private wireless solutions and associated managed services. AWS named Deutsche Telekom, Orange, T-Mobile, Telefonica, and some others as partners. And AWS announced the stepped up partnership, specifically with T-Mobile, to bring AWS services to T-Mobile's network portfolio. Snowflake, back to Snowflake, announced its telecom data cloud. Remember we showed the data earlier, it's Snowflake not as strong in the telco sector, but they're continuing to move toward this go-to market alignment within key industries, realigning their go-to market by vertical. It also announced that AT&T, and a number of other partners, are collaborating to break down data silos specifically in telco. Look, essentially, this is Snowflake taking its core value prop to the telco vertical and forming key partnerships that resonate in the space. So think simplification, breaking down silos, data sharing, eventually data monetization. Samsung previewed its future capability to allow smartphones to access satellite services, something Apple has previously done. AMD, Intel, Marvell, Qualcomm, are all in the act, all the semiconductor players. Qualcomm for example, announced along with Telefonica, and Erickson, a 5G millimeter network that will be showcased in Spain at the event this coming week using Qualcomm Snapdragon chipset platform, based on none other than Arm technology. Of course, Arm we said is going to dominate the edge, and is is clearly doing so. It's got the volume advantage over, you know, traditional Intel, you know, X86 architectures. And it's no surprise that Microsoft is touting its open AI relationship. You're going to hear a lot of AI talk at this conference as is AI is now, you know, is the now topic. All right, we could go on and on and on. There's just so much going on at Mobile World Congress or MWC, that we just wanted to give you a glimpse of some of the highlights that we've been watching. Which brings us to the key topics and issues that we'll be exploring at MWC next week. We touched on some of this last week. A big topic of conversation will of course be, you know, 5G. Is it ever going to become real? Is it, is anybody ever going to make money at 5G? There's so much excitement around and anticipation around 5G. It has not lived up to the hype, but that's because the rollout, as we've previous reported, is going to take years. And part of that rollout is going to rely on the disaggregation of the hardened telco stack, as we reported last week and in previous Breaking Analysis episodes. OpenRAN is a big component of that evolution. You know, as our RAN intelligent controllers, RICs, which essentially the brain of OpenRAN, if you will. Now as we build out 5G networks at massive scale and accommodate unprecedented volumes of data and apply compute-hungry AI to all this data, the issue of energy efficiency is going to be front and center. It has to be. Not only is it a, you know, hot political issue, the reality is that improving power efficiency is compulsory or the whole vision of telco's future is going to come crashing down. So chip manufacturers, equipment makers, cloud providers, everybody is going to be doubling down and clicking on this topic. Let's talk about AI. 
AI as we said, it is the hot topic right now, but it is happening not only in consumer, with things like ChatGPT. And think about the theme of this Breaking Analysis in the enterprise, AI in the enterprise cannot be ChatGPT. It cannot be error prone the way ChatGPT is. It has to be clean, reliable, governed, accurate. It's got to be ethical. It's got to be trusted. Okay, we're going to have Zeus Kerravala on the show next week and definitely want to get his take on private networks and how they're going to impact wifi. You know, will private networks cannibalize wifi? If not, why not? He wrote about this again on SiliconANGLE if you want more details, and we're going to unpack that on theCUBE this week. And finally, as always we'll be following the data flows to understand where and how telcos, cloud players, startups, software companies, disruptors, legacy companies, end customers, how are they going to make money from new data opportunities? 'Cause we often say in theCUBE, don't ever bet against data. All right, that's a wrap for today. Remember theCUBE is going to be on location at MWC 2023 next week. We got a great set. We're in the walkway in between halls four and five, right in Congress Square, stand CS-60. Look for us, we got a full schedule. If you got a great story or you have news, stop by. We're going to try to get you on the program. I'll be there with Lisa Martin, co-hosting, David Nicholson as well, and the entire CUBE crew, so don't forget to come by and see us. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman, as well, in our Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com. He does some great editing. Thank you. All right, remember all these episodes they are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. I publish each week on Wikibon.com and SiliconANGLE.com. All the video content is available on demand at theCUBE.net, or you can email me directly if you want to get in touch David.Vellante@SiliconANGLE.com or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Mobile World Congress '23, MWC '23, or next time on Breaking Analysis. (bright music)
Breaking Analysis: MWC 2023 highlights telco transformation & the future of business
>> From the Cube Studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> The world's leading telcos are trying to shed the stigma of being monopolies lacking innovation. Telcos have been great at operational efficiency and connectivity and living off of transmission, and the costs and expenses or revenue associated with that transmission. But in a world beyond telephone poles and basic wireless and mobile services, how will telcos modernize and become more agile and monetize new opportunities brought about by 5G and private wireless and a spate of new innovations and infrastructure, cloud data and apps? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis and ahead of Mobile World Congress or now, MWC23, we explore the evolution of the telco business and how the industry is in many ways, mimicking transformations that took place decades ago in enterprise IT. We'll model some of the traditional enterprise vendors using ETR data and investigate how they're faring in the telecommunications sector, and we'll pose some of the key issues facing the industry this decade. First, let's take a look at what the GSMA has in store for MWC23. GSMA is the host of what used to be called Mobile World Congress. They've set the theme for this year's event as "Velocity" and they've rebranded MWC to reflect the fact that mobile technology is only one part of the story. MWC has become one of the world's premier events highlighting innovations not only in Telco, mobile and 5G, but the collision between cloud, infrastructure, apps, private networks, smart industries, machine intelligence, and AI, and more. MWC comprises an enormous ecosystem of service providers, technology companies, and firms from virtually every industry including sports and entertainment. And as well, GSMA, along with its venue partner at the Fira Barcelona, have placed a major emphasis on sustainability and public and private partnerships. Virtually every industry will be represented at the event because every industry is impacted by the trends and opportunities in this space. GSMA has said it expects 80,000 attendees at MWC this year, not quite back to 2019 levels, but trending in that direction. Of course, attendance from Chinese participants has historically been very high at the show, and obviously the continued travel issues from that region are affecting the overall attendance, but still very strong. And despite these concerns, Huawei, the giant Chinese technology company. has the largest physical presence of any exhibitor at the show. And finally, GSMA estimates that more than $300 million in economic benefit will result from the event which takes place at the end of February and early March. And The Cube will be back at MWC this year with a major presence thanks to our anchor sponsor, Dell Technologies and other supporters of our content program, including Enterprise Web, ArcaOS, VMware, Snowflake, Cisco, AWS, and others. And one of the areas we're interested in exploring is the evolution of the telco stack. It's a topic that's often talked about and one that we've observed taking place in the 1990s when the vertically integrated IBM mainframe monopoly gave way to a disintegrated and horizontal industry structure. And in many ways, the same thing is happening today in telecommunications, which is shown on the left-hand side of this diagram. 
Historically, telcos have relied on a hardened, integrated, and incredibly reliable, and secure set of hardware and software services that have been fully vetted and tested, and certified, and relied upon for decades. And at the top of that stack on the left are the crown jewels of the telco stack, the operational support systems and the business support systems. For the OSS, we're talking about things like network management, network operations, service delivery, quality of service, fulfillment, assurance, and things like that. For the BSS systems, these refer to customer-facing elements of the stack, like revenue, order management, what products they sell, billing, and customer service. And what we're seeing is telcos have been really good at operational efficiency and making money off of transport and connectivity, but they've lacked the innovation in services and applications. They own the pipes and that works well, but others, be they the over-the-top content companies, or private network providers and, increasingly, cloud providers, have been able to bypass the telcos, reach around them, if you will, and drive innovation. And so, the right-most diagram speaks to the need to disaggregate pieces of the stack. And while the similarities to the 1990s in enterprise IT are greater than the differences, there are things that are different. For example, the granularity of hardware infrastructure will likely not be as high as back in the 90s, where competition occurred at every layer of the value chain with very little infrastructure integration. That of course changed in the 2010s with converged infrastructure and hyper-converged and also software defined. So, that's one difference. And the advent of cloud, containers, microservices, and AI, none of that was really a major factor in the disintegration of legacy IT. And that probably means that disruptors can move even faster than did the likes of Intel and Microsoft, Oracle, Cisco, and the Seagates of the 1990s. As well, while many of the products and services will come from traditional enterprise IT names like Dell, HPE, Cisco, Red Hat, VMware, AWS, Microsoft, Google, et cetera, many of the names are going to be different and come from traditional network equipment providers. These are names like Ericsson and Huawei, and Nokia, and other names, like Wind River, and Rakuten, and Dish Networks. And there are enormous opportunities in data to help telecom companies and their competitors go beyond telemetry data into more advanced analytics and data monetization. There's also going to be an entirely new set of apps based on the workloads and use cases ranging from hospitals, sports arenas, race tracks, shipping ports, you name it. Virtually every vertical will participate in this transformation as the industry evolves its focus toward innovation, agility, and open ecosystems. Now remember, this is not a binary state. There are going to be greenfield companies disrupting the apple cart, but the incumbent telcos are going to have to continue to ensure newer systems work with their legacy infrastructure, in their existing OSS and BSS systems. And as we know, this is not going to be an overnight task. Integration is a difficult thing, as are transformations and migrations. So that's what makes this all so interesting, because others can come in with greenfield and potentially disrupt. There'll be interesting partnerships, ecosystems will form, and coalitions will also form. 
Now, we mentioned that several traditional enterprise companies are or will be playing in this space. Now, ETR doesn't have a ton of data on specific telecom equipment and software providers, but it does have some interesting data that we cut for this breaking analysis. What we're showing here in this graphic is some of the names that we've followed over the years and how they're faring. Specifically, we did the cut within the telco sector. So the Y-axis here shows net score or spending velocity. And the horizontal axis, that shows the presence or pervasiveness in the data set. And that table insert in the upper left, that informs as to how the dots are plotted. You know, the two columns there, net score and the ends. And that red-dotted line, that horizontal line at 40%, that is an indicator of a highly elevated level. Anything above that, we consider quite outstanding. And what we'll do now is we'll comment on some of the cohorts and share with you how they're doing in telecommunications, and that sector, that vertical relative to their position overall in the data set. Let's start with the public cloud players. They're prominent in every industry. Telcos, telecommunications is no exception and it's quite an interesting cohort here. On the one hand, they can help telecommunication firms modernize and become more agile by eliminating the heavy lifting and you know, all the cloud, you know, value prop, data center costs, and the cloud benefits. At the same time, public cloud players are bringing their services to the edge, building out their own global networks and are a disruptive force to traditional telcos. All right, let's talk about Azure first. Their net score is basically identical to telco relative to its overall average. AWS's net score is higher in telco by just a few percentage points. Google Cloud platform is eight percentage points higher in telco with a 53% net score. So all three hyperscalers have an equal or stronger presence in telco than their average overall. Okay, let's look at the traditional enterprise hardware and software infrastructure cohort. Dell, Cisco, HPE, Red Hat, VMware, and Oracle. We've highlighted in this chart just as sort of indicators or proxies. Dell's net score's 10 percentage points higher in telco than its overall average. Interesting. Cisco's is a bit higher. HPE's is actually lower by about nine percentage points in the ETR survey, and VMware's is lower by about four percentage points. Now, Red Hat is really interesting. OpenStack, as we've previously reported is popular with telcos who want to build out their own private cloud. And the data shows that Red Hat OpenStack's net score is 15 percentage points higher in the telco sector than its overall average. OpenShift, on the other hand, has a net score that's four percentage points lower in telco than its overall average. So this to us talks to the pace of adoption of microservices and containers. You know, it's going to happen, but it's going to happen more slowly. Finally, Oracle's spending momentum is somewhat lower in the sector than its average, despite the firm having a decent telco business. IBM and Accenture, heavy services companies are both lower in this sector than their average. And real quickly, snowflake's net score is much lower by about 12 percentage points relative to its very high average net score of 62%. But we look for them to be a player in this space as telcos need to modernize their analytics stack and share data in a governed manner. 
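As a rough aside on the sector-versus-overall comparisons being walked through here, the sketch below shows how a net-score-style spending-velocity metric could be computed for a single vendor in two survey cuts and compared as a percentage-point delta. ETR's actual net score methodology is proprietary, so the response buckets, the weighting, and every number in this example are assumptions for illustration only.

```python
# Illustrative approximation of a "net score"-style spending-velocity metric.
# ETR's real methodology is proprietary; the buckets and all counts below are
# hypothetical and exist only to show the shape of the calculation.

def net_score(responses: dict) -> float:
    """Percent of respondents with positive spending intentions minus the
    percent with negative intentions, for one vendor in one survey cut."""
    total = sum(responses.values())
    positive = responses.get("adopting", 0) + responses.get("increasing", 0)
    negative = responses.get("decreasing", 0) + responses.get("replacing", 0)
    return 100.0 * (positive - negative) / total

# Hypothetical cuts for a single vendor: overall survey vs. the telco sector.
overall = {"adopting": 20, "increasing": 45, "flat": 25, "decreasing": 7, "replacing": 3}
telco = {"adopting": 25, "increasing": 48, "flat": 20, "decreasing": 5, "replacing": 2}

overall_ns = net_score(overall)
telco_ns = net_score(telco)
print(f"overall net score: {overall_ns:.0f}%")
print(f"telco net score:   {telco_ns:.0f}%")
print(f"telco delta vs. overall: {telco_ns - overall_ns:+.0f} points")
```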
Databricks' net score, like Snowflake's, is also much lower than its average, by about 13 points. And the same applies there; I would expect them to be a player as open architectures and cloud gain steam in telco. All right, let's close out now on what we're going to be talking about at MWC23 and some of the key issues that we'll be unpacking. We've talked about stack disaggregation in this breaking analysis, but the key here will be the pace at which it will reach the operational efficiency and reliability of closed stacks. Telcos, you know, in large part, are engineering-heavy firms and much of their work takes place kind of in the basement, in the dark. It's not really a big public hype machine, and they tend to move slowly and cautiously. While they understand the importance of agility, they're going to be careful because, you know, it's in their DNA. And so at the same time, if they don't move fast enough, they're going to get hurt and disrupted by competitors. So that's going to be a topic of conversation, and we'll be looking for proof points. And the other comment I'll make is around integration. Telcos, because of their conservatism, will benefit from better testing, and those firms that can innovate on the testing front and have labs and certifications and innovate at that level, with an ecosystem, are going to be in a better position. Because open sometimes means wild west. So the more players like Dell, HPE, Cisco, Red Hat, et cetera, that do that and align with their ecosystems and provide those resources, the faster adoption is going to go. So we'll be looking for, you know, who's actually doing that. Open RAN, or Radio Access Networks, fits in this discussion because O-RAN is an emerging network architecture. It essentially enables the use of open technologies from an ecosystem, and over time, O-RAN is going to be open, but, you know, a lot of questions remain as to when it will be able to deliver the operational efficiency of traditional RAN. Got some interesting dynamics going on. Rakuten is a company that's working hard on this problem, really focusing on operational efficiency. Then you got Dish Networks. They're also embracing O-RAN. They're coming at it more from service innovation. So that's something that we'll be monitoring and unpacking. We're going to look at cloud as a disruptor. On the one hand, cloud can help drive agility, as we said earlier, and optionality, and innovation for incumbent telcos. But the flip side is it's also going to do the same for startups trying to disrupt, and cloud attracts startups. While some of the telcos are actually embracing the cloud, many are being cautious. So that's going to be an interesting topic of discussion. And there's private wireless networks and 5G, and hyperlocal private networks, they're being deployed, you know, at the edge. This idea of open edge is also a really hot topic and this trend is going to accelerate. You know, the importance here is that the use cases are going to be widely varied. The needs of a hospital are going to be different from those of a sports venue, which are different from a remote drilling location in energy, or a concert venue. Things like real-time AI inference and data flows are going to bring new services and monetization opportunities. And many firms are going to be bypassing traditional telecommunications networks to build these out. Satellites as well. We're going to see, you know, in this decade, you're going to look down at Google Earth and you're going to see real-time. 
You know, today you see snapshots, and so, lots of innovation going on in that space. So how is this going to disrupt industries and traditional industry structures? Now, as always, we'll be looking at data angles, right? 'Cause it's in The Cube's DNA to follow the data and what opportunities and risks data brings. The Cube is going to be on location at MWC23 at the end of the month. We got a great set. We're in the walkway between halls four and five, right in Congress Square, at stand CS60. We have a full schedule. I'm going to be there with Lisa Martin, Dave Nicholson and the entire Cube crew, so don't forget to stop by. All right, that's a wrap. I want to thank Alex Myerson, who's on production and manages the podcast, Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at Silicon Angle, does some great stuff for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. And all the video content is available on demand at thecube.net. You can email me directly at david.vellante@siliconangle.com. You can DM me at dvellante or comment on my LinkedIn post. Please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for The Cube Insights powered by ETR. Thanks for watching and we'll see you at Mobile World Congress, and/or next time on "Breaking Analysis." (bright music) (bright music fades)
Daren Brabham & Erik Bradley | What the Spending Data Tells us About Supercloud
(gentle synth music) (music ends) >> Welcome back to Supercloud 2, an open industry collaboration between technologists, consultants, analysts, and of course practitioners to help shape the future of cloud. At this event, one of the key areas we're exploring is the intersection of cloud and data. And how building value on top of hyperscale clouds and across clouds is evolving, a concept of course we call "Supercloud". And we're pleased to welcome our friends from Enterprise Technology Research, Erik Bradley and Darren Brabham. Guys, thanks for joining us, great to see you. We love to bring the data into these conversations. >> Thank you for having us, Dave, I appreciate it. >> Yeah, thanks. >> You bet. And so, let me do the setup on what is Supercloud. It's a concept that we floated before re:Invent 2021, based on the idea that cloud infrastructure is becoming ubiquitous, incredibly powerful, but there's a lack of standards across the big three clouds. That creates friction. So we defined, over a period of time, you know, the better part of a year, a set of essential elements and deployment models for so-called supercloud, which create this common experience for specific cloud services that, of course, again, span multiple clouds and even on-premise data. So Erik, with that as background, I wonder if you could add your general thoughts on the term supercloud, maybe play proxy for the CIO community, 'cause you do these round tables, you talk to these guys all the time, you gather a lot of amazing information from senior IT DMs that complement your survey. So what are your thoughts on the term and the concept? >> Yeah, sure. I'll even go back to last year when you and I did our predictions panel, right? And we threw it out there. And to your point, you know, there's some haters. Anytime you throw out a new term, "Is it marketing buzz? Is it worth it? Why are you even doing it?" But you know, from my own perspective, and then also speaking to the IT DMs that we interview on a regular basis, this is just a natural evolution. It's something that's inevitable in enterprise tech, right? The internet was not built for what it has become. It was never intended to be the underlying infrastructure of our daily lives and work. The cloud also was not built to be what it's become. But where we're at now is, we have to figure out what the cloud is and what it needs to be to be scalable, resilient, secure, and have the governance wrapped around it. And to me that's what supercloud is. It's a way to define what the next generation, the continued iteration and evolution of the cloud, needs to be. And that's what the supercloud means to me. And whether you want to call it metacloud or supercloud, it doesn't matter. The point is that we're trying to define the next layer, the next future of work, which is inevitable in enterprise tech. Now, from the IT DM perspective, I have two interesting call outs. One is from basically a senior developer in IT architecture and DevSecOps who says he uses the term all the time. And the reason he uses the term is that multi-cloud has a stigma attached to it when he is talking to his business executives. (David chuckles) The stigma is because it's complex and it's expensive. So he switched to supercloud to better explain to his business executives and his CFO and his CIO what he's trying to do. And we can get into more later about what it means to him. 
But the inverse of that, of course, is a good CSO friend of mine at a very large enterprise, who says the concern with Supercloud is the reduction of complexity. And I'll explain, he believes anything that takes the requirement of specific expertise out of the equation, even a little bit, as a CSO worries him. So as you said, David, always two sides to the coin, but I do believe supercloud is a relevant term, and it is necessary because the cloud is continuing to be defined. >> You know, that's really interesting too, 'cause you know, Darren, we use Snowflake a lot as an example, sort of early supercloud, and you think from a security standpoint, we've always pushed Amazon and, "Are you ever going to kind of abstract the complexity away from all these primitives?" and their position has always been, "Look, if we produce these primitives, and offer these primitives, we can move as the market moves. When you abstract, then it becomes harder to peel the layers." But Darren, from a data standpoint, like I say, we use Snowflake a lot. I think of like Tim Berners-Lee when Web 2.0 came out, he said, "Well this is what the internet was always supposed to be." So in a way, you know, supercloud is maybe what multi-cloud was supposed to be. But I mean, you think about data sharing, Darren, across clouds, it's always been a challenge. Snowflake always, you know, obviously trying to solve that problem, as are others. But what are your thoughts on the concept? >> Yeah, I think the concept fits, right? It is reflective of, it's a paradigm shift, right? Things, as a pendulum, have swung back and forth between needing to piece together a bunch of different tools that have specific unique use cases and they're best in breed in what they do. And then focusing on the duct tape that holds 'em all together and all the engineering complexity and skill, it shifted from that end of the pendulum all the way back to, "Let's streamline this, let's simplify it. Maybe we have budget crunches and we need to consolidate tools or eliminate tools." And so then you kind of see this back and forth over time. And with data and analytics for instance, a lot of organizations were trying to bring the data closer to the business. That's where we saw self-service analytics coming in. And tools like Snowflake, what they did was they helped point to different databases, they helped unify data, and organize it in a single place that was, you know, in a sense neutral, away from a single cloud vendor or a single database, and allowed the business to kind of be more flexible in how it brought stuff together and provided it out to the business units. So Snowflake was an example of one of those times where we pulled back from the granular, multiple points of the spear, back to a simple way to do things. And I think Snowflake has continued to kind of keep that mantle to a degree, and we see other tools trying to do that, but that's all it is. It's a paradigm shift back to this kind of meta abstraction layer that kind of simplifies what is the reality, that you need a complex multi-use case, multi-region way of doing business. And it sort of reflects the reality of that. >> And you know, to me it's a spectrum. As part of Supercloud 2, we're talking to a number of practitioners, Ionis Pharmaceuticals, US West, we got Walmart. And it's a spectrum, right? In some cases the practitioner's saying, "You know, the way I solve multi-cloud complexity is mono-cloud, I just do one cloud." 
(laughs) Others like Walmart are saying, "Hey, you know, we actually are building an abstraction layer ourselves, take advantage of it." So my general question to both of you is, is this a concept, is the lack of standards across clouds, you know, really a problem, you know, or is supercloud a solution looking for a problem? Or do you hear from practitioners that "No, this is really an issue, we have to bring together a set of standards to sort of unify our cloud estates." >> Allow me to answer that at a higher level, and then we're going to hand it over to Dr. Brabham because he is a little bit more detailed on the realtime streaming analytics use cases, which I think is where we're going to get to. But to answer that question, it really depends on the size and the complexity of your business. At the very large enterprise, Dave, Yes, a hundred percent. This needs to happen. There is complexity, there is not only complexity in the compute and actually deploying the applications, but the governance and the security around them. But for lower end or, you know, business use cases, and for smaller businesses, it's a little less necessary. You certainly don't need to have all of these. Some of the things that come into mind from the interviews that Darren and I have done are, you know, financial services, if you're doing real-time trading, anything that has real-time data metrics involved in your transactions, is going to be necessary. And another use case that we hear about is in online travel agencies. So I think it is very relevant, the complexity does need to be solved, and I'll allow Darren to explain a little bit more about how that's used from an analytics perspective. >> Yeah, go for it. >> Yeah, exactly. I mean, I think any modern, you know, multinational company that's going to have a footprint in the US and Europe, in China, or works in different areas like manufacturing, where you're probably going to have on-prem instances that will stay on-prem forever, for various performance reasons. You have these complicated governance and security and regulatory issues. So inherently, I think, large multinational companies and or companies that are in certain areas like finance or in, you know, online e-commerce, or things that need real-time data, they inherently are going to have a very complex environment that's going to need to be managed in some kind of cleaner way. You know, they're looking for one door to open, one pane of glass to look at, one thing to do to manage these multi points. And, streaming's a good example of that. I mean, not every organization has a real-time streaming use case, and may not ever, but a lot of organizations do, a lot of industries do. And so there's this need to use, you know, they want to use open-source tools, they want to use Apache Kafka for instance. They want to use different megacloud vendors offerings, like Google Pub/Sub or you know, Amazon Kinesis Firehose. They have all these different pieces they want to use for different use cases at different stages of maturity or proof of concept, you name it. They're going to have to have this complexity. And I think that's why we're seeing this need, to have sort of this supercloud concept, to juggle all this, to wrangle all of it. 'Cause the reality is, it's complex and you have to simplify it somehow. >> Great, thanks you guys. All right, let's bring up the graphic, and take a look. 
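As a rough aside, the kind of graphic being brought up here, net score or spending velocity on the vertical axis, pervasiveness in the dataset on the horizontal axis, and a dashed reference line at the 40% highly elevated mark, could be sketched along the lines below. The vendor names and coordinates are hypothetical placeholders, not ETR survey values, and matplotlib is just one plausible way to draw it.

```python
# Sketch of a net score vs. pervasiveness scatter chart with a 40% reference
# line. All coordinates are made-up placeholders, not actual survey data.
import matplotlib.pyplot as plt

vendors = {
    # name: (pervasiveness_n, net_score_pct) -- hypothetical values
    "Azure": (700, 55),
    "AWS": (680, 52),
    "Snowflake": (300, 62),
    "Databricks": (250, 58),
    "HashiCorp": (180, 56),
    "VMware Tanzu": (150, 35),
    "Veeam": (200, 28),
}

fig, ax = plt.subplots(figsize=(8, 5))
for name, (n, score) in vendors.items():
    ax.scatter(n, score)
    ax.annotate(name, (n, score), textcoords="offset points", xytext=(5, 5))

ax.axhline(40, color="red", linestyle="--", linewidth=1, label="40% elevated line")
ax.set_xlabel("Pervasiveness in dataset (N)")
ax.set_ylabel("Net score (spending velocity, %)")
ax.set_title("Illustrative net score vs. pervasiveness (hypothetical data)")
ax.legend()
plt.tight_layout()
plt.show()
```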
Anybody who follows the breaking analysis, which is co-branded with ETR, Cube Insights powered by ETR, knows we like to bring data to the table. ETR does amazing survey work every quarter, 1,200-plus to 1,500 practitioners that answer a number of questions. The vertical axis here is net score, which is ETR's proprietary methodology, which is a measure of spending momentum, spending velocity. And the horizontal axis here is overlap, the presence or pervasiveness in the dataset, the ends. That table insert on the bottom right shows you how the dots are plotted, the net score and then the ends in the survey. And what we've done is we've plotted a bunch of the so-called supercloud suspects, let's start in the upper right, the cloud platforms. Without these hyperscale clouds, you can't have a supercloud. And as always, Azure and AWS, up and to the right, it's amazing we're talking about, you know, an 80-plus billion dollar company in AWS. Azure's business, if you just look at the IaaS, is in the 50 billion range, I mean it's just amazing to me the net scores here. Anything above 40% we consider highly elevated. And you got Azure and you got Snowflake, Databricks, HashiCorp, we'll get to them. And you got AWS, you know, right up there at that size, it's quite amazing. With really big ends as well, you know, 700 plus ends in the survey. So, you know, kind of half the survey actually has these platforms. So my question to you guys is, what are you seeing in terms of cloud adoption within the big three cloud players? I wonder if you could comment, maybe Erik, you could start. >> Yeah, sure. Now we're talking data, now I'm happy. So yeah, we'll get into some of it. Right now, the January 2023 TSIS is approaching 1,500 survey respondents. One caveat, it's not closed yet, it will close on Friday, but with an end that big we are well past statistical significance. We also recently did a cloud survey, and there's a couple of key points on that I want to get into before we get into individual vendors. What we're seeing here is that annual spend on cloud infrastructure is expected to grow at almost a 70% CAGR over the next three years. The percentage of workloads on cloud infrastructure is expected to grow over 70% in three years as well. And as you mentioned, Azure and AWS are still dominant. However, we're seeing some share shift spreading around a little bit. Now to get into the individual vendors you mentioned, yes, Azure is still number one, AWS is number two. What we're seeing, which is incredibly interesting, CloudFlare is number three. It's actually beating GCP. That's the first time we've seen it. What I do want to state is this is on net score only, which is our measure of spending intentions. When you talk about actual pervasion in the enterprise, it's not even close. But from a spending velocity intention point of view, CloudFlare is now number three above GCP, and even Salesforce is creeping up to be at GCP's level. So what we're seeing here is a continued domination by Azure and AWS, but some of these other players that maybe might fit into your moniker. And I definitely want to talk about CloudFlare more in a bit, but I'm going to stop there. But what we're seeing is some of these other players that fit into your Supercloud moniker are starting to creep up, Dave. >> Yeah, I just want to clarify. 
So as you also know, we track IaaS and PaaS revenue and we try to extract, so AWS reports in its quarterly earnings, you know, they're just IaaS and PaaS, they don't have a SaaS play, a little bit maybe, whereas Microsoft and Google include their applications and so we extract those out and if you do that, AWS is bigger, but in the surveys, you know, customers, they see cloud, SaaS to them is cloud. So that's one of the reasons why you see, you know, Microsoft as larger in pervasion. If you bring up that survey again, Alex, the survey results, you see them further to the right and they have higher spending momentum, which is consistent with what you see in the earnings calls. Now, interesting about CloudFlare, because the CEO of CloudFlare actually, and CloudFlare itself, uses the term supercloud, basically saying, "Hey, we're building a new type of internet." So what are your thoughts? Do you have additional information on CloudFlare, Erik, that you want to share? I mean, you've seen them pop up. I mean this is a really interesting company that is pretty forward thinking and vocal about how it's disrupting the industry. >> Sure, we've been tracking 'em for a long time, and even from the disruption of just a traditional CDN where they took down Akamai and what they're doing. But for me, the definition of a true supercloud provider can't just be one instance. You have to have multiple. So it's not just the cloud, it's the networking aspect on top of it, it's also security. And to me, CloudFlare is the only one that has all of it. That they actually have the ability to offer all of those things. Whereas you look at some of the other names, they're still piggybacking on the infrastructure or platform as a service of the hyperscalers. CloudFlare does not need to, they actually have the cloud, the networking, and the security all themselves. So to me that lends credibility to their own internal usage of that moniker Supercloud. And also, again, just what we're seeing right here that their net score is now creeping above GCP really does state it. And then just one real last thing, one of the other things we do in our surveys is we track adoption and replacement reasoning. And when you look at Cloudflare's adoption rate, which is extremely high, it's based on technical capabilities, the breadth of their feature set, it's also based on what we call the ability to avoid stack alignment. So those are again, really supporting reasons that make CloudFlare a top candidate for your moniker of supercloud. >> And they've also announced an object store (chuckles) and a database. So, you know, that's going to be, it takes a while as you well know, to get database adoption going, but you know, they're ambitious and going for it. All right, let's bring the chart back up, and I want to focus Darren in on the ecosystem now, and really, we've identified Snowflake and Databricks, it's always fun to talk about those guys, and there are a number of other, you know, data platforms out there, but we use those two as really proxies for leaders. We got a bunch of the backup guys, the data protection folks, Rubric, Cohesity, and Veeam. They're sort of in a cluster, although Rubric, you know, is ahead of those guys in terms of spending momentum. And then VMware, Tanzu and Red Hat as sort of the cross cloud platform. But I want to focus, Darren, on the data piece of it. We're seeing a lot of activity around data sharing, governed data sharing. 
Databricks is using Delta Sharing as their sort of play, Snowflake is sort of this walled garden, like the app store. What are your thoughts on, you know, in the context of Supercloud, cross cloud capabilities for the data platforms? >> Yeah, good question. You know, I think Databricks is an interesting player because they sort of have made some interesting moves, with their Data Lakehouse technology. So they're trying to kind of complicate, or not complicate, they're trying to take away the complications of, you know, the downsides of data warehousing and data lakes, and trying to find that middle ground, where you have the benefits of a managed, governed, you know, data warehouse environment, but you have sort of the lower cost, you know, capability of a data lake. And so, you know, Databricks has become really attractive, especially to data scientists, right? We've been tracking them in the AI machine learning sector for quite some time here at ETR, attractive for a data scientist because it looks and acts like a lake, but can have some managed capabilities like a warehouse. So it's kind of the best of both worlds. So in some ways I think you've seen sort of a data science driver for the adoption of Databricks that has now become a little bit more mainstream across the business. Snowflake, maybe the other direction, you know, it's a cloud data warehouse that, you know, is starting to expand its capabilities and add on new things; Streamlit is a good example in the analytics space, with apps. So you see these tools starting to branch and creep out a bit, but they offer that sort of neutrality, right? We heard one IT decision maker we recently interviewed that referred to Snowflake and Databricks as the quote unquote Switzerland of what they do. And so there's this desirability from an organization to find these tools that can solve the complex multi-headed use-case of data and analytics, which every business unit needs in different ways. And figure out a way to do that, an elegant way that's governed and centrally managed, that federated kind of best of both worlds that you get by bringing the data close to the business while having a central governed instance. So these tools are incredibly powerful and I think there's only going to be room for growth for those two especially. I think they're going to expand and do different things and maybe, you know, join forces with others and a lot of the power of what they do well is trying to define these connections and find these partnerships with other vendors, and try to be seen as the nice add-on to your existing environment that plays nicely with everyone. So I think that's where those two tools are going, but they certainly fit this sort of label of, you know, trying to be that supercloud neutral, you know, layer that unites everything. >> Yeah, and if you bring the graphic back up, please, there's obviously big data plays in each of the cloud platforms, you know, Microsoft, big database player, AWS has, you know, 11, 12, 15 data stores. And of course, you know, BigQuery and other, you know, data platforms within Google. But you know, I'm not sure the big cloud guys are going to go hard after so-called supercloud, cross-cloud services. Although, we see Oracle getting in bed with Microsoft and Azure, with a database service that is cross-cloud, certainly Google with Anthos and you know, you never say never with AWS. 
I guess what I would say, guys, and I'll leave you with this, is that, you know, just like all players today are cloud players, I feel like anybody in the business, or most companies, are going to be so-called supercloud players. In other words, they're going to have a cross-cloud strategy, they're going to try to build connections if they're coming from on-prem like a Dell or an HPE, you know, or Pure or, you know, many of these other companies, Cohesity is another one. They're going to try to connect to their on-premises estates, of course, and create a consistent experience. It's natural that they're going to have sort of some consistency across clouds. You know, the big question is, what's that spectrum look like? I think on the one hand you're going to have some, you know, maybe some rudimentary, you know, instances of supercloud or maybe they just run on the individual clouds versus where Snowflake and others and even beyond that are trying to go with a single global instance, basically building out what I would think of as their own cloud, and importantly their own ecosystem. I'll give you guys the last thought. Maybe you could each give us, you know, closing thoughts. Maybe Darren, you could start and Erik, you could bring us home on just this entire topic, the future of cloud and data. >> Yeah, I mean I think, you know, there are two points to make on that. One is this question of these, I guess what we'll call, legacy on-prem players. These mega vendors that have been around a long time, have big on-prem footprints and a lot of people have them for that reason. I think it's foolish to assume that a company, especially a large, mature, multinational company that's been around a long time, it's foolish to think that they can just uproot and leave on-premises entirely full scale. There will almost always be an on-prem footprint from any company that was not, you know, natively born in the cloud after 2010, right? I just don't think that's reasonable anytime soon. I think there's some industries that need on-prem, things like, you know, industrial manufacturing and so on. So I don't think on-prem is going away, and I think vendors that are going to, you know, go very cloud forward, very big on the cloud, if they neglect having at least decent connectors to on-prem legacy vendors, they're going to miss out. So I think that's something that these players need to keep in mind is that they continue to reach back to some of these players that have big footprints on-prem, and make sure that those integrations are seamless and work well, or else their customers will always have a multi-cloud or hybrid experience. And then I think a second point here about the future is, you know, we talk about the three big, you know, cloud providers, the Google, Microsoft, AWS, as sort of the opposite of, or different from, this new supercloud paradigm that's emerging. But I want to kind of point out that they will always try to make a play to become that, and I think, you know, we'll certainly see someone like Microsoft trying to expand their licensing and expand how they play in order to become that supercloud provider for folks. So also don't want to downplay them. I think you're going to see those three big players continue to move, and take over what players like CloudFlare are doing and try to, you know, cut them off before they get too big. So, keep an eye on them as well. 
>> Great points, I mean, I think you're right, the first point, if you're Dell, HPE, Cisco, IBM, your strategy should be to make your on-premises estate as cloud-like as possible and, you know, make those differences as minimal as possible. And you know, if you're a customer, then the business case is going to be low for you to move off of that. And I think you're right. I think the cloud guys, if this is a real problem, the cloud guys are going to play in there, and they're going to make some money at it. Erik, bring us home please. >> Yeah, I'm going to revert back to our data on the macro side. So to kind of support this concept of a supercloud right now, you know Dave, you and I know, we check overall spending and what we're seeing right now is total year spend is expected to only be 4.6%. We ended 2022 at 5% even though it began at almost eight and a half. So this is clearly declining and in that environment, we're seeing the top two strategies to reduce spend are actually vendor consolidation, with 36% of our respondents saying they're actively seeking a way to reduce their number of vendors, and consolidate into one. That's obviously supporting a supercloud type of play. Number two is reducing excess cloud resources. So when I look at both of those combined, with a drop in the overall spending reduction, I think you're on the right thread here, Dave. You know, the overall macro view that we're seeing in the data supports this happening. And if I can real quick, a couple of names we did not touch on that I do think deserve to be in this conversation, one is HashiCorp. HashiCorp is the number one player in our infrastructure sector, with a 56% net score. It does multiple things within infrastructure and it is completely agnostic to your environment. And if we're also speaking about something that's just a singular feature, we would look at Rubric for data, backup, storage, recovery. They're not going to offer you your full cloud or your networking of course, but if you are looking for your backup, recovery, and storage, Rubric, also number one in that sector with a 53% net score. Two other names that deserve to be in this conversation as we watch it move and evolve. >> Great, thank you for bringing that up. Yeah, we had both of those guys in the chart and I failed to focus in on HashiCorp. And clearly a Supercloud enabler. All right guys, we got to go. Thank you so much for joining us, appreciate it. Let's keep this conversation going. >> Always enjoy talking to you Dave, thanks. >> Yeah, thanks for having us. >> All right, keep it right there for more content from Supercloud 2. This is Dave Vellante for John Ferg and the entire Cube team. We'll be right back. (gentle synth music) (music fades)
Ramesh Prabagaran, Prosimo.io | Defining the Network Supercloud
(upbeat music) >> Hello, and welcome to Supercloud2. I'm John Furrier, host of theCUBE here. We're exploring all the new Supercloud trends around multiple clouds, hyper scale gaps in their systems, new innovations, new applications, new companies, new products, new brands emerging from this big inflection point. Got a great guest who's going to unpack it with me today, Ramesh Prabagaran, who's the co-founder and CEO of Prosimo, CUBE alumni. Ramesh, legend in the industry, you've been around. You've seen many cycles. Welcome to Supercloud2. >> Thank you. You're being too kind. >> Well, you know, you guys have been a technical, great technical founding team, multiple ventures, multiple times around the track as they say, but now we're seeing something completely different. This is our second event, kind of we're doing to start the the ball rolling around unpacking this idea of Supercloud which evolved from a riff with me and Dave to now a working group paper, multiple definitions. People are saying they're Supercloud. CloudFlare says this is their version. Someone says there over there. Fitzi over there in the blog is always, you know, challenging us on our definitions, but it's, the consensus is though something's happening. >> Ramesh: Absolutely. >> And what's your take on this kind of big inflection point? >> Absolutely, so if you just look at kind of this in layers right, so you have hyper scalers that are innovating really quickly on underlying capabilities, and then you have enterprises adopting these technologies, right, there is a layer in the middle that I would say is largely missing, right? And one that addresses the gaps introduced by these new capabilities, by the hyper scalers. At the same time, one that actually spans, let's say multiple regions, multiple clouds and so forth. So that to me is kind of the Supercloud layer of sorts. One that helps enterprises adopt the underlying hyper scaler capabilities a lot faster, and at the same time brings a certain level of consistency and homogeneity also. >> What do you think the big driver of Supercloud is? Is it the industry growing up or is it the demand for new kinds of capabilities or both? Or just evolution? What's your take? >> I would say largely it depends on kind of who the entity is that you're talking about, right? And so I would say both. So if you look at one cohort here, it's adoption, right? If I have a externally facing digital presence, for example, then I'm going to scale that up and get to as many subscribers and users no matter what, right? And at that time it's a different set of problems. If you're looking at kind of traditional enterprise inward that are bringing apps into the cloud and so forth, it's a different set of care abouts, right? So both are, I would say, equally important problems to solve for. >> Well, one reality that we're definitely tracking, and it's not really a debate anymore, is hybrid. >> Ramesh: Yep >> Hybrid happened. It happened faster than most people thought. But, you know, we were talking about this in 2015 when it first got kicked around, but now you see hybrid in the cloud, on premises and the edge. This kind of forms that distributed computing paradigm that we've always been predicting. And so if that continues to play out the way it is, you're now going to have a completely distributed, connected internet and sets of systems, intra and external within companies. So again, the world is connected 100%. Everything's changing, right? >> And that introduces. 
>> It wasn't your grandfather's networking anymore or storage. The game is still the same, but the play, the components are acting differently. What's your take on this? >> Absolutely. No, absolutely. That's a very key important point, and it's one that we always ask our customers right at the front end, right? Because your starting assumptions matter. If you have workloads of workloads in the cloud and data center is something that you want to connect into, then you'll make decisions kind of keeping cloud in the center and then kind of bolt on technologies for what that means to extend it to the data center. If your center of gravity is in the data center, and then cloud is let's say 10% right now, but you see that growing, then what choices do you have? Right, do you want to bring your data center technologies into the cloud because you want that consistency in operations? Or do you want to start off fresh, right? So this is a really key, important question, and one that many of our customers are actually are grappling with, right? They have this notion that going cloud native is the right approach, but at the same time that means I have a bifurcation in kind of how do I operate my data center versus my cloud, right? Two different operating models, and slowly it'll shift over to one. But you're going to have to deal with dual reality for a while. >> I was talking to an old friend of mine, CIO, very experienced CIO. Big time company, large deployment, a lot of IT. I said, so what's the big trend everyone's telling me about IT's going. He goes no, not really. IT's not going away for me. It's going everywhere in the company. >> Ramesh: Exactly. >> So I need to scale my IT-like capabilities everywhere and then make it invisible. >> Ramesh: Correct. >> Which is essentially code words for saying it's going to be completely cloud native everywhere. This is what is happening. Do you agree? >> Absolutely right, and so if you look at what do enterprises care about it? The reason to go to the cloud is to get speed of operations, and it's apps, apps, apps, right? Do you ever have a conversation on networking and infrastructure first? No, that kind of gets brought into the conversation because you want to deal with users, applications and services, right? And so the end goal is essentially how do users communicate with apps and get the right experience, security and whatnot, and how do apps talk to each other and make sure that you get all of the connectivity and security requirements? Underneath the covers, what does this mean for infrastructure, networking, security and whatnot? It's actually going to be someone else's job, right? And you shouldn't have to think too much about it. So this whole notion of kind of making that transparent is real actually, right? But at the same time, us and all the guys that we talk to on the customer side, that's their job, right? Like we have to work towards making that transparent. Some are going to be in the form of capability, some are going to be driven by data, but that's really where the two worlds are going to come together. >> Lots of debates going on. We just heard from Bob Muglia here on Supercloud2. He said Supercloud's a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. So the question that's being debated is is Supercloud a platform or an architecture in your view? >> Okay, that's a tough one actually. 
I'm going to side on the side on kind of the platform side right, and the reason for that is architectural choices are things that you make ahead of time. And you, once you're in, there really isn't a fork in the road, right? Platforms continue to evolve. You can iterate, innovate and so on and so forth. And so I'm thinking Supercloud is more of a platform because you do have a choice. Hey, am I going AWS, Azure, GCP. You make that choice. What is my center of gravity? You make that choice. That's kind of an architectural decision, right? Once you make that, then how do I make things work consistently across like two or three clouds? That's a platform choice. >> So who's responsible for the architecture as the platform, the vendor serving the platform or is the platform vendor agnostic? >> You know, this is where you have to kind of peel the onion in layers, right? If you talk about applications, you can't go to a developer team or an app team and say I want you to operate on Google or AWS. They're like I'll pick the cloud that I want, right? Now who are we talking to? The infrastructure guys and the networking guys, right? They want to make sure that it's not bifurcated. It's like, hey, I want to make sure whatever I build for AWS I can equally use that on Azure. I can equally use that on GCP. So if you're talking to more of the application centric teams who really want infrastructure to be transparent, they'll say, okay, I want to make this choice of whether this is AWS, Azure, GCP, and stick to that. And if you come kind of down the layers of the stack into infrastructure, they are thinking a little more holistically, a little more Supercloud, a little more multicloud, and that. >> That's a good point. So that brings up the deployment question. >> Ramesh: Exactly! >> I want to ask you the next question, okay, what is the preferred deployment in your opinion for a Supercloud narrative? Is it single instance, spread it around everywhere? What's the, do you have a single global instance or do you have everything synchronized? >> So I would say first layer of that Supercloud really kind of fix the holes that have been introduced as a result of kind of adopting the hyper scaler technologies, right? So each, the hyper scalers have been really good at innovating and providing really massive scale elastic capabilities, right? But once you start to build capabilities on top of that to help serve the application, there's a few holes start to show up. So first job of Supercloud really is to plug those holes, right? Second is can I get to an operating model, so that I can replicate this not just in a single region, but across multiple regions, same cloud, and then across multiple clouds, right? And so both of those need to be solved for in order to be (cross talking). >> So is that multiple instantiations of the stack or? >> Yeah, so this again depends on kind of the capability, right? So if you take a more solution view, and so I can speak for kind of networking security combined right? There you always take a solution view. You don't ever look at, you know, what does this mean for a single instance in a single region. You take a macro view, and then you then break it down into what does this mean for region, what does it mean for instance, what does this mean for AZs? And so on and so forth. So you kind of have to go top to bottom. >> Okay, welcome you down into the trap now. Okay, synchronizing the data, latency, these are all questions. So what does the network Supercloud look like to you? 
Because networking is big here. >> Ramesh: Yes, absolutely. >> This is what you guys do. >> Exactly, yeah. So the different set of problems as you go up the stack, right? So if you have hundreds of workloads in a single region, the set of problems you're dealing with there are kind of app native connectivity, how do I go from kind of east/west, all of those fun things, right? Which are usually bound in terms of latency. You don't have those challenges as much, but can you build your entire enterprise application architecture in one region? No, you're going to have to create multiple instances, right? So my data lake is invariably going to be in one place. My business logic is going to be spread across a few places. What does that bring in? I need to go across regions. Am I going to put those two regions right next to each other? No, I'm not going to, right? I'm going to have places in Europe. I'm going to have APAC, and I'm going to have a North American presence, and I need to bring all these things together. So this is where, back to your point, latency really matters, right? Because I need to be able to find out not just best path but also how do I reduce the millisecond, microseconds that my application cares about, which brings in a layer of optimization and then so on and so on and so forth. So this is what we call kind of to borrow the Prosimo language full stack networking, right? Because I'm not just dealing with how do I go from one region to another because that's laws of physics. I can only control so much. But there are a few elements up the application stack in software that you can tweak to actually bring these things closer and closer. >> And on that point, you're seeing security being talked a lot more at the network layer. So how do you secure the Supercloud at the network layer? What's that look like? >> Yeah, we've been grappling with essentially is security kind of foundational, and then is the network on top. And then we had an alternative viewpoint which is kind of network and then security on top. And the answer is actually it's neither, right? It's almost like a meshed up sandwich of sorts. So you need to have networking security work really well together, right? Case in point, I mean we were talking to a customer yesterday. He said, hey, I have my data lake in one region that needs to talk to an analytics service in a completely different region of a different cloud. These two things just need to be able to talk to each other, which means I need to bring elements of networking. I need to bring elements of security, secure access, app segmentation, all of those things. Very simple, I have an analytics service that needs to contact a data lake. That's what he starts with, but then before you know it, it actually brings up a whole stack underneath, so that's. >> VMware calls that cloud chaos. >> Ramesh: Yes, exactly. >> And then that's the halfway point between cloud smart. Cloud first, cloud chaos, cloud smart, and the next thing, you can skip that whole step. But again, again, it's pick your strategy right? Again, this comes back down to your earlier point. I want to ask you from a customer standpoint, you got the hyper scalers doing very, very well. >> Ramesh: Yep, absolutely. >> And I love what their Amazon's doing. I think Microsoft again though they had a little bit of downgrade are catching up fast, and they have their installed base. So you got the land of the installed bases. >> Correct. >> First and greater, better cloud. 
Install base getting better, almost as good, almost as good is a gift, but close. Now you have them specializing. Silicon, special silicon. So there's gaps for other services. >> Ramesh: Correct. >> And Amazon Web Services, Adam Selipsky's a open book saying, hey, we want our ecosystem to pick up these gaps and build on them. Go ahead, go to town. >> So this is where I think choices are tough, right? Because if you had one choice, you would work with it, and you would work around it, right? Now I have five different choices. Now what do I do? Our viewpoint is there are a bunch of things that say AWS does really, really well. Use that as a foundational layer, right? Like don't reinvent the wheel on those things. Transit gateways, global accelerators and whatnot, they exist for a reason. Billions of dollars have gone into building those things. Use that foundational layer, right? But what you want to build on top of that is actually driven by the application. The requirements of a lambda application that's serverless, it's very different than a packaged application that's responding for transactions, right? Like it's just completely very, very different. And so bring in the right set of capabilities required for those set of applications, and then you go based on that. This is also where I think whether something is a regional construct versus an overall global construct really, really matters, right? Because if you start with the assumption that everything is going to be built regionally, then it's someone else's job to make sure that all of these things are connected. But if you start with kind of the global purview, then the rest of them start to (cross talking). >> What are some of the things that the enterprises might want that are gaps that are going to be filled by the, by startups like you guys and the ecosystem because we're seeing the ecosystem form into two big camps. >> Ramesh: Yep. >> ISVs, which is an old school definition of independent software vendor, aka someone who writes software. >> Ramesh: Exactly. >> SaaS app. >> Ramesh: Correct. >> And then ecosystem software players that were once ISVs now have people building on top of them. >> Ramesh: Correct. >> They're building on top of the cloud. So you have that new hyper scale effect going on. >> Ramesh: Exactly. >> You got ISVs, which is software developers, software vendors. >> Ramesh: Correct. >> And ecosystems. >> Yep. >> What's that impact of that? Cause it's a new dynamic. >> Exactly, so if you take kind of enterprises, want to make sure that that their apps and the data center migrate to the cloud, new apps are developed the right way in the cloud, right? So that's kind of table stakes. So now what choices do they have? They listen to AWS and say, okay, I have all these cloud native services. I want to be able to instantiate all that. Now comes the interesting choice that they have to make. Do I go hire a whole bunch of people and do it myself or do I go there on the platform route, right? Because I made an architectural choice. Now I have to decide whether I want to do this myself or the platform choice. DIY works great for some, but you don't know what you're getting into, and it's people involved, right? People, process, all those fun things involved, right? So we show up there and say, you don't know what you don't know, right? Like because that's the nature of it. Why don't you invest in a platform like what what we provide, and then you actually build on top of it. 
We will, it's our job to make sure that we keep up with the innovation happening underneath the covers. And at the same time, this is not a closed-ended system. You can actually build on top of our platform, right? And so that actually gives you a good mix. Now the care-abouts are interesting. Some apps care about experience. Some apps care about latency. Some apps are extremely chatty and extremely data intensive, but nobody wants to pay for it, right? And so it's an interesting Jenga that you have to play between experience versus security versus cost, right? And that makes the head of infrastructure and cloud platform teams' life really, really, really interesting. >> And this is why I love your background, and Stu Miniman, when he was with theCUBE, and now he's at Red Hat, we used to riff about the network and how network folks are now, those concepts are now up the top of the stack because the cloud is one big network effect. >> Ramesh: Exactly, correct. >> It's a computer. >> Yep, absolutely. No, and case in point, right, let's say we're in San Jose here or Palo Alto here, and let's say my application is sitting in London, right? The cloud gives you different express lanes. I can go down to my closest PoP location provided by AWS and then I can go ride that all the way up to London. It's going to give me better performance, low latency, but I'm going to have to incur some costs associated with it. Or I can go over the wild internet all the way from Palo Alto up to kind of the ingress point into London and then go access, but I'm spending time on the wild internet, which means all kinds of fun things happen, right? But I'm not paying much, but my experience is not going to be so great. And there are various shades of gray in the middle, right? So how do you pick? It all kind of is driven by the applications. >> Well, we certainly want you back for Supercloud3, our next version of this virtual/live event here in our Palo Alto studios. Really appreciate you coming on. >> Absolutely. >> While you're here, give a quick plug for the company. Next minute, we can take a minute to talk about the success of the company. >> Ramesh: Absolutely. >> I know you got a fresh financing this past year. Plenty of money in the bank, going to ride this new wave, Supercloud wave. Give us a quick plug. >> Absolutely, yeah. So three years going on to four this calendar year. So it's an interesting time for the company. We have proven our technology and product, and our initial customers are quite happy with it. Now comes essentially more of those, and scale, and so forth. That's kind of the interesting phase that we are in. Also heartened to see quite a few really large and dominant players in the market, partners, channels and so forth, invest in us to take this to the next set of customers. I would say there's been a dramatic shift in the conversation with our customers. The first couple of years or so of the company, we are about three years old right now, was really about us educating them. This is what you need. This is what you need. Now actually it's a lot of just pull, right? We've seen a good indication, as much as I hate RFIs, a good indication is the number of RFIs that show up at our door saying we want you to participate in this because we want to understand more, right? And so I think we are at an interesting point of that shift. >> RFIs always like do all this work and hope for the best. Pray for a deal.
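Ramesh's "express lane versus wild internet" tradeoff can be sketched as a simple scoring exercise. Everything below, the latency and cost figures, the weights, and the scoring function, is an illustrative assumption rather than Prosimo's actual algorithm; it only shows how the path choice ends up being driven by what each application cares about.

```python
# Illustrative only: pick a path per application based on what it cares about.
# The latency/cost numbers and weights are made-up assumptions for this sketch.

CANDIDATE_PATHS = {
    # e.g. Palo Alto -> London
    "cloud_backbone": {"latency_ms": 135, "cost_per_gb": 0.08},   # nearest PoP, ride the provider backbone
    "public_internet": {"latency_ms": 190, "cost_per_gb": 0.01},  # cheaper, but best-effort
}

APP_PROFILES = {
    "interactive_app": {"latency_weight": 0.8, "cost_weight": 0.2},   # experience matters most
    "bulk_replication": {"latency_weight": 0.1, "cost_weight": 0.9},  # data-heavy and cost-sensitive
}

def pick_path(app: str) -> str:
    profile = APP_PROFILES[app]

    def score(path: dict) -> float:
        # Lower is better: weighted blend of roughly normalized latency and cost.
        return (profile["latency_weight"] * path["latency_ms"] / 200.0
                + profile["cost_weight"] * path["cost_per_gb"] / 0.10)

    return min(CANDIDATE_PATHS, key=lambda name: score(CANDIDATE_PATHS[name]))

if __name__ == "__main__":
    for app in APP_PROFILES:
        print(app, "->", pick_path(app))
```

With these made-up numbers, the experience-sensitive app lands on the cloud backbone path while the cost-sensitive bulk job takes the cheaper public-internet path, which is the "driven by the applications" point in the conversation above.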
You know, you guys on the right side of history. If a customer asks with respect to Supercloud, multicloud, is that your focus? Is that the direction you guys are going into? >> Yeah, so I would say we are kind of both, right? Supercloud and multicloud because we, our customers are hybrid, multiple clouds, all of the above, right? Our main pitch and kind of value back to the customers is go embrace cloud native because that's the right approach, right? It doesn't make sense to go reinvent the wheel on that one, but then make a really good choice about whether you want to do this yourself or invest in a platform to make your life easy. Because we have seen this story play out with many many enterprises, right? They pick the right technologies. They do a simple POC overnight, and they say, yeah, I can make this work for two apps, right? And then they say, yes, I can make this work for 100. You go down a certain path. You hit a wall. You hit a wall, and it's a hard wall. It's like, no, there isn't a thing that you can go around it. >> A lot of dead bodies laying around. >> Ramesh: Exactly. >> Dead wall. >> And then they have to unravel around that, and then they come talk to us, and they say, okay, now what? Like help me, help me through this journey. So I would say to the extent that you can do this diligence ahead of time, do that, and then, and then pick the right platform. >> You've got to have the talent. And you got to be geared up. You got to know what you're getting into. >> Ramesh: Exactly. >> You got to have the staff to do this. >> And cloud talent and skillset in particular, I mean there's lots available but it's in pockets right? And if you look at kind of web three companies, they've gone and kind of amassed all those guys, right? So enterprises are not left with the cream of the crop. >> John: They might be coming back. >> Exactly, exactly, so. >> With this downturn. Ramesh, great to see you and thanks for contributing to Supercloud2, and again, love your team. Very technical team, and you're in the right side of history in this one. Congratulations. >> Ramesh: No, and thank you, thank you very much. >> Okay, this is Supercloud2. I'm John Furrier with Dave Vellante. We'll be back right after this short break. (upbeat music)
AWS Startup Showcase S3E1
(upbeat electronic music) >> Hello everyone, welcome to this CUBE conversation here from the studios in the CUBE in Palo Alto, California. I'm John Furrier, your host. We're featuring a startup, Astronomer. Astronomer.io is the URL, check it out. And we're going to have a great conversation around one of the most important topics hitting the industry, and that is the future of machine learning and AI, and the data that powers it underneath it. There's a lot of things that need to get done, and we're excited to have some of the co-founders of Astronomer here. Viraj Parekh, who is co-founder of Astronomer, and Paola Peraza Calderon, another co-founder, both with Astronomer. Thanks for coming on. First of all, how many co-founders do you guys have? >> You know, I think the answer's around six or seven. I forget the exact, but there's really been a lot of people around the table who've worked very hard to get this company to the point that it's at. We have long ways to go, right? But there's been a lot of people involved that have been absolutely necessary for the path we've been on so far. >> Thanks for that, Viraj, appreciate that. The first question I want to get out on the table, and then we'll get into some of the details, is take a minute to explain what you guys are doing. How did you guys get here? Obviously, multiple co-founders, sounds like a great project. The timing couldn't have been better. ChatGPT has essentially done so much public relations for the AI industry to kind of highlight this shift that's happening. It's real, we've been chronicalizing, take a minute to explain what you guys do. >> Yeah, sure, we can get started. So, yeah, when Viraj and I joined Astronomer in 2017, we really wanted to build a business around data, and we were using an open source project called Apache Airflow that we were just using sort of as customers ourselves. And over time, we realized that there was actually a market for companies who use Apache Airflow, which is a data pipeline management tool, which we'll get into, and that running Airflow is actually quite challenging, and that there's a big opportunity for us to create a set of commercial products and an opportunity to grow that open source community and actually build a company around that. So the crux of what we do is help companies run data pipelines with Apache Airflow. And certainly we've grown in our ambitions beyond that, but that's sort of the crux of what we do for folks. >> You know, data orchestration, data management has always been a big item in the old classic data infrastructure. But with AI, you're seeing a lot more emphasis on scale, tuning, training. Data orchestration is the center of the value proposition, when you're looking at coordinating resources, it's one of the most important things. Can you guys explain what data orchestration entails? What does it mean? Take us through the definition of what data orchestration entails. >> Yeah, for sure. I can take this one, and Viraj, feel free to jump in. So if you google data orchestration, here's what you're going to get. You're going to get something that says, "Data orchestration is the automated process" "for organizing silo data from numerous" "data storage points, standardizing it," "and making it accessible and prepared for data analysis." And you say, "Okay, but what does that actually mean," right, and so let's give sort of an an example. So let's say you're a business and you have sort of the following basic asks of your data team, right? 
Okay, give me a dashboard in Sigma, for example, for the number of customers or monthly active users, and then make sure that that gets updated on an hourly basis. And then number two, a consistent list of active customers that I have in HubSpot so that I can send them a monthly product newsletter, right? Two very basic asks for all sorts of companies and organizations. And when that data team, which has data engineers, data scientists, ML engineers, data analysts get that request, they're looking at an ecosystem of data sources that can help them get there, right? And that includes application databases, for example, that actually have in product user behavior and third party APIs from tools that the company uses that also has different attributes and qualities of those customers or users. And that data team needs to use tools like Fivetran to ingest data, a data warehouse, like Snowflake or Databricks to actually store that data and do analysis on top of it, a tool like DBT to do transformations and make sure that data is standardized in the way that it needs to be, a tool like Hightouch for reverse ETL. I mean, we could go on and on. There's so many partners of ours in this industry that are doing really, really exciting and critical things for those data movements. And the whole point here is that data teams have this plethora of tooling that they use to both ingest the right data and come up with the right interfaces to transform and interact with that data. And data orchestration, in our view, is really the heartbeat of all of those processes, right? And tangibly the unit of data orchestration is a data pipeline, a set of tasks or jobs that each do something with data over time and eventually run that on a schedule to make sure that those things are happening continuously as time moves on and the company advances. And so, for us, we're building a business around Apache Airflow, which is a workflow management tool that allows you to author, run, and monitor data pipelines. And so when we talk about data orchestration, we talk about sort of two things. One is that crux of data pipelines that, like I said, connect that large ecosystem of data tooling in your company. But number two, it's not just that data pipeline that needs to run every day, right? And Viraj will probably touch on this as we talk more about Astronomer and our value prop on top of Airflow. But then it's all the things that you need to actually run data and production and make sure that it's trustworthy, right? So it's actually not just that you're running things on a schedule, but it's also things like CICD tooling, secure secrets management, user permissions, monitoring, data lineage, documentation, things that enable other personas in your data team to actually use those tools. So long-winded way of saying that it's the heartbeat, we think, of of the data ecosystem, and certainly goes beyond scheduling, but again, data pipelines are really at the center of it. >> One of the things that jumped out, Viraj, if you can get into this, I'd like to hear more about how you guys look at all those little tools that are out. You mentioned a variety of things. You look at the data infrastructure, it's not just one stack. You've got an analytic stack, you've got a realtime stack, you've got a data lake stack, you got an AI stack potentially. I mean you have these stacks now emerging in the data world that are fundamental, that were once served by either a full package, old school software, and then a bunch of point solution. 
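To ground Paola's example, here is a minimal sketch of what that kind of pipeline can look like as an Apache Airflow DAG. The DAG id, task names, and placeholder callables below are illustrative assumptions, not Astronomer's code; a real team would swap in the Fivetran, Snowflake/Databricks, dbt, and Hightouch provider operators for its own stack.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


# Placeholder callables. In a real deployment these would call Fivetran,
# the warehouse, dbt, Hightouch, etc. via their provider operators or hooks.
def ingest_app_db_and_apis():
    print("trigger ingestion of the app database and third-party APIs into the warehouse")

def run_transformations():
    print("run dbt-style transformations to standardize the customer and user tables")

def refresh_dashboard():
    print("refresh the hourly active-users dashboard")

def sync_customers_to_crm():
    print("reverse-ETL the active-customer list to the CRM, e.g. HubSpot")


default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="hourly_customer_metrics",   # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",        # the "keep it updated hourly" ask
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_app_db_and_apis)
    transform = PythonOperator(task_id="transform", python_callable=run_transformations)
    dashboard = PythonOperator(task_id="refresh_dashboard", python_callable=refresh_dashboard)
    crm_sync = PythonOperator(task_id="sync_to_crm", python_callable=sync_customers_to_crm)

    # Ingest once, standardize once, then fan out to the two downstream asks.
    ingest >> transform >> [dashboard, crm_sync]
```

The point is the shape: one ingest step, one standardization step, then a fan-out to the dashboard refresh and the CRM sync, all on the hourly schedule the business asked for, with the scheduler handling retries and monitoring around it.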
You mentioned Fivetran there, I would say in the analytics stack. Then you got S3, they're on the data lake stack. So all these things are kind of munged together. >> Yeah. >> How do you guys fit into that world? You make it easier, or like, what's the deal? >> Great question, right? And you know, I think that one of the biggest things we've found in working with customers over the last however many years is that if a data team is using a bunch of tools to get what they need done, and the number of tools they're using is growing exponentially and they're kind of roping things together here and there, that's actually a sign of a productive team, not a bad thing, right? It's because that team is moving fast. They have needs that are very specific to them, and they're trying to make something that's exactly tailored to their business. So a lot of times what we find is that customers have some sort of base layer, right? That's kind of like, it might be they're running most of the things in AWS, right? And then on top of that, they'll be using some of the things AWS offers, things like SageMaker, Redshift, whatever, but they also might need things that their cloud can't provide. Something like Fivetran, or Hightouch, those are other tools. And where data orchestration really shines, and something that we've had the pleasure of helping our customers build, is how do you take all those requirements, all those different tools and whip them together into something that fulfills a business need? So that somebody can read a dashboard and trust the number that it says, or somebody can make sure that the right emails go out to their customers. And Airflow serves as this amazing kind of glue between that data stack, right? It's to make it so that for any use case, be it ELT pipelines, or machine learning, or whatever, you need different things to do them, and Airflow helps tie them together in a way that's really specific for a individual business' needs. >> Take a step back and share the journey of what you guys went through as a company startup. So you mentioned Apache, open source. I was just having an interview with a VC, we were talking about foundational models. You got a lot of proprietary and open source development going on. It's almost the iPhone/Android moment in this whole generative space and foundational side. This is kind of important, the open source piece of it. Can you share how you guys started? And I can imagine your customers probably have their hair on fire and are probably building stuff on their own. Are you guys helping them? Take us through, 'cause you guys are on the front end of a big, big wave, and that is to make sense of the chaos, rain it in. Take us through your journey and why this is important. >> Yeah, Paola, I can take a crack at this, then I'll kind of hand it over to you to fill in whatever I miss in details. But you know, like Paola is saying, the heart of our company is open source, because we started using Airflow as an end user and started to say like, "Hey wait a second," "more and more people need this." Airflow, for background, started at Airbnb, and they were actually using that as a foundation for their whole data stack. Kind of how they made it so that they could give you recommendations, and predictions, and all of the processes that needed orchestrated. Airbnb created Airflow, gave it away to the public, and then fast forward a couple years and we're building a company around it, and we're really excited about that. >> That's a beautiful thing. 
That's exactly why open source is so great. >> Yeah, yeah. And for us, it's really been about watching the community and our customers take these problems, find a solution to those problems, standardize those solutions, and then building on top of that, right? So we're reaching to a point where a lot of our earlier customers who started to just using Airflow to get the base of their BI stack down and their reporting in their ELP infrastructure, they've solved that problem and now they're moving on to things like doing machine learning with their data, because now that they've built that foundation, all the connective tissue for their data arriving on time and being orchestrated correctly is happening, they can build a layer on top of that. And it's just been really, really exciting kind of watching what customers do once they're empowered to pick all the tools that they need, tie them together in the way they need to, and really deliver real value to their business. >> Can you share some of the use cases of these customers? Because I think that's where you're starting to see the innovation. What are some of the companies that you're working with, what are they doing? >> Viraj, I'll let you take that one too. (group laughs) >> So you know, a lot of it is... It goes across the gamut, right? Because it doesn't matter what you are, what you're doing with data, it needs to be orchestrated. So there's a lot of customers using us for their ETL and ELT reporting, right? Just getting data from other disparate sources into one place and then building on top of that. Be it building dashboards, answering questions for the business, building other data products and so on and so forth. From there, these use cases evolve a lot. You do see folks doing things like fraud detection, because Airflow's orchestrating how transactions go, transactions get analyzed. They do things like analyzing marketing spend to see where your highest ROI is. And then you kind of can't not talk about all of the machine learning that goes on, right? Where customers are taking data about their own customers, kind of analyze and aggregating that at scale, and trying to automate decision making processes. So it goes from your most basic, what we call data plumbing, right? Just to make sure data's moving as needed, all the ways to your more exciting expansive use cases around automated decision making and machine learning. >> And I'd say, I mean, I'd say that's one of the things that I think gets me most excited about our future, is how critical Airflow is to all of those processes, and I think when you know a tool is valuable is when something goes wrong and one of those critical processes doesn't work. And we know that our system is so mission critical to answering basic questions about your business and the growth of your company for so many organizations that we work with. So it's, I think, one of the things that gets Viraj and I and the rest of our company up every single morning is knowing how important the work that we do for all of those use cases across industries, across company sizes, and it's really quite energizing. >> It was such a big focus this year at AWS re:Invent, the role of data. And I think one of the things that's exciting about the open AI and all the movement towards large language models is that you can integrate data into these models from outside. So you're starting to see the integration easier to deal with. Still a lot of plumbing issues. So a lot of things happening. 
So I have to ask you guys, what is the state of the data orchestration area? Is it ready for disruption? Has it already been disrupted? Would you categorize it as a new, first-inning kind of opportunity, or what's the state of the data orchestration area right now? Both technically and from a business model standpoint. How would you guys describe that state of the market? >> Yeah, I mean, I think in a lot of ways, in some ways I think we're category creating. Schedulers have been around for a long time. I released a presentation on the evolution of going from something like cron, which was built back in the 1970s. And that's a long time ago, that's 50 years ago. So the basic need to schedule and do something with your data on a schedule is not a new concept. But to our point earlier, I think everything that you need around your ecosystem, first of all, the number of data tools and developer tooling that has come out of the industry has 5X'd over the last 10 years. And so obviously as that ecosystem grows, and grows, and grows, and grows, the need for orchestration only increases. And I think, as Astronomer, we work with so many different types of companies, companies that have been around for 50 years, and companies that got started not even 12 months ago. And so I think for us it's trying to, in a way, category create and adjust sort of what we sell and the value that we can provide for companies all across that journey. There are folks who are just getting started with orchestration, and then there's folks who have such advanced use cases, 'cause they're hitting sort of a ceiling and only want to go up from there. And so I think we, as a company, care about both ends of that spectrum, and certainly want to build and continue building products for companies of all sorts, regardless of where they are on the maturity curve of data orchestration.
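Paola's cron comparison is easy to make concrete. A crontab entry can fire a script every hour, but it has no notion of dependencies between steps, retries, or backfilling missed runs, which is exactly what an orchestrator adds. The snippet below is a hedged illustration with made-up names and commands, not Astronomer's product code.

```python
# Classic cron: run a script at the top of every hour, and hope it works.
# (crontab entry, shown here as a comment)
#   0 * * * *  /opt/etl/run_pipeline.sh
#
# The orchestrator version of "every hour" adds what cron cannot express:
# explicit dependencies, retries, and backfill of missed intervals.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hourly_pipeline",                   # hypothetical
    start_date=datetime(2023, 1, 1),
    schedule_interval="0 * * * *",              # the same cron expression...
    catchup=True,                               # ...but missed runs get backfilled
    default_args={"retries": 3, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    load = BashOperator(task_id="load", bash_command="echo load")

    extract >> load   # explicit dependency: load only runs if extract succeeded
```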
So the baseline is that the open source project that is Airflow that came out of Airbnb, over five years ago at this point, has grown exponentially in users and continues to grow. And so the folks that we sell to primarily are folks who are already committed to using Apache Airflow, need data orchestration in their organization, and just want to do it better, want to do it more efficiently, want to do it without managing that infrastructure. And so our baseline proposition is for those organizations. Now to Viraj's point, obviously I think our ambitions go beyond that, both in terms of the personas that we addressed and going beyond that data engineer, but really it's to start at the baseline, as we continue to grow our our company, it's really making sure that we're adding value to folks using Airflow and help them do so in a better way, in a larger way, in a more efficient way, and that's really the crux of who we sell to. And so to answer your question on, we get a lot of inbound because they're... >> You have a built in audience. (laughs) >> The world that use it. Those are the folks who we talk to and come to our website and chat with us and get value from our content. I mean, the power of the opensource community is really just so, so big, and I think that's also one of the things that makes this job fun. >> And you guys are in a great position. Viraj, you can comment a little, get your reaction. There's been a big successful business model to starting a company around these big projects for a lot of reasons. One is open source is continuing to be great, but there's also supply chain challenges in there. There's also we want to continue more innovation and more code and keeping it free and and flowing. And then there's the commercialization of productizing it, operationalizing it. This is a huge new dynamic, I mean, in the past 5 or so years, 10 years, it's been happening all on CNCF from other areas like Apache, Linux Foundation, they're all implementing this. This is a huge opportunity for entrepreneurs to do this. >> Yeah, yeah. Open source is always going to be core to what we do, because we wouldn't exist without the open source community around us. They are huge in numbers. Oftentimes they're nameless people who are working on making something better in a way that everybody benefits from it. But open source is really hard, especially if you're a company whose core competency is running a business, right? Maybe you're running an e-commerce business, or maybe you're running, I don't know, some sort of like, any sort of business, especially if you're a company running a business, you don't really want to spend your time figuring out how to run open source software. You just want to use it, you want to use the best of it, you want to use the community around it, you want to be able to google something and get answers for it, you want the benefits of open source. You don't have the time or the resources to invest in becoming an expert in open source, right? And I think that dynamic is really what's given companies like us an ability to kind of form businesses around that in the sense that we'll make it so people get the best of both worlds. You'll get this vast open ecosystem that you can build on top of, that you can benefit from, that you can learn from. But you won't have to spend your time doing undifferentiated heavy lifting. You can do things that are just specific to your business. >> It's always been great to see that business model evolve. 
We used to debate 10 years ago: can there be another Red Hat? And we said, not really the same, but there'll be a lot of little ones that'll grow up to be big soon. Great stuff. Final question, can you guys share the history of the company? The milestones of Astronomer's journey in data orchestration? >> Yeah, we could. So yeah, I mean, I think, so Viraj and I have obviously been at Astronomer along with our other founding team and leadership folks for over five years now. And it's been such an incredible journey of learning, of hiring really amazing people, solving, again, mission critical problems for so many types of organizations. We've had some funding that has allowed us to invest in the team that we have and in the software that we have, and that's been really phenomenal. And so that investment, I think, keeps us confident, even despite these sort of macroeconomic conditions that we're finding ourselves in. And so honestly, the milestones for us are focusing on our product, focusing on our customers over the next year, focusing on that market for us that we know can get value out of what we do, and making developers' lives better, and growing the open source community and making sure that everything that we're doing makes it easier for folks to get started, to contribute to the project and to feel a part of the community that we're cultivating here. >> You guys raised a little bit of money. How much have you guys raised? >> I don't know what the total is, but it's in the ballpark of over $200 million. It feels good to... >> A little bit of capital. Got a little bit of cap to work with there. Great success. I know with the Series C financing, you guys have been down that road. So you're up and running, what's next? What are you guys looking to do? What's the big horizon look like for you from a vision standpoint, more hiring, more product, what are some of the key things you're looking at doing? >> Yeah, it's really a little of all of the above, right? Kind of one of the best and worst things about working at earlier stage startups is there's always so much to do and you often have to just kind of figure out a way to get everything done. But we're really investing in our product over the next, at least over the course of our company lifetime. And there's a lot of ways we want to make it more accessible to users, easier to get started with, easier to use, kind of on all areas there. And really, we really want to do more for the community, right, like I was saying, we wouldn't be anything without the large open source community around us. And we want to figure out ways to give back more in more creative ways, in more code-driven ways, in more kind of events and everything else that we can, to keep those folks galvanized and just keep them happy using Airflow.
It's a good time to kind of get some good interest on it, but still grow. Congratulations on all the work you guys do. We appreciate you and the open source community does, and good luck with the venture, continue to be successful, and we'll see you at the Startup Showcase. >> Thank you. >> Yeah, thanks so much, John. Appreciate it. >> Okay, that's the CUBE Conversation featuring astronomer.io, that's the website. Astronomer is doing well. Multiple rounds of funding, over 200 million in funding. Open source continues to lead the way in innovation. Great business model, good solution for the next gen cloud scale data operations, data stacks that are emerging. I'm John Furrier, your host, thanks for watching. (soft upbeat music)
Mobile World Congress Preview 2023 | Mobile World Congress 2023
(electronic music) (graphics whooshing) (graphics tinkling) >> Telecommunications is well north of a trillion-dollar business globally, that provides critical services on which virtually everyone on the planet relies. Dramatic changes are occurring in the sector, and one of the most important dimensions of this change is the underlying infrastructure that powers global telecommunications networks. Telcos have been thawing out, if you will, they're frozen infrastructure, modernizing. They're opening up, they're disaggregating their infrastructure, separating, for example, the control plane from the data plane, and adopting open standards. Telco infrastructure is becoming software-defined. And leading telcos are adopting cloud native microservices to help make developers more productive, so they can respond more quickly to market changes. They're embracing technology consumption models, and selectively leveraging the cloud where it makes sense. And these changes are being driven by market forces, the root of which stem from customer demand. So from a customer's perspective, they want services, and they want them fast. Meaning, not only at high speeds, but also they want them now. Customers want the latest, the greatest, and they want these services to be reliable and stable with high quality of service levels. And they want them to be highly cost-effective. Hello and welcome to this preview of Mobile World Congress 2023. My name is Dave Vellante, and at this year's event, theCUBE has a major presence at the show made possible by Dell Technologies, and with me to unpack the trends in telco, and look ahead to MWC23 are Dennis Hoffman, he's the Senior Vice President and General Manager of Dell's telecom business, and Aaron Chaisson, who is the Vice President of Telecom and Edge Solutions Marketing at Dell Technologies, gentlemen, welcome, thanks so much for spending some time with me. >> Thank you, Dave. >> Thanks, glad to be here. >> So, Dennis, let's start with you. Telcos in recent history have been slow to deliver and to monetize new services, and a large part because their purpose-built infrastructure could been somewhat of a barrier to responding to all these market forces. In many ways, this is what makes telecoms, really this market so exciting. So from your perspective, where is the action in this space? >> Yeah, the action Dave is kind of all over the place, partly because it's an ecosystem play. I think it's been, as you point out, the disaggregation trend has been going on for a while. The opportunity's been clear, but it has taken a few years to get all of the vendors, and all of the components that make up a solution, as well as the operators themselves, to a point where we can start putting this stuff together, and actually achieving some of the promise. >> So Aaron, for those who might not be as familiar with Dell's a activities in this area, here we are just ahead of Mobile World Congress, it's the largest event for telecoms, what should people know about Dell? And what's the key message to this industry? >> Sure, yeah, I think everybody knows that there's a lot of innovation that's been happening in the industry of late. One of the major trends that we're seeing is that shift from more of a vertically-integrated technology stack, to more of a disaggregated set of solutions, and that trend has actually created a ton of innovation that's happening across the industry, or along technology vendors and providers, the telecoms themselves. 
And so, one of the things that Dell's really looking to do is, as Dennis talked about, is build out a really strong ecosystem of partners and vendors that we're working closely together to be able to collaborate on new technologies, new capabilities that are solving challenges that the networks are seeing today. Be able to create new solutions built on those in order to be able to bring new value to the industry. And then finally, we want to help both partners, as well as our CSP providers activate those changes, so that they can bring new solutions to market, to be able to serve their customers. And so, the key areas that we're really focusing on with our customers is, technologies to help modernize the network, to be able to capitalize on the value of open architectures, and bring price performance to what they're expecting, and availability that they're expecting today. And then also, partner with the lines of business to be able to take these new capabilities, produce new solutions, and then deliver new value to their customers. >> Great, thank you, Aaron. So Dennis, you and I, known you for a number of years. I've watched you, you're are a trend spotter. You're a strategic thinker. I love now the fact that you're running a business that you had to go out and analyze, and now you got to make it happen. So, how would you describe Dell's strategy in this market? >> Well, it's really two things. And I appreciate the comment, I'm not sure how much of a trend spotter I am, but I certainly enjoy, and I think I'm fascinated by what's going on in this industry right now. Our two main thrusts, Dave, are first round, trying to catalyze that ecosystem, be a force for pulling together a group of folks, vendors that have been flying in fairly loose formation for a couple of years, to deliver the kinds of solutions that move the needle forward, and produce the outcomes that our network operator customers can actually buy and consume, and deploy, and have them be supported. The other thing is, there's a couple of very key technology areas that need to be advanced here. This ends up being a much anticipated year in telecom. Because of the delivery of some open infrastructure solutions that have being developed for years. With the Intel Sapphire Rapids program coming to market, we've of course got some purpose-built solutions on top of that for telecommunications networks. Some expanded partnerships in the area of multi-cloud infrastructure. And so, I would say the second main thrust is, we've got to bring some intellectual property to the party. It's not just about pulling the ecosystem together. But those two things together really form the twin thrusts of our strategy. >> Okay, so as you point out, you obviously not going to go alone in this market, it's way too broad, there's so many routes to market, partnerships, obviously very, very important. So, can you share a little bit more about the ecosystem and partners, maybe give some examples of some of the key partners that you'd be highlighting or working with, maybe at Mobile World Congress, or other activities this year? >> Yeah, absolutely. As Aaron touched on, I'm a visual thinker. The way I think about this thing is a very, very vertical architecture is tipping sideways. It's becoming horizontal. And all of the layers of that horizontal architecture are really where the partnerships are at. So, let's start at the bottom, silicon. The silicon ecosystem is very much focused on this market. 
And producing very specific products to enable open, high performance telecom networks. That's both in the form of host processors, as well as accelerators. One layer up, of course, is the stuff that we're known for, subsystems, compute storage, the hardware infrastructure that forms the foundation for telco clouds. A layer above that, all of the cloud software layer, the virtualization and containerization software, and all of the usual suspects there, all of whom are very good partners of ours, and we're looking to expand that pretty broadly this year. And then at the top of the layer cake, all of the network functions, all of the VNF's and CNF's that were once kind of the top of proprietary stacks, that are now opening up and being delivered, as well-formed containers that can run on these clouds. So, we're focusing on all of those, if you will, product partnerships, and there is a services wrapper around all of it. The systems integration necessary to make these systems part of a carrier's network, which of course, has been running for a long time, and needs to be integrated with in a very specific way. And so, all of that, together kind of forms the ecosystem, all of those are partners, and we're really excited about being at the heart of it. >> Interesting, it's not like we've never seen this movie before, which is, it's sort of repeating itself in telco. Aaron, you heard my little intro up front about the need to modernize infrastructure, I wonder if I could touch on another major trend, which we're seeing is the cloud, and I'm talkin' about not only public, but private and hybrid cloud. The public cloud is an opportunity, but it's also a threat for telcos. Telcom providers are lookin' to the public cloud for specific use cases, you think about like bursting for an iPhone launch or whatever. But at the same time, these cloud vendors, they're sort of competing with telcos. They're providing local zones, for example, sometimes trying to do an end run on the telco connectivity services, so telecom companies, they have to find the right balance between what they own and what they rent. And I wonder if you could add some color as to what you see in the market and what Dell specifically is doing to support these trends. >> Yeah, and I think the most important thing is what we're seeing, as you said, is these aren't things that we haven't seen before. And I think that telecom is really going through their own set of cloud transformations, and so, one of the hot topics in the industry now is, what is telco cloud? And what does that look like going forward? And it's going to be, as you said, a combination of services that they offer, services that they leverage. But at the end of the day, it's going to help them modernize how they deliver telecommunication services to their customers, and then provide value added services on top of that. From a Dell perspective, we're really providing the technologies to provide the underpinnings to lay a foundation on which that network can be built, whether that's best of breed servers that are built in design for the telecom environments. Recently, we announced our Infer block program, in partnering with virtualization providers, to be able to provide engineered systems that dramatically simplify how our customers can deploy, manage, and lifecycle manage throughout day two operations, an entire cloud environment. 
And whether they're using Red Hat, whether they're using Wind River, or VMware, or other virtualization layers, they can deploy the right virtualization layer at the right part of their network to support the applications they're looking to drive. And Dell is looking to solve how they simplify and manage all of that, both from a hardware, as well as on management software perspective. So, this is really what Dell's doing to, again, partner with the broader technology community, to help make that telco cloud a reality. >> Aaron, let's stay here for a second, I'm interested in some of the use cases that you're going after with customers. You've got Edge infrastructure, remote work, 5G, where's security fit, what are the focus areas for Dell, and can we double click on that a little bit? >> Yeah, I mean, I think there's two main areas of telecommunication industry that we're talking to. One, we've really been talking about the sort of the network buyer, how do they modernize the core, the network Edge, the RAN capabilities to deliver traditional telecommunication services, and modernize that as they move into 5G and beyond. I think the other side of the business is, telecoms are really looking from a line of business perspective to figure out how do they monetize that network, and be able to deliver value added services to their enterprise customers on top of these new networks. So, you were just touching on a couple of things that are really critical. In the enterprise space, AI and IoT is driving a tremendous amount of innovation out there, and there's a need for being able to support and manage Edge compute at scale, be able to provide connectivity, like private mobility, and 4G and 5G, being able to support things like mobile workforces and client capabilities, to be able to access these devices that are around all of these Edge environments of the enterprises. And telecoms are seeing as that, as an opportunity for them to not only provide connectivity, but how do they extend their cloud out into these enterprise environments with compute, with connectivity, with client and connectivity resources, and even also provide protection for those environments as well. So, these are areas that Dell is historically very strong at. Being able to provide compute, be able to provide connectivity, and being able to provide data protection and client services, we are looking to work closely with lines of businesses to be able to develop solutions that they can bring to market in combination with us, to be able to serve their end user customers and their enterprises. So, those are really the two key areas, not only network buyer, but being able to enable the lines of business to go and capitalize on the services they're developing for their customers. >> I think that line of business aspect is key, I mean, the telcos have had to sit back and provide the plumbing, cost per bit goes down, data consumption going through the roof, all the over at the top guys have had the field day with the data, and the customer relationships, and now it's almost like the revenge (chuckles) of the telcos. Dennis, I wonder if we could talk about the future. What can we expect in the years ahead from Dell, if you break out the binoculars a little bit. >> Yeah, I think you hit it earlier. We've seen the movie before. This has happened in the IT data center. We went from proprietary vertical solutions to horizontal open systems. We went from client server to software-defined open hardware cloud native. 
And the trend is likely to be exactly that, in the telecom industry because that's what the operators want. They're not naive to what's happened in the IT data center, they all run very large data centers. And they're trying to get some of the scale economies. Some of the agility, the cost of ownership benefits for the reasons Aaron just discussed. It's clear as you point out, this industry's been really defined by the inability to stop investing, and the difficulty to monetize that investment. And I think now, everybody's looking at this 5G, and frankly, 5G plus 6G, and beyond, as the opportunity to really go get a chunk of that revenue, and Enterprise Edge is the target. >> And 5G is touching so many industries, and that kind of brings me, Aaron into Mobile World Congress. I mean, you look at the floor layout, it's amazing. You got Industry 4.0, you've got our traditional industry and telco colliding. There's public policy. So, give us a teaser to Mobile World Congress 23, what's on deck at the show from Dell? >> Yeah, we're really excited about Mobile World Congress. This, as you know, is a massive event for the industry every year. And it's really the event that the whole industry uses to kick off this coming year. So, we're going to be using this obviously to talk to our customers and our partners about what Dell's looking to do, and what we're innovating on right now, and what we're looking to partner with them around. In the front of the house, we're going to be doin', we're going to be highlighting 13 different solutions and demonstrations to be able to show our customers what we're doing today, and show them the use cases, and put into action, so they get to actually look and feel, and touch, and experience what it is that we're working around. Obviously, meetings are important, everybody knows Mobile World Congress is the place to get those meetings and kickoff for the year. So, we're going to have, we're lookin' at several hundred meetings, hundreds of meetings that we're going to be lookin' to have across the industry with our customers and partners in the broader community. And of course, we've also got technology that's going to be in a variety of different partner spaces as well. So, you can come and see us in hall three, but we're also going to have technologies, kind of spread all over the floor. And of course, there's always theCUBE. You're going to be able to see us live all four days, all day, every day. You're going to be hearing our executives, our partners, our customers, talk about what Dell is doing to innovate in the industry, and how we're looking to leverage the broader, open ecosystem to be able to transform the network, and what we're lookin' to do. So, in that space, we're going to be focusing on what we're doing from an ecosystem perspective, our infrastructure focus. We'll be talking about what we're doing to support telco cloud transformation. And then finally, as we talked about earlier, how are we helping the lines of business within our telecoms monetize the opportunity? So, these are all different things we're really excited to be focusing on, and look forward to the event next month. >> Yeah, it's going to be awesome in Barcelona at the FITA, as you say, Dell's big presence in hall three, Orange is in there, Deutsche Telecom, Intel's in hall three. VMware's there, Nokia, Vodafone, you got some great things to see there. Check that out, and of course, theCUBE, we are super excited to be collaborating with you, we got a great setup. 
We're in the walkway right between halls four and five, right across from the government of Catalonia, who are the host partners for the event, so there's going to be a ton of action there. Guys, can't wait to see you there, really appreciate your time today. >> Great, thanks. >> Alright, Mobile World Congress, theCUBE's coverage starts on February 27th right after the keynotes. So, first thing in the morning, east coast time, we'll be broadcasting is, Aaron said all week, Monday through Thursday in the show floor, check that out at thecube.net. siliconangle.com has all the written coverage, and go to dell.com, see what's happenin' there, have all the action from the event. Don't miss us, this is Dave Vellante, we'll see you there. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dennis | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Aaron | PERSON | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Aaron Chaisson | PERSON | 0.99+ |
Dennis Hoffman | PERSON | 0.99+ |
February 27th | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Orange | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Deutsche Telecom | ORGANIZATION | 0.99+ |
Monday | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
first round | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Thursday | DATE | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
next month | DATE | 0.99+ |
Telco | ORGANIZATION | 0.98+ |
13 different solutions | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Telcos | ORGANIZATION | 0.98+ |
thecube.net. | OTHER | 0.98+ |
both | QUANTITY | 0.98+ |
Mobile World Congress 23 | EVENT | 0.98+ |
this year | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
One layer | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
both partners | QUANTITY | 0.98+ |
Mobile World Congress 2023 | EVENT | 0.97+ |
one | QUANTITY | 0.97+ |
MWC23 | EVENT | 0.97+ |
twin thrusts | QUANTITY | 0.97+ |
two key areas | QUANTITY | 0.96+ |
telco | ORGANIZATION | 0.95+ |
two main thrusts | QUANTITY | 0.94+ |
five | QUANTITY | 0.93+ |
second main thrust | QUANTITY | 0.93+ |
2023 | DATE | 0.93+ |
Edge | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
a trillion-dollar | QUANTITY | 0.91+ |
Telcom | ORGANIZATION | 0.91+ |
first | QUANTITY | 0.91+ |
hall three | QUANTITY | 0.9+ |
dell.com | ORGANIZATION | 0.89+ |
Brian Stevens, Neural Magic | Cube Conversation
>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO is also a Cube alumni, Brian Stevens. Great to see you, Brian. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic, and it spun out of MIT four plus years ago, along with some people and some intellectual property. And you summarized it better than I can, because you said we're just trying to make, you know, AI that much easier. And another level of specificity around it is, you know, in the world you have a lot of data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained, and then try to make it super simple when it comes time to deploying those in production and managing them. >> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science was like the secret art of being a data scientist, and now there's democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models as some call it, and developers are getting involved. Model complexity's still there, but it's getting easier. There's almost like a democratization happening. You got complexity, you got deployment, its challenges, cost, you got developers involved. So it's like, how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So where, where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack develops. What, what are you guys doing to address this? Can you take us through and demystify this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, you know, the democratization of all of it. And that brings me all the way back to the roots of open source, right? When you think about, like, back in the day you had to build your own tech stack yourself. A lot of people probably don't remember that. And then you went, you're building, you're always starting on a body of code or a module that was out there with open source.
And I think that's what I equate to where AI has gotten to, with what you were talking about, the foundational models that didn't really exist years ago. So you really were putting the layers of your models together and the formulas, and it was a lot of heavy lifting. And so there was so much time spent on development, with far too few success cases, you know, to get into production to solve a business or technical need. But what's happening is, as these models are becoming foundational, it means people don't have to start from scratch. The avant-garde now is to start with an existing model that almost does what you want, and then apply your data set to it. So it's really the industry moving forward. And the best thing about it is open source plays a new dimension, but this time, you know, in the realm of AI. And so to us though, I've spent a career focusing on not just the technical side, but the consumption of the technology, and how it's still way too hard for somebody to actually operationalize the technology that all those vendors throw at them. So I've always been empathetic to the user around, you know, what their job is once you give them great technology. And it's still too difficult even with the foundational models, because what happens is there's really this impedance mismatch between the development of the model and then where the model has to live and run and be deployed, and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model that's already been developed and trained. And what that sparsity does is unlock things by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. And we do that in a way that preserves all the accuracy of the foundational model, as you talked about. So now all of a sudden you get this much smaller model that's just as accurate. And then the even more exciting part about it is we developed a software-based engine called DeepSparse. And what that inference runtime does is take that now-sparsified model and run it, but because you sparsified it, it only needs a fraction of the compute that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So x86 in the cloud, x86 in the data center, Arm at the edge. It's like this massive unlock that happens because you get the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure that is where all the applications are running today. >> John: I want to get into the inference piece and the DeepSparse engine you mentioned, but I first have to ask, you mentioned open source. Dave and I, with some fellow Cube alumni, were having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning models, where there's a lot of proprietary things happening and the open source movement is growing. So is there a balance there? Are they all trying to do the same thing?
Is it more like a chip, you know, silicons involved, all kinds of things going on that are really fascinating from a science. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you think had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the brain team was working on at Google and billing APIs around them on Google Cloud where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting lift off. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty for training or inference new companies. There's legacy companies that are getting into like AI now and whether it's a, you know, a CPU company that's now building specialized ASEX for training. There's new tech stacks proprietary software and there's a ton of asset service. So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early pilot of the two thousands, you know, red Hat people weren't running X 86 in enterprise back then and they thought it was a toy and they certainly weren't running open source, but you really, and it made sense that they weren't because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source and the same thing's really just starting with with AI. >> John: Yeah. And I think that's a great point before we to call that out because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been itch scratching that itch for decades. And now with cloud and custom silicon. The tech fundamental at the lower end of the stack, if you will, on the performance side is significantly more performant. It's there you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What's the steps does an enterprise take to get machine learning into their organizations? Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was source adoption or whether that was cloud adoption, it always starts usually with one person. 
And increasingly it is the CEO, which realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today and what business outcomes you can deliver, you know, if you can employ. And then there's well proven paths through that. It's just that because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project is very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burdensome on developers, there's less burdensome on, on the deployment side. And we're trying to bring that, that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as the as to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't have to require a new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is like what you hinted at before is they, they have to know the use case. They have to, in most cases, you're going to find across every industry you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models already exist. So fine tuning that existing model. That foundational model on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about you're, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you want, what you can count on is there'll always be commodity capability, commodity CPUs where you want to deploy in abundance cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into deep sparse because you mentioned that earlier. 
What inspired the creation of DeepSparse, and how does it differ from other solutions in the market that are out there? >> Brian: Sure. So where is it unique? It starts with two things. One is, what the industry's pretty good at on the optimization side is this thing called quantization, which turns big numbers into small numbers, lower precision. So a 32-bit representation of an AI weight into 8 bits. And they're good at cutting out layers, which also takes away accuracy. What we've figured out is to take the industry techniques for those that are best practice, but combine them with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great because it's made it smaller. But we've taken that further with the DeepSparse engine: when you deploy it, it looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified away. So what that's done is, it's meant that you no longer need a supercomputer to run models, because there's not nearly as much math and processing as there was before the model was optimized. So now every CPU platform out there has an enormous amount of compute, because we've sparsified the rest of it away. You can pick your laptop and you have enough compute to run state-of-the-art models. And you need a software engine to do that, because it ignores the parts of the model it doesn't need to run, which is what specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around getting the models loaded into the cache of the computer and keeping them there, never having to go back out to memory. So our techniques are both: we reduce the model size, we only run the part of the model that matters, and then we keep it all in cache. And what that does is it gets us to these very low latencies, and we're able to increase, you know, the CPU processing by an order of magnitude. >> John: Yeah. That low latency is key. And you got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on this motivation behind DeepSparse, because, you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the OS from the hardware, right, and server virtualization changed the game. >> Brian: Yeah.
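To make those mechanics concrete, here is a minimal, generic PyTorch sketch of the two techniques described above: unstructured magnitude pruning and post-training dynamic quantization. This is not Neural Magic's SparseML or DeepSparse tooling, just an illustration of what zeroing out roughly 90% of the weights and dropping to 8-bit precision mean; the toy model and the 90% ratio are placeholder assumptions.

```python
# Illustrative only: generic unstructured pruning + dynamic quantization in PyTorch.
# This is not Neural Magic's SparseML/DeepSparse tooling, just the underlying idea.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for a trained model; a real case would load a trained network.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

# Unstructured magnitude pruning: zero out the 90% smallest weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.90)
        prune.remove(module, "weight")  # make the zeros permanent in the weight tensor

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall parameter sparsity: {zeros / total:.1%}")  # weights dominate, so close to 90%

# Post-training dynamic quantization: 32-bit float weights become 8-bit integers
# for the Linear layers, shrinking the model and the math further.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```

A sparsity-aware runtime can then skip the zeroed weights entirely, which is the part that specialized dense hardware generally cannot do.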
Do we make them faster? A yes. But I think the most amazing power is that we've turned AI into a docker based microservice. And so like who in the industry wants to deploy their apps the old way on a os without virtualization, without docker, without Kubernetes, without microservices, without service mesh without serverless. You want all those tools for your apps by converting AI models. So they can be run inside a docker container with no apologies around latency and performance cause it's faster. You get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure. AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is a application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well speak, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like they've ran on fungible infrastructure that subtracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them for they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learning 'em. We've done that. So they can do that in a couple of commands and it has a simple API that they can either link to their application directly as a library to make difference calls or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that, you know, you know that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like Go learn something else, they kind of don't have to. So in a way though, it's made it. It's almost made it hard because people expect when we talk to 'em for the first time to be the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library called to make your prediction and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. 
So on a more serious note, you mentioned microservices, will DeepSparse have an API for developers? And what does that look like? What do I do? >> Brian: Yeah. >> John: Tell me what my, as a developer, what's the roadmap look like? What's the... >> Brian: Yeah, it really can go in both modes. It can go in a standalone server mode where it handles, you know, a REST API, and it can scale out with Kubernetes as the workload comes up and scale back, and, like, try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know, so with this, it scales the same way your application needs to. And then for a developer, they just pip install deepsparse, you know, one command to do the install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And the model's really already trained, but it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So AI's as simple as using any other library that developers are already using, which sounds hard to fathom because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instances, I mean, everything's out there right now. So the question is how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is it going to evolve in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, you know, when I got involved with the company, what I thought about and was reasoning about, like we all do when we want to join something full-time, I thought about it and said, where will the industry eventually get to, right? To fully realize the value of deep learning and what's plausible as it evolves. And to me, I know it's the old adage of, you know, software, its hardware, cloudy software. But it truly was like, you know, we can solve these problems in software. Like there's nothing special that's happening at the hardware layer in processing AI. The reality is that it's just early in the industry. So the view that we had was, the best place the industry will eventually get to is the liberation of being able to run AI anywhere. Like you're really not democratizing, you democratize the model, but if you can't run the model anywhere you want, because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing, if you got to go and, like, buy a cluster to run this thing on. So the democratization comes if all of a sudden that model can be consumed anywhere, on demand, without planning, without provisioning, wherever infrastructure is. And so I think that's, with or without Neural Magic, where the industry will go and will get to. I think we're the leaders in getting it there.
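Here is a rough Python sketch of that install-create-predict flow. The exact class names, task string, and the ONNX model path are assumptions drawn from the description above and Neural Magic's public packaging; check the deepsparse documentation for the version you install before relying on them.

```python
# Sketch of the "one install, two calls" flow described above.
# API names and the model path are assumptions; verify against the
# deepsparse docs for your installed version.
#
#   pip install deepsparse
from deepsparse import Pipeline

# Call 1: create the model. This loads a sparsified, quantized ONNX export
# and compiles it for the local CPU (x86 or Arm).
pipeline = Pipeline.create(
    task="text-classification",
    model_path="./sparsified-model.onnx",  # placeholder path to your exported model
)

# Call 2: make a prediction.
print(pipeline(["Running state-of-the-art models on commodity CPUs."]))

# The standalone server mode mentioned above is roughly:
#   deepsparse.server --task text-classification --model_path ./sparsified-model.onnx
# which exposes the same model behind a REST endpoint.
```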
It's right because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow, and it's still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost an ML stack, or foundational models as we talk about. So you're starting to see the formation of tooling, inference, a lot of components coming. It's almost a stack, it literally is like an operating system problem space, you know? How do you run things, how do you link things, how do you bring things together? Is that what's going on here? Is this like a data modeling operating environment, kind of Red Hat type thing going on? >> Brian: Yeah, yeah. Like, I think there is, you know, I thought about that too. And I think there is the role of, like, distribution, because the industrialization of this is not happening fast enough. Like, every customer, every user does it in their own kind of way. Everyone's a little bit of a snowflake, and I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be, and we industrialize it as long as you do it our way. The reality is technology doesn't get industrialized by one company just saying, do it our way. And so that's why we've taken the approach through open source, by saying, hey, you haven't really industrialized it if you've said, we made it simple, but you always got to run AI here. Right? You only really industrialize it if you break it down into components that are simple to use and that work integrated in the stack the way you want them to. And so to me, that first principle was getting things into microservices and Docker containers that could be run on VMware, on OpenShift, in the cloud, at the edge. And so that's the real part that we're working on. The other part, and I do agree, is I think it's going to quickly move into less about the model, less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization, liberating deployment to be anywhere. And I think the last mile, John, is going to be around the MLOps around that. Because now that we've turned it into a software problem, it's easy to think of software as kind of a point release, but that's not the reality, right? It's a life cycle. And so I think ML very much brings in, what is the lifecycle of that deployment? And, you know, you get into more interesting conversations, to be honest, once you've deployed in a Docker container, around model drift and accuracy, the dataset changes, the user changes, and how, from an ML perspective, you send signals back for retraining. And that's where I think more of the innovation's going to start to move.
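To illustrate the microservice deployment mode described a bit earlier, here is a hedged sketch of wrapping that predict call in a plain HTTP service that can be containerized and scaled like any other app. FastAPI and uvicorn are illustrative choices, not anything Neural Magic prescribes, and the pipeline creation mirrors the assumed API in the earlier sketch.

```python
# Illustrative only: exposing the predict call as a small HTTP microservice
# that can live in a container and scale with ordinary Kubernetes policies.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
pipeline = None  # loaded once at startup, reused for every request


class PredictRequest(BaseModel):
    texts: List[str]


@app.on_event("startup")
def load_model() -> None:
    # The "model create" call from the earlier sketch (placeholder path).
    global pipeline
    from deepsparse import Pipeline
    pipeline = Pipeline.create(task="text-classification",
                               model_path="./sparsified-model.onnx")


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # The "prediction" call; output serialization depends on the task's schema,
    # so it is stringified here to keep the sketch simple.
    return {"predictions": str(pipeline(req.texts))}

# Run locally with:  uvicorn app:app --host 0.0.0.0 --port 8080
```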
I want to get your thoughts because developers will set the pace. And this is something that's clear in this next wave developer productivity. They're the defacto standards bodies. They will decide what microservices check, API check. Now, skill gap is going to be a problem because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into a, like a DevOps, like just make it, get the developer out of the, the muck. So what's your view? Are we early days like that? Or what's the young kid in college studying CS or whatever degree who comes into this with, with both feet? What are they doing? >> Brian: I'll probably say like the, the non-popular answer to that. A little bit is it's happening so fast that it's going to get kind of boring fast. Meaning like, yeah, you could go to school and go to MIT, right? Sorry. Like, and you could get a hold through end like becoming a model architect, like inventing the next model, right? And the layers and combining 'em and et cetera, et cetera. And then what operators and, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right? That actually, like they're building the engines the same way. You know, I grew up as an infrastructure software developer. There's not a lot of companies that hire those anymore because they're all sitting inside of three big clouds. Yeah. Right? So you better be a good app developer, but I think what you're going to see is before you had to be everything, you had to be the, if you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing's true around is quickly exiting ML is to be able to use ML in your company, you better be like, great at every aspect of ML, including every intricacy inside of the model and every operation's doing, that's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be like cracking open these GPT models, you're going to just be pulling them off the shelf, fine tuning 'em and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry between, you know, what's the future? What's, what's the future of a, a data scientist? ML engineer researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself what an SRE is for a Google with the servers scale's huge. So yeah, it might have to, at the beginning get boring, you get obsolete quickly, but that means it's progressing. So, The scale becomes huge. And that's where I think it's going to be interesting when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And we always, and, and what I've always said, and much the, again, the distribute into my ML team is that I want every developer to be as adept at being able take advantage of ML as non ML engineer, right? It's got to be that simple. And I think, I think it's getting there. I really do. >> John: Well, Brian, great, great to have you on theCUBE here on this cube conversation. As part of the startup showcase that's coming up. You're going to be featured. Or your company would featured on the upcoming ABRA startup showcase on making machine learning easier and more affordable as more machine learning models come in. You guys got deep sparse and some great technology. 
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be come find the Neural Magic community and engage with us and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think one of the pivotal moments, NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
Brian Stevens | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
2015 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
90 | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
32 bit | QUANTITY | 0.99+ |
Neural Magic | ORGANIZATION | 0.99+ |
Brian Steve | PERSON | 0.99+ |
Neural Magic | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
two calls | QUANTITY | 0.99+ |
both things | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
first call | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
both feet | QUANTITY | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
both modes | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
80s | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
second command | QUANTITY | 0.98+ |
Breaking Analysis: Google's Point of View on Confidential Computing
>> From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show, but before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data and transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images updates different services and the entire code flow aren't directly addressed by memory encryption, rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the confidential computing consortium was established in 2019 ostensibly to accelerate the market and influence standards. 
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the the consortium is seen as limiting by AWS. This is my guess, not AWS's words, and but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with this Annapurna acquisition. This was way ahead with Arm integration and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic, Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again security or infrastructure securities that I usually own. And we are talking about encryption and when encryption and confidential computing is a part of portfolio in additional areas that I contribute together with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operate in your confidential environment to have end-to-end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions and a lot of success, we're startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we are devise Google and Google Cloud engineering and product management and tech on there, on emerging trends and technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing? From Google's perspective, how do you define it? >> Confidential computing is a tool and it's still one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. 
And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential commuting matters, because at the end of the day, it reduces more and more the customer's thresh boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way, is a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud, and specifically double finance where you are, a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but the most importantly is we mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not again the mechanism how confidential computing trying to again, execute and protect a customer's data and why it's so critically important because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud covering to offer additional stronger isolation. They called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenant that's running on the same host but also us because they don't need to worry about against threats and more malicious attempts to penetrate the environment. 
So what confidential computing is helping us to offer our customers, stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers, so tenants from us. We also writing code, we also software providers will also make mistakes or have some zero days. Sometimes again us introduced, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants and amongst those tenants, we're really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating to gather this very sensitive data knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. Operator access, yeah, maybe I trust my clouds provider, but if I can fence off your access even better, I'll sleep better at night. Separating a code from the data, everybody's, Arm, Intel, AMD, Nvidia, others, they're all doing it. I wonder if, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally. We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main property is actually preserved. Customers don't need to change the code. They can operate on those VMs exactly as they would with normal non-confidential VMs, but to give them this opportunity of lift and shift or no changing their apps and performing and having very, very, very low latency and scale as any cloud can, something that Google actually pioneer in confidential computing. I think we need to open and explain how this magic was actually done. And as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine, when the whole entire post has integrity guarantee, means nobody changing my code on the most low level of system. And we introduce this in 2017 called Titan. It was our specific ASIC, specific, again, inch by inch system on every single motherboard that we have that ensures that your low level former, your actually system code, your kernel, the most powerful system is actually proper configured and not changed, not tampered. We do it for everybody, confidential computing included. But for confidential computing, what we have to change, we bring in AMD, or again, future silicon vendors and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate integrity, not only our software and our former but also former and software of our vendors, silicon vendors. So we actually, when we booting this machine, as you can see, we validate that integrity of all of the system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD secure processor, it's special ASICs, best specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes or every single worker thread in our Hadoop or Spark capability. We offer all of that. 
And those keys are not available to us. It's the best keys ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing provides so revolutionary technology, us cloud providers, who don't have access to the keys. They sitting in the hardware and they head to memory controller. And it means when hypervisors that also know about these wonderful things saying I need to get access to the memories that this particular VM trying to get access to, they do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but the most importantly, in hardware not exportable. And it means now you would be able to have this very interesting role that customers or cloud providers will not be able to get access to your memory. And what we do, again, as you can see our customers don't need to change their applications, their VMs are running exactly as it should run and what you're running in VM, you actually see your memory in clear, it's not encrypted, but God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, they would not be able to do it. Now you'll see cyber and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified. And OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you, as customer, can verify. But the most interesting thing, I guess, how to ensure the super performance of this environment because you can imagine, Dave, that encrypting and it's additional performance, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, the narrative on this as well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that recovered with Nelly, that it is. Confidential computing actually ensures that the applications and data internals remain secret, right? The code is actually looking at the data, the only the memory is decrypting the data with a key that is ephemeral and per VM and generated on demand. Then you have the second point where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. 
So the application, the workload as we call it, that is processing the data, it's also, it has not been tampered and preserves integrity. I would also say that this is all verifiable. So you have attestation and these attestation actually generates a log trail and the log trail guarantees that, provides a proof that it was preserved. And I think that the offer's also a guarantee of what we call ceiling, this idea that the secrets have been preserved and not tampered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications, it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way. And it's very difficult and definitely complicated world because to be able to provide these guarantees, actually a lot of work was done by community. Google is very much operate in open, so again, our operating system, we working with operating system repository OSs, OS vendors to ensure that all capabilities that we need is part of the kernels, are part of the releases and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors a kernel, host kernel to support this capability and it means working this community to ensure that all of those patches are there. We also worked with every single silicon vendor as you've seen, and that's what I probably feel that Google contributed quite a bit in this whole, we moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed, Intel is pulling the lead and also announcing their trusted domain extension, very similar architecture. And no surprise, it's, again, a lot of work done with our partners to, again, convince, work with them and make this capability available. The same with Arm this year, actually last year, Arm announced their future design for confidential computing. It's called Confidential Computing Architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secret with them. So we coming as a community and we have this attestation sig, the, again, the community based systems that we want to build and influence and work with Arm and every other cloud providers to ensure that we can interrupt and it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers way. 
And to do it, we need to continue what we are doing, working open, again, and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty because when you think about data sharing, you think about data sharing across the ecosystem and different regions and then of course data sovereignty comes up. Typically public policy lags, the technology industry and sometimes is problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability, that you can actually survive if you are untethered to the cloud and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing, it typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality and integrity and availability of the data, which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, is about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and login accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data and the code. And that's similar because with data sovereignty we care about whether it resides, where, who is operating on the data. 
But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in 23 and what's the maturity curve look like, this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years, as I started, it'll become utility. It'll become TLS as of, again, 10 years ago we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do and it's become ubiquity. It's exactly where confidential computing is getting and heading, I don't know we deserve yet. It'll take a few years of maturity for us, but we will be there. >> Thank you. And Patricia, what's your prediction? >> I will double that and say, hey, in the future, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes evermore top of mind with sovereign states and also for multi national organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I say, mode of operation. I like to compare that today is inconceivable. If we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to be alive that we had data at rest that was not encrypted, data in transit that was not encrypted, and I think that will be inconceivable at some point in the near future that to have unencrypted data while in use. >> And plus I think the beauty of the this industry is because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. 
I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition, in our view, will moderate price hikes. And at the end of the day, this is under the covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio, Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. Does some great editing for us, thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or dm me @DVellante. And you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)
Breaking Analysis: Google's PoV on Confidential Computing
>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security, by providing encrypted computation on sensitive data and isolating data, and apps that are fenced off enclave during processing. The concept of, I got to start over. I fucked that up, I'm sorry. That's not right, what I said was not right. On Dave in five, four, three. Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data from apps and a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing, I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system, ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Bronco, sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign from memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. 
There has been a lack of standardization and interoperability between different confidential computing approaches. But the confidential computing consortium was established in 2019 ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got to lead with this Annapurna acquisition. It was way ahead with ARM integration, and so it's probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names, including Aem, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again, security or infrastructure securities that I usually own. And we are talking about encryption, end-to-end encryption, and confidential computing is a part of portfolio. Additional areas that I contribute to get with my team to Google and our customers is secure software supply chain because you need to trust your software. Is it operate in your confidential environment to have end-to-end security, about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short in Google Cloud. And we are a global team, we include former CTOs like myself and senior technologies from large corporations, institutions and a lot of success for startups as well. And we have two main goals, first, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we advice Google and Google Cloud Engineering, product management on emerging trends and technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers in the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool and one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. 
But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they run them. And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end-to-end protection of our customer's data when they bring the workloads and data to cloud thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters because at the end of the day, it reduces more and more the customer's thrush boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way is a natural progression that you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest. Now, we are also encrypting data while in the use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused but very beneficial for highly regulated industries, it applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud and specifically double finance where a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting and I want to understand that a little bit more but I got to push you a little bit on this, Nellie if I can, because there's a narrative out there that says confidential computing is a marketing ploy I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree as you can imagine Dave, with this statement. But the most importantly is we mixing a multiple concepts I guess, and exactly as Patricia said, we need to look at the end-to-end story, not again, is a mechanism. How confidential computing trying to execute and protect customer's data and why it's so critically important. Because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud offering to offer additional stronger isolation, they called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenants running on the same host but also us because they don't need to worry about against rats and more malicious attempts to penetrate the environment. 
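Patricia's double-finance example earlier in this exchange, two lenders discovering that the same boat or house has been pledged twice without opening their loan books to each other, is exactly the kind of joint computation confidential computing is meant to host. The sketch below is only a rough illustration: the salted-hash matching and every name in it are invented, and a production design would run the matching inside an attested confidential environment with far more careful cryptography.

```python
# Rough sketch: two lenders detect the same asset financed twice without
# revealing their full loan books. The scheme is invented and simplified;
# in practice the matching would run inside an attested confidential
# environment agreed on by both parties.
import hashlib

SHARED_SALT = b"agreed-out-of-band-by-both-banks"

def blind(asset_ids):
    # Each bank submits only salted hashes of the asset identifiers it holds.
    return {hashlib.sha256(SHARED_SALT + a.encode()).hexdigest() for a in asset_ids}

def double_finance_check(bank_a_blinded, bank_b_blinded):
    # The confidential workload reports only the overlap, never either
    # bank's full list.
    return bank_a_blinded & bank_b_blinded

bank_a = blind(["VIN-1HGCM82633A004352", "BOAT-REG-778812"])
bank_b = blind(["BOAT-REG-778812", "VIN-2T1BURHE5JC970114"])
print(f"assets financed at both lenders: {len(double_finance_check(bank_a, bank_b))}")
```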
So what confidential computing is helping us to offer our customers stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers to tenants from us. We also writing code, we also software providers, we also make mistakes or have some zero days. Sometimes again us introduce, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants and among those tenants, we really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating together with very sensitive data knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud's provider, but if I can fence off your access even better, I'll sleep better at night separating a code from the data. Everybody's ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder if Nell, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main property is actually preserved. Customers don't need to change the code. They can operate in those VMs exactly as they would with normal non-confidential VMs. But to give them this opportunity of lift and shift though, no changing the apps and performing and having very, very, very low latency and scale as any cloud can, some things that Google actually pioneer in confidential computing. I think we need to open and explain how this magic was actually done, and as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine within the whole entire host has integrity guarantee, means nobody changing my code on the most low level of system, and we introduce this in 2017 called Titan. So our specific ASIC, specific inch by inch system on every single motherboard that we have that ensures that your low level former, your actually system code, your kernel, the most powerful system is actually proper configured and not changed, not tempered. We do it for everybody, confidential computing included, but for confidential computing is what we have to change, we bring in AMD or future silicon vendors and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate intelligent not only our software and our former but also former and software of our vendors, silicon vendors. So we actually, when we booting this machine as you can see, we validate that integrity of all of this system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD Secure Processor, it's special ASIC best specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes or every single worker thread in our Hadoop spark capability. 
We offer all of that and those keys are not available to us. It's the best case ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, "Where's the key? Who will have access to the key?" because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing why it's so revolutionary technology, us cloud providers who don't have access to the keys, they're sitting in the hardware and they fed to memory controller. And it means when hypervisors that also know about this wonderful things saying I need to get access to the memories, that this particular VM I'm trying to get access to. They do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but most importantly in hardware not exportable. And it means now you will be able to have this very interesting world that customers or cloud providers will not be able to get access to your memory. And what we do, again as you can see, our customers don't need to change their applications. Their VMs are running exactly as it should run. And what you've running in VM, you actually see your memory clear, it's not encrypted. But God forbid is trying somebody to do it outside of my confidential box, no, no, no, no, no, you will now be able to do it. Now, you'll see cyber test and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified and OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you as customer can verify. But the most interesting thing I guess how to ensure the super performance of this environment because you can imagine Dave, that's increasing and it's additional performance, additional time, additional latency. So we're able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that we covered with Nelly that it is. Confidential computing actually ensures that the applications and data antennas remain secret. The code is actually looking at the data, only the memory is decrypting the data with a key that is ephemeral, and per VM, and generated on demand. Then you have the second point where you have code and data integrity and now customers want to know whether their data was corrupted, tempered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tempered with. 
So the application, the workload as we call it, that is processing the data is also has not been tempered and preserves integrity. I would also say that this is all verifiable, so you have attestation and this attestation actually generates a log trail and the log trail guarantees that provides a proof that it was preserved. And I think that the offers also a guarantee of what we call sealing, this idea that the secrets have been preserved and not tempered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications is transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before, I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way, and it's very difficult and definitely complicated world because to be able to provide these guarantees, actually a lot of work was done by community. Google is very much operate and open. So again our operating system, we working this operating system repository OS is OS vendors to ensure that all capabilities that we need is part of the kernels are part of the releases and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors kernel, host kernel to support this capability and it means working this community to ensure that all of those pages are there. We also worked with every single silicon vendor as you've seen, and it's what I probably feel that Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed Intel is following the lead and also announcing a trusted domain extension, very similar architecture and no surprise, it's a lot of work done with our partners to convince work with them and make this capability available. The same with ARM this year, actually last year, ARM announced future design for confidential computing, it's called confidential computing architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing, for example, simply to mention, to ensure interop as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you sharing your sensitive data workloads or secret with them. So we coming as a community and we have this at Station Sig, the community-based systems that we want to build, and influence, and work with ARM and every other cloud providers to ensure that they can interop. And it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers really. 
And to do it, we need to continue what we are doing, working open and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what what we've wanted to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty because when you think about data sharing, you think about data sharing across the ecosystem in different regions and then of course data sovereignty comes up, typically public policy, lags, the technology industry and sometimes it's problematic. I know there's a lot of discussions about exceptions but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty, you got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it at all, that's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability that you can actually survive if you are untethered to the cloud and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing need to typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection, we want to ensure the confidentiality, and integrity, and availability of the data, which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here Dave, is about what happens to the data when I give you access to my data, and this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and logging accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code. 
And that's similar because with data sovereignty, we care about whether it resides, who is operating on the data, but the moment that the data is being processed, I need to trust that the processing of the data we abide by user's control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement. Now, the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is in cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user's control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23 and what's the maturity curve look like this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years as I started, it will become utility, it will become TLS. As of freakin' 10 years ago, we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do, and it's become ubiquity. It's exactly where our confidential computing is heeding and heading, I don't know we deserve yet. It'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double that and say, hey, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states and also for multinational organizations, and for organizations that want to collaborate with each other, confidential computing will become the norm, it will become the default, if I say mode of operation. I like to compare that today is inconceivable if we talk to the young technologists, it's inconceivable to think that at some point in history and I happen to be alive, that we had data at rest that was non-encrypted, data in transit that was not encrypted. And I think that we'll be inconceivable at some point in the near future that to have unencrypted data while we use. >> You know, and plus I think the beauty of the this industry is because there's so much competition, this essentially comes for free. 
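Patricia's pairing of confidential computing with policy enforcement, where data is released only to a cryptographically verified workload and only for the purposes written into the data-sharing contract, can be sketched as a simple gate. This is a conceptual outline, not IDSA or Gaia-X machinery; the contract fields, measurement value, and function name are invented for illustration.

```python
# Conceptual gate combining workload attestation with a data-sharing contract.
# Field names and logic are invented for illustration; a real deployment
# would use hardware-backed attestation verification and a policy engine.
APPROVED_WORKLOAD_MEASUREMENT = "9f3c...expected-hash-of-the-agreed-workload"

CONTRACT = {
    "data_owner": "provider-co",
    "data_receiver": "processor-co",
    "allowed_purposes": {"fraud-detection"},
    "requires_confidential_computing": True,
}

def release_data_key(attested_measurement: str, attestation_verified: bool,
                     purpose: str) -> bool:
    if CONTRACT["requires_confidential_computing"] and not attestation_verified:
        return False   # not running in a verified confidential environment
    if attested_measurement != APPROVED_WORKLOAD_MEASUREMENT:
        return False   # a different workload is asking for the data
    if purpose not in CONTRACT["allowed_purposes"]:
        return False   # outside what the contract permits
    return True

print(release_data_key(APPROVED_WORKLOAD_MEASUREMENT, True, "fraud-detection"))  # True
print(release_data_key(APPROVED_WORKLOAD_MEASUREMENT, True, "ad-targeting"))     # False
```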
I want to thank you both for spending some time on Breaking Analysis, there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition in our view will moderate price hikes and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hoof is our editor-in-chief over at siliconangle.com, does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at D Vellante, and you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (subtle music)
theCUBE's New Analyst Talks Cloud & DevOps
(light music) >> Hi everybody. Welcome to this Cube Conversation. I'm really pleased to announce a collaboration with Rob Strechay. He's a guest cube analyst, and we'll be working together to extract the signal from the noise. Rob is a long-time product pro, working at a number of firms including AWS, HP, HPE, NetApp, Snowplow. I did a stint as an analyst at Enterprise Strategy Group. Rob, good to see you. Thanks for coming into our Marlboro Studios. >> Well, thank you for having me. It's always great to be here. >> I'm really excited about working with you. We've known each other for a long time. You've been in the Cube a bunch. You know, you're in between gigs, and I think we can have a lot of fun together. Covering events, covering trends. So. let's get into it. What's happening out there? We're sort of exited the isolation economy. Things were booming. Now, everybody's tapping the brakes. From your standpoint, what are you seeing out there? >> Yeah. I'm seeing that people are really looking how to get more out of their data. How they're bringing things together, how they're looking at the costs of Cloud, and understanding how are they building out their SaaS applications. And understanding that when they go in and actually start to use Cloud, it's not only just using the base services anymore. They're looking at, how do I use these platforms as a service? Some are easier than others, and they're trying to understand, how do I get more value out of that relationship with the Cloud? They're also consolidating the number of Clouds that they have, I would say to try to better optimize their spend, and getting better pricing for that matter. >> Are you seeing people unhook Clouds, or just reduce maybe certain Cloud activities and going maybe instead of 60/40 going 90/10? >> Correct. It's more like the 90/10 type of rule where they're starting to say, Hey I'm not going to get rid of Azure or AWS or Google. I'm going to move a portion of this over that I was using on this one service. Maybe I got a great two-year contract to start with on this platform as a service or a database as a service. I'm going to unhook from that and maybe go with an independent. Maybe with something like a Snowflake or a Databricks on top of another Cloud, so that I can consolidate down. But it also gives them more flexibility as well. >> In our last breaking analysis, Rob, we identified six factors that were reducing Cloud consumption. There were factors and customer tactics. And I want to get your take on this. So, some of the factors really, you got fewer mortgage originations. FinTech, obviously big Cloud user. Crypto, not as much activity there. Lower ad spending means less Cloud. And then one of 'em, which you kind of disagreed with was less, less analytics, you know, fewer... Less frequency of calculations. I'll come back to that. But then optimizing compute using Graviton or AMD instances moving to cheaper storage tiers. That of course makes sense. And then optimize pricing plans. Maybe going from On Demand, you know, to, you know, instead of pay by the drink, buy in volume. Okay. So, first of all, do those make sense to you with the exception? We'll come back and talk about the analytics piece. Is that what you're seeing from customers? >> Yeah, I think so. I think that was pretty much dead on with what I'm seeing from customers and the ones that I go out and talk to. A lot of times they're trying to really monetize their, you know, understand how their business utilizes these Clouds. 
And, where their spend is going in those Clouds. Can they use, you know, lower tiers of storage? Do they really need the best processors? Do they need to be using Intel or can they get away with AMD or Graviton 2 or 3? Or do they need to move in? And, I think when you look at all of these Clouds, they always have pricing curves that are arcs from the newest to the oldest stuff. And you can play games with that. And understanding how you can actually lower your costs by looking at maybe some of the older generation. Maybe your application was written 10 years ago. You don't necessarily have to be on the best, newest processor for that application per se. >> So last, I want to come back to this whole analytics piece. Last June, I think it was June, Dev Ittycheria, who's the-- I call him Dev. Spelled Dev, pronounced Dave. (chuckles softly) Same pronunciation, different spelling. Dev Ittycheria, CEO of Mongo, on the earnings call. He was getting, you know, hit. Things were starting to get a little less visible in terms of, you know, the outlook. And people were pushing him like... Because you're in the Cloud, is it easier to dial down? And he said, because we're the document database, we support transaction applications. We're less discretionary than say, analytics. Well on the Snowflake earnings call, that same month or the month after, they were all over Slootman and Scarpelli. Oh, the Mongo CEO said that they're less discretionary than analytics. And Snowflake was an interesting comment. They basically said, look, we're the Cloud. You can dial it up, you can dial it down, but the area under the curve over a period of time is going to be the same, because they get their customers to commit. What do you say? You disagreed with the notion that people are running their calculations less frequently. Is that because they're trying to do a better job of targeting customers in near real time? What are you seeing out there? >> Yeah, I think they're moving away from using people and more expensive marketing. Or, they're trying to figure out what's my Google ad spend, what's my Meta ad spend? And what they're trying to do is optimize that spend. So, what is the return on advertising, or the ROAS as they would say. And what they're looking to do is understand, okay, I have to collect these analytics that better understand where are these people coming from? How do they get to my site, to my store, to my whatever? And when they're using it, how do they they better move through that? What you're also seeing is that analytics is not only just for kind of the retail or financial services or things like that, but then they're also, you know, using that to make offers in those categories. When you move back to more, you know, take other companies that are building products and SaaS delivered products. They may actually go and use this analytics for making the product better. And one of the big reasons for that is maybe they're dialing back how many product managers they have. And they're looking to be more data driven about how they actually go and build the product out or enhance the product. So maybe they're, you know, an online video service and they want to understand why people are either using or not using the whiteboard inside the product. And they're collecting a lot of that product analytics in a big way so that they can go through that. And they're doing it in a constant manner. This first party type tracking within applications is growing rapidly by customers. 
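Rob's ROAS, return on ad spend, is simply attributed revenue divided by the spend that drove it, and the first-party event tracking he describes is what feeds the revenue side of that ratio. A minimal sketch, with hypothetical channels, spend figures, and event records:

```python
# Hypothetical ROAS calculation from first-party conversion events.
# All channel names and dollar figures are made up for illustration.
ad_spend = {"google_ads": 12_000.0, "meta_ads": 8_000.0}

conversion_events = [
    {"channel": "google_ads", "revenue": 250.0},
    {"channel": "google_ads", "revenue": 90.0},
    {"channel": "meta_ads",   "revenue": 400.0},
]

def roas_by_channel(spend, events):
    revenue = {ch: 0.0 for ch in spend}
    for e in events:
        revenue[e["channel"]] += e["revenue"]
    # ROAS = attributed revenue / ad spend (3.0 means $3 back per $1 spent)
    return {ch: revenue[ch] / spend[ch] for ch in spend}

print(roas_by_channel(ad_spend, conversion_events))
```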
>> So, let's talk about who wins in that. So, obviously the Cloud guys, AWS, Google and Azure. I want to come back and unpack that a little bit. Databricks and Snowflake, we reported on our last breaking analysis, it kind of on a collision course. You know, a couple years ago we were thinking, okay, AWS, Snowflake and Databricks, like perfect sandwich. And then of course they started to become more competitive. My sense is they still, you know, compliment each other in the field, right? But, you know, publicly, they've got bigger aspirations, they get big TAMs that they're going after. But it's interesting, the data shows that-- So, Snowflake was off the charts in terms of spending momentum and our EPR surveys. Our partner down in New York, they kind of came into line. They're both growing in terms of market presence. Databricks couldn't get to IPO. So, we don't have as much, you know, visibility on their financials. You know, Snowflake obviously highly transparent cause they're a public company. And then you got AWS, Google and Azure. And it seems like AWS appears to be more partner friendly. Microsoft, you know, depends on what market you're in. And Google wants to sell BigQuery. >> Yeah. >> So, what are you seeing in the public Cloud from a data platform perspective? >> Yeah. I think that was pretty astute in what you were talking about there, because I think of the three, Google is definitely I think a little bit behind in how they go to market with their partners. Azure's done a fantastic job of partnering with these companies to understand and even though they may have Synapse as their go-to and where they want people to go to do AI and ML. What they're looking at is, Hey, we're going to also be friendly with Snowflake. We're also going to be friendly with a Databricks. And I think that, Amazon has always been there because that's where the market has been for these developers. So, many, like Databricks' and the Snowflake's have gone there first because, you know, Databricks' case, they built out on top of S3 first. And going and using somebody's object layer other than AWS, was not as simple as you would think it would be. Moving between those. >> So, one of the financial meetups I said meetup, but the... It was either the CEO or the CFO. It was either Slootman or Scarpelli talking at, I don't know, Merrill Lynch or one of the other financial conferences said, I think it was probably their Q3 call. Snowflake said 80% of our business goes through Amazon. And he said to this audience, the next day we got a call from Microsoft. Hey, we got to do more. And, we know just from reading the financial statements that Snowflake is getting concessions from Amazon, they're buying in volume, they're renegotiating their contracts. Amazon gets it. You know, lower the price, people buy more. Long term, we're all going to make more money. Microsoft obviously wants to get into that game with Snowflake. They understand the momentum. They said Google, not so much. And I've had customers tell me that they wanted to use Google's AI with Snowflake, but they can't, they got to go to to BigQuery. So, honestly, I haven't like vetted that so. But, I think it's true. But nonetheless, it seems like Google's a little less friendly with the data platform providers. What do you think? >> Yeah, I would say so. I think this is a place that Google looks and wants to own. Is that now, are they doing the right things long term? 
I mean again, you know, you look at Google Analytics being you know, basically outlawed in five countries in the EU because of GDPR concerns, and compliance and governance of data. And I think people are looking at Google and BigQuery in general and saying, is it the best place for me to go? Is it going to be in the right places where I need it? Still, it's still one of the largest used databases out there just because it underpins a number of the Google services. So you almost get, like you were saying, forced into BigQuery sometimes, if you want to use the tech on top. >> You do strategy. >> Yeah. >> Right? You do strategy, you do messaging. Is it the right call by Google? I mean, it's not a-- I criticize Google sometimes. But, I'm not sure it's the wrong call to say, Hey, this is our ace in the hole. >> Yeah. >> We got to get people into BigQuery. Cause, first of all, BigQuery is a solid product. I mean it's Cloud native and it's, you know, by all, it gets high marks. So, why give the competition an advantage? Let's try to force people essentially into what is we think a great product and it is a great product. The flip side of that is, they're giving up some potential partner TAM and not treating the ecosystem as well as one of their major competitors. What do you do if you're in that position? >> Yeah, I think that that's a fantastic question. And the question I pose back to the companies I've worked with and worked for is, are you really looking to have vendor lock-in as your key differentiator to your service? And I think when you start to look at these companies that are moving away from BigQuery, moving to even, Databricks on top of GCS in Google, they're looking to say, okay, I can go there if I have to evacuate from GCP and go to another Cloud, I can stay on Databricks as a platform, for instance. So I think it's, people are looking at what platform as a service, database as a service they go and use. Because from a strategic perspective, they don't want that vendor locking. >> That's where Supercloud becomes interesting, right? Because, if I can run on Snowflake or Databricks, you know, across Clouds. Even Oracle, you know, they're getting into business with Microsoft. Let's talk about some of the Cloud players. So, the big three have reported. >> Right. >> We saw AWSs Cloud growth decelerated down to 20%, which is I think the lowest growth rate since they started to disclose public numbers. And they said they exited, sorry, they said January they grew at 15%. >> Yeah. >> Year on year. Now, they had some pretty tough compares. But nonetheless, 15%, wow. Azure, kind of mid thirties, and then Google, we had kind of low thirties. But, well behind in terms of size. And Google's losing probably almost $3 billion annually. But, that's not necessarily a bad thing by advocating and investing. What's happening with the Cloud? Is AWS just running into the law, large numbers? Do you think we can actually see a re-acceleration like we have in the past with AWS Cloud? Azure, we predicted is going to be 75% of AWS IAS revenues. You know, we try to estimate IAS. >> Yeah. >> Even though they don't share that with us. That's a huge milestone. You'd think-- There's some people who have, I think, Bob Evans predicted a while ago that Microsoft would surpass AWS in terms of size. You know, what do you think? >> Yeah, I think that Azure's going to keep to-- Keep growing at a pretty good clip. 
I think that for Azure, they still have really great account control, even though people like to hate Microsoft. The Microsoft sellers that are out there making those companies successful day after day have really done a good job of being in those accounts and helping people. I was recently over in the UK, and the UK market between AWS and Azure is pretty amazing, how much Azure there is. And it's growing within Europe in general. In the States, it's, you know, I think it's growing well. I think it's still growing, probably not as fast as it is outside the U.S. But you go down to someplace like Australia, it's also Azure. You hear about Azure all the time. >> Why? Is that just because of Microsoft's software estate? It's just so convenient. >> I think it has to do with, you know... and you can argue the reason they don't break out, you know, Office 365 and all of that in their numbers is because they're in all of these accounts, because the Office suite is so pervasive in there. So, they always have reasons to go back in and say, oh, by the way, you're on these old SQL licenses. Let us move you up here and we'll be able to-- We'll support you on the old version, you know, with security and all of these things, and be able to move you forward. So, they have a lot of, I guess you could say, levers to stay in those accounts and be interesting, at least as part of the Cloud estate. I think Amazon, you know, is hitting, you know, the law of large numbers. But I think that they're also going through, and I think this was seen in the layoffs that they were making, that they're looking to understand and have profitability in more of those services that they have. You know, over 350-odd services that they have. And, you know, as somebody who went there and helped to start yet another new one while I was there, and it finally went to beta back in September, you start to look at the fact that with that number of services, their own sellers don't even know all of their services. It's impossible to comprehend and sell that many things. So, I think what they're going through is really looking to rationalize a lot of what they're doing from a services perspective going forward. They're looking to focus on more profitable services and bringing those in. Because right now it's built like a layer cake, where you have, you know, S3, EBS and EC2 on the bottom of the layer cake. And then maybe you have, you're using IAM, the authorization and authentication, in there, and you have all these different services. And then you have something like EMR on top. And so, EMR has to pay for that entire layer cake just to go and compete against somebody like Mongo or something like that. So, you start to unwind the costs of that. Whereas Azure went and built basically ground-up services for the most part. And Google kind of falls somewhere in between in how they build theirs-- They've got a sort of layer cake type effect, but not as many layers, I guess you could say. >> I feel like, you know, Amazon's trying to be a platform for the ecosystem. Yes, they have their own products and they're going to sell. And that's going to drive their profitability because they don't have to split the pie. But they're taking a piece of-- They're spinning the meter, as Zeus Kerravala likes to say, every time Snowflake or Databricks or Mongo Atlas is, you know, running on their system. They take a piece of the action. Now, Microsoft does that as well.
But you look at Microsoft and security, head-to-head competitors, for example, with a CrowdStrike, or an Okta in identity. Whereas it seems like, at least for now, AWS is a more friendly place for the ecosystem. At the same time, you do a lot of business with Microsoft. >> Yeah. And I think that a lot of companies have always feared that Amazon would just throw, you know, bodies at it. And I think that people have come to the realization that a two-pizza team, as Amazon would call it, is eight people. I think that's, you know, two slices per person. I'm a little bit fat, so I don't know if that's enough. But you start to look at it and go, okay, if they're going to start out with eight engineers, if I'm a startup and they're part of my ecosystem, do I really fear them, or should I really embrace them and try to partner closer with them? And I think the smart people and the smart companies are partnering with them, because they're realizing Amazon, unless they can see, you know, a hundred-million to $500 million market, is not going to throw eight to 16 people at a problem. I think, you know, you could look at Elastic with OpenSearch and what they did there, and the licensing terms and the battle they went through. But they knew that Elastic had a huge market. Also, you had a number of ecosystem companies building on top of what is now OpenSearch, that now run on top of Amazon as well. So, I think Amazon's being pretty strategic in how they're doing it. I think some of the-- It'll be interesting. I think this year is a payoff year for the cuts that they're making to some of the services internally, to kind of, you know, figure out how do we take the fat off some of those services that-- You know, you look at Alexa. I don't know how much revenue Alexa really generates for them. But it's a means to an end for a number of different other services and partners. >> What do you make of this ChatGPT? I mean, Microsoft obviously is playing that card. You want ChatGPT in the Cloud, come to Azure. Seems like AWS has to respond. And we know Google is, you know, sharpening its knives to come up with its response. >> Yeah, I mean, Google just went and talked about Bard for the first time this week, and they're in private preview, or I guess they call it beta, but right at the moment it's only open to select AI users, which I have no idea what that means. But that's a very interesting way that they're marketing it out there. But I think that Amazon will have to respond. I think they'll be more measured than, say, what Google's doing with Bard and just throwing it out there with, hey, we're going into beta now. I think they'll look at it and see, where do we go and how do we actually integrate this in? Because they do have a lot of components of AI and ML underneath the hood that other services use. And I think that, you know, they've learned from that. And I think that they've already done a good job, especially for media and entertainment, when you start to look at some of the ways that they use it for helping do graphics and helping to do drones. I think part of their buy of iRobot was the fact that iRobot was a big user of RoboMaker, which is using different models to train those robots to go around objects and things like that, so. >> Quick touch on Kubernetes, the whole DevOps world we just covered. The Cloud Native Computing Foundation, CNCF, had its security conference up in Seattle last week.
First time they spun that out, kind of like re:Inforce, you know, how AWS spins out re:Inforce from re:Invent. Amsterdam's coming up soon, KubeCon. What should we expect? What's hot in Kube-land? >> Yeah, I think, you know, with Kube, you're going to be looking at how OpenShift keeps growing, and in that respect you get to see the momentum with people like Red Hat. You see others coming up and realizing how OpenShift has gone to market, like you were saying, partnering with those Clouds and really making it simple. I think the simplicity and the manageability of Kubernetes is going to be at the forefront. I think a lot of the investment is still going into, how do I bring observability and DevOps and AIOps and MLOps all together? And I think that's going to be a big place where people are going to be looking to see what comes out of KubeCon in Amsterdam. I think it's that manageability, ease of use. >> Well, Rob, I look forward to working with you on behalf of the whole Cube team. We're going to do more of these and go out to some shows, extract the signal from the noise. Really appreciate you coming into our studio. >> Well, thank you for having me on. Really appreciate it. >> You're really welcome. All right, keep it right there, and thanks for watching. This is Dave Vellante for the Cube. And we'll see you next time. (light music)