Image Title

Search Results for CIOS:

SiliconANGLE News | Dell Partners with Telecom and Infrastructure Players to Accelerate Adoption


 

(energetic instrumental music) >> Hey, everyone. Welcome to SiliconANGLE CUBE News here from Mobile World Congress. This is a Mobile World Congress news update. Dell in the news here partners with leading infrastructure companies, Dell Technologies, really setting up an ecosystem. Here, Dell, with leading telecom and infrastructure players accelerating the network adoption, announcing that it's launching the Dell's Open Telecom Ecosystem community. A community of multiple telecom partners and communication service providers aimed at becoming a unifying force in the telecom industry. This announcement comes just days after Dell introduced a host of new hardware, platforms designed to help the teleconference build cloud-native open radio network access, also called RAN architectures, using proprietary and sub-components for various suppliers. Dell's Open Telecom Ecosystem community has already partnered with Nokia, Qualcomm, Amdocs and Juniper Networks to create new offerings aimed at accelerating open RAN price performance for communication service providers. This includes creating a new virtual RAN offering using Open Telecom Ecosystem Labs, and as the center for testing and validation, building next-generation 5G virtualized distributed units and deploy and automated validated 5G-SA network with various partners across the ecosystem. Dell's promising that this is just the beginning of the collaboration with the telecom industry as it seeks to accelerate the adoption of 5G networking technologies and solve key industry challenges. More action's on the ground, go to thecube.net, theCUBE is broadcasting live for four days, Dave Vellante, Lisa Martin. I'm in the studios in Palo Alto bringing you the news. Lot of action happening, of course. Go to siliconangle.com to catch all the breaking news. We have a special report. We already got 10 plus stories already flowing. Probably have another 10 today. Day two tomorrow as MWC continues to power more news coverage for the edge and cloud-native technologies. (pensive ambient music)

Published Date : Feb 28 2023

SUMMARY :

and as the center for

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Lisa MartinPERSON

0.99+

NokiaORGANIZATION

0.99+

AmdocsORGANIZATION

0.99+

QualcommORGANIZATION

0.99+

DellORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

todayDATE

0.99+

Juniper NetworksORGANIZATION

0.99+

siliconangle.comOTHER

0.99+

Dell TechnologiesORGANIZATION

0.99+

10 plus storiesQUANTITY

0.99+

four daysQUANTITY

0.99+

thecube.netOTHER

0.98+

10QUANTITY

0.98+

MWCEVENT

0.97+

tomorrowDATE

0.96+

Day twoQUANTITY

0.95+

Mobile World CongressEVENT

0.95+

theCUBEORGANIZATION

0.94+

Mobile World CongressEVENT

0.83+

SiliconANGLE CUBEORGANIZATION

0.78+

OpenORGANIZATION

0.75+

SiliconANGLE NewsORGANIZATION

0.73+

Open Telecom EcosystemORGANIZATION

0.73+

Ecosystem LabsORGANIZATION

0.66+

Open Telecom EcosystemORGANIZATION

0.59+

DV trusted Infrastructure part 2 Open


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling is grown over the years causing complexity in driving up costs and increasing exposures. So the game of Whackamole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S three bucket and what the customer is responsible for, eg properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds and with different protocols. Now, that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy. >>This extends to securing the runtime, the platform, and even now containers, which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges, and I'm not saying that SecOps pros are now talented. They are. There just aren't enough of them to go around, and the adversary is also talented and very creative, and there are more and more of them every day. >>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SEC off teams. Specifically, we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via r and d and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure Made Possible by Dell Technologies and produced by the Cube. My name is Dave Ante, and I'm your host now. Previously, we looked at what trusted infrastructure means >>And the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally in Dell specifically, are adapting to these changes and what is being done to proactively address threats that are increasingly stressing security teams. Now today, we continue the discussion and look more deeply into servers networking and hyper-converged infrastructure to better understand the critical aspects of how one company Dell is securing these elements so that devs SEC op teams can focus on the myriad new attack vectors and challenges that they faced. First up is Deepak rang Garage Power Edge security product manager at Dell Technologies, and after that we're gonna bring on Mahesh Naar oim, who was a consultant in the networking product management area at Dell. And finally, we're closed with Jerome West, who is the product management security lead for HCI hyperconverged infrastructure and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program.

Published Date : Oct 5 2022

SUMMARY :

provider secures, for example, the S three bucket and what the customer is responsible But the diversity of alternatives and infrastructure implementations continues to Now, one of the very important roles that a technology vendor can play is to take how the industry generally in Dell specifically, are adapting to

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jerome WestPERSON

0.99+

DellORGANIZATION

0.99+

FirstQUANTITY

0.99+

Dave AntePERSON

0.99+

todayDATE

0.99+

second partQUANTITY

0.99+

Dell TechnologiesORGANIZATION

0.99+

Mahesh Naar oimPERSON

0.99+

oneQUANTITY

0.98+

DeepakPERSON

0.98+

bothQUANTITY

0.98+

part 2OTHER

0.97+

A Blueprint for Trusted Infrastructure Made PossibleTITLE

0.95+

HCIORGANIZATION

0.95+

single cloudQUANTITY

0.94+

CubeORGANIZATION

0.9+

WhackamoleTITLE

0.89+

one companyQUANTITY

0.85+

Power EdgeORGANIZATION

0.7+

part twoQUANTITY

0.65+

DevOpsORGANIZATION

0.6+

SecOpsTITLE

0.6+

pointQUANTITY

0.54+

DV trusted Infrastructure part 2 close


 

>> Whenever you're ready. >> Okay, I'm Dave, in five, four, three. I want to thank our guests for their contributions in helping us understand how investments by a company like Dell can both reduce the need for DevSecOp teams to worry about some of the more fundamental security issues around infrastructure, and have greater confidence in the quality, provenance and data protection designed in to core infrastructure like servers, storage, networking, and hyperconverged systems. At the end of the day, whether your workloads are in the cloud, on prem or at the edge, you are responsible for your own security but vendor R&D and vendor process must play an important role in easing the burden faced by security, devs and operation teams. And on behalf of theCUBE production, content and social teams, as well as Dell Technologies, we want to thank you for watching A Blueprint for Trusted Infrastructure. Remember, part one of this series, as well as all the videos associated with this program and of course, today's program are available on demand at thecube.net with additional coverage at siliconangle.com. And you can go to dell.com/securitysolutions, dell.com/dell.com/securitysolutions to learn more about Dell's approach to securing infrastructure and there's tons of additional resources that can help you on your journey. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. We'll see you next time.

Published Date : Oct 4 2022

SUMMARY :

in the quality, provenance

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

fiveQUANTITY

0.99+

DavePERSON

0.99+

DellORGANIZATION

0.99+

threeQUANTITY

0.99+

fourQUANTITY

0.99+

siliconangle.comOTHER

0.99+

thecube.netOTHER

0.99+

bothQUANTITY

0.99+

Dell TechnologiesORGANIZATION

0.98+

dell.com/dell.com/securitysolutionsOTHER

0.97+

A Blueprint for Trusted InfrastructureTITLE

0.95+

theCUBEORGANIZATION

0.94+

todayDATE

0.93+

tonsQUANTITY

0.83+

part 2OTHER

0.82+

dell.com/securitysolutionsOTHER

0.77+

partQUANTITY

0.57+

oneOTHER

0.49+

Dell A Blueprint for Trusted Infrastructure


 

the cyber security landscape has changed dramatically over the past 24 to 36 months rapid cloud migration has created a new layer of security defense sure but that doesn't mean csos can relax in many respects it further complicates or at least changes the ciso's scope of responsibilities in particular the threat surface has expanded and that creates more seams and cisos have to make sure their teams pick up where the hyperscaler clouds leave off application developers have become a critical execution point for cyber assurance shift left is the kind of new buzz phrase for devs but organizations still have to shield right meaning the operational teams must continue to partner with secops to make sure infrastructure is resilient so it's no wonder that in etr's latest survey of nearly 1500 cios and it buyers that business technology executives cite security as their number one priority well ahead of other critical technology initiatives including collaboration software cloud computing and analytics rounding out the top four but budgets are under pressure and csos have to prioritize it's not like they have an open checkbook they have to contend with other key initiatives like those just mentioned to secure the funding and what about zero trust can you go out and buy xero trust or is it a framework a mindset in a series of best practices applied to create a security consciousness throughout the organization can you implement zero trust in other words if a machine or human is not explicitly allowed access then access is denied can you implement that policy without constricting organizational agility the question is what's the most practical way to apply that premise and what role does infrastructure play as the enforcer how does automation play in the equation the fact is that today's approach to cyber resilient type resilience can't be an either or it has to be an and conversation meaning you have to ensure data protection while at the same time advancing the mission of the organization with as little friction as possible and don't even talk to me about the edge that's really going to keep you up at night hello and welcome to the special cube presentation a blueprint for trusted infrastructure made possible by dell technologies in this program we explore the critical role that trusted infrastructure plays in cyber security strategies how organizations should think about the infrastructure side of the cyber security equation and how dell specifically approaches securing infrastructure for your business we'll dig into what it means to transform and evolve toward a modern security infrastructure that's both trusted and agile first up are pete gear and steve kenniston they're both senior cyber security consultants at dell technologies and they're going to talk about the company's philosophy and approach to trusted infrastructure and then we're going to speak to paris arcadi who's a senior consultant for storage at dell technologies to understand where and how storage plays in this trusted infrastructure world and then finally rob emsley who heads product marketing for data protection and cyber security he's going to take a deeper dive with rob into data protection and explain how it has become a critical component of a comprehensive cyber security strategy okay let's get started pete gear steve kenniston welcome to the cube thanks for coming into the marlboro studios today great to be here dave thanks dave good to see you great to see you guys pete start by talking about the security landscape you heard my little rap up front what are you seeing i thought you wrapped it up really well and you touched on all the key points right technology is ubiquitous today it's everywhere it's no longer confined to a monolithic data center it lives at the edge it lives in front of us it lives in our pockets and smartphones along with that is data and as you said organizations are managing sometimes 10 to 20 times the amount of data that they were just five years ago and along with that cyber crime has become a very profitable enterprise in fact it's been more than 10 years since uh the nsa chief actually called cyber crime the biggest transfer of wealth in history that was 10 years ago and we've seen nothing but accelerating cyber crime and really sophistication of how those attacks are perpetrated and so the new security landscape is really more of an evolution we're finally seeing security catch up with all of the technology adoption all the build out the work from home and work from anywhere that we've seen over the last couple of years we're finally seeing organizations and really it goes beyond the i t directors it's a board level discussion today security's become a board level discussion yeah i think that's true as well it's like it used to be the security was okay the secops team you're responsible for security now you've got the developers are involved the business lines are involved it's part of onboarding for most companies you know steve this concept of zero trust it was kind of a buzzword before the pandemic and i feel like i've often said it's now become a mandate but it's it's it's still fuzzy to a lot of people how do you guys think about zero trust what does it mean to you how does it fit yeah i thought again i thought your opening was fantastic in in this whole lead into to what is zero trust it had been a buzzword for a long time and now ever since the federal government came out with their implementation or or desire to drive zero trust a lot more people are taking a lot more seriously because i don't think they've seen the government do this but ultimately let's see ultimately it's just like you said right if if you don't have trust to those particular devices uh applications or data you can't get at it the question is and and you phrase it perfectly can you implement that as well as allow the business to be as agile as it needs to be in order to be competitive because we're seeing with your whole notion around devops and the ability to kind of build make deploy build make deploy right they still need that functionality but it also needs to be trusted it needs to be secure and things can't get away from you yeah so it's interesting we attended every uh reinforce since 2019 and the narrative there is hey everything in this in the cloud is great you know and this narrative around oh security is a big problem is you know doesn't help the industry the fact is that the big hyperscalers they're not strapped for talent but csos are they don't have the the capabilities to really apply all these best practices they're they're playing whack-a-mole so they look to companies like yours to take their r your r d and bake it into security products and solutions so what are the critical aspects of the so-called dell trusted infrastructure that we should be thinking about yeah well dell trusted infrastructure for us is a way for us to describe uh the the work that we do through design development and even delivery of our it system so dell trusted infrastructure includes our storage it includes our servers our networking our data protection our hyper converged everything that infrastructure always has been it's just that today customers consume that infrastructure at the edge as a service in a multi-cloud environment i mean i view the cloud as really a way for organizations to become more agile and to become more flexible and also to control costs i don't think organizations move to the cloud or move to a multi-cloud environment to enhance security so i don't see cloud computing as a panacea for security i see it as another attack surface and another uh aspect in front that organizations and and security organizations and departments have to manage it's part of their infrastructure today whether it's in their data center in a cloud or at the edge i mean i think it's a huge point because a lot of people think oh data's in the cloud i'm good it's like steve we've talked about oh why do i have to back up my data it's in the cloud well you might have to recover it someday so i don't know if you have anything to add to that or any additional thoughts on it no i mean i think i think like what pete was saying when it comes to when it comes to all these new vectors for attack surfaces you know people did choose the cloud in order to be more agile more flexible and all that did was open up to the csos who need to pay attention to now okay where can i possibly be attacked i need to be thinking about is that secure and part of the part of that is dell now also understands and thinks about as we're building solutions is it is it a trusted development life cycle so we have our own trusted development life cycle how many times in the past did you used to hear about vendors saying you got to patch your software because of this we think about what changes to our software and what implementations and what enhancements we deliver can actually cause from a security perspective and make sure we don't give up or or have security become a whole just in order to implement a feature we got to think about those things yeah and as pete alluded to our secure supply chain so all the way through knowing what you're going to get when you actually receive it is going to be secure and not be tampered with becomes vitally important and pete and i were talking earlier when you have tens of thousands of devices that need to be delivered whether it be storage or laptops or pcs or or whatever it is you want to be you want to know that that that those devices are can be trusted okay guys maybe pete you could talk about the how dell thinks about it's its framework and its philosophy of cyber security and then specifically what dell's advantages are relative to the competition yeah definitely dave thank you so we've talked a lot about dell as a technology provider but one thing dell also is is a partner in this larger ecosystem we realize that security whether it's a zero trust paradigm or any other kind of security environment is an ecosystem uh with a lot of different vendors so we look at three areas one is protecting data in systems we know that it starts with and ends with data that helps organizations combat threats across their entire infrastructure and what it means is dell's embedding security features consistently across our portfolios of storage servers networking the second is enhancing cyber resiliency over the last decade a lot of the funding and spending has been in protecting or trying to prevent cyber threats not necessarily in responding to and recovering from threats right we call that resiliency organizations need to build resiliency across their organization so not only can they withstand a threat but they can respond recover and continue with their operations and the third is overcoming security complexity security is hard it's more difficult because of the things we've talked about about distributed data distributed technology and and attack surfaces everywhere and so we're enabling organizations to scale confidently to continue their business but know that all all the i.t decisions that they're making um have these intrinsic security features and are built and delivered in a consistent security so those are kind of the three pillars maybe we could end on what you guys see as the key differentiators that people should know about that that dell brings to the table maybe each of you could take take a shot at that yeah i think first of all from from a holistic portfolio perspective right the uh secure supply chain and the secure development life cycle permeate through everything dell does when building things so we build things with security in mind all the way from as pete mentioned from from creation to delivery we want to make sure you have that that secure device or or asset that permeates everything from servers networking storage data protection through hyper converge through everything that to me is really a key asset because that means you can you understand when you receive something it's a trusted piece of your infrastructure i think the other core component to think about and pete mentioned as dell being a partner for making sure you can deliver these things is that even though those are that's part of our framework these pillars are our framework of how we want to deliver security it's also important to understand that we are partners and that you don't need to rip and replace but as you start to put in new components you can be you can be assured that the components that you're replacing as you're evolving as you're growing as you're moving to the cloud as you're moving to a more on-prem type services or whatever that your environment is secure i think those are two key things got it okay pete bring us home yeah i think one of one of the big advantages of dell is our scope and our scale right we're a large technology vendor that's been around for decades and we develop and sell almost every piece of technology we also know that organizations are might make different decisions and so we have a large services organization with a lot of experienced services people that can help customers along their security journey depending on whatever type of infrastructure or solutions that they're looking at the other thing we do is make it very easy to consume our technology whether that's traditional on-premise in a multi-cloud environment uh or as a service and so the best of breed technology can be consumed in any variety of fashion and know that you're getting that consistent secure infrastructure that dell provides well and dell's forgot the probably top supply chain not only in the tech business but probably any business and so you can actually take take your dog food and then and allow other billionaire champagne sorry allow other people to you know share share best practices with your with your customers all right guys thanks so much for coming thank you appreciate it okay keep it right there after this short break we'll be back to drill into the storage domain you're watching a blueprint for trusted infrastructure on the cube the leader in enterprise and emerging tech coverage be right back concern over cyber attacks is now the norm for organizations of all sizes the impact of these attacks can be operationally crippling expensive and have long-term ramifications organizations have accepted the reality of not if but when from boardrooms to i.t departments and are now moving to increase their cyber security preparedness they know that security transformation is foundational to digital transformation and while no one can do it alone dell technologies can help you fortify with modern security modern security is built on three pillars protect your data and systems by modernizing your security approach with intrinsic features and hardware and processes from a provider with a holistic presence across the entire it ecosystem enhance your cyber resiliency by understanding your current level of resiliency for defending your data and preparing for business continuity and availability in the face of attacks overcome security complexity by simplifying and automating your security operations to enable scale insights and extend resources through service partnerships from advanced capabilities that intelligently scale a holistic presence throughout it and decades as a leading global technology provider we'll stop at nothing to help keep you secure okay we're back digging into trusted infrastructure with paris sarcadi he's a senior consultant for product marketing and storage at dell technologies parasaur welcome to the cube good to see you great to be with you dave yeah coming from hyderabad awesome so i really appreciate you uh coming on the program let's start with talking about your point of view on what cyber security resilience means to to dell generally but storage specifically yeah so for something like storage you know we are talking about the data layer name and if you look at cyber security it's all about securing your data applications and infrastructure it has been a very mature field at the network and application layers and there are a lot of great technologies right from you know enabling zero trust advanced authentications uh identity management systems and so on and and in fact you know with the advent of you know the the use of artificial intelligence and machine learning really these detection tools for cyber securities have really evolved in the network and the application spaces so for storage what it means is how can you bring them to the data layer right how can you bring you know the principles of zero trust to the data layer uh how can you leverage artificial intelligence and machine learning to look at you know access patterns and make intelligent decisions about maybe an indicator of a compromise and identify them ahead of time just like you know how it's happening and other ways of applications and when it comes to cyber resilience it's it's basically a strategy which assumes that a threat is imminent and it's a good assumption with the severity of the frequency of the attacks that are happening and the question is how do we fortify the infrastructure in the switch infrastructure to withstand those attacks and have a plan a response plan where we can recover the data and make sure the business continuity is not affected so that's uh really cyber security and cyber resiliency and storage layer and of course there are technologies like you know network isolation immutability and all these principles need to be applied at the storage level as well let me have a follow up on that if i may the intelligence that you talked about that ai and machine learning is that do you do you build that into the infrastructure or is that sort of a separate software module that that points at various you know infrastructure components how does that work both dave right at the data storage level um we have come with various data characteristics depending on the nature of data we developed a lot of signals to see what could be a good indicator of a compromise um and there are also additional applications like cloud iq is the best example which is like an infrastructure wide health monitoring system for dell infrastructure and now we have elevated that to include cyber security as well so these signals are being gathered at cloud iq level and other applications as well so that we can make those decisions about compromise and we can either cascade that intelligence and alert stream upstream for uh security teams um so that they can take actions in platforms like sign systems xtr systems and so on but when it comes to which layer the intelligence is it has to be at every layer where it makes sense where we have the information to make a decision and being closest to the data we have we are basically monitoring you know the various parallels data access who is accessing um are they crossing across any geo fencing uh is there any mass deletion that is happening or a mass encryption that is happening and we are able to uh detect uh those uh patterns and flag them as indicators of compromise and in allowing automated response manual control and so on for it teams yeah thank you for that explanation so at dell technologies world we were there in may it was one of the first you know live shows that that we did in the spring certainly one of the largest and i interviewed shannon champion and a huge takeaway from the storage side was the degree to which you guys emphasized security uh within the operating systems i mean really i mean powermax more than half i think of the features were security related but also the rest of the portfolio so can you talk about the the security aspects of the dell storage portfolio specifically yeah yeah so when it comes to data security and broadly data availability right in the context of cyber resiliency dell storage this you know these elements have been at the core of our um a core strength for the portfolio and the source of differentiation for the storage portfolio you know with almost decades of collective experience of building highly resilient architectures for mission critical data something like power max system which is the most secure storage platform for high-end enterprises and now with the increased focus on cyber security we are extending those core technologies of high availability and adding modern detection systems modern data isolation techniques to offer a comprehensive solution to the customer so that they don't have to piece together multiple things to ensure data security or data resiliency but a well-designed and well-architected solution by design is delivered to them to ensure cyber protection at the data layer got it um you know we were talking earlier to steve kenniston and pete gear about this notion of dell trusted infrastructure how does storage fit into that as a component of that sort of overall you know theme yeah and you know and let me say this if you could adjust because a lot of people might be skeptical that i can actually have security and at the same time not constrict my organizational agility that's old you know not an ore it's an end how do you actually do that if you could address both of those that would be great definitely so for dell trusted infrastructure cyber resiliency is a key component of that and just as i mentioned you know uh air gap isolation it really started with you know power protect cyber recovery you know that was the solution more than three years ago we launched and that was first in the industry which paved way to you know kind of data isolation being a core element of data management and uh for data infrastructure and since then we have implemented these technologies within different storage platforms as well so that customers have the flexibility depending on their data landscape they can approach they can do the right data isolation architecture right either natively from the storage platform or consolidate things into the backup platform and isolate from there and and the other key thing we focus in trusted infrastructure dell infra dell trusted infrastructure is you know the goal of simplifying security for the customers so one good example here is uh you know being able to respond to these cyber threats or indicators of compromise is one thing but an i.t security team may not be looking at the dashboard of the storage systems constantly right storage administration admins may be looking at it so how can we build this intelligence and provide this upstream platforms so that they have a single pane of glass to understand security landscape across applications across networks firewalls as well as storage infrastructure and in compute infrastructure so that's one of the key ways where how we are helping simplify the um kind of the ability to uh respond ability to detect and respond these threads uh in real time for security teams and you mentioned you know about zero trust and how it's a balance of you know not uh kind of restricting users or put heavy burden on you know multi-factor authentication and so on and this really starts with you know what we're doing is provide all the tools you know when it comes to advanced authentication uh supporting external identity management systems multi-factor authentication encryption all these things are intrinsically built into these platforms now the question is the customers are actually one of the key steps is to identify uh what are the most critical parts of their business or what are the applications uh that the most critical business operations depend on and similarly identify uh mission critical data where part of your response plan where it cannot be compromised where you need to have a way to recover once you do this identification then the level of security can be really determined uh by uh by the security teams by the infrastructure teams and you know another you know intelligence that gives a lot of flexibility uh for for even developers to do this is today we have apis um that so you can not only track these alerts at the data infrastructure level but you can use our apis to take concrete actions like blocking a certain user or increasing the level of authentication based on the threat level that has been perceived at the application layer or at the network layer so there is a lot of flexibility that is built into this by design so that depending on the criticality of the data criticality of the application number of users affected these decisions have to be made from time to time and it's as you mentioned it's it's a balance right and sometimes you know if if an organization had a recent attack you know the level of awareness is very high against cyber attacks so for a time you know these these settings may be a bit difficult to deal with but then it's a decision that has to be made by security teams as well got it so you're surfacing what may be hidden kpis that are being buried inside for instance the storage system through apis upstream into a dashboard so that somebody could you know dig into the storage tunnel extract that data and then somehow you know populate that dashboard you're saying you're automating that that that workflow that's a great example and you may have others but is that the correct understanding absolutely and it's a two-way integration let's say a detector an attack has been detected at a completely different layer right in the application layer or at a firewall we can respond to those as well so it's a two-way integration we can cascade things up as well as respond to threats that have been detected elsewhere um uh through the api that's great all right hey api for power skill is the best example for that uh excellent so thank you appreciate that give us the last word put a bow on this and and bring this segment home please absolutely so a dell storage portfolio um using advanced data isolation um with air gap having machine learning based algorithms to detect uh indicators of compromise and having rigor mechanisms with granular snapshots being able to recover data and restore applications to maintain business continuity is what we deliver to customers uh and these are areas where a lot of innovation is happening a lot of product focus as well as you know if you look at the professional services all the way from engineering to professional services the way we build these systems the way we we configure and architect these systems um cyber security and protection is a key focus uh for all these activities and dell.com securities is where you can learn a lot about these initiatives that's great thank you you know at the recent uh reinforce uh event in in boston we heard a lot uh from aws about you know detent and response and devops and machine learning and some really cool stuff we heard a little bit about ransomware but i'm glad you brought up air gaps because we heard virtually nothing in the keynotes about air gaps that's an example of where you know this the cso has to pick up from where the cloud leaves off but that was in front and so number one and number two we didn't hear a ton about how the cloud is making the life of the cso simpler and that's really my takeaway is is in part anyway your job and companies like dell so paris i really appreciate the insights thank you for coming on thecube thank you very much dave it's always great to be in these uh conversations all right keep it right there we'll be right back with rob emsley to talk about data protection strategies and what's in the dell portfolio you're watching thecube data is the currency of the global economy it has value to your organization and cyber criminals in the age of ransomware attacks companies need secure and resilient it infrastructure to safeguard their data from aggressive cyber attacks [Music] as part of the dell technologies infrastructure portfolio powerstor and powermax combine storage innovation with advanced security that adheres to stringent government regulations and corporate compliance requirements security starts with multi-factor authentication enabling only authorized admins to access your system using assigned roles tamper-proof audit logs track system usage and changes so it admins can identify suspicious activity and act with snapshot policies you can quickly automate the protection and recovery process for your data powermax secure snapshots cannot be deleted by any user prior to the retention time expiration dell technologies also make sure your data at rest stays safe with power store and powermax data encryption protects your flash drive media from unauthorized access if it's removed from the data center while adhering to stringent fips 140-2 security requirements cloud iq brings together predictive analytics anomaly detection and machine learning with proactive policy-based security assessments monitoring and alerting the result intelligent insights that help you maintain the security health status of your storage environment and if a security breach does occur power protect cyber recovery isolates critical data identifies suspicious activity and accelerates data recovery using the automated data copy feature unchangeable data is duplicated in a secure digital vault then an operational air gap isolates the vault from the production and backup environments [Music] architected with security in mind dell emc power store and powermax provides storage innovation so your data is always available and always secure wherever and whenever you need it [Music] welcome back to a blueprint for trusted infrastructure we're here with rob emsley who's the director of product marketing for data protection and cyber security rob good to see a new role yeah good to be back dave good to see you yeah it's been a while since we chatted last and you know one of the changes in in my world is that i've expanded my responsibilities beyond data protection marketing to also focus on uh cyber security marketing specifically for our infrastructure solutions group so certainly that's you know something that really has driven us to you know to come and have this conversation with you today so data protection obviously has become an increasingly important component of the cyber security space i i don't think necessarily of you know traditional backup and recovery as security it's to me it's an adjacency i know some companies have said oh yeah now we're a security company they're kind of chasing the valuation for sure bubble um dell's interesting because you you have you know data protection in the form of backup and recovery and data management but you also have security you know direct security capability so you're sort of bringing those two worlds together and it sounds like your responsibility is to to connect those those dots is that right absolutely yeah i mean i think that uh the reality is is that security is a a multi-layer discipline um i think the the days of thinking that it's one uh or another um technology that you can use or process that you can use to make your organization secure uh are long gone i mean certainly um you actually correct if you think about the backup and recovery space i mean people have been doing that for years you know certainly backup and recovery is all about the recovery it's all about getting yourself back up and running when bad things happen and one of the realities unfortunately today is that one of the worst things that can happen is cyber attacks you know ransomware malware are all things that are top of mind for all organizations today and that's why you see a lot of technology and a lot of innovation going into the backup and recovery space because if you have a copy a good copy of your data then that is really the the first place you go to recover from a cyber attack and that's why it's so important the reality is is that unfortunately the cyber criminals keep on getting smarter i don't know how it happens but one of the things that is happening is that the days of them just going after your production data are no longer the only challenge that you have they go after your your backup data as well so over the last half a decade dell technologies with its backup and recovery portfolio has introduced the concept of isolated cyber recovery vaults and that is really the you know we've had many conversations about that over the years um and that's really a big tenant of what we do in the data protection portfolio so this idea of of cyber security resilience that definition is evolving what does it mean to you yeah i think the the analyst team over at gartner they wrote a very insightful paper called you will be hacked embrace the breach and the whole basis of this analysis is so much money has been spent on prevention is that what's out of balance is the amount of budget that companies have spent on cyber resilience and cyber resilience is based upon the premise that you will be hacked you have to embrace that fact and be ready and prepared to bring yourself back into business you know and that's really where cyber resiliency is very very different than cyber security and prevention you know and i think that balance of get your security disciplines well-funded get your defenses as good as you can get them but make sure that if the inevitable happens and you find yourself compromised that you have a great recovery plan and certainly a great recovery plan is really the basis of any good solid data protection backup and recovery uh philosophy so if i had to do a swot analysis we don't have to do the wot but let's focus on the s um what would you say are dell's strengths in this you know cyber security space as it relates to data protection um one is we've been doing it a long time you know we talk a lot about dell's data protection being proven and modern you know certainly the experience that we've had over literally three decades of providing enterprise scale data protection solutions to our customers has really allowed us to have a lot of insight into what works and what doesn't as i mentioned to you one of the unique differentiators of our solution is the cyber recovery vaulting solution that we introduced a little over five years ago five six years parapatek cyber recovery is something which has become a unique capability for customers to adopt uh on top of their investment in dell technologies data protection you know the the unique elements of our solution already threefold and it's we call them the three eyes it's isolation it's immutability and it's intelligence and the the isolation part is really so important because you need to reduce the attack surface of your good known copies of data you know you need to put it in a location that the bad actors can't get to it and that really is the the the the essence of a cyber recovery vault interestingly enough you're starting to see the market throw out that word um you know from many other places but really it comes down to having a real discipline that you don't allow the security of your cyber recovery vault to be compromised insofar as allowing it to be controlled from outside of the vault you know allowing it to be controlled by your backup application our cyber recovery vaulting technology is independent of the backup infrastructure it uses it but it controls its own security and that is so so important it's like having a vault that the only way to open it is from the inside you know and think about that if you think about you know volts in banks or volts in your home normally you have a keypad on the outside think of our cyber recovery vault as having its security controlled from inside of the vault so nobody can get in nothing can get in unless it's already in and if it's already in then it's trusted exactly yeah exactly yeah so isolation is the key and then you mentioned immutability is the second piece yeah so immutability is is also something which has been around for a long time people talk about uh backup immunoability or immutable backup copies so immutability is just the the the additional um technology that allows the data that's inside of the vault to be unchangeable you know but again that immutability you know your mileage varies you know when you look across the uh the different offers that are out there in the market especially in the backup industry you make a very valid point earlier that the backup vendors in the market seems to be security washing their marketing messages i mean everybody is leaning into the ever-present danger of cyber security not a bad thing but the reality is is that you have to have the technology to back it up you know quite literally yeah no pun intended and then actually pun intended now what about the intelligence piece of it uh that's that's ai ml where does that fit for sure so the intelligence piece is delivered by um a solution called cybersense and cybersense for us is what really gives you the confidence that what you have in your cyber recovery vault is a good clean copy of data so it's looking at the backup copies that get driven into the cyber vault and it's looking for anomalies so it's not looking for signatures of malware you know that's what your antivirus software does that's what your endpoint protection software does that's on the prevention side of the equation but what we're looking for is we're looking to ensure that the data that you need when all hell breaks loose is good and that when you get a request to restore and recover your business you go right let's go and do it and you don't have any concern that what you have in the vault has been compromised so cyber sense is really a unique analytic solution in the market based upon the fact that it isn't looking at cursory indicators of of um of of of malware infection or or ransomware introduction it's doing full content analytics you know looking at you know has the data um in any way changed has it suddenly become encrypted has it suddenly become different to how it was in the previous scan so that anomaly detection is very very different it's looking for um you know like different characteristics that really are an indicator that something is going on and of course if it sees it you immediately get flagged but the good news is is that you always have in the vault the previous copy of good known data which now becomes your restore point so we're talking to rob emsley about how data protection fits into what dell calls dti dell trusted infrastructure and and i want to come back rob to this notion of and not or because i think a lot of people are skeptical like how can i have great security and not introduce friction into my organization is that an automation play how does dell tackle that problem i mean i think a lot of it is across our infrastructure is is security has to be built in i mean intrinsic security within our servers within our storage devices uh within our elements of our backup infrastructure i mean security multi-factor authentication you know elements that make the overall infrastructure secure you know we have capabilities that you know allow us to identify whether or not configurations have changed you know we'll probably be talking about that a little bit more to you later in the segment but the the essence is is um security is not a bolt-on it has to be part of the overall infrastructure and that's so true um certainly in the data protection space give us the the bottom line on on how you see dell's key differentiators maybe you could talk about dell of course always talks about its portfolio but but why should customers you know lead in to dell in in this whole cyber resilience space um you know staying on the data protection space as i mentioned the the the work we've been doing um to introduce this cyber resiliency solution for data protection is in our opinion as good as it gets you know the you know you've spoken to a number of our of our best customers whether it be bob bender from founders federal or more recently at delton allergies world you spoke to tony bryson from the town of gilbert and these are customers that we've had for many years that have implemented cyber recovery vaults and at the end of the day they can now sleep at night you know that's really the the peace of mind that they have is that the insurance that a data protection from dell cyber recovery vault a parapatex cyber recovery solution gives them you know really allows them to you know just have the assurance that they don't have to pay a ransom if they have a an insider threat issue and you know all the way down to data deletion is they know that what's in the cyber recovery vault is good and ready for them to recover from great well rob congratulations on the new scope of responsibility i like how you know your organization is expanding as the threat surface is expanding as we said data protection becoming an adjacency to security not security in and of itself a key component of a comprehensive security strategy rob emsley thank you for coming back in the cube good to see you again you too dave thanks all right in a moment i'll be back to wrap up a blueprint for trusted infrastructure you're watching the cube every day it seems there's a new headline about the devastating financial impacts or trust that's lost due to ransomware or other sophisticated cyber attacks but with our help dell technologies customers are taking action by becoming more cyber resilient and deterring attacks so they can greet students daily with a smile they're ensuring that a range of essential government services remain available 24 7 to citizens wherever they're needed from swiftly dispatching public safety personnel or sending an inspector to sign off on a homeowner's dream to protecting restoring and sustaining our precious natural resources for future generations with ever-changing cyber attacks targeting organizations in every industry our cyber resiliency solutions are right on the money providing the security and controls you need we help customers protect and isolate critical data from ransomware and other cyber threats delivering the highest data integrity to keep your doors open and ensuring that hospitals and healthcare providers have access to the data they need so patients get life-saving treatment without fail if a cyber incident does occur our intelligence analytics and responsive team are in a class by themselves helping you reliably recover your data and applications so you can quickly get your organization back up and running with dell technologies behind you you can stay ahead of cybercrime safeguarding your business and your customers vital information learn more about how dell technology's cyber resiliency solutions can provide true peace of mind for you the adversary is highly capable motivated and well equipped and is not standing still your job is to partner with technology vendors and increase the cost of the bad guys getting to your data so that their roi is reduced and they go elsewhere the growing issues around cyber security will continue to drive forward thinking in cyber resilience we heard today that it is actually possible to achieve infrastructure security while at the same time minimizing friction to enable organizations to move quickly in their digital transformations a xero trust framework must include vendor r d and innovation that builds security designs it into infrastructure products and services from the start not as a bolt-on but as a fundamental ingredient of the cloud hybrid cloud private cloud to edge operational model the bottom line is if you can't trust your infrastructure your security posture is weakened remember this program is available on demand in its entirety at thecube.net and the individual interviews are also available and you can go to dell security solutions landing page for for more information go to dell.com security solutions that's dell.com security solutions this is dave vellante thecube thanks for watching a blueprint for trusted infrastructure made possible by dell we'll see you next time

Published Date : Sep 20 2022

SUMMARY :

the degree to which you guys

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
tony brysonPERSON

0.99+

10QUANTITY

0.99+

bostonLOCATION

0.99+

hyderabadLOCATION

0.99+

steve kennistonPERSON

0.99+

second pieceQUANTITY

0.99+

rob emsleyPERSON

0.99+

two-wayQUANTITY

0.99+

rob emsleyPERSON

0.99+

dell technologiesORGANIZATION

0.99+

petePERSON

0.99+

todayDATE

0.99+

thecube.netOTHER

0.99+

dell.comORGANIZATION

0.99+

gartnerORGANIZATION

0.98+

three eyesQUANTITY

0.98+

davePERSON

0.98+

more than 10 yearsQUANTITY

0.98+

dellORGANIZATION

0.98+

three areasQUANTITY

0.98+

five years agoDATE

0.98+

two keyQUANTITY

0.98+

10 years agoDATE

0.98+

dell technologiesORGANIZATION

0.98+

bothQUANTITY

0.97+

steve kennistonPERSON

0.97+

20 timesQUANTITY

0.97+

firstQUANTITY

0.97+

thirdQUANTITY

0.97+

cybersenseORGANIZATION

0.97+

nearly 1500 ciosQUANTITY

0.96+

a lot more peopleQUANTITY

0.95+

one thingQUANTITY

0.95+

secondQUANTITY

0.95+

stevePERSON

0.94+

cloud iqTITLE

0.94+

tens of thousands of devicesQUANTITY

0.94+

pete gearPERSON

0.94+

more than three years agoDATE

0.93+

oneQUANTITY

0.93+

powermaxORGANIZATION

0.93+

two worldsQUANTITY

0.93+

2019DATE

0.92+

gilbertLOCATION

0.92+

one of the key waysQUANTITY

0.91+

DellORGANIZATION

0.91+

pandemicEVENT

0.91+

more than halfQUANTITY

0.9+

eachQUANTITY

0.9+

first placeQUANTITY

0.89+

benderPERSON

0.89+

a lot of peopleQUANTITY

0.89+

zero trustQUANTITY

0.89+

last decadeDATE

0.88+

Scott Baker, IBM Infrastructure | VMware Explore 2022


 

(upbeat music) >> Welcome back everyone to theCUBEs live coverage in San Francisco for VMware Explorer. I'm John Furrier with my host, Dave Vellante. Two sets, three days of wall to wall coverage. This is day two. We got a great guest, Scott Baker, CMO at IBM, VP of Infrastructure at IBM. Great to see you. Thanks for coming on. >> Hey, good to see you guys as well. It's always a pleasure. >> ()Good time last night at your event? >> Great time last night. >> It was really well-attended. IBM always has the best food so that was good and great props, magicians, and it was really a lot of fun, comedians. Good job. >> Yeah, I'm really glad you came on. One of the things we were chatting, before we came on camera was, how much changed. We've been covering IBM storage days, back on the Edge days, and they had the event. Storage is the center of all the conversations, cyber security- >> ()Right? >> ... But it's not just pure cyber. It's still important there. And just data and the role of multi-cloud and hybrid cloud and data and security are the two hottest areas, that I won't say unresolved, but are resolving themselves. And people are talking. It's the most highly discussed topics. >> Right. >> ()Those two areas. And it's just all on storage. >> Yeah, it sure does. And in fact, what I would even go so far as to say is, people are beginning to realize the importance that storage plays, as the data custodian for the organization. Right? Certainly you have humans that are involved in setting strategies, but ultimately whatever those policies are that get applied, have to be applied to a device that must act as a responsible custodian for the data it holds. >> So what's your role at IBM and the infrastructure team? Storage is one only one of the areas. >> ()Right. >> You're here at VMware Explore. What's going on here with IBM? Take us through what you're doing there at IBM, and then here at VMware. What's the conversations? >> Sure thing. I have the distinct pleasure to run both product marketing and strategy for our storage line. That's my primary focus, but I also have responsibility for the mainframe software, so the Z System line, as well as our Power server line, and our technical support organization, or at least the services side of our technical support organization. >> And one of the things that's going on here, lot of noise going on- >> Is that a bird flying around? >> Yeah >> We got fire trucks. What's changed? 'Cause right now with VMware, you're seeing what they're doing. They got the Platform, Under the Hood, Developer focus. It's still an OPS game. What's the relationship with VMware? What are you guys talking about here? What are some of the conversations you're having here in San Francisco? >> Right. Well, IBM has been a partner with VMware for at least the last 20 years. And VMware does, I think, a really good job about trying to create a working space for everyone to be an equal partner with them. It can be challenging too, if you want to sort of throw out your unique value to a customer. So one of the things that we've really been working on is, how do we partner much stronger? When we look at the customers that we support today, what they're looking for isn't just a solid product. They're looking for a solid ecosystem partnership. So we really lean in on that 20 years of partnership experience that we have with IBM. So one of the things that we announced was actually being one of the first VMware partners to bring both a technical innovation delivery mechanism, as well as technical services, alongside VMware technologies. I would say that was one of the first things that we really leaned in on, as we looked out at what customers are expecting from us. >> So I want to zoom out a little bit and talk about the industry. I've been following IBM since the early 1980s. It's trained in the mainframe market, and so we've seen, a lot of things you see come back to the mainframe, but we won't go there. But prior to Arvind coming on, it seemed like, okay, storage, infrastructure, yeah it's good business, and we'll let it throw off some margin. That's fine. But it's all about services and software. Okay, great. With Arvind, and obviously Red Hat, the whole focus shift to hybrid. We were talking, I think yesterday, about okay, where did we first hear hybrid? Obviously we heard that a lot from VMware. I heard it actually first, early on anyway, from IBM, talking hybrid. Some of the storage guys at the time. Okay, so now all of a sudden there's the realization that to make hybrid work, you need software and hardware working together. >> () Right. So it's now a much more fundamental part of the conversation. So when you look out, Scott, at the trends you're seeing in the market, when you talk to customers, what are you seeing and how is that informing your strategy, and how are you bringing together all the pieces? >> That's a really awesome question because it always depends on who, within the organization, you're speaking to. When you're inside the data center, when you're talking to the architects and the administrators, they understand the value in the necessity for a hybrid-cloud architecture. Something that's consistent. On The Edge, On-Prem, in the cloud. Something that allows them to expand the level of control that they have, without having to specialize on equipment and having to redo things as you move from one medium to the next. As you go upstack in that conversation, what I find really interesting is how leaders are beginning to realize that private cloud or on-prem, multi cloud, super cloud, whatever you call it, whatever's in the middle, those are just deployment mechanisms. What they're coming to understand is it's the applications and the data that's hybrid. And so what they're looking for IBM to deliver, and something that we've really invested in on the infrastructure side is, how do we create bidirectional application mobility? Making it easy for organizations, whether they're using containers, virtual machines, just bare metal, how do they move that data back and forth as they need to, and not just back and forth from on-prem to the cloud, but effectively, how do they go from cloud to cloud? >> Yeah. One of the things I noticed is your pin, says I love AI, with the I next to IBM and get all these (indistinct) in there. AI, remember the quote from IBM is, "You can't have AI without IA." Information architect. >> () Right. >> () Rob Thomas. >> Rob Thomas (indistinct) the sound bites. But that brings up the point about machine learning and some of these things that are coming down the like, how is your area devolving the smarts and the brains around leveraging the AI in the systems itself? We're hearing more and more softwares being coded into the hardware. You see Silicon advances. All this is kind of, not changing it, but bringing back the urgency of, hardware matters. >> That's right. >> () At the same time, it's still software too. >> That's right. So let's connect a couple of dots here. We talked a little bit about the importance of cyber resiliency, and let's talk about a little bit on how we use AI in that matter. So, if you look at the direct flash modules that are in the market today, or the SSDs that are in the market today, just standard-capacity drives. If you look at the flash core modules that IBM produces, we actually treat that as a computational storage offering, where you store the data, but it's got intelligence built into the processor, to offload some of the responsibilities of the controller head. The ability to do compression, single (indistinct), deduplication, you name it. But what if you can apply AI at the controller level, so that signals that are being derived by the flash core module itself, that look anomalous, can be handed up to an intelligence to say, "Hey, I'm all of a sudden getting encrypted rights from a host that I've never gotten encrypted rights for. Maybe this could be a problem." And then imagine if you connect that inferencing engine to the rest of the IBM portfolio, "Hey, Qradar. Hey IBM Guardian. What's going on on the network? Can we see some correlation here?" So what you're going to see IBM infrastructure continue to do is invest heavily into entropy and the ability to measure IO characteristics with respect to anomalous behavior and be able to report against that. And the trick here, because the array technically doesn't know if it's under attack or if the host just decided to turn on encryption, the trick here is using the IBM product relationships, and ecosystem relationships, to do correlation of data to determine what's actually happening, to reduce your false positives. >> And have that pattern of data too. It's all access to data too. Big time. >> That's right. >> And that innovation comes out of IBM R&D? Does it come out of the product group? Is it IBM research that then trickles its way in? Is it the storage innovation? Where's that come from? Where's that bubble up? That partnership? >> Well, I got to tell you, it doesn't take very long in this industry before your counterpart, your competitor, has a similar feature. Right? So we're always looking for, what's the next leg? What's the next advancement that we can make? We knew going into this process, that we had plenty of computational power that was untapped on the FPGA, the processor running on the flash core module. Right? So we thought, okay, well, what should we do next? And we thought, "Hey, why not just set this thing up to start watching IO patterns, do calculations, do trending, and report that back?" And what's great about what you brought up too, John, is that it doesn't stay on the box. We push that upstack through the AIOPS architecture. So if you're using Turbonomic, and you want to look applications stack down, to know if you've got threat potential, or your attack surface is open, you can make some changes there. If you want to look at it across your infrastructure landscape with a storage insight, you could do that. But our goal here is to begin to make the machine smarter and aware of impacts on the data, not just on the data they hold onto, but usage, to move it into the appropriate tier, different write activities or read activities or delete activities that could indicate malicious efforts that are underway, and then begin to start making more autonomous, how about managed autonomous responses? I don't want to turn this into a, oh, it's smart, just turn it on and walk away and it's good. I don't know that we'll ever get there just yet, but the important thing here is, what we're looking at is, how do we continually safeguard and protect that data? And how do we drive features in the box that remove more and more of the day to day responsibility from the administrative staff, who are technically hired really, to service and solve for bigger problems in the enterprise, not to be a specialist and have to manage one box at a time. >> Dave mentioned Arvind coming on, the new CEO of IBM, and the Red Hat acquisition and that change, I'd like to get your personal perspective, or industry perspective, so take your IBM-hat off for a second and put the Scott-experience-in-the-industry hat on, the transformation at the customer level right now is more robust, to use that word. I don't want to say chaotic, but it is chaotic. They say chaos in the cloud here at VM, a big part of their messaging, but it's changing the business model, how things are consumed. You're seeing new business models emerge. So IBM has this lot of storage old systems, you're transforming, the company's transforming. Customers are also transforming, so that's going to change how people market products. >> () Right. >> For example, we know that developers and DevOps love self-service. Why? Because they don't want to install it. Let me go faster. And they want to get rid of it, doesn't work. Storage is infrastructure and still software, so how do you see, in your mind's eye, with all your experience, the vision of how to market products that are super important, that are infrastructure products, that have to be put into play, for really new architectures that are going to transform businesses? It's not as easy as saying, "Oh, we're going to go to market and sell something." The old way. >> () Right. >> This shifting happening is, I don't think there's an answer yet, but I want to get your perspective on that. Customers want to hear the storage message, but it might not be speeds and fees. Maybe it is. Maybe it's not. Maybe it's solutions. Maybe it's security. There's multiple touch points now, that you're dealing with at IBM for the customer, without becoming just a storage thing or just- >> () Right. >> ... or just hardware. I mean, hardware does matter, but what's- >> Yeah, no, you're absolutely right, and I think what complicates that too is, if you look at the buying centers around a purchase decision, that's expanded as well, and so as you engage with a customer, you have to be sensitive to the message that you're telling, so that it touches the needs or the desires of the people that are all sitting around the table. Generally what we like to do when we step in and we engage, isn't so much to talk about the product. At some point, maybe later in the engagements, the importance of speeds, feeds, interconnectivity, et cetera, those do come up. Those are a part of the final decision, but early on it's really about outcomes. What outcomes are you delivering? This idea of being able to deliver, if you use the term zero trust or cyber-resilient storage capability as a part of a broader security architecture that you're putting into place, to help that organization, that certainly comes up. We also hear conversations with customers about, or requests from customers about, how do the parts of IBM themselves work together? Right? And I think a lot of that, again, continues to speak to what kind of outcome are you going to give to me? Here's a challenge that I have. How are you helping me overcome it? And that's a combination of IBM hardware, software, and the services side, where we really have an opportunity to stand out. But the thing that I would tell you, that's probably most important is, the engagement that we have up and down the stack in the market perspective, always starts with, what's the outcome that you're going to deliver for me? And then that drags with it the story that would be specific to the gear. >> Okay, so let's say I'm a customer, and I'm buying it to zero trust architecture, but it's going to be somewhat of a long term plan, but I have a tactical need. I'm really nervous about Ransomware, and I don't feel as though I'm prepared, and I want an outcome that protects me. What are you seeing? Are you seeing any patterns? I know it's going to vary, but are you seeing any patterns, in terms of best practice to protect me? >> Man, the first thing that we wanted to do at IBM is divorce ourselves from the company as we thought through this. And what I mean by that is, we wanted to do what's right, on day zero, for the customer. So we set back using the experience that we've been able to amass, going through various recovery operations, and helping customers get through a Ransomware attack. And we realized, "Hey. What we should offer is a free cyber resilience assessment." So we like to, from the storage side, we'd like to look at what we offer to the customer as following the NIST framework. And most vendors will really lean in hard on the response and the recovery side of that, as you should. But that means that there's four other steps that need to be addressed, and that free cyber-resilience assessment, it's a consultative engagement that we offer. What we're really looking at doing is helping you assess how vulnerable you are, how big is that attack surface? And coming out of that, we're going to give you a Vendor Agnostic Report that says here's your situation, here's your grade or your level of risk and vulnerability, and then here's a prioritized roadmap of where we would recommend that you go off and start solving to close up whatever the gaps or the risks are. Now you could say, "Hey, thanks, IBM. I appreciate that. I'm good with my storage vendor today. I'm going to go off and use it." Now, we may not get some kind of commission check. We may not sell the box. But what I do know is that you're going to walk away knowing the risks that you're in, and we're going to give you the recommendations to get started on closing those up. And that helps me sleep at night. >> That's a nice freebie. >> Yeah. >> Yeah, it really is, 'cause you guys got deep expertise in that area. So take advantage of that. >> Scott, great to have you on. Thanks for spending time out of your busy day. Final question, put a plug in for your group. What are you communicating to customers? Share with the audience here. You're here at VMware Explorer, the new rebranded- >> () Right? >> ... multi-cloud, hybrid cloud, steady state. There are three levels of transformation, virtualization, hybrid cloud, DevOps, now- >> Right? >> ... multi-cloud, so they're in chapter three of their journey- >> That's right. >> Really innovative company, like IBM, so put the plugin. What's going on in your world? Take a minute to explain what you want. >> Right on. So here we are at VMware Explorer, really excited to be here. We're showcasing two aspects of the IBM portfolio, all of the releases and announcements that we're making around the IBM cloud. In fact, you should come check out the product demonstration for the IBM Cloud Satellite. And I don't think they've coined it this, but I like to call it the VMware edition, because it has all of the VMware services and tools built into it, to make it easier to move your workloads around. We certainly have the infrastructure side on the storage, talking about how we can help organizations, not only accelerate their deployments in, let's say Tanzu or Containers, but even how we help them transform the application stack that's running on top of their virtualized environment in the most consistent and secure way possible. >> Multiple years of relationships with VMware. IBM, VMware together. Congratulations. >> () That's right. >> () Thanks for coming on. >> Hey, thanks (indistinct). Thank you very much. >> A lot more live coverage here at Moscone west. This is theCUBE. I'm John Furrier with Dave Vellante. Thanks for watching. Two more days of wall-to-wall coverage continuing here. Stay tuned. (soothing music)

Published Date : Aug 31 2022

SUMMARY :

Great to see you. Hey, good to see you guys as well. IBM always has the best One of the things we were chatting, And just data and the role of And it's just all on storage. for the data it holds. and the infrastructure team? What's the conversations? so the Z System line, as well What's the relationship with VMware? So one of the things that we announced and talk about the industry. of the conversation. and having to redo things as you move from AI, remember the quote from IBM is, but bringing back the () At the same time, that are in the market today, And have that pattern of data too. is that it doesn't stay on the box. and the Red Hat acquisition that have to be put into play, for the customer, ... or just hardware. that are all sitting around the table. and I'm buying it to that need to be addressed, expertise in that area. Scott, great to have you on. There are three levels of transformation, of their journey- Take a minute to explain what you want. because it has all of the relationships with VMware. Thank you very much. Two more days of wall-to-wall

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Dave VellantePERSON

0.99+

IBMORGANIZATION

0.99+

ScottPERSON

0.99+

VMwareORGANIZATION

0.99+

Scott BakerPERSON

0.99+

JohnPERSON

0.99+

San FranciscoLOCATION

0.99+

20 yearsQUANTITY

0.99+

Rob ThomasPERSON

0.99+

John FurrierPERSON

0.99+

yesterdayDATE

0.99+

oneQUANTITY

0.99+

John FurrierPERSON

0.99+

ArvindPERSON

0.99+

Two setsQUANTITY

0.99+

bothQUANTITY

0.99+

early 1980sDATE

0.99+

three daysQUANTITY

0.98+

OneQUANTITY

0.98+

two areasQUANTITY

0.97+

firstQUANTITY

0.97+

todayDATE

0.97+

last nightDATE

0.97+

one boxQUANTITY

0.96+

two hottest areasQUANTITY

0.94+

VMware ExplorerORGANIZATION

0.93+

first thingQUANTITY

0.93+

Red HatORGANIZATION

0.92+

VMware ExploreORGANIZATION

0.91+

chapter threeOTHER

0.91+

two aspectsQUANTITY

0.9+

Two more daysQUANTITY

0.9+

IBM InfrastructureORGANIZATION

0.89+

day twoQUANTITY

0.88+

zeroQUANTITY

0.88+

one mediumQUANTITY

0.88+

first thingsQUANTITY

0.85+

IBM R&DORGANIZATION

0.84+

TurbonomicTITLE

0.83+

Trusted Infrastructure Close


 

(theme music) (logo whooshes) >> The adversary is highly capable, motivated and well-equipped and is not standing still. Your job is to partner with technology vendors and increase the cost of the bad guys getting to your data so that their ROI is reduced and they go elsewhere. The growing issues around cybersecurity will continue to drive forward thinking in cyber resilience. We heard today that it is actually possible to achieve infrastructure security, while at the same time minimizing friction to enable organizations to move quickly in their digital transformations. A zero-trust framework must include vendor R&D and innovation that builds security, designs it into infrastructure products and services from the start, not as a bolt-on, but as a fundamental ingredient of the cloud, hybrid cloud, private cloud to edge operational model. The bottom line is, if you can't trust your infrastructure, your security, posture is weakened. Remember, this program is available on demand in its entirety at thecube.net, and the individual interviews are also available, and you can go to Dell's security solutions, a landing page, for more information. Go to dell.com/securitysolutions. That's dell.com/securitysolutions. This is Dave Vellante of theCUBE. Thanks for watching, A Blueprint for Trusted Infrastructure, made possible by Dell. We'll see ya next time. (theme music)

Published Date : Aug 4 2022

SUMMARY :

and is not standing still.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

thecube.netOTHER

0.99+

DellORGANIZATION

0.99+

todayDATE

0.98+

dell.com/securitysolutionsOTHER

0.97+

theCUBEORGANIZATION

0.96+

zeroQUANTITY

0.52+

Breaking Analysis Learnings from the hottest startups in cyber & IT infrastructure


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante as you well know by now the cloud is about shifting i.t labor to more strategic initiatives or as andy jassy laid out years ago removing the undifferentiated heavy lifting associated with deploying and managing i.t infrastructure cloud is also about changing the operating model and rapidly scaling a business operation or a company often overlooked with cloud however is the innovation piece of the puzzle a main source of that innovation is venture funded startup companies that have brilliant technologists who are mission driven and have a vision to solve really hard problems and enter a large market at scale to disrupt it hello everyone and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we're pleased to welcome a special guest and author of the elite 80. a report that details the hottest privately held cyber security and i.t infrastructure companies in the world eric suppenger is that author and joins us today from jmp securities eric welcome to the cube thanks for being here thank you very much dave i'm uh i'm looking forward to uh to having a discussion here with you yeah me too this is going to be great so let's dive right into the elite 80. first if you could tell us about jmp securities and fill us in on the report its history your approach to picking the 80 companies out of thousands of choices sure so jmp is a middle markets investment bank we're a full full-service investment bank based in san francisco we were founded in 2000 and we focus on technology health care financial services and real estate i've been with jmp since 2011. um i've uh i've i cover uh cyber security companies public companies uh i cover uh it infrastructure companies uh more broadly and um we have having been based here in san francisco i've long kept uh a good dialogue with uh private companies uh that that compete with the public companies that i cover and so um about seven years ago i i started uh developing this uh this report which is really designed to highlight uh emerging uh private companies that uh that i think are are well positioned to be leaders in their respective markets and uh and over time we've um we've built the list up to about 80 companies and uh and we publish this report every year uh it's designed to uh to keep tabs on on the companies that are doing well and uh and we rotate about uh about 15 to 20 to 25 percent of the companies uh out of the report every year either as they get acquired or they do an ipo or um uh they uh they if we think that they are slowing and others are getting a little bit more uh more exciting and you talk directly to the companies that's part of your methodology as well you do a lot of background research digging into funding but you also talk to the executives at these companies correct yes for the most part we uh we try to talk to the ceos at least the cfos the object here is to build a a relationship with these companies so that we have some good insights into uh into how they're doing and and how the market trends are evolving as they relate to those companies in particular some of the some of the dynamics that go into us selecting companies is one we do have to talk to the management teams uh two we uh we we base our decisions on who we include on how the companies are performing on how their competitors are uh are are discussing those companies their performance uh how other industry contacts talk about those companies and then we we track their hiring and uh and and how they've uh you know other metrics that we can uh we can gauge them by got it okay so i dug into the report a little bit and tried to summarize a few key takeaways so let's take a look at those and if if you allow me just set up the points and then and ask you to add some color so the first two things that really you know jumped out i want to comment on are the perspectives of the technology companies and then of course the other side is the buyers so it seems that the pandemic really got startups to sharpen their focus i remember talking to a number of vcs early on in the shutdown and they were all over their portfolio companies to reset their icp their ideal customer profile and sharpen their uvp their unique value proposition and they wanted them to do that specifically in the context of the pandemic and the new reality and then on the buy side let's face it if you weren't a digital business you were out of business so picking up on those two thoughts eric what can you share with us in terms of the findings that you have well that's that's very uh consistent with what we had found uh basically um when the pandemic first when the lockdown came uh in march we reached out to quite a few companies and industry contacts at that time feedback was uh you know it was uh it was a period of great uncertainty and a lot of a lot of budgets were were tightened pretty quickly but it didn't take very long and a lot of these companies uh you know having been uh innovation engines and and emerging players what they found was that uh the broader market quickly adopted uh digital transformation in response to the pandemic basically that was how they they uh facilitated uh keeping their their doors open so to speak and so um the ones that were able to uh to leverage uh need for emerging technologies because of an acceleration in digital transformation uh they they really stepped up and and quite a few of these companies they kept hiring they kept uh their sales uh did very well and uh and ultimately um a lot of the vcs that had been uh putting on the brakes uh they actually stepped up and uh and and continued funding uh pretty generously yeah we've got some data on that that we wanna look into so thank you for that now let's take a look at some of the the specific date of the study just break that down the elite 80 raised more than three billion dollars last year eclipsing the previous highs in your studies of 2019 and then a big portion of that capital went to pretty small number only 10 of the 80 firms and and most of that went to cyber security plays so what do you make of these numbers especially you know given your history with with this group of elite companies and the high concentration this year this past year so one of the trends that we've seen in the public in the public market or the ipo market is um companies are are waiting until they're a little bit more mature than they used to be so what we've seen is um the the funding for companies uh the the larger rounds are far larger than they used to be these companies typically are waiting until they're of size you know maybe now they're waiting to be uh 200 million uh in annual salary in annual revenues versus a 100 million before and so they are consuming quite a bit the larger rounds are are much bigger than they used to be um in the in the most recent uh report that we published we had uh one round that was over half a billion and another one that was over 400 million and if you go back just a couple of few years ago a large round was over 100 million and you didn't get too many that were over 200 million so that's that's been a distinct change and and i think that's not necessarily just a function of the pandemic but i think the pandemic caused caused some companies to kind of step up the size of their rounds uh and so there were a handful of uh very large rounds uh certainly bigger than what we've ever seen before yeah those are great observations i mean you're right it was 100 million used to be the magic number to go public and now you get so much late money coming in locking in maybe smaller gains but giving that company you know a little more time to get their act together pre-ipo let's take a look at where the money went you know talk about follow the money and eric you and your team you segmented that three billion dollars into a number of different categories as i said most of it go into cyber security uh categories like application security is assessment and risk there's endpoint endpoint boomed during the pandemic same with identity and this chart really shows those categories that you created to better understand these dynamics and sort of figure out where the money went how did you come up with these these categories and what does this data tell you so these categories were basically uh homegrown these are how i um i think of these companies um it's a little bit of uh pulling some information out of uh the likes of gartner but uh for the most part this was how i how i conceptualize the landscape uh in my mind um the interesting thing to me is you know so a lot of that data is skewed by a few large transactions so um you know if you if you think about the the the allocation of those uh those different categories and and the uh investments in those categories it's it's skewed by large transactions and what was most interesting to me was one the application security space is a space that had quite a few additional smaller rounds and i think that's one that's pretty interesting going forward and then the one that was a surprise to me more than that was the data management um outside of cyber security uh data management's a space that's getting uh a lot more attention and uh and it's getting um uh some pretty good uh growth so that's a space that we're uh we're paying some good good attention to as well yeah that's interesting i mean of course data management means a lot of different things to a lot of different people and vc's throwing money at it maybe trying to define it and then and then the the the ai ops and and the that data management piece you know took a took a portion of it but wow the the cyber guys really are are killing it and now as we mentioned ten companies sucked up the lion's share of of the funding and this next chart shows that concentration of those 10 investments so eric some big numbers here one trust secured more than a half a billion dollars four others nabbed more than a quarter billion in funding give us your thoughts on this what do you make of that high concentration well um i i think this is a function of companies that are waiting uh longer than they used to um they're these these companies are getting to be of considerable scale i mean titanium would be a good example that's a company that could have gone public years ago and uh and i don't think they're particularly eager to get out the door uh they provide liquidity to their previous investors by raising money and uh and and buying those shares back um and so they uh they basically uh just continue to uh to grow uh without the uh the burden or or the um uh the demands that being a public company create um so there's this that's that's really a function of of companies just waiting longer before they get out the door got it now here's another view of that that data the so the left side of this chart uh that we we want to show you next um gives you a sense of the size of the companies the revenue in the elite 80 and you know most of these companies have broken through the 100 million dollar revenue mark as you say uh and they're they're still private and so you can see the breakdown and then the right-hand side of the chart shows the most active investors we just pulled out those with three or more transactions and it's it's interesting to see the players there and of course you've got some strategics you got city in there you've got cisco along with a little bit of p and e private equity action maybe your thoughts on on on this data so so to give you a little flavor around the uh the size of these companies when we first started publishing this report a little bit of the goal was to try to keep those categories relatively equal and as you can see they've skewed uh far to the left uh towards the uh to the larger revenue stream you know size so that's that just goes to the point that um uh the the companies that uh you know that are getting that a lot of these private companies uh they're they're of saw considerable size before they uh they really go out the door and and i think that's a reflection of um of the caliber of uh of or the quality of investments that uh that are out there today these are companies that have built very mature businesses and they're not going into the market until um until they can demonstrate uh high confidence and uh and consistency in their performance yeah i mean you i remember when when cloudera took that massive i think it was the 750 billion a million dollar investment from uh from intel you know way back when they that bridged them to ipo and that was sort of if i recall started that that trend and then now you get a ipo last year like snowflake which is price to perfection and you got guys that really know how to do this they've done it a number of times and so it really is somewhat changed that that dynamic uh for ipos which of course came booming back it was so quiet there for so many years but let's look into these markets a bit um i want to talk about the security space and the i.t infrastructure space and here's a chart from optiv which is one of the elite 80 ironically and we've shared this with with our audience before and the point of this is that the cyber security spaces it's highly fragmented we've reported on this a lot it's got hundreds and hundreds of companies in there it's just mosaic of solutions so very complicated and bespoke sets of tooling and combine that with a lack of skilled expertise you know csos tell us the lack of talent is their biggest challenge makes it a really dynamic market and eric this is part of the reason why vcs they want in so the takeaway i get from that chart is we have a lot of um we still have a great need for best of breed um digital transformation uh cloud mobile all these trends are creating such a disruption that there's still a great opportunity for somebody that can deliver a uh you know a real best of the best of breed uh solution uh in spite of uh all the challenges that uh id it departments are having with trying to uh to meet you know security requirements and things like that uh the the world has embraced uh you know digital delivery and uh you are your success is oftentimes dependent on your your digital differentiation and if that's the case then there's always going to be opportunity for a better technology out there so that's that in the end is uh is why uh optiv has a uh a line card that's uh as as long as you can read it i'm glad you brought the point about best of breeze it's an age-old debate in the industry it's do we go best of breed or do we go you know integrated suites you know you look at a company like microsoft obviously that that works very well for them uh companies like cisco but so this next uh set of data we're gonna bring in some etr customer spending data and see where the momentum is and i think it'll really underscore the points that you're making there in terms of best of breed this chart shares a popular view that we like to to share with our community on the vertical axis is net score or that's spending velocity and the horizontal axis shows market share or pervasiveness in the data as we've said before anything above 40 percent that red line on the vertical axis is considered elevated and you can see a lot of companies in cyber security are above that mark now a couple points i want to make here before we bring eric back in first is the market it's fragmented but it's pretty large at over 100 billion dollars depending on which research firm you look at it's growing at you know the low double digits so so nice growth is putting on 10 billion dollars a year into that number and there are some big pure plays like palo alto networks and fortinet but the market includes some other large whales like cisco uh they've built up a sizeable security business microsoft microsoft's in most markets and serves its you know software customers so but you can see how crowded this market is now we've superimposed in the red recent valuations for some of the companies and and the other point we want to make is there's some big numbers here and some divergence between us eric was saying the the best of breed and the integrated suites and the pandemic as we've talked about a lot is fueled a shift in cyber strategies toward endpoint identity and cloud and you can see that in crowdstrike's 50 billion plus valuation octa another best of breed 34 billion dollars in identity they just bought off zero and paid four and a half billion dollars for auth0 to get access to the developer community z scaler at 28 billion proof point is going private at a 12 billion dollar number so you can see why vcs are pouring money into this market some really attractive valuations eric what are your thoughts on this data so my interpretation is that's that's just further validation that uh that these security markets are uh are getting disrupted and uh and the truth of the matter is there's only one um really well positioned uh platform player in there uh uh palo alto the rest of them are are platforms within their respective uh security technology space but uh you know there's there's not very many um you know broad security solution providers today and the reason for that is because we've got such a uh transformation going on uh across uh technology that the need for best of breed is uh is is getting recognized uh day in day out yeah you're right palo alto they're they csos love to work with palo alto they're kind of the high-end gold standard but and we reported last year on the divergence in valuations between fortinet and palo alto networks fortinet was doing a better job you know pivoting to the cloud we said palo alto will get its act together it did but then you see these pure play best of breeds really you know doing well so now let's take a look at the it infrastructure space and it's it's quite different in terms of the dynamics of the market so here's that same view of the etr data and we've cut it by uh three categories we cut on networking servers and storage and this is a very large market it's it's it's over 200 billion dollars but it's much more of an oligopoly in that you've got great concentration at the top you've got some really big companies like cisco and dell which is spinning out vmware so we're going to unlock you know more value of the core dell company dell's valuation is 79 billion and that includes its 80 ownership in vmware so you do the math and figure out what core dell is worth hpe is much smaller it's notable that its valuation is comparable to netapp netapp's around you know one-fifth the size of revenue-wise uh hpe now eric arista they stand out as the lone player that's having some success clearly against cisco what are your thoughts on on the infrastructure space so so a couple things i'll take away from that now first off uh you mentioned arista arista is a bit of an anomaly um a switching company you know a networking company that is in that upper echelon like you've pointed out above 40 percent it is it is unique and and basically they kind of cracked the code they figured out how to beat cisco at cisco's core competency which is traditionally switching switching and routing and they they did that by delivering a very differentiated uh uh hardware product um that that they were able to tap into some markets that uh that even cisco hasn't been able to open up and and those would be the hyperscale uh hyperscale you know hosting vendors like uh google and facebook and microsoft but i would i would put i would put arista kind of in a in a unique situation the other thing that i'll just point out that i think is an interesting takeaway from the um from the the the slide that you showed is there are some uh infrastructure or what i would consider is bordering on data management type companies i mean you look at uh rubric you look at cohesity and nutanix veeam they're they're all kind of bubbling up there and pure storage and i think that comes back to what i was mentioning earlier where there is some pretty interesting innovation going on in data management which has traditionally not had a lot of innovation so i would bet you those names would have bubbled up just in the last uh year or two where that's been a market that hasn't had a lot of innovation and and now there's some interesting things coming down the pipe you know that's interesting comments that you make in there because if you think back to sort of last decade arista obviously broke out the only two other companies in the in the core infrastructure space and this was a hardware game historically but it's obviously becoming a software game but take a look at pure storage and nutanix you can see their valuations at five billion and seven point four billion dollars respectively uh and then to your point cohesity you got them at 3.7 billion just did a recent you know round rubric 3.3 billion that's from 2019 and so you know presumably that's a higher valuation now veeam got taken out last january at five billion by uh inside capital uh and so i think they're doing very well and they're probably uh up from that and susa is going public at uh at a reported seven billion dollar valuation so quite a bit different dynamic in the infrastructure space so eric i want to bring it back to the elite 80 in in in in startups in general my first question to you is is what do you look for from successful startups to make this elite 80 list so a few factors first off uh their performance is uh is is one of the primary uh situations if it's a company that's not growing we'll we'll probably pull it from the list um i would say it is also very much a function of my perception of the quality of management uh we we do meet with all these management teams um if we feel like uh they're they're they're putting together a uh you know a um a leadership team that's gonna be around for a long time and they've got a product position that's uh pretty attractive uh those would certainly be two key aspects of what i look for beyond that uh certainly feedback that we get from competitors uh feedback that we get from industry contacts like resellers and then then i'd also just say my enthusiasm for their respective market that they're in if it's a a market that i think is is going to be difficult or flat or not very interesting then then that would certainly be a a reason to to not include them uh conversely even if it's a small company if it's if it's a sector that i think is going to be uh around for quite a while and it's very differentiated uh then we'll include um a lot of the smaller companies too well a good example that's like a weka i mean i don't want to i don't want to go into these companies but two because we believe we 80 companies are going to leave somebody else but that that's a good example of a smaller company that looks to be disruptive um how should enterprise customers the buyers do you think evaluate and filter startups you have any sense of that well um a couple things that i struggle with that that would be uh you know something that's a lot more readily available to them is uh is just the quality of the product i mean that's obviously uh why why they're looking at it but uh if it's a uh if it's a company that's got a a unique product that uh is is built uh you know that that can that can that works that would be the starting point then then beyond that it's also is it a management team is is the behavior of the company something that uh reflects a management team that's uh that's that's you know a high quality management team if they if they you know are responsive if they're following up if they're not trying to pull in business uh quickly if they're priced appropriately uh metrics like that would certainly be um key aspects that would be readily available to uh to the you know to the the buyers of technology beyond that um you know i think the viability of that market is going to be uh a key aspect as to whether or not that company is going to be around even if it's a good company if uh if it's a highly competitive uh market that's going to have some big big players that can kind of integrate it and to make it a feature across other other product lines then that's going to make it a a tough a tough road to to go for a start-up these days you know the other thing i wanted to to talk about was the risks and the rewards of working with with startup companies and i've had i've had cios and and enterprise architects tell me that they'll when when they have to do an rfp they'll pull out the gartner magic quadrant they'll always you know pick a couple in the top right just to cover their butts but they many say you know what we also pick some of those those in the challenger space because because that are that are really interesting to us and and we run them through the paces and we manage those risks we don't we don't run the company on them but it helps us find these diamonds in the rough i mean think about you know in the in this in the second part of last decade if you picked a snowflake you might have been able to get ahead of some of your competition things like data sharing or or maybe you found that that well you know what octa is going to help me with my identity in in a new way and you're going to be better prepared to be a digital business but do you have any thoughts on how uh people should manage those risks and and how they should think about the upside i don't i don't think today um a a you know a company can work today using legacy technologies i i think the risk the greater risk is falling behind from a a digital transformation perspective this this era i think the pandemic is probably the best proof point of this um you can't you can't go with just a uh a traditional legacy architecture in a in in a key aspect of your business and so the startups um i i think you've got to take the quote-unquote risk of working with a startup that's uh you know that's got a viability concern or sustainability concern uh the risk of of having a um uh an i.t infrastructure that's inadequate is uh is a far greater risk from my perspective so i think that the startups right now are are are in a very strong position and they're well funded that's the last question i wanted to talk about is how will startups kind of penetrate the enterprise in this modern era i mean you know this is really a software world and software is this sort of capital efficient business but yet you're seeing companies raise hundreds of millions of dollars i mean that's not even absurd these days you see companies go to ipo that have raised over a billion dollars and much of that if not most of it goes to promotion and go to market uh so so how maybe you could give us your perspectives on how you see startups getting into the enterprise in these sectors so i one of the really interesting things that we've seen in the last couple years is a lot of changes to sales models and and if you look at the mid market the ability to leverage viral sales models uh has been wildly successful for some companies um it's been um you know a great strategy uh there's a public company ubiquity that did a uh has built a multi-billion dollar uh you know business on on without without a sales organization so there's some pretty interesting um directions that i think sales and go to market is going to uh incur over the over the coming years uh traditional enterprise sales i think are still uh pretty standard today but i i think that the efficiency of um of you know social networking and and uh and and what would the the delivery of uh of products on on a digital for in a digital format is going to change the way that we do sales so i think i think there's a lot of efficiencies that are going to come in uh in sales over the coming years that's interesting because then you'll you know i i think you're right and and and instead of just just pouring money at promotion maybe get more efficient there and pour money in into engineering because that really is the long-term sustainable value that these companies are going to create right yeah i i would absolutely agree with that and um again if you look at the you know if you look at the charts of the well-established players that that you had mentioned those companies are where they are that the ones at the top are where they are because of their technology i mean it's it's not because of uh their go to market it's it's it's because they have something that other people can't uh can't replicate right well eric hey it's been great having you on today thanks so much for joining us really appreciate your time well dave i greatly appreciate it uh it's been a lot of fun so uh so thank you all right hey go get the elite 80 report all you got to do is search jmp elite 80 and it'll it'll come up there's a there's a lot of data out there so it's really a worthwhile reference tool and uh so thank you everybody for watching remember these episodes are all available as podcasts wherever you listen you can check out etr's website at etr dot plus and we also publish weekly a full report on wikibon.com and siliconangle.com you can email me david.velante at siliconangle.com or dm me on twitter at divalante hit up hit our linkedin post and really appreciate those comments this is dave vellante for the cube insights powered by etr have a great week everybody stay safe and we'll see you next time you

Published Date : Jun 14 2021

SUMMARY :

the the companies that uh you know that

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
2000DATE

0.99+

five billionQUANTITY

0.99+

2019DATE

0.99+

3.7 billionQUANTITY

0.99+

dellORGANIZATION

0.99+

microsoftORGANIZATION

0.99+

100 millionQUANTITY

0.99+

ciscoORGANIZATION

0.99+

79 billionQUANTITY

0.99+

28 billionQUANTITY

0.99+

34 billion dollarsQUANTITY

0.99+

last yearDATE

0.99+

san franciscoLOCATION

0.99+

80 companiesQUANTITY

0.99+

siliconangle.comOTHER

0.99+

10 investmentsQUANTITY

0.99+

80 companiesQUANTITY

0.99+

first questionQUANTITY

0.99+

more than a quarter billionQUANTITY

0.99+

over 200 millionQUANTITY

0.99+

four billion dollarsQUANTITY

0.99+

over 400 millionQUANTITY

0.99+

over 100 millionQUANTITY

0.99+

jmp securitiesORGANIZATION

0.99+

hundreds of millions of dollarsQUANTITY

0.99+

more than a half a billion dollarsQUANTITY

0.99+

2011DATE

0.99+

four and a half billion dollarsQUANTITY

0.99+

more than three billion dollarsQUANTITY

0.99+

twoQUANTITY

0.99+

dave vellantePERSON

0.99+

over half a billionQUANTITY

0.99+

palo altoORGANIZATION

0.99+

pandemicEVENT

0.99+

over 200 billion dollarsQUANTITY

0.99+

ten companiesQUANTITY

0.98+

over a billion dollarsQUANTITY

0.98+

todayDATE

0.98+

zeroQUANTITY

0.98+

googleORGANIZATION

0.98+

fortinetORGANIZATION

0.98+

100 million dollarQUANTITY

0.98+

seven billion dollarQUANTITY

0.98+

bostonLOCATION

0.98+

etrORGANIZATION

0.97+

jmpORGANIZATION

0.97+

over 100 billion dollarsQUANTITY

0.97+

two key aspectsQUANTITY

0.97+

facebookORGANIZATION

0.97+

eric aristaPERSON

0.97+

multi-billion dollarQUANTITY

0.97+

secondQUANTITY

0.96+

20QUANTITY

0.96+

firstQUANTITY

0.96+

ericPERSON

0.96+

80 firmsQUANTITY

0.96+

two other companiesQUANTITY

0.96+

10QUANTITY

0.96+

25 percentQUANTITY

0.96+

last decadeDATE

0.95+

3.3 billionQUANTITY

0.95+

750 billion a million dollarQUANTITY

0.95+

200 millionQUANTITY

0.95+

oneQUANTITY

0.95+

this yearDATE

0.95+

10 billion dollars a yearQUANTITY

0.95+

about seven years agoDATE

0.94+

last januaryDATE

0.94+

HPE GreenLake: Bringing-As-A-Service to Infrastructure


 

>>Hello, everyone. This is Dave Volonte with the Cube. On December 9th, the cube 3. 65 will be hosting Green Lake Day. Brought to you by Hewlett Packard Enterprise. Now Green Lake is H P E S. As a service initiative, it's designed to bring a cloud like experience to your I t environment, regardless of physical location. Now let me give you my take on what's happening here. Look, if you're a company that has relied primarily on selling hardware and infrastructure software on premises for decades, and you >>don't own a public cloud public cloud, well, you'd better have a strategy that supports the single most important trend in the business over the >>past decade. And that's cloud computing. HP formally announced Green Lake a year ago and really was the first to do so in the modern era. We're seeing others follow suit, and why not? The infrastructure World is taking a page out of the SAS business from a transaction and pricing standpoint where SAS models are being applied toe large portfolios of companies that sell and service, compute storage and networking gear and associated software. Now, like sass, these models generally require customers toe lock into a term >>of a least a year or mawr, and they'll require the customer to commit to a minimum threshold of capacity. So it's not a perfect replica replica of the pure paid by the drink. Cancel anytime public cloud model. But as I've said, neither is most SAS. For instance, when you buy from workday and salesforce and service now and many others, you have to commit to a term now with infrastructure. It's even more complex because the vendor has to install capacity and commit that to the customer. If you so choose, you can scale up or down and Onley pay for what you use as long as you commit to the term and pay for a certain minimum. So it's a shared risk model, which is a big step in the right direction. Now I will tell you that initiatives like Green Lake involves much more than playing financial games. I mean, that technique has been around forever since the mainframe days. No, true as a service, models require entirely new thinking around product design, salesforce, compensation, tooling to provide transparency and predictability, etcetera, for example, technology vendors they gotta get out of the mindset of selling boxes. They have to think about packaging services. When you sell a box, you drop it on the loading dock. You make sure it's delivered and deployed. You sign the customer up for a maintenance contract and you go on to the next one. In a model like Green Lake, the renewal process starts when the contract is signed. You have to earn the customers loyalty every day. Not that you don't have to do so in the old model, but it's different in and as a service context because it's not just >>the services organization has to worry about the customer renewing its everyone from the CEO down to the support specialist look churn is the silent killer oven. As a service model, an entirely new incentive and metric system has to emerge to support this change. The company also has to think about its portfolio not as products, but is a suite of services turning their product portfolio into a set of services with a P I s and an ecosystem that can plug into that. It's a completely different mindset now. Also share that. I think the infrastructure guys are playing catch up and It's high time we've seen this model emerge. Catch up to the SAS folks, that is, but I predict that it will continue to evolve. Let me give an example. We're now seeing software companies challenged the traditional SAS Model two examples, or snowflake and Data dog who's selling a consumption basis. It's a true cloud model where the customer can leave anytime. And I predict that over time, as SAS companies and eventually infrastructure >>players get more and more data, they're gonna be forced to look at similar pricing strategies. And as they get MAWR this data and can better predict usage, they'll >>increase their confidence and deploying such a consumption model. Now back to HP Green Lake H P E. By being first and committing the entire company to this approach from the top. Antonio Neary He's like the the CEO. He's a champion of this change. By being first, HP believes that it has an advantage. The company also believes that it has some innovations that will keep it ahead of the competition. So I encourage you to check out the link in the description of this video, register for Green Lake Day and decide for yourself I'll be there with a number of HP experts and customers to share what the future of as a service will look like and what it means to you. So look, if you're a CEO and infrastructure pro, ah, partner in the in the HP ecosystem, an existing customer or someone who is following these trends and wants to learn mawr, register for Green >>Lake Day and participate in the conversation, you'll have the opportunity to interact, live with experts, ask questions and hopefully get answers that will help you plan for the future. We'll see you there.

Published Date : Nov 23 2020

SUMMARY :

Brought to you by Hewlett Packard Enterprise. The infrastructure World is taking a page out of the SAS business from You sign the customer up for a maintenance contract and you go on to the next one. the services organization has to worry about the customer renewing its everyone from the CEO players get more and more data, they're gonna be forced to look at similar pricing strategies. Green Lake H P E. By being first and committing the entire company to this approach live with experts, ask questions and hopefully get answers that will help you plan

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VolontePERSON

0.99+

HPORGANIZATION

0.99+

December 9thDATE

0.99+

Green LakeORGANIZATION

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

Antonio NearyPERSON

0.99+

firstQUANTITY

0.99+

a year agoDATE

0.98+

Green Lake DayEVENT

0.98+

two examplesQUANTITY

0.97+

singleQUANTITY

0.96+

HP Green Lake H P E.ORGANIZATION

0.95+

H P E S.ORGANIZATION

0.93+

Lake DayEVENT

0.9+

past decadeDATE

0.86+

SASORGANIZATION

0.83+

3.QUANTITY

0.75+

HPEORGANIZATION

0.72+

a yearQUANTITY

0.68+

OnleyORGANIZATION

0.62+

65COMMERCIAL_ITEM

0.61+

decadesQUANTITY

0.55+

Greenlake Day: Bringing As-A-Service to Infrastructure


 

hello everyone this is dave vellante with thecube on december 9th the cube 365 will be hosting green lake day brought to you by hewlett packard enterprise now green lake is hpe's as a service initiative it's designed to bring a cloud-like experience to your it environment regardless of physical location now let me give you my take on what's happening here look if you're a company that has relied primarily on selling hardware and infrastructure software on premises for decades and you don't own a public cloud public cloud well you'd better have a strategy that supports the single most important trend in the business over the past decade and that's cloud computing hpe formally announced green lake a year ago and really was the first to do so in the modern era we're seeing others follow suit and why not the infrastructure world is taking a page out of the sas business from a transaction and pricing standpoint where sas models are being applied to large portfolios of companies that sell and service compute storage and networking gear and associated software now like sas these models generally require customers to lock into a term of at least a year or more and they'll require the customer to commit to a minimum threshold of capacity so it's not a perfect replica replica of the pure pay by the drink cancel anytime public cloud model but as i've said neither is most sas for instance when you buy from workday and salesforce and servicenow and many others you have to commit to a term now with infrastructure it's even more complex because the vendor has to install capacity and commit that to the customer if you so choose you can scale up or down and only pay for what you use as long as you commit to the term and pay for a certain minimum so it's a shared risk model which is a big step in the right direction now i will tell you that initiatives like green lake involve much more than playing financial games i mean that technique has been around forever since the mainframe days no true as a service models require entirely new thinking around product design sales force compensation tooling to provide transparency and predictability etc for example technology vendors they got to get out of the mindset of selling boxes they have to think about patch packaging services when you sell a box you drop it on the loading dock you make sure it's delivered and deployed you sign the customer up for a maintenance contract and you go on to the next one in a model like green lake the renewal process starts when the contract is signed you have to earn the customer's loyalty every day not that you don't have to do so in the old model but it's different in an as a service context because it's not just the services organization has to worry about the customer renewing it's everyone from the ceo down to the support specialist look chern is the silent killer of an as a service model an entirely new incentive and metric system has to emerge to support this change company also has to think about its portfolio not as products but as a suite of services turning their product portfolio into a set of services with apis and an ecosystem that can plug into that it's a completely different mindset now also share that i think the infrastructure guys are playing catch up and it's high time we've seen this model emerge catch up to the sas folks that is but i predict that it will continue to evolve let me give an example we're now seeing software companies challenge the traditional sas model two examples are snowflake and datadog who sell on a consumption basis it's a true cloud model where the customer can leave any time and i predict that over time as sas companies and eventually infrastructure players get more and more data they're going to be forced to look at similar pricing strategies and as they get more of this data and can better predict usage they'll increase their confidence in deploying such a consumption model now back to hpe greenlake hpe by being first and committing the entire company to this approach from the top antonio neri he's like the ceo he's a champion of this change by being first hpe believes that it has an advantage the company also believes that it has some innovations that will keep it ahead of the competition so i encourage you to check out the link in the description of this video register for green lake day and decide for yourself i'll be there with a number of hpe experts and customers to share what the future of as a service will look like and what it means to you so look if you're a cio an infrastructure pro a partner in the in the hpe ecosystem an existing customer or someone who's following these trends and wants to learn more register for green lake day and participate in the conversation you'll have the opportunity to interact live with experts ask questions and hopefully get answers that will help you plan for the future we'll see you there

Published Date : Nov 23 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
TristanPERSON

0.99+

George GilbertPERSON

0.99+

JohnPERSON

0.99+

GeorgePERSON

0.99+

Steve MullaneyPERSON

0.99+

KatiePERSON

0.99+

David FloyerPERSON

0.99+

CharlesPERSON

0.99+

Mike DooleyPERSON

0.99+

Peter BurrisPERSON

0.99+

ChrisPERSON

0.99+

Tristan HandyPERSON

0.99+

BobPERSON

0.99+

Maribel LopezPERSON

0.99+

Dave VellantePERSON

0.99+

Mike WolfPERSON

0.99+

VMwareORGANIZATION

0.99+

MerimPERSON

0.99+

Adrian CockcroftPERSON

0.99+

AmazonORGANIZATION

0.99+

BrianPERSON

0.99+

Brian RossiPERSON

0.99+

Jeff FrickPERSON

0.99+

Chris WegmannPERSON

0.99+

Whole FoodsORGANIZATION

0.99+

EricPERSON

0.99+

Chris HoffPERSON

0.99+

Jamak DaganiPERSON

0.99+

Jerry ChenPERSON

0.99+

CaterpillarORGANIZATION

0.99+

John WallsPERSON

0.99+

Marianna TesselPERSON

0.99+

JoshPERSON

0.99+

EuropeLOCATION

0.99+

JeromePERSON

0.99+

GoogleORGANIZATION

0.99+

Lori MacVittiePERSON

0.99+

2007DATE

0.99+

SeattleLOCATION

0.99+

10QUANTITY

0.99+

fiveQUANTITY

0.99+

Ali GhodsiPERSON

0.99+

Peter McKeePERSON

0.99+

NutanixORGANIZATION

0.99+

Eric HerzogPERSON

0.99+

IndiaLOCATION

0.99+

MikePERSON

0.99+

WalmartORGANIZATION

0.99+

five yearsQUANTITY

0.99+

AWSORGANIZATION

0.99+

Kit ColbertPERSON

0.99+

PeterPERSON

0.99+

DavePERSON

0.99+

Tanuja RanderyPERSON

0.99+

Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud


 

>>all right, We're five minutes after the hour. That's all aboard. Who's coming aboard? Welcome everyone to the tutorial track for our launchpad of them. So for the next couple of hours, we've got a SYRIZA videos and experts on hand to answer questions about our new product, Doctor Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirant us on. And >>I'm Bruce Basil Matthews. I'm the Western regional Solutions architect for Moran Tissue esa and welcome to everyone to this lovely launchpad oven event. >>We're lucky to have you with us proof. At least somebody on the call knows something about your enterprise Computer club. Um, speaking of people that know about Dr Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and disgusting problem. So that's us, I guess, for Dr Enterprise Container Cloud, this is Mirant asses brand new product for bootstrapping Doctor Enterprise Kubernetes clusters at scale Anything. The airport Abu's? >>No, just that I think that we're trying Thio. Uh, let's see. Hold on. I think that we're trying Teoh give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing is to provide some, you know, many training and education in a very condensed period. So, >>yeah, that's exactly what you're going to see. The SYRIZA videos we have today. We're going to focus on your first steps with Dr Enterprise Container Cloud from installing it to bootstrapping your regional child clusters so that by the end of the tutorial content today, you're gonna be prepared to spin up your first documentary prize clusters using documented prize container class. So just a little bit of logistics for the session. We're going to run through these tutorials twice. We're gonna do one run through starting seven minutes ago up until I guess it will be ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning, ten. Fifteen Pacific time. We're gonna do the whole thing over again. So if you want to see the videos twice, you got public friends and colleagues that, you know you wanna pull in for a second chance to see this stuff, we're gonna do it all. All twice. Yeah, this session. Any any logistics I should add, Bruce that No, >>I think that's that's pretty much what we had to nail down here. But let's zoom dash into those, uh, feature films. >>Let's do Edmonds. And like I said, don't be shy. Feel free to ask questions in the chat or engineers and boosting myself are standing by to answer your questions. So let me just tee up the first video here and walk their cost. Yeah. Mhm. Yes. Sorry. And here we go. So our first video here is gonna be about installing the Doctor Enterprise Container Club Management cluster. So I like to think of the management cluster as like your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use is like, Come on it as clusters downstream. So the management costs was always our first step. Let's jump in there >>now. We have to give this brief little pause >>with no good day video. Focus for this demo will be the initial bootstrap of the management cluster in the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case, eight of us and the Elsie um, components on the UCP Cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a big strap note on this dependencies on handling with download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the adduce environment. The fourth configuring the deployment, defining things like the machine types on the fifth phase. Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node, just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now we're just checking through AWS to make sure that the account we want to use we have the correct credentials on the correct roles set up and validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just going to check that we can, from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next, we're going to run it. I'm in. Deploy it. Changing into that big struck folder. Just making see what's there. Right now we have no license file, so we're gonna get the license filed. Oh, okay. Get the license file through the more antis downloads site, signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, Once we've done that, we can now go ahead with the rest of the deployment. See that the follow is there. Uh, huh? That's again checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. All right, The next big step is valid in all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running an AWS policy create. So it is part of that is creating our Food trucks script, creating the mystery policy files on top of AWS, Just generally preparing the environment using a cloud formation script you'll see in a second will give a new policy confirmations just waiting for it to complete. Yeah, and there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created Today I am console. Go to that new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media Access key I D and the secret access key. We went, Yeah, usually then exported on the command line. Okay. Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Yes. Okay, that's the key. Secret X key. Right on. Let's kick it off. Yeah, So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you, and as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the west side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Okay. Local clusters boat just waiting for the various objects to get ready. Standard communities objects here Okay, so we speed up this process a little bit just for demonstration purposes. Yeah. There we go. So first note is being built the best in host. Just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for a W s to create the instance. Okay. Yes. Here, beauty there. Okay. Mhm. Okay. Yeah, yeah. Okay. On there. We got question. Host has been built on three instances for the management clusters have now been created. We're going through the process of preparing. Those nodes were now copying everything over. See that? The scaling up of controllers in the big Strap cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Yeah. Yeah, just waiting for key. Clark. Uh huh. Start to finish up. Yeah. No. What? Now we're shutting down control this on the local bootstrap node on preparing our I. D. C. Configuration. Fourth indication, soon as this is completed. Last phase will be to deploy stack light into the new cluster the last time Monitoring tool set way Go stack like to plan It has started. Mhm coming to the end of the deployment Mountain. Yeah, America. Final phase of the deployment. Onda, We are done. Okay, You'll see. At the end they're providing us the details of you. I log in so there's a keeper clogging. You can modify that initial default password is part of the configuration set up with one documentation way. Go Councils up way can log in. Yeah, yeah, thank you very much for watching. >>Excellent. So in that video are wonderful field CTO Shauna Vera bootstrapped up management costume for Dr Enterprise Container Cloud Bruce, where exactly does that leave us? So now we've got this management costume installed like what's next? >>So primarily the foundation for being able to deploy either regional clusters that will then allow you to support child clusters. Uh, comes into play the next piece of what we're going to show, I think with Sean O'Mara doing this is the child cluster capability, which allows you to then deploy your application services on the local cluster. That's being managed by the ah ah management cluster that we just created with the bootstrap. >>Right? So this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters. Those or what we're gonna use for workings. >>Exactly. Yeah. And I just wanted to point out, since Sean O'Mara isn't around, toe, actually answer questions. I could listen to that guy. Read the phone book, and it would be interesting, but anyway, you can tell him I said that >>he's watching right now, Crusoe. Good. Um, cool. So and just to make sure I understood what Sean was describing their that bootstrap er knows that you, like, ran document fresh pretender Cloud from to begin with. That's actually creating a kind kubernetes deployment kubernetes and Docker deployment locally. That then hits the AWS a p i in this example that make those e c two instances, and it makes like a three manager kubernetes cluster there, and then it, like, copies itself over toe those communities managers. >>Yeah, and and that's sort of where the transition happens. You can actually see it. The output that when it says I'm pivoting, I'm pivoting from my local kind deployment of cluster AP, I toothy, uh, cluster, that's that's being created inside of AWS or, quite frankly, inside of open stack or inside of bare metal or inside of it. The targeting is, uh, abstracted. Yeah, but >>those air three environments that we're looking at right now, right? Us bare metal in open staff environments. So does that kind cluster on the bootstrap er go away afterwards. You don't need that afterwards. Yeah, that is just temporary. To get things bootstrapped, then you manage things from management cluster on aws in this example? >>Yeah. Yeah. The seed, uh, cloud that post the bootstrap is not required anymore. And there's no, uh, interplay between them after that. So that there's no dependencies on any of the clouds that get created thereafter. >>Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, be a temporary container that would bootstrap all the other containers. Go away. It's, uh, so sort of a similar, similar temporary transient bootstrapping model. Cool. Excellent. What will convict there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that. But other than that, that looked like heavily script herbal like there wasn't a ton of point and click there. >>Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint, The config file that that's generated the template is fairly straightforward and targeted towards of a small medium or large, um, deployment. And by editing that single file and then gathering license file and all of the things that Sean went through, um, that that it makes it fairly easy to script >>this. And if I understood correctly as well that three manager footprint for your management cluster, that's the minimum, right. We always insist on high availability for this management cluster because boy do not wanna see oh, >>right, right. And you know, there's all kinds of persistent data that needs to be available, regardless of whether one of the notes goes down or not. So we're taking care of all of that for you behind the scenes without you having toe worry about it as a developer. >>No, I think there's that's a theme that I think will come back to throughout the rest of this tutorial session today is there's a lot of there's a lot of expertise baked him to Dr Enterprise Container Cloud in terms of implementing best practices for you like the defaulter, just the best practices of how you should be managing these clusters, Miss Seymour. Examples of that is the day goes on. Any interesting questions you want to call out from the chap who's >>well, there was. Yeah, yeah, there was one that we had responded to earlier about the fact that it's a management cluster that then conduce oh, either the the regional cluster or a local child molester. The child clusters, in each case host the application services, >>right? So at this point, we've got, in some sense, like the simplest architectures for our documentary prize Container Cloud. We've got the management cluster, and we're gonna go straight with child cluster. In the next video, there's a more sophisticated architecture, which will also proper today that inserts another layer between those two regional clusters. If you need to manage regions like across a BS, reads across with these documents anything, >>yeah, that that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves and to take advantage of our observation. I'll support systems a stack light and things like that for each one of clusters locally, as opposed to having to centralize thumb >>eso. It's a couple of good questions. In the chat here, someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docks, which I think Dale helpfully thank you. Dale provided links for that's all publicly available right now. So just head on in, head on into the docks like the Dale provided here. You can follow this example yourself. All you need is a Mirante license for this and your AWS credentials. There was a question from many a hear about deploying this toe azure. Not at G. Not at this time. >>Yeah, although that is coming. That's going to be in a very near term release. >>I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. Very bracing. Cool. Okay. Any other thoughts on this one does. >>No, just that the fact that we're running through these individual pieces of the steps Well, I'm sure help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step by staff. Um, it makes it fairly straightforward to try this yourselves. >>E strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of, like, where we are in the life cycle here creating this management cluster. Just stop me if I'm wrong. Who's creating this management cluster is like, you do that once, right? That's when your first setting up your doctor enterprise container cloud environment of system. What we're going to start seeing next is creating child clusters and this is what you're gonna be doing over and over and over again. When you need to create a cluster for this Deb team or, you know, this other team river it is that needs commodity. Doctor Enterprise clusters create these easy on half will. So this was once to set up Dr Enterprise Container Cloud Child clusters, which we're going to see next. We're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a doctor enterprise cluster for work clothes as a child cluster. Undocumented brands contain >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster and how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the U I so you can switch. Project Mary only has access to development. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Nan Yes, this H Keys Associate ID for Mary into her team on the cloud credentials that allow you to create access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences, Right? Let's now set up semester search keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name, we copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our local machine. A simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, you go to the clusters tab. We hit the create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one release version five point seven is the current release Onda Attach. Mary's Key is necessary Key. We can then check the rest of the settings, confirming the provider Any kubernetes c r D r I p address information. We can change this. Should we wish to? We'll leave it default for now on. Then what components? A stack light I would like to deploy into my Custer. For this. I'm enabling stack light on logging on Aiken. Sit up the retention sizes Attention times on. Even at this stage, at any customer alerts for the watchdogs. E consider email alerting which I will need my smart host details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I commend side on the route device size. There we go, my three machines obviously creating. I now need to add some workers to this custom. So I go through the same process this time once again, just selecting worker. I'll just add to once again, the AM is extremely important. Will fail if we don't pick the right, Am I for a boon to machine in this case and the deployment has started. We can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here we've created the VPC. We've created the sub nets on We've created the Internet gateway. It's unnecessary made of us and we have no warnings of the stage. Yeah, this will then run for a while. We have one minute past waken click through. We can check the status of the machine bulls as individuals so we can check the machine info, details of the machines that we've assigned, right? Mhm Onda. See any events pertaining to the machine areas like this one on normal? Yeah. Just watch asked. The community's components are waiting for the machines to start. Go back to Custer's. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway on the stage. The machines have been built on assigned. I pick up the U. S. Thank you. Yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. Mhm. No speeding things up a little bit. This whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured. Mhm, mhm. And then we go. Cluster has been deployed. So once the classes deployed, we can now never get around our environment. Okay, Are cooking into configure cluster We could modify their cluster. We could get the end points for alert alert manager on See here The griffon occupying and Prometheus are still building in the background but the cluster is available on you would be able to put workloads on it the stretch to download the cube conflict so that I can put workloads on it. It's again three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster Mhm all right Now that the build is fully completed, we can check out cluster info on. We can see that Allow the satellite components have been built. All the storage is there, and we have access to the CPU. I So if we click into the cluster, we can access the UCP dashboard, right? Shit. Click the signing with Detroit button to use the SSO on. We give Mary's possible to use the name once again. Thing is, an unlicensed cluster way could license at this point. Or just skip it on. There. We have the UCP dashboard. You can see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon, a data just being automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Mhm. So we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah. General dashboard of Cuba navies cluster one of this is configurable. You can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster, all right to scale the cluster on to add a notice. A simple is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger disks and you'll see that worker has been added from the provisioning state on shortly. We will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workouts are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button, validating which release you would like to update to. In this case, the next available releases five point seven point one. Here I'm kicking the update by in the background We will coordinate. Drain each node slowly go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Girl, we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already on In a few minutes we'll see that there are great has been completed. There we go. Great. Done. Yeah. If you work loads of both using proper cloud native community standards, there will be no impact. >>Excellent. So at this point, we've now got a cluster ready to start taking our communities of workloads. He started playing or APs to that costume. So watching that video, the thing that jumped out to me at first Waas like the inputs that go into defining this workload cost of it. All right, so we have to make sure we were using on appropriate am I for that kind of defines the substrate about what we're gonna be deploying our cluster on top of. But there's very little requirements. A so far as I could tell on top of that, am I? Because Docker enterprise Container Cloud is gonna bootstrap all the components that you need. That s all we have is kind of kind of really simple bunch box that we were deploying these things on top of so one thing that didn't get dug into too much in the video. But it's just sort of implied. Bruce, maybe you can comment on this is that release that Shawn had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch. Wanted to upgrade part cluster. So you have really sharp eyes. You could see at the end there that when you're doing the release upgrade enlisted out a stack of components docker, engine, kubernetes, calico, aled, different bits and pieces that go into, uh, go into one of these commodity clusters that deploy. And so, as far as I can tell in that case, that's what we mean by a release. In this sense, right? It's the validated stack off container ization and orchestration components that you know we've tested out and make sure it works well, introduction environments. >>Yeah, and and And that's really the focus of our effort is to ensure that any CVS in any of the stack are taken care of that there is a fixes air documented and up streamed to the open stack community source community, um, and and that, you know, then we test for the scaling ability and the reliability in high availability configuration for the clusters themselves. The hosts of your containers. Right. And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, high. We've got an update for you, and it's fixes something that maybe you had asked us to fix. Uh, that all comes to you online as your managing your clusters, so you don't have to think about it. It just comes as part of the product. >>You just have to click on Yes. Please give me that update. Uh, not just the individual components, but again. It's that it's that validated stack, right? Not just, you know, component X, y and Z work. But they all work together effectively Scalable security, reliably cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old universal control plane. Doctor Enterprise. On top of that, Sean had the classic comment there, you know? Yeah. Yeah. You'll see a little warnings and errors or whatever. When you're setting up, UCP don't handle, right, Just let it do its job, and it will converge all its components, you know, after just just a minute or two. But we saw in that video, we sped things up a little bit there just we didn't wait for, you know, progress fighters to complete. But really, in real life, that whole process is that anything so spend up one of those one of those fosters so quite quite quick. >>Yeah, and and I think the the thoroughness with which it goes through its process and re tries and re tries, uh, as you know, and it was evident when we went through the initial ah video of the bootstrapping as well that the processes themselves are self healing, as they are going through. So they will try and retry and wait for the event to complete properly on. And once it's completed properly, then it will go to the next step. >>Absolutely. And the worst thing you could do is panic at the first warning and start tearing things that don't don't do that. Just don't let it let it heal. Let take care of itself. And that's the beauty of these manage solutions is that they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers is they're bootstrapping themselves, reflect the expertise of the Mirant ISS crew that has been developing this content in these two is free for years and years now, over recognizing humanities. One cool thing there that I really appreciate it actually that it adds on top of Dr Enterprise is that automatic griffon a deployment as well. So, Dr Enterprises, I think everyone knows has had, like, some very high level of statistics baked into its dashboard for years and years now. But you know our customers always wanted a double click on that right to be able to go a little bit deeper. And Griffon are really addresses that it's built in dashboards. That's what's really nice to see. >>Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that you have access to so that you are allowed to add new alerts that then go out to touch slack and say hi, You need to watch your disk space on this machine or those kinds of things. Um, and and this is especially helpful for folks who you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house. So it gives them a tool set that they can easily say here, Can you watch these for us? And Miran tas can actually help do that with you, So >>yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without tons and tons of a long ah, long runway of learning about how to do that sort of thing. Just get out of the box right away. There was the other thing, actually, that you could sleep by really quickly if you weren't paying close attention. But Sean mentioned it on the video. And that was how When you use dark enterprise container cloud to scale your cluster, particularly pulling a worker out, it doesn't just like Territo worker down and forget about it. Right? Is using good communities best practices to cordon and drain the No. So you aren't gonna disrupt your workloads? You're going to just have a bunch of containers instantly. Excellent crash. You could really carefully manage the migration of workloads off that cluster has baked right in tow. How? How? Document? The brass container cloud is his handling cluster scale. >>Right? And And the kubernetes, uh, scaling methodology is is he adhered to with all of the proper techniques that ensure that it will tell you. Wait, you've got a container that actually needs three, uh, three, uh, instances of itself. And you don't want to take that out, because that node, it means you'll only be able to have to. And we can't do that. We can't allow that. >>Okay, Very cool. Further thoughts on this video. So should we go to the questions. >>Let's let's go to the questions >>that people have. Uh, there's one good one here, down near the bottom regarding whether an a p I is available to do this. So in all these demos were clicking through this web. You I Yes, this is all a p. I driven. You could do all of this. You know, automate all this away is part of the CSC change. Absolutely. Um, that's kind of the point, right? We want you to be ableto spin up. Come on. I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away. You know, easily and automatically. So everything you see in these demos eyes exposed to FBI? >>Yeah. In addition, through the standard Cube cuddle, Uh, cli as well. So if you're not a programmer, but you still want to do some scripting Thio, you know, set up things and deploy your applications and things. You can use this standard tool sets that are available to accomplish that. >>There is a good question on scale here. So, like, just how many clusters and what sort of scale of deployments come this kind of support our engineers report back here that we've done in practice up to a Zeman ia's like two hundred clusters. We've deployed on this with two hundred fifty nodes in a cluster. So were, you know, like like I said, hundreds, hundreds of notes, hundreds of clusters managed by documented press container fall and then those downstream clusters, of course, subject to the usual constraints for kubernetes, right? Like default constraints with something like one hundred pods for no or something like that. There's a few different limitations of how many pods you can run on a given cluster that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. >>Yeah, E. I mean, I don't think that we constrain any of the capabilities that are available in the, uh, infrastructure deliveries, uh, service within the goober Netease framework. So were, you know, But we are, uh, adhering to the standards that we would want to set to make sure that we're not overloading a node or those kinds of things, >>right. Absolutely cool. Alright. So at this point, we've got kind of a two layered our protection when we are management cluster, but we deployed in the first video. Then we use that to deploy one child clustering work, classroom, uh, for more sophisticated deployments where we might want to manage child clusters across multiple regions. We're gonna add another layer into our architectural we're gonna add in regional cluster management. So this idea you're gonna have the single management cluster that we started within the first video. On the next video, we're gonna learn how to spin up a regional clusters, each one of which would manage, for example, a different AWS uh, US region. So let me just pull out the video for that bill. We'll check it out for me. Mhm. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectures of you how to set up the management environment, prepare for the deployment deployment overview and then just to prove it, to play a regional child cluster. So, looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case AWS on the LCN components on the D you speak Cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need a regional cluster? Different platform architectures, for example aws who have been stack even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager we also Machine Manager were held. Mandel are managed as well as the actual provider logic. Mhm. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. And you see, it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster similar to what we're going to deploy now, also only has three managers once again, no workers. But as a comparison, here's a child cluster This one has three managers, but also has additional workers associate it to the cluster. All right, we need to connect. Tell bootstrap note. Preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine. All right. A few things we have to do to make sure the environment is ready. First thing we're going to see go into route. We'll go into our releases folder where we have the kozberg struck on. This was the original bootstrap used to build the original management cluster. Yeah, we're going to double check to make sure our cube con figures there once again, the one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything is working. A condom. No damages waken access to a swell. Yeah. Next we're gonna edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I. So that's found under the templates AWS directory. We don't need to edit anything else here. But we could change items like the size of the machines attempts. We want to use that The key items to ensure where you changed the am I reference for the junta image is the one for the region in this case AWS region for utilizing this was no construct deployment. We have to make sure we're pointing in the correct open stack images. Yeah, okay. Set the correct and my save file. Now we need to get up credentials again. When we originally created the bootstrap cluster, we got credentials from eight of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we're just exporting the AWS access key and I d. What's important is CAAs aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our cube conflict that we want to use for the management cluster. When we looked at earlier Yeah, now we're exporting that. Want to call the cluster region Is Frank Foods Socrates Frankfurt yet trying to use something descriptive It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed. Um, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at W s and waiting for that bastard and no to get started. Please. The best you nerd Onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy. Dr. Enterprise, this is probably the longest face. Yeah, seeing the second that all the nerds will go from the player deployed. Prepare, prepare. Yeah, You'll see their status changes updates. He was the first night ready. Second, just applying second already. Both my time. No waiting from home control. Let's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running the date of the U. S. All my stay. Ah, now we're playing Stockland. Switch over is done on. Done. Now I will build a child cluster in the new region very, very quickly to find the cluster will pick. Our new credential has shown up. We'll just call it Frankfurt for simplicity a key and customs to find. That's the machine. That cluster stop with three managers. Set the correct Am I for the region? Yeah, Do the same to add workers. There we go test the building. Yeah. Total bill of time Should be about fifteen minutes. Concedes in progress. It's going to expect this up a little bit. Check the events. We've created all the dependencies, machine instances, machines, a boat shortly. We should have a working cluster in Frankfurt region. Now almost a one note is ready from management. Two in progress. Yeah, on we're done. Clusters up and running. Yeah. >>Excellent. So at this point, we've now got that three tier structure that we talked about before the video. We got that management cluster that we do strapped in the first video. Now we have in this example to different regional clustering one in Frankfurt, one of one management was two different aws regions. And sitting on that you can do Strap up all those Doctor enterprise costumes that we want for our work clothes. >>Yeah, that's the key to this is to be able to have co resident with your actual application service enabled clusters the management co resident with it so that you can, you know, quickly access that he observation Elson Surfboard services like the graph, Ana and that sort of thing for your particular region. A supposed to having to lug back into the home. What did you call it when we started >>the mothership? >>The mothership. Right. So we don't have to go back to the mother ship. We could get >>it locally. Yeah, when, like to that point of aggregating things under a single pane of glass? That's one thing that again kind of sailed by in the demo really quickly. But you'll notice all your different clusters were on that same cluster. Your pain on your doctor Enterprise Container Cloud management. Uh, court. Right. So both your child clusters for running workload and your regional clusters for bootstrapping. Those child clusters were all listed in the same place there. So it's just one pane of glass to go look for, for all of your clusters, >>right? And, uh, this is kind of an important point. I was, I was realizing, as we were going through this. All of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything so that you only have managers, you don't have workers and that at the child cluster layer below the regional or the management cluster itself, that's where you have the worker nodes. And those are the ones that host the application services in that three tiered architecture that we've now defined >>and another, you know, detail for those that have sharp eyes. In that video, you'll notice when deploying a child clusters. There's not on Lee. A minimum of three managers for high availability management cluster. You must have at least two workers that's just required for workload failure. It's one of those down get out of work. They could potentially step in there, so your minimum foot point one of these child clusters is fine. Violence and scalable, obviously, from a >>That's right. >>Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want to my last video. There's another question here about, like where these clusters can live. So again, I know these examples are very aws heavy. Honestly, it's just easy to set up down on the other us. We could do things on bare metal and, uh, open stack departments on Prem. That's what all of this still works in exactly the same way. >>Yeah, the, uh, key to this, especially for the the, uh, child clusters, is the provision hers? Right? See you establish on AWS provision or you establish a bare metal provision or you establish a open stack provision. Or and eventually that list will include all of the other major players in the cloud arena. But you, by selecting the provision or within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >>Speaking off all through a child clusters. Let's jump into our last video in the Siri's, where we'll see how to spin up a child cluster on bare metal. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. So why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent. Provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and supports high performance workloads like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another opera visor. Lay it between so continue on the theme Why Communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p G A s G p us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. Uh, we can handle utilization in the scheduling. Better Onda we increase the performances and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project will add the bare metal hosts, including the host name. I put my credentials I pay my address the Mac address on then provide a machine type label to determine what type of machine it is for later use. Okay, let's get started. So well again. Was the operator thing. We'll go and we'll create a project for our machines to be a member off helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. So the first thing we had to be in post, Yeah, many of the machine A name. Anything you want, que experimental zero one. Provide the IAP my user name type my password. Okay. On the Mac address for the common interface with the boot interface and then the i p m I i p address These machines will be at the time storage worker manager. He's a manager. Yeah, we're gonna add a number of other machines on will. Speed this up just so you could see what the process looks like in the future. Better discovery will be added to the product. Okay. Okay. Getting back there we have it are Six machines have been added, are busy being inspected, being added to the system. Let's have a look at the details of a single note. Yeah, you can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. I see. Okay, let's go and create the cluster. Yeah, So we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So we'll credit custom. We'll give it a name, but if it were selecting bare metal on the region, we're going to select the version we want to apply. No way. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of dress range on update the address range that we want to use for the cluster. Check that the sea ideal blocks for the Cuban ladies and tunnels are what we want them to be. Enable disabled stack light. Yeah, and soothe stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here. We're focused on building communities clusters, so we're gonna put the count of machines. You want managers? We're gonna pick the label type manager and create three machines is the manager for the Cuban eighties. Casting Okay thing. We're having workers to the same. It's a process. Just making sure that the worker label host level are I'm sorry. On when Wait for the machines to deploy. Let's go through the process of putting the operating system on the notes validating and operating system deploying doctor identifies Make sure that the cluster is up and running and ready to go. Okay, let's review the bold events waken See the machine info now populated with more information about the specifics of things like storage and of course, details of a cluster etcetera. Yeah, yeah, well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build? And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>All right, so there we have it, deploying a cluster to bare metal. Much the same is how we did for AWS. I guess maybe the biggest different stepwise there is there is that registration face first, right? So rather than just using AWS financials toe magically create PM's in the cloud. You got a point out all your bare metal servers to Dr Enterprise between the cloud and they really come in, I guess three profiles, right? You got your manager profile with a profile storage profile which has been labeled as allocate. Um, crossword cluster has appropriate, >>right? And And I think that the you know, the key differentiator here is that you have more physical control over what, uh, attributes that love your cat, by the way, uh, where you have the different attributes of a server of physical server. So you can, uh, ensure that the SSD configuration on the storage nodes is gonna be taken advantage of in the best way the GP use on the worker nodes and and that the management layer is going to have sufficient horsepower to, um, spin up to to scale up the the environments, as required. One of the things I wanted to mention, though, um, if I could get this out without the choking much better. Um, is that Ah, hey, mentioned the load balancer and I wanted to make sure in defining the load balancer and the load balancer ranges. Um, that is for the top of the the cluster itself. That's the operations of the management, uh, layer integrating with your systems internally to be able to access the the Cube Can figs. I I p address the, uh, in a centralized way. It's not the load balancer that's working within the kubernetes cluster that you are deploying. That's still cube proxy or service mesh, or however you're intending to do it. So, um, it's kind of an interesting step that your initial step in building this, um and we typically use things like metal L B or in gen X or that kind of thing is to establish that before we deploy this bear mental cluster so that it can ride on top of that for the tips and things. >>Very cool. So any other thoughts on what we've seen so far today? Bruce, we've gone through all the different layers. Doctor enterprise container clouds in these videos from our management are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. Closing thoughts before we take just a very short break and run through these demos again. >>You know, I've been very exciting. Ah, doing the presentation with you. I'm really looking forward to doing it the second time, so that we because we've got a good rhythm going about this kind of thing. So I'm looking forward to doing that. But I think that the key elements of what we're trying to convey to the folks out there in the audience that I hope you've gotten out of it is that will that this is an easy enough process that if you follow the step by steps going through the documentation that's been put out in the chat, um, that you'll be able to give this a go yourself, Um, and you don't have to limit yourself toe having physical hardware on prim to try it. You could do it in a ws as we've shown you today. And if you've got some fancy use cases like, uh, you you need a Hadoop And and, uh, you know, cloud oriented ai stuff that providing a bare metal service helps you to get there very fast. So right. Thank you. It's been a pleasure. >>Yeah, thanks everyone for coming out. So, like I said we're going to take a very short, like, three minute break here. Uh, take the opportunity to let your colleagues know if they were in another session or they didn't quite make it to the beginning of this session. Or if you just want to see these demos again, we're going to kick off this demo. Siri's again in just three minutes at ten. Twenty five a. M. Pacific time where we will see all this great stuff again. Let's take a three minute break. I'll see you all back here in just two minutes now, you know. Okay, folks, that's the end of our extremely short break. We'll give people just maybe, like one more minute to trickle in if folks are interested in coming on in and jumping into our demo. Siri's again. Eso For those of you that are just joining us now I'm Bill Mills. I head up curriculum development for the training team here. Moran Tous on Joining me for this session of demos is Bruce. Don't you go ahead and introduce yourself doors, who is still on break? That's cool. We'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >>How'd that go for you? Okay, >>Very well. So let's kick off our second session here. I e just interest will feel for you. Thio. Let it run over here. >>Alright. Hi. Bruce Matthews here. I'm the Western Regional Solutions architect for Marantz. Use A I'm the one with the gray hair and the glasses. Uh, the handsome one is Bill. So, uh, Bill, take it away. >>Excellent. So over the next hour or so, we've got a Siris of demos that's gonna walk you through your first steps with Dr Enterprise Container Cloud Doctor Enterprise Container Cloud is, of course, Miranda's brand new offering from bootstrapping kubernetes clusters in AWS bare metal open stack. And for the providers in the very near future. So we we've got, you know, just just over an hour left together on this session, uh, if you joined us at the top of the hour back at nine. A. M. Pacific, we went through these demos once already. Let's do them again for everyone else that was only able to jump in right now. Let's go. Our first video where we're gonna install Dr Enterprise container cloud for the very first time and use it to bootstrap management. Cluster Management Cluster, as I like to describe it, is our mother ship that's going to spin up all the other kubernetes clusters, Doctor Enterprise clusters that we're gonna run our workloads on. So I'm gonna do >>I'm so excited. I can hardly wait. >>Let's do it all right to share my video out here. Yeah, let's do it. >>Good day. The focus for this demo will be the initial bootstrap of the management cluster on the first regional clusters. To support AWS deployments, the management cluster provides the core functionality, including identity management, authentication, infantry release version. The regional cluster provides the specific architecture provided in this case AWS and the Elsom components on the UCP cluster Child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap note on its dependencies on handling the download of the bridge struck tools. The second phase is obtaining America's license file. Third phase. Prepare the AWS credentials instead of the ideas environment, the fourth configuring the deployment, defining things like the machine types on the fifth phase, Run the bootstrap script and wait for the deployment to complete. Okay, so here we're sitting up the strap node. Just checking that it's clean and clear and ready to go there. No credentials already set up on that particular note. Now, we're just checking through aws to make sure that the account we want to use we have the correct credentials on the correct roles set up on validating that there are no instances currently set up in easy to instance, not completely necessary, but just helps keep things clean and tidy when I am perspective. Right. So next step, we're just gonna check that we can from the bootstrap note, reach more antis, get to the repositories where the various components of the system are available. They're good. No areas here. Yeah, right now we're going to start sitting at the bootstrap note itself. So we're downloading the cars release, get get cars, script, and then next we're going to run it. Yeah, I've been deployed changing into that big struck folder, just making see what's there right now we have no license file, so we're gonna get the license filed. Okay? Get the license file through more antis downloads site signing up here, downloading that license file and putting it into the Carisbrook struck folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. Yeah, see what the follow is there? Uh huh. Once again, checking that we can now reach E C two, which is extremely important for the deployment. Just validation steps as we move through the process. Alright. Next big step is violating all of our AWS credentials. So the first thing is, we need those route credentials which we're going to export on the command line. This is to create the necessary bootstrap user on AWS credentials for the completion off the deployment we're now running in AWS policy create. So it is part of that is creating our food trucks script. Creating this through policy files onto the AWS, just generally preparing the environment using a cloud formation script, you'll see in a second, I'll give a new policy confirmations just waiting for it to complete. And there is done. It's gonna have a look at the AWS console. You can see that we're creative completed. Now we can go and get the credentials that we created. Good day. I am console. Go to the new user that's being created. We'll go to the section on security credentials and creating new keys. Download that information media access Key I. D and the secret access key, but usually then exported on the command line. Okay, Couple of things to Notre. Ensure that you're using the correct AWS region on ensure that in the conflict file you put the correct Am I in for that region? I'm sure you have it together in a second. Okay, thanks. Is key. So you could X key Right on. Let's kick it off. So this process takes between thirty and forty five minutes. Handles all the AWS dependencies for you. Um, as we go through, the process will show you how you can track it. Andi will start to see things like the running instances being created on the AWS side. The first phase off this whole process happening in the background is the creation of a local kind based bootstrapped cluster on the bootstrap node that clusters then used to deploy and manage all the various instances and configurations within AWS at the end of the process. That cluster is copied into the new cluster on AWS and then shut down that local cluster essentially moving itself over. Yeah, okay. Local clusters boat. Just waiting for the various objects to get ready. Standard communities objects here. Yeah, you mentioned Yeah. So we've speed up this process a little bit just for demonstration purposes. Okay, there we go. So first note is being built the bastion host just jump box that will allow us access to the entire environment. Yeah, In a few seconds, we'll see those instances here in the US console on the right. Um, the failures that you're seeing around failed to get the I. P for Bastian is just the weight state while we wait for AWS to create the instance. Okay. Yeah. Beauty there. Movies. Okay, sketch. Hello? Yeah, Okay. Okay. On. There we go. Question host has been built on three instances for the management clusters have now been created. Okay, We're going through the process of preparing. Those nodes were now copying everything over. See that scaling up of controllers in the big strapped cluster? It's indicating that we're starting all of the controllers in the new question. Almost there. Right? Okay. Just waiting for key. Clark. Uh huh. So finish up. Yeah. No. Now we're shutting down. Control this on the local bootstrap node on preparing our I. D. C configuration, fourth indication. So once this is completed, the last phase will be to deploy stack light into the new cluster, that glass on monitoring tool set, Then we go stack like deployment has started. Mhm. Coming to the end of the deployment mountain. Yeah, they were cut final phase of the deployment. And we are done. Yeah, you'll see. At the end, they're providing us the details of you. I log in. So there's a key Clark log in. Uh, you can modify that initial default possible is part of the configuration set up where they were in the documentation way. Go Councils up way can log in. Yeah. Yeah. Thank you very much for watching. >>All right, so at this point, what we have we got our management cluster spun up, ready to start creating work clusters. So just a couple of points to clarify there to make sure everyone caught that, uh, as advertised. That's darker. Enterprise container cloud management cluster. That's not rework loans. are gonna go right? That is the tool and you're gonna use to start spinning up downstream commodity documentary prize clusters for bootstrapping record too. >>And the seed host that were, uh, talking about the kind cluster dingy actually doesn't have to exist after the bootstrap succeeds eso It's sort of like, uh, copies head from the seed host Toothy targets in AWS spins it up it then boots the the actual clusters and then it goes away too, because it's no longer necessary >>so that bootstrapping know that there's not really any requirements, Hardly on that, right. It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, as you just said, it's just a kubernetes in docker cluster on that piece. Drop note is just gonna get torn down after the set up finishes on. You no longer need that. Everything you're gonna do, you're gonna drive from the single pane of glass provided to you by your management cluster Doctor enterprise Continue cloud. Another thing that I think is sort of interesting their eyes that the convict is fairly minimal. Really? You just need to provide it like aws regions. Um, am I? And that's what is going to spin up that spending that matter faster. >>Right? There is a mammal file in the bootstrap directory itself, and all of the necessary parameters that you would fill in have default set. But you have the option then of going in and defining a different Am I different for a different region, for example? Oh, are different. Size of instance from AWS. >>One thing that people often ask about is the cluster footprint. And so that example you saw they were spitting up a three manager, um, managing cluster as mandatory, right? No single manager set up at all. We want high availability for doctrine Enterprise Container Cloud management. Like so again, just to make sure everyone sort of on board with the life cycle stage that we're at right now. That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. What clusters have been deployed at the moment there. Man. Yes, this H keys, Associate ID for Mary into her team on the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to finally different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right. Let's now set up some ssh keys for Mary so she can access the notes and machines again. Very simply, had Mississippi key give it a name. We copy and paste our public key into the upload key block. Or we can upload the key if we have the file available on our machine. A very simple process. So to create a new cluster, we define the cluster ad management nodes and add worker nodes to the cluster. Yeah, again, very simply, we got the clusters tab we had to create cluster button. Give the cluster name. Yeah, Andi, select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. What's like the region in this case? US West one released version five point seven is the current release Onda Attach. Mary's Key is necessary key. We can then check the rest of the settings, confirming the provider any kubernetes c r D a r i p address information. We can change this. Should we wish to? We'll leave it default for now and then what components of stack light? I would like to deploy into my custom for this. I'm enabling stack light on logging, and I consider the retention sizes attention times on. Even at this stage, add any custom alerts for the watchdogs. Consider email alerting which I will need my smart host. Details and authentication details. Andi Slack Alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined. I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. Oh, select manager, Select the number of machines. Three is the minimum. Select the instant size that I'd like to use from AWS and very importantly, ensure correct. Use the correct Am I for the region. I convinced side on the route. Device size. There we go. My three machines are busy creating. I now need to add some workers to this cluster. So I go through the same process this time once again, just selecting worker. I'll just add to once again the am I is extremely important. Will fail if we don't pick the right. Am I for a Clinton machine? In this case and the deployment has started, we can go and check on the bold status are going back to the clusters screen on clicking on the little three dots on the right. We get the cluster info and the events, so the basic cluster info you'll see pending their listen. Cluster is still in the process of being built. We kick on, the events will get a list of actions that have been completed This part of the set up of the cluster. So you can see here. We've created the VPC. We've created the sub nets on. We've created the Internet Gateway. It's unnecessary made of us. And we have no warnings of the stage. Okay, this will then run for a while. We have one minute past. We can click through. We can check the status of the machine balls as individuals so we can check the machine info, details of the machines that we've assigned mhm and see any events pertaining to the machine areas like this one on normal. Yeah. Just last. The community's components are waiting for the machines to start. Go back to customers. Okay, right. Because we're moving ahead now. We can see we have it in progress. Five minutes in new Matt Gateway. And at this stage, the machines have been built on assigned. I pick up the U S. Yeah, yeah, yeah. There we go. Machine has been created. See the event detail and the AWS. I'd for that machine. No speeding things up a little bit this whole process and to end takes about fifteen minutes. Run the clock forward, you'll notice is the machines continue to bold the in progress. We'll go from in progress to ready. A soon as we got ready on all three machines, the managers on both workers way could go on and we could see that now we reached the point where the cluster itself is being configured mhm and then we go. Cluster has been deployed. So once the classes deployed, we can now never get around. Our environment are looking into configure cluster. We could modify their cluster. We could get the end points for alert Alert Manager See here the griffon occupying and Prometheus are still building in the background but the cluster is available on You would be able to put workloads on it at this stage to download the cube conflict so that I can put workloads on it. It's again the three little dots in the right for that particular cluster. If the download cube conflict give it my password, I now have the Q conflict file necessary so that I can access that cluster. All right, Now that the build is fully completed, we can check out cluster info on. We can see that all the satellite components have been built. All the storage is there, and we have access to the CPU. I. So if we click into the cluster, we can access the UCP dashboard, click the signing with the clock button to use the SSO. We give Mary's possible to use the name once again. Thing is an unlicensed cluster way could license at this point. Or just skip it on. Do we have the UCP dashboard? You could see that has been up for a little while. We have some data on the dashboard going back to the console. We can now go to the griffon. A data just been automatically pre configured for us. We can switch and utilized a number of different dashboards that have already been instrumented within the cluster. So, for example, communities cluster information, the name spaces, deployments, nodes. Um, so we look at nodes. If we could get a view of the resource is utilization of Mrs Custer is very little running in it. Yeah, a general dashboard of Cuba Navies cluster. What If this is configurable, you can modify these for your own needs, or add your own dashboards on de scoped to the cluster. So it is available to all users who have access to this specific cluster. All right to scale the cluster on to add a No. This is simple. Is the process of adding a mode to the cluster, assuming we've done that in the first place. So we go to the cluster, go into the details for the cluster we select, create machine. Once again, we need to be ensure that we put the correct am I in and any other functions we like. You can create different sized machines so it could be a larger node. Could be bigger group disks and you'll see that worker has been added in the provisioning state. On shortly, we will see the detail off that worker as a complete to remove a note from a cluster. Once again, we're going to the cluster. We select the node we would like to remove. Okay, I just hit delete On that note. Worker nodes will be removed from the cluster using according and drawing method to ensure that your workloads are not affected. Updating a cluster. When an update is available in the menu for that particular cluster, the update button will become available. And it's a simple as clicking the button validating which release you would like to update to this case. This available releases five point seven point one give you I'm kicking the update back in the background. We will coordinate. Drain each node slowly, go through the process of updating it. Andi update will complete depending on what the update is as quickly as possible. Who we go. The notes being rebuilt in this case impacted the manager node. So one of the manager nodes is in the process of being rebuilt. In fact, to in this case, one has completed already. Yeah, and in a few minutes, we'll see that the upgrade has been completed. There we go. Great. Done. If you work loads of both using proper cloud native community standards, there will be no impact. >>All right, there. We haven't. We got our first workload cluster spun up and managed by Dr Enterprise Container Cloud. So I I loved Shawn's classic warning there. When you're spinning up an actual doctor enterprise deployment, you see little errors and warnings popping up. Just don't touch it. Just leave it alone and let Dr Enterprises self healing properties take care of all those very transient temporary glitches, resolve themselves and leave you with a functioning workload cluster within victims. >>And now, if you think about it that that video was not very long at all. And that's how long it would take you if someone came into you and said, Hey, can you spend up a kubernetes cluster for development development A. Over here, um, it literally would take you a few minutes to thio Accomplish that. And that was with a W s. Obviously, which is sort of, ah, transient resource in the cloud. But you could do exactly the same thing with resource is on Prem or resource is, um physical resource is and will be going through that later in the process. >>Yeah, absolutely one thing that is present in that demo, but that I like to highlight a little bit more because it just kind of glides by Is this notion of, ah, cluster release? So when Sean was creating that cluster, and also when when he was upgrading that cluster, he had to choose a release. What does that didn't really explain? What does that mean? Well, in Dr Enterprise Container Cloud, we have released numbers that capture the entire staff of container ization tools that will be deploying to that workload costume. So that's your version of kubernetes sed cor DNs calico. Doctor Engineer. All the different bits and pieces that not only work independently but are validated toe work together as a staff appropriate for production, humanities, adopted enterprise environments. >>Yep. From the bottom of the stack to the top, we actually test it for scale. Test it for CVS, test it for all of the various things that would, you know, result in issues with you running the application services. And I've got to tell you from having, you know, managed kubernetes deployments and things like that that if you're the one doing it yourself, it can get rather messy. Eso This makes it easy. >>Bruce, you were staying a second ago. They I'll take you at least fifteen minutes to install your release. Custer. Well, sure, but what would all the other bits and pieces you need toe? Not just It's not just about pressing the button to install it, right? It's making the right decision. About what components work? Well, our best tested toe be successful working together has a staff? Absolutely. We this release mechanism and Dr Enterprise Container Cloud. Let's just kind of package up that expert knowledge and make it available in a really straightforward, fashionable species. Uh, pre Confederate release numbers and Bruce is you're pointing out earlier. He's got delivered to us is updates kind of transparent period. When when? When Sean wanted toe update that cluster, he created little update. Custer Button appeared when an update was available. All you gotta do is click. It tells you what Here's your new stack of communities components. It goes ahead. And the straps those components for you? >>Yeah, it actually even displays at the top of the screen. Ah, little header That says you've got an update available. Do you want me to apply? It s o >>Absolutely. Another couple of cool things. I think that are easy to miss in that demo was I really like the on board Bafana that comes along with this stack. So we've been Prometheus Metrics and Dr Enterprise for years and years now. They're very high level. Maybe in in previous versions of Dr Enterprise having those detailed dashboards that Ravana provides, I think that's a great value out there. People always wanted to be ableto zoom in a little bit on that, uh, on those cluster metrics, you're gonna provides them out of the box for us. Yeah, >>that was Ah, really, uh, you know, the joining of the Miranda's and Dr teams together actually spawned us to be able to take the best of what Morantes had in the open stack environment for monitoring and logging and alerting and to do that integration in in a very short period of time so that now we've got it straight across the board for both the kubernetes world and the open stack world. Using the same tool sets >>warm. One other thing I wanna point out about that demo that I think there was some questions about our last go around was that demo was all about creating a managed workplace cluster. So the doctor enterprise Container Cloud managers were using those aws credentials provisioned it toe actually create new e c two instances installed Docker engine stalled. Doctor Enterprise. Remember all that stuff on top of those fresh new VM created and managed by Dr Enterprise contain the cloud. Nothing unique about that. AWS deployments do that on open staff doing on Parramatta stuff as well. Um, there's another flavor here, though in a way to do this for all of our long time doctor Enterprise customers that have been running Doctor Enterprise for years and years. Now, if you got existing UCP points existing doctor enterprise deployments, you plug those in to Dr Enterprise Container Cloud, uh, and use darker enterprise between the cloud to manage those pre existing Oh, working clusters. You don't always have to be strapping straight from Dr Enterprises. Plug in external clusters is bad. >>Yep, the the Cube config elements of the UCP environment. The bundling capability actually gives us a very straightforward methodology. And there's instructions on our website for exactly how thio, uh, bring in import and you see p cluster. Um so it it makes very convenient for our existing customers to take advantage of this new release. >>Absolutely cool. More thoughts on this wonders if we jump onto the next video. >>I think we should move press on >>time marches on here. So let's Let's carry on. So just to recap where we are right now, first video, we create a management cluster. That's what we're gonna use to create All our downstream were closed clusters, which is what we did in this video. Let's maybe the simplest architectures, because that's doing everything in one region on AWS pretty common use case because we want to be able to spin up workload clusters across many regions. And so to do that, we're gonna add a third layer in between the management and work cluster layers. That's gonna be our regional cluster managers. So this is gonna be, uh, our regional management cluster that exists per region that we're going to manage those regional managers will be than the ones responsible for spending part clusters across all these different regions. Let's see it in action in our next video. >>Hello. In this demo, we will cover the deployment of additional regional management. Cluster will include a brief architectural overview, how to set up the management environment, prepare for the deployment deployment overview, and then just to prove it, to play a regional child cluster. So looking at the overall architecture, the management cluster provides all the core functionality, including identity management, authentication, inventory and release version. ING Regional Cluster provides the specific architecture provider in this case, AWS on the L C M components on the d you speak cluster for child cluster is the cluster or clusters being deployed and managed? Okay, so why do you need original cluster? Different platform architectures, for example AWS open stack, even bare metal to simplify connectivity across multiple regions handle complexities like VPNs or one way connectivity through firewalls, but also help clarify availability zones. Yeah. Here we have a view of the regional cluster and how it connects to the management cluster on their components, including items like the LCN cluster Manager. We also machine manager. We're hell Mandel are managed as well as the actual provider logic. Okay, we'll begin by logging on Is the default administrative user writer. Okay, once we're in there, we'll have a look at the available clusters making sure we switch to the default project which contains the administration clusters. Here we can see the cars management cluster, which is the master controller. When you see it only has three nodes, three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now. Also only has three managers once again, no workers. But as a comparison is a child cluster. This one has three managers, but also has additional workers associate it to the cluster. Yeah, all right, we need to connect. Tell bootstrap note, preferably the same note that used to create the original management plaster. It's just on AWS, but I still want to machine Mhm. All right, A few things we have to do to make sure the environment is ready. First thing we're gonna pseudo into route. I mean, we'll go into our releases folder where we have the car's boot strap on. This was the original bootstrap used to build the original management cluster. We're going to double check to make sure our cube con figures there It's again. The one created after the original customers created just double check. That cute conflict is the correct one. Does point to the management cluster. We're just checking to make sure that we can reach the images that everything's working, condone, load our images waken access to a swell. Yeah, Next, we're gonna edit the machine definitions what we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the am I So that's found under the templates AWS directory. We don't need to edit anything else here, but we could change items like the size of the machines attempts we want to use but the key items to ensure where changed the am I reference for the junta image is the one for the region in this case aws region of re utilizing. This was an open stack deployment. We have to make sure we're pointing in the correct open stack images. Yeah, yeah. Okay. Sit the correct Am I save the file? Yeah. We need to get up credentials again. When we originally created the bootstrap cluster, we got credentials made of the U. S. If we hadn't done this, we would need to go through the u A. W s set up. So we just exporting AWS access key and I d. What's important is Kaz aws enabled equals. True. Now we're sitting the region for the new regional cluster. In this case, it's Frankfurt on exporting our Q conflict that we want to use for the management cluster when we looked at earlier. Yeah, now we're exporting that. Want to call? The cluster region is Frankfurt's Socrates Frankfurt yet trying to use something descriptive? It's easy to identify. Yeah, and then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management clusters. There are fewer components to be deployed, but to make it watchable, we've spent it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready on. We started preparing the instances at us and waiting for the past, you know, to get started. Please the best your node, onda. We're also starting to build the actual management machines they're now provisioning on. We've reached the point where they're actually starting to deploy Dr Enterprise, he says. Probably the longest face we'll see in a second that all the nodes will go from the player deployed. Prepare, prepare Mhm. We'll see. Their status changes updates. It was the first word ready. Second, just applying second. Grady, both my time away from home control that's become ready. Removing cluster the management cluster from the bootstrap instance into the new cluster running a data for us? Yeah, almost a on. Now we're playing Stockland. Thanks. Whichever is done on Done. Now we'll build a child cluster in the new region very, very quickly. Find the cluster will pick our new credential have shown up. We'll just call it Frankfurt for simplicity. A key on customers to find. That's the machine. That cluster stop with three manages set the correct Am I for the region? Yeah, Same to add workers. There we go. That's the building. Yeah. Total bill of time. Should be about fifteen minutes. Concedes in progress. Can we expect this up a little bit? Check the events. We've created all the dependencies, machine instances, machines. A boat? Yeah. Shortly. We should have a working caster in the Frankfurt region. Now almost a one note is ready from management. Two in progress. On we're done. Trust us up and running. >>Excellent. There we have it. We've got our three layered doctor enterprise container cloud structure in place now with our management cluster in which we scrap everything else. Our regional clusters which manage individual aws regions and child clusters sitting over depends. >>Yeah, you can. You know you can actually see in the hierarchy the advantages that that presents for folks who have multiple locations where they'd like a geographic locations where they'd like to distribute their clusters so that you can access them or readily co resident with your development teams. Um and, uh, one of the other things I think that's really unique about it is that we provide that same operational support system capability throughout. So you've got stack light monitoring the stack light that's monitoring the stack light down to the actual child clusters that they have >>all through that single pane of glass that shows you all your different clusters, whether their workload cluster like what the child clusters or usual clusters from managing different regions. Cool. Alright, well, time marches on your folks. We've only got a few minutes left and I got one more video in our last video for the session. We're gonna walk through standing up a child cluster on bare metal. So so far, everything we've seen so far has been aws focus. Just because it's kind of easy to make that was on AWS. We don't want to leave you with the impression that that's all we do, we're covering AWS bare metal and open step deployments as well documented Craftsman Cloud. Let's see it in action with a bare metal child cluster. >>We are on the home stretch, >>right. >>Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare metal based doctor enterprise cluster. Yeah, so why bare metal? Firstly, it eliminates hyper visor overhead with performance boost of up to thirty percent provides direct access to GP use, prioritize for high performance wear clothes like machine learning and AI, and support high performance workouts like network functions, virtualization. It also provides a focus on on Prem workloads, simplifying and ensuring we don't need to create the complexity of adding another hyper visor layer in between. So continuing on the theme Why communities and bare metal again Hyper visor overhead. Well, no virtualization overhead. Direct access to hardware items like F p g A s G p, us. We can be much more specific about resource is required on the nodes. No need to cater for additional overhead. We can handle utilization in the scheduling better Onda. We increase the performance and simplicity of the entire environment as we don't need another virtualization layer. Yeah, In this section will define the BM hosts will create a new project. Will add the bare metal hosts, including the host name. I put my credentials. I pay my address, Mac address on, then provide a machine type label to determine what type of machine it is. Related use. Okay, let's get started Certain Blufgan was the operator thing. We'll go and we'll create a project for our machines to be a member off. Helps with scoping for later on for security. I begin the process of adding machines to that project. Yeah. Yeah. So the first thing we had to be in post many of the machine a name. Anything you want? Yeah, in this case by mental zero one. Provide the IAP My user name. Type my password? Yeah. On the Mac address for the active, my interface with boot interface and then the i p m i P address. Yeah, these machines. We have the time storage worker manager. He's a manager. We're gonna add a number of other machines on will speed this up just so you could see what the process. Looks like in the future, better discovery will be added to the product. Okay, Okay. Getting back there. We haven't Are Six machines have been added. Are busy being inspected, being added to the system. Let's have a look at the details of a single note. Mhm. We can see information on the set up of the node. Its capabilities? Yeah. As well as the inventory information about that particular machine. Okay, it's going to create the cluster. Mhm. Okay, so we're going to deploy a bare metal child cluster. The process we're going to go through is pretty much the same as any other child cluster. So credit custom. We'll give it a name. Thank you. But he thought were selecting bare metal on the region. We're going to select the version we want to apply on. We're going to add this search keys. If we hope we're going to give the load. Balancer host I p that we'd like to use out of the dress range update the address range that we want to use for the cluster. Check that the sea idea blocks for the communities and tunnels are what we want them to be. Enable disabled stack light and said the stack light settings to find the cluster. And then, as for any other machine, we need to add machines to the cluster. Here we're focused on building communities clusters. So we're gonna put the count of machines. You want managers? We're gonna pick the label type manager on create three machines. Is a manager for the Cuban a disgusting? Yeah, they were having workers to the same. It's a process. Just making sure that the worker label host like you are so yes, on Duin wait for the machines to deploy. Let's go through the process of putting the operating system on the notes, validating that operating system. Deploying Docker enterprise on making sure that the cluster is up and running ready to go. Okay, let's review the bold events. We can see the machine info now populated with more information about the specifics of things like storage. Yeah, of course. Details of a cluster, etcetera. Yeah, Yeah. Okay. Well, now watch the machines go through the various stages from prepared to deploy on what's the cluster build, and that brings us to the end of this particular do my as you can see the process is identical to that of building a normal child cluster we got our complaint is complete. >>Here we have a child cluster on bare metal for folks that wanted to play the stuff on Prem. >>It's ah been an interesting journey taken from the mothership as we started out building ah management cluster and then populating it with a child cluster and then finally creating a regional cluster to spread the geographically the management of our clusters and finally to provide a platform for supporting, you know, ai needs and and big Data needs, uh, you know, thank goodness we're now able to put things like Hadoop on, uh, bare metal thio in containers were pretty exciting. >>Yeah, absolutely. So with this Doctor Enterprise container cloud platform. Hopefully this commoditized scooping clusters, doctor enterprise clusters that could be spun up and use quickly taking provisioning times. You know, from however many months to get new clusters spun up for our teams. Two minutes, right. We saw those clusters gets better. Just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Dr Enterprise Container Cloud. Of course, there's many many more things to discuss about this and all of Miranda's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, police see us a training don Miranda's dot com, where we can offer you workshops and a number of different formats on our entire line of products and hands on interactive fashion. Thanks, everyone. Enjoy the rest of the launchpad of that >>thank you all enjoy.

Published Date : Sep 17 2020

SUMMARY :

So for the next couple of hours, I'm the Western regional Solutions architect for Moran At least somebody on the call knows something about your enterprise Computer club. And that's really the key to this thing is to provide some, you know, many training clusters so that by the end of the tutorial content today, I think that's that's pretty much what we had to nail down here. So the management costs was always We have to give this brief little pause of the management cluster in the first regional clusters to support AWS deployments. So in that video are wonderful field CTO Shauna Vera bootstrapped So primarily the foundation for being able to deploy So this cluster isn't yet for workloads. Read the phone book, So and just to make sure I understood The output that when it says I'm pivoting, I'm pivoting from on the bootstrap er go away afterwards. So that there's no dependencies on any of the clouds that get created thereafter. Yeah, that actually reminds me of how we bootstrapped doctor enterprise back in the day, The config file that that's generated the template is fairly straightforward We always insist on high availability for this management cluster the scenes without you having toe worry about it as a developer. Examples of that is the day goes on. either the the regional cluster or a We've got the management cluster, and we're gonna go straight with child cluster. as opposed to having to centralize thumb So just head on in, head on into the docks like the Dale provided here. That's going to be in a very near term I didn't wanna make promises for product, but I'm not too surprised that she's gonna be targeted. No, just that the fact that we're running through these individual So let's go to that video and see just how We can check the status of the machine bulls as individuals so we can check the machine the thing that jumped out to me at first Waas like the inputs that go into defining Yeah, and and And that's really the focus of our effort is to ensure that So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old of the bootstrapping as well that the processes themselves are self healing, And the worst thing you could do is panic at the first warning and start tearing things that don't that then go out to touch slack and say hi, You need to watch your disk But Sean mentioned it on the video. And And the kubernetes, uh, scaling methodology is is he adhered So should we go to the questions. Um, that's kind of the point, right? you know, set up things and deploy your applications and things. that comes to us not from Dr Enterprise Container Cloud, but just from the underlying kubernetes distribution. to the standards that we would want to set to make sure that we're not overloading On the next video, we're gonna learn how to spin up a Yeah, Do the same to add workers. We got that management cluster that we do strapped in the first video. Yeah, that's the key to this is to be able to have co resident with So we don't have to go back to the mother ship. So it's just one pane of glass to the bootstrapped cluster of the regional services. and another, you know, detail for those that have sharp eyes. Let's take a quick peek of the questions here, see if there's anything we want to call out, then we move on to our last want all of the other major players in the cloud arena. Let's jump into our last video in the Siri's, So the first thing we had to be in post, Yeah, many of the machine A name. Much the same is how we did for AWS. nodes and and that the management layer is going to have sufficient horsepower to, are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. that's been put out in the chat, um, that you'll be able to give this a go yourself, Uh, take the opportunity to let your colleagues know if they were in another session I e just interest will feel for you. Use A I'm the one with the gray hair and the glasses. And for the providers in the very near future. I can hardly wait. Let's do it all right to share my video So the first thing is, we need those route credentials which we're going to export on the command That is the tool and you're gonna use to start spinning up downstream It just has to be able to reach aws hit that Hit that a p I to spin up those easy to instances because, and all of the necessary parameters that you would fill in have That's the very first thing you're going to Yeah, for the most part. Let's now that we have our management cluster set up, let's create a first We can check the status of the machine balls as individuals so we can check the glitches, resolve themselves and leave you with a functioning workload cluster within exactly the same thing with resource is on Prem or resource is, All the different bits and pieces And I've got to tell you from having, you know, managed kubernetes And the straps those components for you? Yeah, it actually even displays at the top of the screen. I really like the on board Bafana that comes along with this stack. the best of what Morantes had in the open stack environment for monitoring and logging So the doctor enterprise Container Cloud managers were Yep, the the Cube config elements of the UCP environment. More thoughts on this wonders if we jump onto the next video. Let's maybe the simplest architectures, of the regional cluster and how it connects to the management cluster on their components, There we have it. that we provide that same operational support system capability Just because it's kind of easy to make that was on AWS. Just making sure that the worker label host like you are so yes, It's ah been an interesting journey taken from the mothership Enjoy the rest of the launchpad

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MaryPERSON

0.99+

SeanPERSON

0.99+

Sean O'MaraPERSON

0.99+

BrucePERSON

0.99+

FrankfurtLOCATION

0.99+

three machinesQUANTITY

0.99+

Bill MilksPERSON

0.99+

AWSORGANIZATION

0.99+

first videoQUANTITY

0.99+

second phaseQUANTITY

0.99+

ShawnPERSON

0.99+

first phaseQUANTITY

0.99+

ThreeQUANTITY

0.99+

Two minutesQUANTITY

0.99+

three managersQUANTITY

0.99+

fifth phaseQUANTITY

0.99+

ClarkPERSON

0.99+

Bill MillsPERSON

0.99+

DalePERSON

0.99+

Five minutesQUANTITY

0.99+

NanPERSON

0.99+

second sessionQUANTITY

0.99+

Third phaseQUANTITY

0.99+

SeymourPERSON

0.99+

Bruce Basil MatthewsPERSON

0.99+

Moran TousPERSON

0.99+

five minutesQUANTITY

0.99+

hundredsQUANTITY

0.99+

Converged Infrastructure Past Present and Future


 

>> Narrator: From theCUBE's studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> You know, businesses have a staggering number of options today to support mission-critical applications. And much of the world's mission-critical data happens to live on converged infrastructure. Converged infrastructure is really designed to support the most demanding workloads. Words like resilience, performance, scalability, recoverability, et cetera. Those are the attributes that define converged infrastructure. Now with COVID-19 the digital transformation mandate, as we all know has been accelerated and buyers are demanding more from their infrastructure, and in particular converged infrastructure. Hi everybody this is Dave Vellante and welcome to this power panel where we're going to explore converged infrastructure, look at its past, its present and its future. And we're going to explore several things. The origins of converged infrastructure, why CI even came about. And what's its historic role been in terms of supporting mission-critical applications. We're going to look at modernizing workloads. What are the opportunities and the risks and what's converged infrastructures role in that regard. How has converged infrastructure evolved? And how will it support cloud and multicloud? And ultimately what's the future of converged infrastructure look like? And to examine these issues, we have three great guests, Trey Layton is here. He is the senior vice president for converged infrastructure and software engineering and architecture at Dell Technologies. And he's joined by Joakim Zetterblad. Who's the director of the SAP practice for EMEA at Dell technologies. And our very own Stu Miniman. Stu is a senior analyst at Wikibon. Guys, great to see you all welcome to theCUBE. Thanks for coming on. >> Thanks for having us. >> Great. >> Trey, I'm going to start with you. Take us back to the early days of converged infrastructure. Why was it even formed? Why was it created? >> Well, if you look back just over a decade ago, a lot of organizations were deploying virtualized environments. Everyone was consolidated on virtualization. A lot of technologies were emerging to enhance that virtualization outcome, meaning acceleration capabilities and storage arrays, networking. And there was a lot of complexity in integrating all of those underlying infrastructure technologies into a solution that would work reliably. You almost had to have a PhD and all of the best practices of many different companies integrations. And so we decided as Dell EMC, Dell Technologies to invest heavily in this area of manufacturing best practices and packaging them so that customers could acquire those technologies and already integrated fully regression tested architecture that could sustain virtually any type of workload that a company would run. And candidly that packaging, that rigor around testing produced a highly reliable product that customers now rely on heavily to operationalize greater efficiencies and run their most critical applications that power their business and ultimately the world economy. >> Now Stu, cause you were there. I was as well at the early days of the original announcement of CI. Looking back and sort of bringing it forward Stu, what was the business impact of converged infrastructure? >> Well, Dave as Trey was talking about it was that wave of virtualization had gone from, you know, just supporting many applications to being able to support all of your applications. And especially if you talk about those high value, you know business mission, critical applications, you want to make sure that you've got a reliable foundation. What the Dell tech team has done for years is make sure that they fully understand, you know the life cycle of testing that needs to happen. And you don't need to worry about, you know, what integration testing you need to do, looking at support major CS and doing a lot of your own sandbox testing, which for the most part was what enterprises needed to do. You said, okay, you know, I get the gear, I load the virtualization and then I have to see, you know, tweak everything to figure out how my application works. The business impact Dave, is you want to spend more time focusing on the business, not having to turn all the dials and worry about, do I get the performance I need? Does it have the reliability uptime that we need? And especially if we're talking about those business critical applications, of course, these are the ones that are running 24 by seven and if they go down, my business goes down with it. >> Yeah, and of course, you know, one of the other major themes we saw with conversion infrastructure was really attacking the IT labor problem. You had separate compute or server teams, storage teams, networking teams, they oftentimes weren't talking together. So there was a lot of inefficiency that converged infrastructure was designed to attack. But I want to come to the SAP expert. Joakim, that's really your wheelhouse. What is it about converged infrastructure that makes it suitable for SAP application specifically? >> You know, if you look at a classic SAP client today, there's really three major transformational waves that all SAP customers are faced with today, it's the move to S/4HANA, the introduction of this new platform, which needs to happen before 2027. It's the introduction of a multicloud cloud or operating model. And last but not least, it is the introduction of new digitization or intelligent technologies such as IOT, machine learning or artificial intelligence. And that drove to the need of a platform that could address all these three transformational waves. It came with a lot of complexity, increased costs, increased risk. And what CI did so uniquely was to provide that Edge to Core to Cloud strategy. Fully certified for both HANA, non HANA workloads for the classical analytical and transactional workloads, as well as the new modernization technologies such as IOT, machine learning, big data and analytics. And that created a huge momentum for converged in our SAP accounts. >> So Trey, I want to go to you cause you're the deep technical expert here. Joakim just mentioned uniqueness. So what are the unique characteristics of converged infrastructure that really make it suitable for handling the most demanding workloads? >> Well, converged infrastructure by definition is the integration of an external storage array with a highly optimized compute platform. And when we build best practices around integrating those technologies together, we essentially package optimizations that allow a customer to increase the quantity of users that are accessing those workloads or the applications that are driving database access in such a way where you can predictably understand consumption and utilization in your environment. Those packaged integrations are kind of like. You know, I have a friend that owns a race car shop and he has all kinds of expertise to build cars, but he has a vehicle that he buys is his daily driver. The customization that they've created to build race cars are great for the race cars that go on the track, but he's building a car on his own, it didn't make any sense. And so what customers found was the ability to acquire a packaged infrastructure with all these infrastructure optimizations, where we package these best practices that gave customers a reliable, predictable, and fully supported integration, so they didn't have to spend 20 hour support calls trying to discover and figure out what particular customization that they had employed for their application, that had some issue that they needed to troubleshoot and solve. This became a standard out of the box integration that the best and the brightest package so that customers can consume it at scale. >> So Joakim, I want to ask you let's take the sort of application view. Let's sort of flip the picture a little bit and come at it from that prism. How, if you think about like core business applications, how have they evolved over the better part of the last decade and specifically with regard to the mission-critical processes? >> So what we're seeing in the process industry and in the industry of mission-critical applications is that they have gone from being very monolithic systems where we literally saw a single ERP components such as all three or UCC. Whereas today customers are faced with a landscape of multiple components. Many of them working both on and off premise, there are multicloud strategies in place. And as we mentioned before, with the introduction of new IOT technologies, we see that there is a flow of information of data that requires a whole new set of infrastructure of components of tools to make these new processes happen. And of course, the focus in the end of the day is all on business outcomes. So what industries and companies doesn't want to do is to focus all their time in making sure that these new technologies are working together, but really focusing on how can I make an impact? How can I start to work in a better way with my clients? So the focus on business outcome, the focus on integrating multiple systems into a single consolidated approach has become so much more important, which is why the modernization of the underlying infrastructure is absolutely key. Without consolidation, without a simplification of the management and orchestration. And without the cloud enabled platform, you won't get there. >> So Stu that's key, what Joakim just said in terms of modernizing the application as being able to manage them, not as one big monolith, but integration with other key systems. So what are the options? Wikibon has done some research on this, but what are the options for modernizing workloads, whether it's on-Prem or off-prem and what are some of the trade offs there? >> Yeah, so Dave, first of all, you know, one of the biggest challenges out there is you don't just want to, you know, lift and shift. If anybody's read research for it from Wikibon, Dave, for a day, for the 10 years, I've been part of it talks about the challenges, if you just talk about migrating, because while it sounds simple, we understand that there are individual customizations that every customer's made. So you might get part of the way there, but there's often the challenges that will get in the way that could cause failure. And as we talked about for you, especially your mission-critical applications, those are the ones that you can't have downtime. So absolutely customers are reevaluating their application portfolio. You know, there are a lot of things to look at. First of all, if you can, certain things can be moved to SaaS. You've seen certain segments of the market. Absolutely SaaS can be preferred methodology, if you can go there. One of the biggest hurdles for SaaS of course, is there's retraining of the workforce. Certain applications they will embracing of that because they can take advantage of new features, get to be able to use that wherever they are. But in other cases, there are the SaaS doesn't have the capability or it doesn't fit into the workflow of the business. The cloud operating model is something we've been talking about it with you Dave, for many years. When you've seen rapid maturation of what originally was called "private cloud", but really was just virtualization plus with a little bit of a management layer on top. But now much of the automation that you build in AI technologies, you know, Trey's got a whole team working on things that if you talk to his team, it sounds very similar to what you had the same conversation should have with cloud providers. So "cloud" as an operating model, not a destination is what we're going for and being able to take advantage of automation and the like. So where your application sits, absolutely some consideration. And what we've talked about Dave, you know, the governance, the security, the reliability, the performance are all reasons why being able to keep things, you know, under my environment with an infrastructure that I have control over is absolutely one of the reasons why I might keep things more along a converged infrastructure, rather than just saying to go through the challenge of migration and optimizing and changing to something in a more of a cloud native methodology. >> What about technical debt? Trey, people talk about technical debt as a bad thing, what is technical debt? Why do I want to avoid it? And how can I avoid it? And specifically, I know, Trey, I've thrown a lot of questions at you yet, but what is it about converged infrastructure and its capabilities that helped me avoid that technical debt? >> Well, it's an interesting thing, when you deploy an environment to support a mission-critical application, you have to make a lot of implementation decisions. Some of those decisions may take you down a path that may have a finite life. And that once you reached the life expectancy of that particular configuration, you now have debt that you have to reconcile. You have to change that architecture, that configuration. And so what we do with converged infrastructure is we dedicate a team of product management, an entire product management organization, a team of engineers that treat the integrations of the architecture as a releases. And we think long range about how do we avoid not having to change the underlying architecture. And one of the greatest testaments to this is in our conversion infrastructure products over the last 11 years, we've only saw two major architectural changes while supporting generational changes in underlying infrastructure capabilities well beyond when we first started. So converged infrastructure approach is about how do we build an architecture that allows you to avoid those dead-end pathways in those integration decisions that you would normally have to make on your own. >> Joakim, I wanted to ask you, you've mentioned monolithic applications before. That's sort of, we're evolving beyond that with application architectures, but there's still a lot of monoliths out there so. And a lot of customers want to modernize those application and workloads. What, in your view, what are you seeing as the best path and the best practice for modernizing some of those monolithic workloads? >> Yeah, so Dave, as clients today are trying to build a new intelligent enterprise, which is one of SAP's leading a guidance today. They needed to start to look at how to integrate all these different systems and applications that we talked about before into the common business process framework that they have. So consolidating workloads from big data to HANA, non HANA systems, cloud, non-cloud applications into a single framework is an absolute key to that modernization strategy. The second thing which I also mentioned before is to take a new grip around orchestration and management. We know that as customers seek this intelligent approach with both analytical data, as well as experience and transactional data, we must look for new ways to orchestrate and manage those application workloads and data flows. And this is where we slowly, slowly enter into the world of a enterprise data strategy. And that's again, where converged as a very important part to play in order to build these next generation platforms that can both consolidate, simplify. And at the same time enable us to work in a cloud enabled fashion with our cloud operating model that most of our clients seek today. >> So Stu, why can't I just shove all this stuff into the public cloud and call it a day? >> Yeah, well, Dave, we've seen some people that, you know, I have a cloud first strategy and often those are the same companies that are quickly doing what we call "repatriation". I bristle a little bit when I hear these, because often it's, I've gone to the cloud without understanding how I take advantage of it, not understanding the full financial ramifications what I'm going to need to do. And therefore they quickly go back to a world that they understand. So, cloud is not a silver bullet. We understand in technology, Dave, you know, things are complicated. There's all the organizational operational pieces they do. There are excellent cloud services and it's really it's innovation. You know, how do I take advantage of the data that I have, how I allow my application to move forward and respond to the business. And really that is not something that only happens in the public clouds. If I can take advantage of infrastructure that gets me along that journey to more of a cloud model, I get the business results. So, you know, automation and APIs and everything and the Ops movement are not something that are only in the public clouds, but something that we should be embracing holistically. And absolutely, that ties into where today and tomorrow's converge infrastructure are going. >> Yeah, and to me, it comes down to the business case too. I mean, you have to look at the risk-reward. The risk of changing something that's actually working for your business versus what the payback is going to be. You know, if it ain't broken, don't fix it, but you may want to update it, change the oil every now and then, you know, maybe prune some deadwood and modernize it. But Trey, I want to come back to you. Let's take a look at some of the options that customers have. And there are a lot of options, as I said at the top. You've got do it yourself, you got a hyper-converged infrastructure, of course, converged infrastructure. What are you seeing as the use case for each of these deployment options? >> So, build your own. We're really talking about an organization that has the expertise in-house to understand the integration standards that they need to deploy to support their environment. And candidly, there are a lot of customers that have very unique application requirements that have very much customized to their environment. And they've invested in the expertise to be able to sustain that on an ongoing basis. And build your own is great for those folks. The next in converged infrastructure, where we're really talking about an external storage array with applications that need to use data services native to a storage array. And self-select compute for scaling that compute for their particular need, and owning that three tiers architecture and its associated integration, but not having to sustain it because it's converged. There are enormous number of applications out there that benefit from that. I think the third one was, you talked about hyper-converged. I'll go back to when we first introduced our hyper-converged product to the market. Which is now leading the industry for quite some time, VxRail. We had always said that customers will consume hyper-converged and converged for different use cases and different applications. The maturity of hyper-converged has come to the point where you can run virtually any application that you would like on it. And this comes down to really two vectors of consideration. One, am I going to run hyper-converged versus converged based on my operational preference? You know, hyper-converged incorporates software defined storage, predominantly a compute operating plane. Converge as mentioned previously uses that external storage array has some type of systems fabric and dedicated compute resources with access into those your operational preference is one aspect of it. And then having applications that need the data services of an external storage, primary storage array are the other aspect of deciding whether those two things are needed in your particular environment. We find more and more customers out there that have an investment of both, not one versus the other. That's not to say that there aren't customers that only have one, they exist, but a majority of customers have both. >> So Joakim, I want to come back to the sort of attributes from the application requirements perspective. When you think about mission-critical, you think about availability, scale, recoverability, data protection. I wonder if you could talk a little bit about those attributes. And again, what is it about converged infrastructure that that is the best fit and the right strategic fit for supporting those demanding applications and workloads? >> Now, when it comes to SAP, we're talking about clients and customers, most mission-critical data and information and applications. And hence the requirements on the underlying infrastructure is absolutely on the very top of what the IT organization needs to deliver. This is why, when we talk about SAP, the requirements for high availability protection disaster recovery is very, very high. And it doesn't only involve a single system. As mentioned before, SAP is not a standalone application, but rather a landscape of systems that needs to be kept consistent. And that's what a CI platform does so well. It can consolidate workloads, whether it's big data or the transactional standard workloads of SAP, ERP or UCC. The converged platforms are able to put the very highest of availability protection standards into this whole landscape and making a really unique platform for CI workloads. And at the same time, it enables our customers to accelerate those modernization journeys into things such as ML, AI, IOT, even blockchain scenarios, where we've built out our capabilities to accelerate these implementations with the help of the underlying CI platforms and the rest of the SAP environment. >> Got it. Stu, I want to go to you. You had mentioned before the cloud operating model and something that we've been talking about for a long time and Wikibon. So can converged infrastructure substantially mimic that cloud operating model and how so? What are the key ingredients of being able to create that experience on-prem? >> Yeah, well, Dave as, we've watched for more than the last decade, the cloud has looked more and more like some of the traditional enterprise things that we would look for and the infrastructure in private clouds have gone more and more cloud-like and embrace that model. So, you know, I got, I think back to the early days, Dave, we talked about how cloud was supposed to just be, you know, "simple". If you look at deploying in the cloud today, it is not simple at all that. There are so many choices out there, you know, way more than I had an initial data center. In the same way, you know, I think, you know, the original converged infrastructure from Dell, if you look at the feedback, the criticism was, you know, oh, you can have it in any color you want, as long as black, just like the Ford model T. But it was that simplicity and consistency that helped build out most of what we were talking about the cloud models I wanted to know that I had a reliable substrate platform to build on top of it. But if you talk about Dave today and in the future, what do we want? First of all, I need that operating model in a multicloud world. So, you know, we look at the environments that can spread, but beyond just a single cloud, because customers today have multiple environments, absolutely hybrid is a big piece of that. We look at what VMware's doing, look at Microsoft, Red Hat, even Amazon are extended beyond just a cloud and going into hybrid and multicloud models. Automation, a critical piece of that. And we've seen, you know, great leaps and bounds in the last couple of generations of what's happening in CI to take advantage of automation. Because we know we've gone beyond what humans can just manage themselves and therefore, you know, true automation is helping along those environments. So yes, absolutely, Dave. You know, that the lines are blurred between what the private cloud and the public cloud. And it's just that overall cloud operating model and helping customers to deal with their data and their applications, regardless of where it lives. >> Well, you know, Trey in the early days of cloud and conversion infrastructure, that homogeneity that Stu was talking about any color, as long as it's black. That was actually an advantage to removing labor costs, that consistency and that standardization. But I'm interested in how CI has evolved, its, you know, added in optionality. I mean Joakim was just talking about blockchain, so all kinds of new services. But how has CCI evolved in the better part of the last decade and what are some of the most recent innovations that people should be thinking about or aware of? >> So I think the underlying experience of CI has remained relatively constant. And we talk about the experience that customers get. So if you just look at the data that we've analyzed for over a decade now, you know, one of the data points that I love is 99% of our customers who buy CI say they have virtually no downtime anymore. And, that's a great testament. 84% of our customers say that they have that their IT operations run more efficiently. The reality around how we delivered that in the past was through services and humans performing these integrations and the upkeep associated with the sustaining of the architecture. What we've focused on at Dell Technologies is really bringing technologies that allow us to automate those human integrations and best practices. In such a way where they can become more repeatable and consumable by more customers. We don't have to have as many services folks deploying these systems as we did in the past. Because we're using software intelligence to embed that human knowledge that we used to rely on individuals exclusively for. So that's one of the aspects of the architecture. And then just taking advantage of all the new technologies that we've seen introduce over the last several years from all flash architectures and NVMe on the horizon, NVMe over fabric. All of these things as we orchestrate them in software will enable them to be more consumable by the average everyday customer. Therefore it becomes more economical for them to deploy infrastructure on premises to support mission-critical applications. >> So Stu, what about cloud and multicloud, how does CI support that? Where do those fit in? Are they relevant? >> Yeah, Dave, so absolutely. As I was talking about before, you know, customers have hybrid and multicloud environments and managing across these environments are pretty important. If I look at the Dell family, obviously they're leveraging heavily VMware as the virtualization layer. And VMware has been moving heavily as to how support containerized and incubates these environments and extend their management to not only what's happening in the data center, but into the cloud environment with VMware cloud. So, you know, management in a multicloud world Dave, is one of those areas that we definitely have some work to do. Something we've looked at Wikibon for the last few years. Is how will multicloud be different than multi-vendor? Because that was not something that the industry had done a great job of solving in the past. But you know, customers are looking to take advantage of the innovation, where it is in the services. And you know, the data first architecture is something that we see and therefore that will bring them to many services and many places. >> Oh yeah, I was talking before about in the early days of CI and even a lot of organizations, some organizations, anyway, there's still these sort of silos of, you know, storage, networking, compute resources. And you think about DevOps, where does DevOps fit into this whole equation? Maybe Stu you could take a stab at it and anybody else who wants to chime in. >> Yeah, so Dave, great, great point there. So, you know, when we talk about those silos, DevOps is one of those movements to really help the unifying force to help customers move faster. And so therefore the development team and the operations team are working together. Things like security are not a bolt-in but something that can happen along the entire path. A more recent addition to the DevOps movement also is something like FinOps. So, you know, how do we make sure that we're not just having finance sign off on things and look back every quarter, but in real time, understand how we're architecting things, especially in the cloud so that we remain responsible for that model. So, you know, speed is, you know, one of the most important pieces for business and therefore the DevOps movement, helping customers move faster and, you know, leverage and get value out of their infrastructure, their applications and their data. >> Yeah, I would add to this that I think the big transition for organizations, cause I've seen it in developing my own organization, is getting IT operators to think programmatically instead of configuration based. Use the tool to configure a device. Think about how do we create programmatic instruction to interacts with all of the devices that creates that cloud-like adaptation. Feeds in application level signaling to adapt and change the underlying configuration about that infrastructure to better run the application without relying upon an IT operator, a human to make a change. This, sort of thinking programmatically is I think one of the biggest obstacles that the industry face. And I feel really good about how we've attacked it, but there is a transformation within that dialogue that every organization is going to navigate through at their own pace. >> Yeah, infrastructure is code automation, this a fundamental to digital transformation. Joakim, I wonder if you could give us some insight as you talk to SAP customers, you know, in Europe, across the EMEA, how does the pandemic change this? >> I think the pandemic has accelerated some of the movements that we already saw in the SAP world. There is obviously a force for making sure that we get our financial budgets in shape and that we don't over spend on our cost levels. And therefore it's going to be very important to see how we can manage all these new revenue generating projects that IT organizations and business organizations have planned around new customer experience initiatives, new supply chain optimization. They know that they need to invest in these projects to stay competitive and to gain new competitive edge. And where CI plays an important part is in order to, first of all, keep costs down in all of these projects, make sure to deliver a standardized common platform upon which all these projects can be introduced. And then of course, making sure that availability and risks are kept high versus at a minimum, right? Risk low and availability at a record high, because we need to stay on with our clients and their demands. So I think again, CI is going to play a very important role. As we see customers go through this pandemic situation and needing to put pressure on both innovation and cost control at the same time. And this is where also our new upcoming data strategies will play a really important part as we need to leverage the data we have better, smarter and more efficient way. >> Got it. Okay guys, we're running out of time, but Trey, I wonder if you could, you know break out your telescope or your crystal ball, give us some visibility into the futures of converged infrastructure. What should we be expecting? >> So if you look at the last release of this last technology that we released in power one, it was all about automation. We'll build on that platform to integrate other converged capability. So if you look at the converged systems market hyper-converged is very much an element of that. And I think that we're trending to is recognizing that we can deliver an architecture that has hyper-converged and converged attributes all in a single architecture and then dial up the degrees of automation to create more adaptations for different type of application workloads, not just your traditional three tier application workloads, but also those microservices based applications that one may historically think, maybe it's best to that off premises. We feel very confident that we are delivering platforms out there today that can run more economically on premises, provide better security, better data governance, and a lot of the adaptations, the enhancements, the optimizations that we'll deliver in our converged platforms of the future about colliding new infrastructure models together, and introducing more levels of automation to have greater adaptations for applications that are running on it. >> Got it. Trey, we're going to give you the last word. You know, if you're an architect of a large organization, you've got some mission-critical workloads that, you know, you're really trying to protect. What's the takeaway? What's really the advice that you would give those folks thinking about the sort of near and midterm and even longterm? >> My advice is to understand that there are many options. We sell a lot of independent component technologies and data centers that run every organization's environment around the world. We sell packaged outcomes and hyper-converged and converged. And a lot of companies buy a little bit of build your own, they buy some converged, they buy some hyper-converged. I would employ everyone, especially in this climate to really evaluate the packaged offerings and understand how they can benefit their environment. And we recognize that everything that there's not one hammer and everything is a nail. That's why we have this broad portfolio of products that are designed to be utilized in the most efficient manners for those customers who are consuming our technologies. And converged and hyper-converge are merely another way to simplify the ongoing challenges that organizations have in managing their data estate and all of the technologies they're consuming at a rapid pace in concert with the investments that they're also making off premises. So this is very much the technologies that we talked today are very much things that organizations should research, investigate and utilize where they best fit in their organization. >> Awesome guys, and of course there's a lot of information at dell.com about that. Wikibon.com has written a lot about this and the many, many sources of information out there. Trey, Joakim, Stu thanks so much for the conversation. Really meaty, a lot of substance, really appreciate your time, thank you. >> Thank you guys. >> Thank you Dave. >> Thanks Dave. >> And everybody for watching. This is Dave Vellante for theCUBE and we'll see you next time. (soft music)

Published Date : Jul 30 2020

SUMMARY :

leaders all around the world, And much of the world's Trey, I'm going to start with you. and all of the best practices of the original announcement that needs to happen. Yeah, and of course, you know, And that drove to the need of a platform for handling the most demanding workloads? that the best and the brightest package of the last decade and And of course, the focus in terms of modernizing the application But now much of the And one of the greatest testaments to this And a lot of customers want to modernize And at the same time enable us to work that are only in the public clouds, the payback is going to be. that need the data services that that is the best fit of the underlying CI platforms and something that we've been You know, that the lines of the last decade and what delivered that in the past something that the industry of silos of, you know, and the operations team that the industry face. in Europe, across the EMEA, and that we don't over I wonder if you could, you know and a lot of the adaptations, that you would give those and all of the technologies and the many, many sources and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

JoakimPERSON

0.99+

Dave VellantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

EuropeLOCATION

0.99+

Joakim ZetterbladPERSON

0.99+

AmazonORGANIZATION

0.99+

TreyPERSON

0.99+

Trey LaytonPERSON

0.99+

Palo AltoLOCATION

0.99+

Stu MinimanPERSON

0.99+

20 hourQUANTITY

0.99+

StuPERSON

0.99+

99%QUANTITY

0.99+

DellORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

84%QUANTITY

0.99+

bothQUANTITY

0.99+

oneQUANTITY

0.99+

24QUANTITY

0.99+

10 yearsQUANTITY

0.99+

todayDATE

0.99+

two thingsQUANTITY

0.99+

Wikibon.comORGANIZATION

0.99+

SAPORGANIZATION

0.99+

three tiersQUANTITY

0.99+

second thingQUANTITY

0.99+

third oneQUANTITY

0.99+

BostonLOCATION

0.99+

FordORGANIZATION

0.99+

singleQUANTITY

0.98+

tomorrowDATE

0.98+

Dell EMCORGANIZATION

0.98+

two vectorsQUANTITY

0.98+

WikibonORGANIZATION

0.98+

firstQUANTITY

0.98+

S/4HANATITLE

0.98+

FinOpsTITLE

0.98+

a dayQUANTITY

0.97+

HANATITLE

0.97+

OneQUANTITY

0.97+

first strategyQUANTITY

0.97+

DevOpsTITLE

0.97+

Converged Infrastructure: Past Present and Future


 

>> Narrator: From theCUBE's studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> You know, businesses have a staggering number of options today to support mission-critical applications. And much of the world's mission-critical data happens to live on converged infrastructure. Converged infrastructure is really designed to support the most demanding workloads. Words like resilience, performance, scalability, recoverability, et cetera. Those are the attributes that define converged infrastructure. Now with COVID-19 the digital transformation mandate, as we all know has been accelerated and buyers are demanding more from their infrastructure, and in particular converged infrastructure. Hi everybody this is Dave Vellante and welcome to this power panel where we're going to explore converged infrastructure, look at its past, its present and its future. And we're going to explore several things. The origins of converged infrastructure, why CI even came about. And what's its historic role been in terms of supporting mission-critical applications. We're going to look at modernizing workloads. What are the opportunities and the risks and what's converged infrastructures role in that regard. How has converged infrastructure evolved? And how will it support cloud and multicloud? And ultimately what's the future of converged infrastructure look like? And to examine these issues, we have three great guests, Trey Layton is here. He is the senior vice president for converged infrastructure and software engineering and architecture at Dell Technologies. And he's joined by Joakim Zetterblad. Who's the director of the SAP practice for EMEA at Dell technologies. And our very own Stu Miniman. Stu is a senior analyst at Wikibon. Guys, great to see you all welcome to theCUBE. Thanks for coming on. >> Thanks for having us. >> Great. >> Trey, I'm going to start with you. Take us back to the early days of converged infrastructure. Why was it even formed? Why was it created? >> Well, if you look back just over a decade ago, a lot of organizations were deploying virtualized environments. Everyone was consolidated on virtualization. A lot of technologies were emerging to enhance that virtualization outcome, meaning acceleration capabilities and storage arrays, networking. And there was a lot of complexity in integrating all of those underlying infrastructure technologies into a solution that would work reliably. You almost had to have a PhD and all of the best practices of many different companies integrations. And so we decided as Dell EMC, Dell Technologies to invest heavily in this area of manufacturing best practices and packaging them so that customers could acquire those technologies and already integrated fully regression tested architecture that could sustain virtually any type of workload that a company would run. And candidly that packaging, that rigor around testing produced a highly reliable product that customers now rely on heavily to operationalize greater efficiencies and run their most critical applications that power their business and ultimately the world economy. >> Now Stu, cause you were there. I was as well at the early days of the original announcement of CI. Looking back and sort of bringing it forward Stu, what was the business impact of converged infrastructure? >> Well, Dave as Trey was talking about it was that wave of virtualization had gone from, you know, just supporting many applications to being able to support all of your applications. And especially if you talk about those high value, you know business mission, critical applications, you want to make sure that you've got a reliable foundation. What the Dell tech team has done for years is make sure that they fully understand, you know the life cycle of testing that needs to happen. And you don't need to worry about, you know, what integration testing you need to do, looking at support major CS and doing a lot of your own sandbox testing, which for the most part was what enterprises needed to do. You said, okay, you know, I get the gear, I load the virtualization and then I have to see, you know, tweak everything to figure out how my application works. The business impact Dave, is you want to spend more time focusing on the business, not having to turn all the dials and worry about, do I get the performance I need? Does it have the reliability uptime that we need? And especially if we're talking about those business critical applications, of course, these are the ones that are running 24 by seven and if they go down, my business goes down with it. >> Yeah, and of course, you know, one of the other major themes we saw with conversion infrastructure was really attacking the IT labor problem. You had separate compute or server teams, storage teams, networking teams, they oftentimes weren't talking together. So there was a lot of inefficiency that converged infrastructure was designed to attack. But I want to come to the SAP expert. Joakim, that's really your wheelhouse. What is it about converged infrastructure that makes it suitable for SAP application specifically? >> You know, if you look at a classic SAP client today, there's really three major transformational waves that all SAP customers are faced with today, it's the move to S/4HANA, the introduction of this new platform, which needs to happen before 2027. It's the introduction of a multicloud cloud or operating model. And last but not least, it is the introduction of new digitization or intelligent technologies such as IOT, machine learning or artificial intelligence. And that drove to the need of a platform that could address all these three transformational waves. It came with a lot of complexity, increased costs, increased risk. And what CI did so uniquely was to provide that Edge to Core to Cloud strategy. Fully certified for both HANA, non HANA workloads for the classical analytical and transactional workloads, as well as the new modernization technologies such as IOT, machine learning, big data and analytics. And that created a huge momentum for converged in our SAP accounts. >> So Trey, I want to go to you cause you're the deep technical expert here. Joakim just mentioned uniqueness. So what are the unique characteristics of converged infrastructure that really make it suitable for handling the most demanding workloads? >> Well, converged infrastructure by definition is the integration of an external storage array with a highly optimized compute platform. And when we build best practices around integrating those technologies together, we essentially package optimizations that allow a customer to increase the quantity of users that are accessing those workloads or the applications that are driving database access in such a way where you can predictably understand consumption and utilization in your environment. Those packaged integrations are kind of like. You know, I have a friend that owns a race car shop and he has all kinds of expertise to build cars, but he has a vehicle that he buys is his daily driver. The customization that they've created to build race cars are great for the race cars that go on the track, but he's building a car on his own, it didn't make any sense. And so what customers found was the ability to acquire a packaged infrastructure with all these infrastructure optimizations, where we package these best practices that gave customers a reliable, predictable, and fully supported integration, so they didn't have to spend 20 hour support calls trying to discover and figure out what particular customization that they had employed for their application, that had some issue that they needed to troubleshoot and solve. This became a standard out of the box integration that the best and the brightest package so that customers can consume it at scale. >> So Joakim, I want to ask you let's take the sort of application view. Let's sort of flip the picture a little bit and come at it from that prism. How, if you think about like core business applications, how have they evolved over the better part of the last decade and specifically with regard to the mission-critical processes? >> So what we're seeing in the process industry and in the industry of mission-critical applications is that they have gone from being very monolithic systems where we literally saw a single ERP components such as all three or UCC. Whereas today customers are faced with a landscape of multiple components. Many of them working both on and off premise, there are multicloud strategies in place. And as we mentioned before, with the introduction of new IOT technologies, we see that there is a flow of information of data that requires a whole new set of infrastructure of components of tools to make these new processes happen. And of course, the focus in the end of the day is all on business outcomes. So what industries and companies doesn't want to do is to focus all their time in making sure that these new technologies are working together, but really focusing on how can I make an impact? How can I start to work in a better way with my clients? So the focus on business outcome, the focus on integrating multiple systems into a single consolidated approach has become so much more important, which is why the modernization of the underlying infrastructure is absolutely key. Without consolidation, without a simplification of the management and orchestration. And without the cloud enabled platform, you won't get there. >> So Stu that's key, what Joakim just said in terms of modernizing the application as being able to manage them, not as one big monolith, but integration with other key systems. So what are the options? Wikibon has done some research on this, but what are the options for modernizing workloads, whether it's on-Prem or off-prem and what are some of the trade offs there? >> Yeah, so Dave, first of all, you know, one of the biggest challenges out there is you don't just want to, you know, lift and shift. If anybody's read research for it from Wikibon, Dave, for a day, for the 10 years, I've been part of it talks about the challenges, if you just talk about migrating, because while it sounds simple, we understand that there are individual customizations that every customer's made. So you might get part of the way there, but there's often the challenges that will get in the way that could cause failure. And as we talked about for you, especially your mission-critical applications, those are the ones that you can't have downtime. So absolutely customers are reevaluating their application portfolio. You know, there are a lot of things to look at. First of all, if you can, certain things can be moved to SAS. You've seen certain segments of the market. Absolutely SAS can be preferred methodology, if you can go there. One of the biggest hurdles for SAS of course, is there's retraining of the workforce. Certain applications they will embracing of that because they can take advantage of new features, get to be able to use that wherever they are. But in other cases, there are the SAS doesn't have the capability or it doesn't fit into the workflow of the business. The cloud operating model is something we've been talking about it with you Dave, for many years. When you've seen rapid maturation of what originally was called "private cloud", but really was just virtualization plus with a little bit of a management layer on top. But now much of the automation that you build in AI technologies, you know, Trey's got a whole team working on things that if you talk to his team, it sounds very similar to what you had the same conversation should have with cloud providers. So "cloud" as an operating model, not a destination is what we're going for and being able to take advantage of automation and the like. So where your application sits, absolutely some consideration. And what we've talked about Dave, you know, the governance, the security, the reliability, the performance are all reasons why being able to keep things, you know, under my environment with an infrastructure that I have control over is absolutely one of the reasons why am I keep things more along a converged infrastructure, rather than just saying to go through the challenge of migration and optimizing and changing to something in a more of a cloud native methodology. >> What about technical debt? Trey, people talk about technical debt as a bad thing, what is technical debt? Why do I want to avoid it? And how can I avoid it? And specifically, I know, Trey, I've thrown a lot of questions at you yet, but what is it about converged infrastructure and its capabilities that helped me avoid that technical debt? >> Well, it's an interesting thing, when you deploy an environment to support a mission-critical application, you have to make a lot of implementation decisions. Some of those decisions may take you down a path that may have a finite life. And that once you reached the life expectancy of that particular configuration, you now have debt that you have to reconcile. You have to change that architecture, that configuration. And so what we do with converged infrastructure is we dedicate a team of product management, an entire product management organization, a team of engineers that treat the integrations of the architecture as a releases. And we think long range about how do we avoid not having to change the underlying architecture. And one of the greatest testaments to this is in our conversion infrastructure products over the last 11 years, we've only saw two major architectural changes while supporting generational changes in underlying infrastructure capabilities well beyond when we first started. So converged infrastructure approach is about how do we build an architecture that allows you to avoid those dead-end pathways in those integration decisions that you would normally have to make on your own. >> Joakim, I wanted to ask you, you've mentioned monolithic applications before. That's sort of, we're evolving beyond that with application architectures, but there's still a lot of monoliths out there so. And a lot of customers want to modernize those application and workloads. What, in your view, what are you seeing as the best path and the best practice for modernizing some of those monolithic workloads? >> Yeah, so Dave, as clients today are trying to build a new intelligent enterprise, which is one of SAP's leading a guidance today. They needed to start to look at how to integrate all these different systems and applications that we talked about before into the common business process framework that they have. So consolidating workloads from big data to HANA, non HANA systems, cloud, non-cloud applications into a single framework is an absolute key to that modernization strategy. The second thing which I also mentioned before is to take a new grip around orchestration and management. We know that as customers seek this intelligent approach with both analytical data, as well as experience and transactional data, we must look for new ways to orchestrate and manage those application workloads and data flows. And this is where we slowly, slowly enter into the world of a enterprise data strategy. And that's again, where converged as a very important part to play in order to build these next generation platforms that can both consolidate, simplify. And at the same time enable us to work in a cloud enabled fashion with our cloud operating model that most of our clients seek today. >> So Stu, why can't I just shove all this stuff into the public cloud and call it a day? >> Yeah, well, Dave, we've seen some people that, you know, I have a cloud first strategy and often those are the same companies that are quickly doing what we call "repatriation". I bristle a little bit when I hear these, because often it's, I've gone to the cloud without understanding how I take advantage of it, not understanding the full financial ramifications what I'm going to need to do. And therefore they quickly go back to a world that they understand. So, cloud is not a silver bullet. We understand in technology, Dave, you know, things are complicated. There's all the organizational operational pieces they do. There are excellent cloud services and it's really it's innovation. You know, how do I take advantage of the data that I have, how I allow my application to move forward and respond to the business. And really that is not something that only happens in the public clouds. If I can take advantage of infrastructure that gets me along that journey to more of a cloud model, I get the business results. So, you know, automation and APIs and everything and the Ops movement are not something that are only in the public clouds, but something that we should be embracing holistically. And absolutely, that ties into where today and tomorrow's converge infrastructure are going. >> Yeah, and to me, it comes down to the business case too. I mean, you have to look at the risk-reward. The risk of changing something that's actually working for your business versus what the payback is going to be. You know, if it ain't broken, don't fix it, but you may want to update it, change the oil every now and then, you know, maybe prune some deadwood and modernize it. But Trey, I want to come back to you. Let's take a look at some of the options that customers have. And there are a lot of options, as I said at the top. You've got do it yourself, you got a hyper-converged infrastructure, of course, converged infrastructure. What are you seeing as the use case for each of these deployment options? >> So, build your own. We're really talking about an organization that has the expertise in-house to understand the integration standards that they need to deploy to support their environment. And candidly, there are a lot of customers that have very unique application requirements that have very much customized to their environment. And they've invested in the expertise to be able to sustain that on an ongoing basis. And build your own is great for those folks. The next in converged infrastructure, where we're really talking about an external storage array with applications that need to use data services native to a storage array. And self-select compute for scaling that compute for their particular need, and owning that three tiers architecture and its associated integration, but not having to sustain it because it's converged. There are enormous number of applications out there that benefit from that. I think the third one was, you talked about hyper-converged. I'll go back to when we first introduced our hyper-converged product to the market. Which is now leading the industry for quite some time, VxRail. We had always said that customers will consume hyper-converged and converged for different use cases and different applications. The maturity of hyper-converged has come to the point where you can run virtually any application that you would like on it. And this comes down to really two vectors of consideration. One, am I going to run hyper-converged versus converged based on my operational preference? You know, hyper-converged incorporates software defined storage, predominantly a compute operating plane. Converge as mentioned previously uses that external storage array has some type of systems fabric and dedicated compute resources with access into those your operational preference is one aspect of it. And then having applications that need the data services of an external storage, primary storage array are the other aspect of deciding whether those two things are needed in your particular environment. We find more and more customers out there that have an investment of both, not one versus the other. That's not to say that there aren't customers that only have one, they exist, but a majority of customers have both. >> So Joakim, I want to come back to the sort of attributes from the application requirements perspective. When you think about mission-critical, you think about availability, scale, recoverability, data protection. I wonder if you could talk a little bit about those attributes. And again, what is it about converged infrastructure that that is the best fit and the right strategic fit for supporting those demanding applications and workloads? >> Now, when it comes to SAP, we're talking about clients and customers, most mission-critical data and information and applications. And hence the requirements on the underlying infrastructure is absolutely on the very top of what the IT organization needs to deliver. This is why, when we talk about SAP, the requirements for high availability protection disaster recovery is very, very high. And it doesn't only involve a single system. As mentioned before, SAP is not a standalone application, but rather a landscape of systems that needs to be kept consistent. And that's what a CI platform does so well. It can consolidate workloads, whether it's big data or the transactional standard workloads of SAP, ERP or UCC. The converged platforms are able to put the very highest of availability protection standards into this whole landscape and making a really unique platform for CI workloads. And at the same time, it enables our customers to accelerate those modernization journeys into things such as ML, AI, IOT, even blockchain scenarios, where we've built out our capabilities to accelerate these implementations with the help of the underlying CI platforms and the rest of the SAP environment. >> Got it. Stu, I want to go to you. You had mentioned before the cloud operating model and something that we've been talking about for a long time and Wikibon. So can converged infrastructure substantially mimic that cloud operating model and how so? What are the key ingredients of being able to create that experience on-prem? >> Yeah, well, Dave is, we've watched for more than the last decade, the cloud has looked more and more like some of the traditional enterprise things that we would look for and the infrastructure in private clouds have gone more and more cloud-like and embrace that model. So, you know, I got, I think back to the early days, Dave, we talked about how cloud was supposed to just be, you know, "simple". If you look at deploying in the cloud today, it is not simple at all that. There are so many choices out there, you know, way more than I had an initial data center. In the same way, you know, I think, you know, the original converged infrastructure from Dell, if you look at the feedback, the criticism was, you know, oh, you can have it in any color you want, as long as black, just like the Ford model T. But it was that simplicity and consistency that helped build out most of what we were talking about the cloud models I wanted to know that I had a reliable substrate platform to build on top of it. But if you talk about Dave today and in the future, what do we want? First of all, I need that operating model in a multicloud world. So, you know, we look at the environments that can spread, but beyond just a single cloud, because customers today have multiple environments, absolutely hybrid is a big piece of that. We look at what VMware's doing, look at Microsoft, Red Hat, even Amazon are extended beyond just a cloud and going into hybrid and multicloud models. Automation, a critical piece of that. And we've seen, you know, great leaps and bounds in the last couple of generations of what's happening in CI to take advantage of automation. Because we know we've gone beyond what humans can just manage themselves and therefore, you know, true automation is helping along those environments. So yes, absolutely, Dave. You know, that the lines are blurred between what the private cloud and the public cloud. And it's just that overall cloud operating model and helping customers to deal with their data and their applications, regardless of where it is. >> Well, you know, Trey in the early days of cloud and conversion infrastructure, that homogeneity that Stu was talking about any color, as long as it's black. That was actually an advantage to removing labor costs, that consistency and that standardization. But I'm interested in how CI has evolved, its, you know, added in optionality. I mean Joakim was just talking about blockchain, so all kinds of new services. But how has CCI evolved in the better part of the last decade and what are some of the most recent innovations that people should be thinking about or aware of? >> So I think the underlying experience of CI has remained relatively constant. And we talk about the experience that customers get. So if you just look at the data that we've analyzed for over a decade now, you know, one of the data points that I love is 99% of our customers who buy CI say they have virtually no downtime anymore. And, that's a great testament. 84% of our customers say that they have that their IT operations run more efficiently. The reality around how we delivered that in the past was through services and humans performing these integrations and the upkeep associated with the sustaining of the architecture. What we've focused on at Dell Technologies is really bringing technologies that allow us to automate those human integrations and best practices. In such a way where they can become more repeatable and consumable by more customers. We don't have to have as many services folks deploying these systems as we did in the past. Because we're using software intelligence to embed that human knowledge that we used to rely on individuals exclusively for. So that's one of the aspects of the architecture. And then just taking advantage of all the new technologies that we've seen introduce over the last several years from all flash architectures and NVMe on the horizon, NVMe over fabric. All of these things as we orchestrate them in software will enable them to be more consumable by the average everyday customer. Therefore it becomes more economical for them to deploy infrastructure on premises to support mission-critical applications. >> So Stu, what about cloud and multicloud, how does CI support that? Where do those fit in? Are they relevant? >> Yeah, Dave, so absolutely. As I was talking about before, you know, customers have hybrid and multicloud environments and managing across these environments are pretty important. If I look at the Dell family, obviously they're leveraging heavily VMware as the virtualization layer. And VMware has been moving heavily as to how support containerized and incubates these environments and extend their management to not only what's happening in the data center, but into the cloud environment with VMware cloud. So, you know, management in a multicloud world Dave, is one of those areas that we definitely have some work to do. Something we've looked at Wikibon for the last few years. Is how will multicloud be different than multi-vendor? Because that was not something that the industry had done a great job of solving in the past. But you know, customers are looking to take advantage of the innovation, where it is in the services. And you know, the data first architecture is something that we see and therefore that will bring them to many services and many places. >> Oh yeah, I was talking before about in the early days of CI and even a lot of organizations, some organizations, anyway, there's still these sort of silos of, you know, storage, networking, compute resources. And you think about DevOps, where does DevOps fit into this whole equation? Maybe Stu you could take a stab at it and anybody else who wants to chime in. >> Yeah, so Dave, great, great point there. So, you know, when we talk about those silos, DevOps is one of those movements to really help the unifying force to help customers move faster. And so therefore the development team and the operations team are working together. Things like security are not a built-in but something that can happen along the entire path. A more recent addition to the DevOps movement also is something like FinOps. So, you know, how do we make sure that we're not just having finance sign off on things and look back every quarter, but in real time, understand how we're architecting things, especially in the cloud so that we remain responsible for that model. So, you know, speed is, you know, one of the most important pieces for business and therefore the DevOps movement, helping customers move faster and, you know, leverage and get value out of their infrastructure, their applications and their data. >> Yeah, I would add to this that I think the big transition for organizations, cause I've seen it in developing my own organization, is getting IT operators to think programmatically instead of configuration based. Use the tool to configure a device. Think about how do we create programmatic instruction to interacts with all of the devices that creates that cloud-like adaptation. Feeds in application level signaling to adapt and change the underlying configuration about that infrastructure to better run the application without relying upon an IT operator, a human to make a change. This, sort of thinking programmatically is I think one of the biggest obstacles that the industry face. And I feel really good about how we've attacked it, but there is a transformation within that dialogue that every organization is going to navigate through at their own pace. >> Yeah, infrastructure is code automation, this a fundamental to digital transformation. Joakim, I wonder if you could give us some insight as you talk to SAP customers, you know, in Europe, across the EMEA, how does the pandemic change this? >> I think the pandemic has accelerated some of the movements that we already saw in the SAP world. There is obviously a force for making sure that we get our financial budgets in shape and that we don't over spend on our cost levels. And therefore it's going to be very important to see how we can manage all these new revenue generating projects that IT organizations and business organizations have planned around new customer experience initiatives, new supply chain optimization. They know that they need to invest in these projects to stay competitive and to gain new competitive edge. And where CI plays an important part is in order to, first of all, keep costs down in all of these projects, make sure to deliver a standardized common platform upon which all these projects can be introduced. And then of course, making sure that availability and risks are kept high versus at a minimum, right? Risk low and availability at a record high, because we need to stay on with our clients and their demands. So I think again, CI is going to play a very important role. As we see customers go through this pandemic situation and needing to put pressure on both innovation and cost control at the same time. And this is where also our new upcoming data strategies will play a really important part as we need to leverage the data we have better, smarter and more efficient way. >> Got it. Okay guys, we're running out of time, but Trey, I wonder if you could, you know break out your telescope or your crystal ball, give us some visibility into the futures of converged infrastructure. What should we be expecting? So if you look at the last release of this last technology that we released in power one, it was all about automation. We'll build on that platform to integrate other converged capability. So if you look at the converged systems market hyper-converged is very much an element of that. And I think that we're trending to is recognizing that we can deliver an architecture that has hyper-converged and converged attributes all in a single architecture and then dial up the degrees of automation to create more adaptations for different type of application workloads, not just your traditional three tier application workloads, but also those microservices based applications that one may historically think, maybe it's best to that off premises. We feel very confident that we are delivering platforms out there today that can run more economically on premises, provide better security, better data governance, and a lot of the adaptations, the enhancements, the optimizations that we'll deliver in our converged platforms of the future about colliding new infrastructure models together, and introducing more levels of automation to have greater adaptations for applications that are running on it. >> Got it. Trey, we're going to give you the last word. You know, if you're an architect of a large organization, you've got some mission-critical workloads that, you know, you're really trying to protect. What's the takeaway? What's really the advice that you would give those folks thinking about the sort of near and midterm and even longterm? >> My advice is to understand that there are many options. We sell a lot of independent component technologies and data centers that run every organization's environment around the world. We sell packaged outcomes and hyper-converged and converged. And a lot of companies buy a little bit of build your own, they buy some converged, they buy some hyper-converged. I would employ everyone, especially in this climate to really evaluate the packaged offerings and understand how they can benefit their environment. And we recognize that everything that there's not one hammer and everything is a nail. That's why we have this broad portfolio of products that are designed to be utilized in the most efficient manners for those customers who are consuming our technologies. And converged and hyper-converge are merely another way to simplify the ongoing challenges that organizations have in managing their data estate and all of the technologies they're consuming at a rapid pace in concert with the investments that they're also making off premises. So this is very much the technologies that we talked today are very much things that organizations should research, investigate and utilize where they best fit in their organization. >> Awesome guys, and of course there's a lot of information at dell.com about that. Wikibon.com has written a lot about this and the many, many sources of information out there. Trey, Joakim, Stu thanks so much for the conversation. Really meaty, a lot of substance, really appreciate your time, thank you. >> Thank you guys. >> Thank you Dave. >> Thanks Dave. >> And everybody for watching. This is Dave Vellante for theCUBE and we'll see you next time. (soft music)

Published Date : Jul 6 2020

SUMMARY :

leaders all around the world, And much of the world's Trey, I'm going to start with you. and all of the best practices of the original announcement that needs to happen. Yeah, and of course, you know, And that drove to the need of a platform for handling the most demanding workloads? that the best and the brightest package of the last decade and And of course, the focus in terms of modernizing the application But now much of the And one of the greatest testaments to this And a lot of customers want to modernize And at the same time enable us to work that are only in the public clouds, the payback is going to be. that need the data services that that is the best fit of the underlying CI platforms and something that we've been You know, that the lines of the last decade and what delivered that in the past something that the industry of silos of, you know, and the operations team that the industry face. in Europe, across the EMEA, and that we don't over and a lot of the adaptations, that you would give those and all of the technologies and the many, many sources and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

JoakimPERSON

0.99+

Dave VellantePERSON

0.99+

MicrosoftORGANIZATION

0.99+

Joakim ZetterbladPERSON

0.99+

AmazonORGANIZATION

0.99+

EuropeLOCATION

0.99+

TreyPERSON

0.99+

Trey LaytonPERSON

0.99+

Palo AltoLOCATION

0.99+

StuPERSON

0.99+

Stu MinimanPERSON

0.99+

20 hourQUANTITY

0.99+

99%QUANTITY

0.99+

DellORGANIZATION

0.99+

bothQUANTITY

0.99+

Dell TechnologiesORGANIZATION

0.99+

84%QUANTITY

0.99+

24QUANTITY

0.99+

10 yearsQUANTITY

0.99+

todayDATE

0.99+

oneQUANTITY

0.99+

two thingsQUANTITY

0.99+

SAPORGANIZATION

0.99+

second thingQUANTITY

0.99+

Wikibon.comORGANIZATION

0.99+

FordORGANIZATION

0.99+

BostonLOCATION

0.99+

three tiersQUANTITY

0.98+

Dell EMCORGANIZATION

0.98+

two vectorsQUANTITY

0.98+

tomorrowDATE

0.98+

EMEALOCATION

0.98+

third oneQUANTITY

0.98+

WikibonORGANIZATION

0.98+

S/4HANATITLE

0.98+

firstQUANTITY

0.97+

singleQUANTITY

0.97+

dell.comORGANIZATION

0.97+

HANATITLE

0.97+

FinOpsTITLE

0.97+

one aspectQUANTITY

0.97+

DevOpsTITLE

0.97+

a dayQUANTITY

0.97+

The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting


 

>>from the Cube Studios in Palo Alto and Boston. It's the Cube covering empowering the autonomous enterprise brought to you by >>Oracle Consulting. Everybody, this is Dave Vellante. We've been covering the transformation of Oracle consulting and really, it's rebirth. And I'm here with Chris Fox, who's the group vice president for Enterprise Cloud Architects and chief technologist for the North America Tech Cloud at Oracle. Chris, thanks so much for coming on the Cube. >>Thanks too great to be here, >>So I love this title. You know, years ago, this thing is a cloud architect. Certainly there were chief technologist, but so you really that's those are your peeps, Is that right? >>That's right. That's right. That's really in my team. And I That's all we dio. So our focus is really helping our customers take this journey from when they were on premise. You really transforming with cloud? And when we think about Cloud, really, for us, it's a combination. It's it's our hybrid cloud, which happens to be on premise. And then, of course, the true public cloud, like most people, are familiar with so very exciting journey and frankly, of seeing just a lot of success for our customers. You know what I think we're seeing at Oracle, though? Because we're so connected with SAS. And then we're also connected with the traditional applications that have run the business for years. The legacy applications that have been, you know, servicing us for 20 years and then the cloud native developers. So with my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So if we think of, like a customer outcome like I want to have a package delivered to me from a retailer that actual process flow could touch a brand new cognitive site of e commerce it could touch essentially maybe a traditional application that used to be on Prem that's now in the cloud. And then it might even use new SAS application, maybe for maybe Herman process or delivery vehicle and scheduling. So when my team does, we actually connect all three. So what? I was mentioned, too. In my team and all of our customers, we have field service, all three of those constituents. And if you think about process flows, so I take a cloud. Native developer we help them become efficient. We take the person use to run in a traditional application, and we help them become more efficient. And then we have the SAS applications, which are now rolling out new features on a quarterly basis and the whole new delivery model. But the real key is connecting all three of these into your business process flow. That makes the customers life much more vision. >>So I want to get into this cloud conversations that you guys are using this term last mover advantage. I asked you last I was being last, You know, an advantage. But let me start there. >>People always say, You know, of course, we want to get out of the data center. We're going zero data center and how we say, Well, how are you going to handle that back office stuff, right? The stuff that's really big Frankie, um, doesn't handle just, you know, instances dying or things going away too easily. It needs predictable performance in the scale. It absolutely needs security. And ultimately, you know, a lot of these applications truly have relied on Oracle database. The Oracle database has its own specific characteristics that it means to run really well. So we actually looked at the cloud and we said, Let's take the first generation clouds but you're doing great But let's add the features that specifically a lot of times the Oracle workload needed in order to run very well and in a cost effective manner. So that's what we mean when we say last mover advantage, We said, Let's take the best of the clouds that are out there today. Let's look at the workloads that, frankly, Oracle runs and has been running for years. What are customers needed? And then let's build those features right into this, uh, this next version of the cloud we service the Enterprise. So our goal, honestly, which is interesting is even that first discussion we had about cloud, native and legacy applications and also the new SAS applications. We built a cloud that handles all three use cases at scale resiliently in very secure manner, and I don't know of any other cloud that's handling those three use cases all in. We'll call it the same pendency process. Oracle >>Mike witnesses. Why was it important for Oracle? And is it important for Oracle on its customers that have to participate in IAS and Pass and SAS. Why not just the last two layers of that? Um What does that mean from a strategic advantage standpoint? What does that do for >>you? Yeah, great question. So the number one reason why we needed to have all three was that we have so many customers to today are in a data center. They're running a lot of our workloads on premise, and they absolutely are trying to find a better way to deliver lower cost services to their customers. And so we couldn't just say, Let's just everyone needs to just become net new. Everyone just needs to ditch the old and go just a brand new alone. Too hard, too expensive at times. So we said, You know, let's kill us customers the ultimate amount of choice. So let's even go back against that developer conversation and SAS Um, if you didn't have eyes, we couldn't help customers achieve a zero data center strategy with their traditional applications will call it PeopleSoft or JD Edwards, Revisit Suite or even. There's some massive applications that are running on the Oracle cloud right now that are custom applications built on the Oracle database. What they want is, they said, Give me the lowest. Possibly a predictable performance. I as I'll run my app steer on this number two. Give me a platform service for database because, frankly, I don't really want to run your database. Like with all the manual effort. I want someone automate, patching scale up and down and all these types of features like should have given us. And then number three. You know, I do want SAS over time. So we spend a lot of time with our customers really saying, How do I take this traditional application, Run it on eyes and has and the number two Let's modernize it at scale. Maybe I want to start peeling off functionality and running in the cloud Native services right alongside, right? That's something again that we're doing at scale. And other people are having a hard time running these traditional workloads on Prem in the cloud. The second part is they say, you know, I've got this legacy traditional your api been servicing we well, or maybe a supply chain system ultimately want to get out of this. How do I get to SAS? You say Okay, here's the way to do this. First bring into the cloud running on IAS and pass and then selectively, I call it cloud slicing. Take a piece of functionality and put it into SAS. We're helping customers move to the cloud at scale. We're helping them do it at their rate, with whatever level of change they want. And when they're ready for SAS, we're ready for them. >>How does autonomous fit into this whole architecture Wait for that? That that description? I mean, it's a it's nuanced, but it's important. I'm sure you haven't discussed this conversation with a lot of cloud architects and chief technologist. They want to know this stuff. They want to know how it works. Um, you know, we will talk about what the business impact is, but but yeah, it's not about autonomous and where that fits. >>So the autonomous database, what we've done is really big. And look at all the runtime operations of an Oracle database. So tuning, patching, sparing all these different features and what we've done is taken the best of the Oracle database the best of something called Exit Data right, which we run in the cloud which really helps a lot of our customers. And then we wrapped it with a set of automation and security tools to help it. Really, uh, managing self tune itself. Patch itself scale up and down, independent between compute and storage. So why that's important, though, is that it? Really? Our goal is to help people run the Oracle databases they have for years, but with far less effort and then even not letting far less effort. Hopefully, you know a machine. Last man out of the equation we always talk about is your man plus machine is greater than man alone, so being assisted by, um, artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers. Far less paths are hoping goal is that people have been running Oracle databases, you know, How can we help them do it with far less effort and maybe spend more time on what the data can do for the organization? Right? Improve customer experience at Centra versus maybe like Hana Way. How do I spin up the table? It >>so talk about the business impact. So you go into customers, you talk to the the cloud Architects, the chief technologist. You pass that test now, you got to deliver the business impact. We're is Oracle Consulting fit with regard to that? And maybe you could talk about that where you were You guys want to take this thing? >>Yeah, absolutely. I mean, so you know, the cloud is a great set of technologies, but where Oracle Consulting is really helping us deliver is in, um, you know, one of the things I think that's been fantastic working with the Oracle consulting team is that, you know, Cloud is new for a lot of customers who've been running these environments for a number of years. There's always some fear and a little bit of trepidation saying, How do I learn this new cloud of the workloads? We're talking about David, like tier zero, tier one, tier two and all the way up to Dev and Test and, er, um, Oracle consulting. This really couple things in particular, Number one, they start with the end in mind, and number two that they start to do is they really help implement these systems. And, you know, there's a lot of different assurances that we have that we're going to get it done on time and better be under budget because ultimately, you know, again, that's a something is really paramount for us and then the third part of it. But sometimes a run book, right? We actually don't want to just live in our customer's environments. We want to help them understand how to run this new system. So training and change management. A lot of times, Oracle Consulting is helping with run books. We usually well, after doing it the first time. We'll sit back and say, Let the customer do in the next few times and essentially help them through the process. And our goal at that point is to leave only if the customer wants us to. But ultimately our goal is to implemented, get it to go live on time and then help the customer learn this journey to the cloud and without them. Frankly, uh, you know, I think these systems were sometimes too complex and difficult to do on your own. Maybe the first time, especially cause I could say they're closing the books. They might be running your entire supply chain. They run your entire HR system, whatever they might be, uh, too important, leading a chance. So they really help us with helping a customer become live and become very confident. Skilled. They could do themselves >>of the conversation. We have to leave it right there. But thanks so much for coming on the Cube and sharing your insights. Great stuff. >>Absolutely. Thanks for having me on. >>All right. You're welcome. And thank you for watching everybody. This is Dave Volante for the Cube. We are covering the oracle of North American Consulting. Transformation. And it's rebirth in this digital event. Keep it right there. We'll be right back.

Published Date : Jul 6 2020

SUMMARY :

empowering the autonomous enterprise brought to you by Chris, thanks so much for coming on the Cube. Certainly there were chief technologist, but so you really that's those are your peeps, And if you think about process flows, So I want to get into this cloud conversations that you guys are using this term last mover advantage. And ultimately, you know, Why not just the last two layers of that? There's some massive applications that are running on the Oracle cloud right now that are custom applications built Um, you know, we will talk about what the business impact is, of the equation we always talk about is your man plus machine is greater than man alone, You pass that test now, you got to deliver the business And our goal at that point is to leave only if the customer wants us to. But thanks so much for coming on the Cube and sharing your insights. Thanks for having me on. And thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

Dave VellantePERSON

0.99+

DavidPERSON

0.99+

Chris FoxPERSON

0.99+

OracleORGANIZATION

0.99+

Dave VolantePERSON

0.99+

BostonLOCATION

0.99+

20 yearsQUANTITY

0.99+

MikePERSON

0.99+

second partQUANTITY

0.99+

SASORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

FirstQUANTITY

0.99+

Oracle ConsultingORGANIZATION

0.99+

CentraORGANIZATION

0.99+

Hana WayORGANIZATION

0.99+

first timeQUANTITY

0.99+

three use casesQUANTITY

0.98+

North American ConsultingORGANIZATION

0.98+

third partQUANTITY

0.98+

todayDATE

0.97+

oneQUANTITY

0.96+

Cube StudiosORGANIZATION

0.96+

threeQUANTITY

0.96+

first generationQUANTITY

0.95+

North America Tech CloudORGANIZATION

0.94+

FrankieORGANIZATION

0.92+

PeopleSoftORGANIZATION

0.91+

JD EdwardsORGANIZATION

0.87+

Enterprise Cloud ArchitectsORGANIZATION

0.87+

two layersQUANTITY

0.86+

yearsQUANTITY

0.86+

SASTITLE

0.84+

yearsDATE

0.84+

2QUANTITY

0.83+

first discussionQUANTITY

0.79+

tier oneOTHER

0.79+

CubeORGANIZATION

0.79+

RevisitTITLE

0.75+

SuiteORGANIZATION

0.71+

one reasonQUANTITY

0.71+

zero dataQUANTITY

0.7+

tier twoOTHER

0.68+

PassTITLE

0.67+

tier zeroOTHER

0.66+

IASTITLE

0.65+

twoQUANTITY

0.64+

ArchitePERSON

0.61+

HermanTITLE

0.61+

zeroQUANTITY

0.52+

numberQUANTITY

0.51+

CubeCOMMERCIAL_ITEM

0.51+

Infrastructure Led Transformation


 

>>from >>the Cube Studios in Palo Alto and Boston. It's the Cube covering empowering the autonomous enterprise brought to you by Oracle >>Consulting. Welcome back, everybody, to this special presentation of the Cube. We're covering the rebirth of Oracle Consulting. It's a digital event. We're going out. We're extracting the signal from the noise we happened today to be in Chicago, which is obviously the center of the country. A lot of customers here, a lot of consultants and consulting organizations here. A lot of expertise. Mike Evans is here. He's the VP for cloud advisory and general manager of Oracle. Elevate, Mike. Thanks for coming on The Cube. >>Appreciate it. Good to be here. >>So you elevate in your title. What is Oracle Elevate it or >>elevate was actually announced or cold from world last year. It's the partnership that we really had to actually take our scale of the next levels who actually did it with Deloitte Consulting. So the goal is to actually take the capabilities of both organizations. Deloitte really has functional capabilities and expertise within Oracle practice, and obviously Oracle has Oracle technical expertise. That combination that to really allows us to scale provide sort of call. The one plus one equals three effort for customers. >>You've got a decent timeline or observation. Over the past several years you joined three years ago, you were brand name companies. First of all, what attracted you to come to Oracle Consultant? >>Absolutely. So Oracle was in the point where they were doing a lot of stuff around on prim on premise software, the old RP type stuff they were doing cloud. They sort of had to have this sort of transformational moment. I was asked to come in and consulting in the early days and say, Hey, look, we're trying to transform the organization for mantra consulting over to Cloud Consulting, coming to help us with this stuff that you've worked with prior to cloud companies and help us really move the organization forward and look at things differently. So it's definitely been a journey over the last three years have taken it from early 85% of the 90% of our revenue around on Prem type of engagements to now actually splitting the organization being dedicated 100% in cloud, which is a huge transformation last >>three years. What really, what's the underpinning of Gen two Cloud. Can you give us the bumper sticker on that? >>Yeah, Well, the underpinning agenda to cloud is really if you look at the Gen One cloud was purely just infrastructure layer. Gen two is really based on a segmenting security, which is a huge problem out in the marketplace, right? So we actually have a sort of a world class way. We take segments security outside of the actual environment itself. It's completely segment, which is awesome, right? But then they also when you actually moving forward. The capability of the entire thing is built on sort of the autonomous enterprise, autonomous capabilities. Everything is sort of self healing. Self funding are not sorry, self healing and self aware that continually moves it forward. So the goal with that is, is if you have something that takes mundane tasks back to that, you have people that are no longer doing those capabilities today. So the underpinning of that, and without allows you to do is actually take that business case, and you reduce that because you're no longer having a bunch of people do things that no value add. Those people can actually move on to do back to the innovation and doing those higher level. >>So the business case is really about primarily, I would imagine about labor costs, right? I t labor costs were very labor intensive. We're doing stuff that doesn't necessarily add differentiation and value to the business. You're shifting that other tasks, right? >>Yeah, So the big components are really the overall cost of the infrastructure, what it takes to maintain the infrastructure, and that's broken up into kind of two components. One of it is typical power, physical location of building all those kinds of things and then the people to do the automation that take care of that right at the lower level. The third level is, as you continue to sort of process and automation going forward, that people capability that actually maintains the applications becomes easier because you can actually extend those capabilities out into the application, then require fewer people actually do the typical day to day things. Whether it's DB is like that, so it kind of becomes a continuous stream. There's various elements of the business case. You could sort of start with just the pure infrastructure cost and then get some of the process and automation is going forward and then actually go that even further, right? And then as organizations, a c i 01 of the questions I always have is where do you want to end on this? And they say, What are you talking about, right? It's really never done. You're on. If you're on a journey on a transformation, I go. This is the big boy Big girl conversation, right? Do you want to have an organization that actually, it stays the same from a head count standpoint? Are you trying to look to a partner to do that? Were you trying to get their operating model? What is your company trying to get you to look at, right? Because all those inflection points takes a different steps in the cloud journey. So as an advisor, as a trusted advisor, I asked those there was 1/2 a dozen or so questions. I would kind of walk through the organization through on sort of a cloud strategy, and I'll pick the path that kind of work with them. If they want to go to a manage service provider at the end, we would actually prepare someone, either bring the partner in or have associated partner. We've heard it off to, but we put the right pieces in place to make sure that that business case, >>that's interest. That's a really important point because a lot of customers would say, I don't want to reduce headcount. I want O. I'm starving for people. I want to train people Some companies may want to say Okay, I got to reduce headcount. It's a mandate, but most least in these boom times are saying I want to shift. So my point to the business cases, If you're not gonna cut people, then you have to have those people be more productive. And the example that you gave in terms of making the application developers more productive is relevant. And I won't explain this. Is that, for example, very simple example. You're I'm inferring you're gonna be able to compress the time to value, reduce lower your break, even accelerate the time to positive cash flow, if you will. That's an example of a value component to the business and part of the business case that people look at that and >>really, that's what it is. Definitely the business case and when he called. You know, when you get your rate of return, right? Um, the more that we can compress that. And I would say back to the conversation we had earlier about elevate and some of the partnerships we have with Deloitte around that a lot of that is to actually come up with enough capabilities that we can actually take the business case and actually reduce that have special, other things we can do for our customers around financing and things like that to make it easier for them, Right? We have options to make customers and actually help that business case. Some of the business cases we've seen are entire i t organizations saving 30 plus percent. Well, if you multiply that on a, you know, a large fortune 100 that may have a $1,000,000,000 budget, that's real money. >>Yeah, and okay, yes, no doubt. But then when you translate that into the business impact like you talked about the i t impact. But if you look at the business impact now, it becomes telephone numbers and actually see if it was often times just don't even believe it. But it's true, because if you can make the entire organization just, you know, 1/2 a percentage point more productive and 100,000 employees. I mean, that is that overwhelms. Actually, the i t business case. >>Yeah, and that's where that back to the sort of the steps in the business case is on the business and application side is making those folks actually more productive in the business case in saving them and adding, you know, whether it's a financial services, you're getting a new application out to market that actually generates revenue. Right? So that's sort of the trickle effect. When I look at it, I definitely look at it from a I see all the way through business. I'm a technically a business architect. That does. I t pretty damn good, >>Yeah, enables that sort of. Absolutely. How do you let's talk about this notion of continuous improvement? How are people thinking about that? Because you're talking a lot about just sort of self funding and and self progressing in a sort of organic entity that you're describing. How are people thinking about that? >>Yeah, I would say there's a little bit older map, right? But I would say that the goals what we're trying to embed back to the operating model we want to really embed is sort of a concept of a cloud center of excellence, and as part of that at the end, you have to have a set of functionality of folks that's constantly looking at the applications and or services of the different cloud providers that capability we have across the board. Everyone's got a multi cloud environment, right? How do they take those services They're probably already paying for anyways. And as the components get released, how can you continually put little pieces in there and do little micro releases? Quarterly aren't started weekly every month versus a big bang twice a year, right? Those little automation piece is continually add innovation in smaller chunks, and that's really the goal of cloud computing. And, you know, as you can actually break it up. It's no longer the big bang theory, right? I love that concept in vetting that whether you actually have a partner with some of the stuff that we're doing that actually in bed, what we call like a date to services, that that's what it is, is to support them but constantly look for different ways to include capabilities that were just released to add value on an ongoing basis. You don't have to go. Hey, great. That capability came out. It will be a next year's release. No, it could be next week, next month. >>So the outcome should be should be dramatically lowering costs, really accelerating your time to value. It really is what you're describing and we've been talking about in terms of the autonomous enterprise. It's really a prerequisite for scale, isn't it? >>It is. Absolutely >>thanks so much for coming on the Cube. Really appreciate it. Good stuff. >>Thank you very much. >>Thank you for watching. Be right back with our next guest. You're watching the Cube? We're here in Chicago covering the reverse of Oracle Consulting. I'm Dave Volante. Right back.

Published Date : Jul 6 2020

SUMMARY :

empowering the autonomous enterprise brought to you by Oracle We're extracting the signal from the noise we happened today to be in Chicago, Good to be here. So you elevate in your title. So the goal is to actually take the capabilities Over the past several years you joined three So it's definitely been a journey over the last three years have taken it from early 85% of the Can you give us the bumper sticker on that? So the underpinning of that, and without allows you to do is actually take that business case, So the business case is really about primarily, that people capability that actually maintains the applications becomes easier because you can actually extend And the example that you gave in terms of making the application and some of the partnerships we have with Deloitte around that a lot of that is to actually come up with enough that into the business impact like you talked about the i t impact. So that's sort of the trickle effect. How do you let's talk about this notion and as part of that at the end, you have to have a set of functionality of folks that's constantly looking at the So the outcome should be should be dramatically lowering costs, really accelerating your time It is. thanks so much for coming on the Cube. the reverse of Oracle Consulting.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VolantePERSON

0.99+

Mike EvansPERSON

0.99+

ChicagoLOCATION

0.99+

OracleORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

90%QUANTITY

0.99+

last yearDATE

0.99+

$1,000,000,000QUANTITY

0.99+

Palo AltoLOCATION

0.99+

next monthDATE

0.99+

100%QUANTITY

0.99+

MikePERSON

0.99+

next weekDATE

0.99+

BostonLOCATION

0.99+

next yearDATE

0.99+

Deloitte ConsultingORGANIZATION

0.99+

Oracle ConsultingORGANIZATION

0.99+

1/2 a dozenQUANTITY

0.99+

OneQUANTITY

0.99+

third levelQUANTITY

0.99+

30 plus percentQUANTITY

0.99+

two componentsQUANTITY

0.98+

three years agoDATE

0.98+

both organizationsQUANTITY

0.98+

100,000 employeesQUANTITY

0.97+

todayDATE

0.97+

ElevatePERSON

0.95+

twice a yearQUANTITY

0.95+

oneQUANTITY

0.95+

CubeCOMMERCIAL_ITEM

0.94+

threeQUANTITY

0.94+

100QUANTITY

0.92+

FirstQUANTITY

0.92+

Cloud ConsultingORGANIZATION

0.88+

last three yearsDATE

0.88+

Gen twoQUANTITY

0.87+

85%QUANTITY

0.79+

every monthQUANTITY

0.73+

CubeORGANIZATION

0.7+

Gen OneQUANTITY

0.66+

StudiosLOCATION

0.62+

yearsDATE

0.61+

several yearsDATE

0.58+

CubeTITLE

0.51+

Oracle ElevateORGANIZATION

0.46+

8 The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, it's theCUBE! Covering empowering the autonomous enterprise. Brought to you by ORACLE Consulting. >> Back to theCUBE everybody, this is Dave Vellante. We've been covering the transformation of ORACLE Consulting, and really it's rebirth, and I'm here with Chris Fox, who's the Group Vice President for Enterprise Cloud Architects and Chief Technologist for the North America Tech Cloud at ORACLE. Chris, thanks so much for coming on theCUBE. >> Thanks Dave, glad to be here. >> So, I love this title. I mean, years ago, there was no such thing as a cloud architect. Certainly there were chief technologists, but so, you are really, those are your peeps, is that right? >> That's right, that's right. That's really my team and I, that's all we do. So, our focus is really helping our customers take this journey from when they were on-premise to really transforming with cloud, and when we think about cloud, really, for us, it's a combination. It's our hybrid cloud, which happens to be on-premise, and then, of course, the true public cloud, like most people are familiar with. So, very exciting journey and, frankly, I've seen just a lot of success for our customers. You know, Dave, what I think we're seeing at ORACLE though, because we're so connected with SaaS, and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been, you know, servicing us for 20 years, and then the cloud needed developers. So, what my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So, if we think of, like, a customer outcome like I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud-native site from eCommerce, it could touch, essentially, maybe a traditional application that used to be on-prem that's now on the cloud, and then it might even use a new SaaS application, maybe, for maybe a permit process or delivery vehicle and scheduling. So, what my team does, we actually connect all three. So, what I always mention to my team and all of our customers, we have to be able to service all three of those constituents and really think about process flows. So, I take the cloud-native developer, we help them become efficient. We take the person who's been running that traditional application and we help them become more efficient, and then we have the SaaS applications, which are now rolling out new features on a quarterly basis and it's a whole new delivery model, but the real key is connecting all three of these into a business process flow that makes the customer's life much more efficient. People always say, you know, Chris, we want to get out of the data center, we're going zero data center, and I always say, well, how are you going to handle that back office stuff? Right? The stuff that's really big, it's cranky, doesn't handle just, you know, instances dying or things going away too easily. It needs predictable performance, it needs scale, it absolutely needs security, and ultimately, you know, a lot of these applications truly have relied on an ORACLE database. The ORACLE database has its own specific characteristics that it needs to run really well. So, we actually looked at the cloud and we said, let's take the first generation clouds, which are doing great, but let's add the features that specifically, a lot of times, the ORACLE workload needed in order to run very well and in a cost effective manner. So, that's what we mean when we say last mover advantage. We said, let's take the best of the clouds that are out there today, let's look at the workloads that, frankly, ORACLE runs and has been running for years, what our customers needed, and then let's build those features right into this next version of the cloud which can service the enterprise. So, our goal, honestly, which is interesting, is even that first discussion we had about cloud-native and legacy applications and also the new SaaS applications, we built a cloud that handles all three use cases at scale, resiliently, in a very secure manner, and I don't know of any other cloud that's handling those three use cases all in, we'll call it the same tendency for us at ORACLE. >> My question is why was it important for ORACLE, and is it important for ORACLE and its customers, to participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? What does that give you from a strategic advantage standpoint and what does that do for your customer? >> Yeah, great question. So, the number one reason why we needed to have all three was that we have so many customers who, today, are in a data center. They're running a lot of our workloads on-premise and they absolutely are trying to find a better way to deliver lower-cost services to their customers and so we couldn't just say, let's just, everyone needs to just become net new, everyone just needs to ditch the old and go just to brand-new alone. Too hard, too expensive, at times. So we said, you know, let's give us customers the ultimate amount of choice. So, let's even go back again to that developer conversation in SaaS. If you didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional application, we'll call it PeopleSoft or JD Edwards or E-Business Suite or even, there's some massive applications that are running on the ORACLE cloud right now that are custom applications built on the ORACLE database. What they want is they said, give me the lowest cost but yet predictable performance IaaS. I'll run my apps tier on this. Number two, give me a platform service for database, 'cause frankly, I don't really want to run your database, like, with all the menial effort. I want someone to automate patching, scale up and down, and all these types of features like the cloud should have given us. And then number three, I do want SaaS over time. So, we spend a lot of time with our customers really saying, how do I take this traditional application, run it on IaaS and PaaS, and then number two, let's modernize it at scale. Maybe I want to start peeling off functionality and running them as cloud-native services right alongside, right? That's something, again, that we're doing at scale and other people are having a hard time running these traditional workloads on-prem in the cloud. The second part is they say, you know, I've got this legacy traditional ERP. It's been servicing me well, or maybe a supply chain system. Ultimately I want to get out of this. How do I get to SaaS? And we say, okay, here's the way to do this. First, bring it to the cloud, run it on IaaS and PaaS, and then selectively, I call it cloud slicing, take a piece of functionality and put it into SaaS. We're helping customers move to the cloud at scale. We're helping 'em do it at their rate, with whatever level of change they want, and when they are ready for SaaS, we're ready for them. >> And how does autonomous fit into this whole architecture? Thank you, by the way, for that description. I mean, it's nuanced but it's important. I'm sure you're having this conversation with a lot of cloud architects and chief technologists. They want to know this stuff, and they want to know how it works. And then, obviously, we'll talk about what the business impact is, but talk about autonomous and where that fit. >> So, the autonomous database, what we've done is really taken a look at all the runtime operations of an ORACLE database, so tuning, patching, securing, all these different features, and what we've done is taken the best of the ORACLE database, the best of something called Exadata, right, which we run on the cloud, which really helps a lot of our customers, and then we've wrapped it with a set of automation and security tools to help it really manage itself, tune itself, patch itself, scale up and down independent between computant storage. So, why that's important though is that it really, our goal is to help people run the ORACLE database as they have for years but with far less effort, and then even not only far less effort, hopefully, you know, a machine plus man, kind of the equation we always talk about is man plus machine is greater than man alone. So, being assisted by artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers with far less cost. Our hope and goal is that people have been running ORACLE databases. How can we help them do it with far less effort, and maybe spend more time on what the data can do for the organization, right? Improve customer experience, etc. Versus maybe, like, how do I spin up (breaks up). >> So, let's talk about the business impact. So, you go into customers, you talk to the cloud architects, the chief technologists, you pass that test. Now you got to deliver the business impact. Where does ORACLE Consulting fit with regard to that? And maybe you could talk about where you guys want to take this thing. >> Yeah, absolutely. I mean, the cloud is great set of technologies, but where ORACLE Consulting is really helping us deliver is in the outcome. One of the things, I think, that's been fantastic working with the ORACLE Consulting team is that, you know, cloud is new. For a lot of customers who've been running these environments for a number of years, there's always some fear and a little bit of trepidation saying, how do I learn this new cloud? I mean, the workloads we're talking about, Dave, are like tier zero, tier one, tier two and, you know, all the way up to DEV and TEST and DR. ORACLE Consulting does really couple of things in particular. Number one, they start with the end in mind, and number two that they start to do, is they really help implement these systems and there's a lot of different assurances that we have that we're going to get it done on time and better be under budget, 'cause ultimately, again, that's something that's really paramount for us. And then the third part of it, a lot of times it's runbooks, right? We actually don't want to just live in our customers' environments. We want to help them understand how to run this new system, so in training and change management, a lot of times ORACLE Consulting is helping with runbooks. We usually will, after doing it the first time, we'll sit back and let the customer do it the next few times and essentially help them through the process, and our goal at that point is to leave. Only if the customer wants us to, but ultimately our goal is to implement it, get it to go live on time, and then help the customer learn this journey to the cloud. And without them, frankly, I think these systems are sometimes too complex and difficult to do on your own maybe the first time, especially 'cause like I say, they're closing the books. They might be running your entire supply chain. They run your entire HR system or whatever they might be. Too important to leave to chance. So, they really help us with helping the customer become live and become very confident and skilled 'cause they can do it themselves. >> Well Chris, we've covered the gamut. Loved the conversation. We'll have to leave it right there, but thanks so much for coming on theCUBE and sharing your insights. Great stuff. >> Absolutely, thanks Dave, and thanks for having me on. >> All right, you're welcome, and thank you for watching everybody. This is Dave Vellante for theCUBE. We are covering the ORACLE of North America Consulting transformation and its rebirth in this digital event. Keep it right there, we'll be right back.

Published Date : May 8 2020

SUMMARY :

Brought to you by ORACLE Consulting. and I'm here with Chris Fox, So, I love this title. and then we have the SaaS applications, and go just to brand-new alone. and they want to know how it works. and machine learning to perform the business impact. and our goal at that point is to leave. and sharing your insights. and thanks for having me on. and thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

Chris FoxPERSON

0.99+

Palo AltoLOCATION

0.99+

ORACLE ConsultingORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

OracleORGANIZATION

0.99+

second partQUANTITY

0.99+

ORACLE ConsultingORGANIZATION

0.99+

FirstQUANTITY

0.99+

Oracle ConsultingORGANIZATION

0.99+

ORACLEORGANIZATION

0.99+

three use casesQUANTITY

0.99+

BostonLOCATION

0.99+

third partQUANTITY

0.99+

first timeQUANTITY

0.99+

first discussionQUANTITY

0.98+

OneQUANTITY

0.98+

ORACLETITLE

0.98+

threeQUANTITY

0.98+

zeroQUANTITY

0.97+

first generationQUANTITY

0.96+

twoQUANTITY

0.96+

theCUBEORGANIZATION

0.96+

North America ConsultingORGANIZATION

0.96+

todayDATE

0.95+

oneQUANTITY

0.95+

JD EdwardsORGANIZATION

0.93+

North America Tech CloudORGANIZATION

0.91+

IaaSTITLE

0.9+

Number twoQUANTITY

0.87+

PaaSTITLE

0.84+

years agoDATE

0.8+

ExadataORGANIZATION

0.8+

E-Business SuiteTITLE

0.79+

number threeQUANTITY

0.74+

tier zeroOTHER

0.68+

SaaSTITLE

0.67+

tier oneOTHER

0.67+

yearsQUANTITY

0.63+

CloudORGANIZATION

0.62+

tier twoOTHER

0.61+

PeopleSoftORGANIZATION

0.59+

DEVORGANIZATION

0.58+

Gen 2QUANTITY

0.57+

number twoQUANTITY

0.51+

runbooksTITLE

0.45+

6 Infrastructure Led Transformation – Mike Owens, GVP, Advisory Services, NA Consulting, Oracle


 

>>From the cube studios in Palo Alto in Boston. It's the cube covering, empowering the autonomous enterprise brought to you by Oracle consulting. >>Welcome back everybody to this special presentation of the cube where we're covering the rebirth of Oracle consulting is a digital event where we're going out, we're extracting the signal from the noise. We happen today to be in Chicago, which is obviously the center of the country. A lot of big customers here, a lot of consultants and consulting organizations here. A lot of expertise. Mike Owens is here as a group VP for cloud advisory and the general manager of Oracle elevate. Mike, thanks for coming on the queue. Appreciate it. I'm glad to be here. So I can ask you elevate in your title, what is Oracle elevate? Yeah. Oracle elevate was actually announced Oracle OpenWorld last year and it's the partnership that we really had to actually take our scale to the next level. So we actually did it with a Deloitte consulting, so the goal is to actually take the capabilities of both organizations. >>Deloitte really has functional capabilities and expertise with an Oracle practice and obviously Oracle has Oracle technical expertise. The combination of the two really allows us to scale, provide sort of, I call the one plus one equals three effort for customers. Now you've got a decent timeline or observation over the past several years. I joined three years ago. Um, you were at some brand name companies. First of all, what attracted you to come to Oracle consulting? Yeah, absolutely. So Oracle was in the point where they were doing a lot of stuff around on-prem on premise software, right? The old ERP type stuff. They were doing cloud, they sort of had to have this sort of transformational moment. Um, I was asked to come in and Oracle consulting in the early days and say, Hey look, we're trying to transform the organization from on prem consulting over to cloud consulting. >>Come in and help us with this stuff that you've worked from your prior to cloud companies and help us really move the organization forward and look at things differently. So it's definitely been a journey over the last three years of taking it from really 85% of the 90% of our revenue around on-prem type of engagements to now actually splitting the organization being dedicated a hundred percent on cloud, which is just a huge transformation the last three years. What really, what's the underpinning of gen two cloud? Can you give us sort of the bumper sticker on that? Yeah, all of the underpinning the gen two cloud is really, if you look at the gen one, cloud was purely just an infrastructure layer. Gen two was really based on a segmenting security, which is a huge problem out in the marketplace, right? So we actually have a sort of a worldclass way that we take a segment security outside of the actual environment itself. >>It's completely segmented, which is awesome, right? But then the also when you actually move it forward, the capability of the entire thing is built on sort of the autonomous enterprise autonomous capabilities. Everything is sort of self healing, self funding or not, sorry, self healing and self-aware that continually moves it forward. So the goal with that is, is if you have something that takes mundane tasks back to that, you have people that are no longer doing those capabilities today. So the underpinning of that and what that allows you to do is actually take that business case and you reduce that because you're no longer having a bunch of people do things that are no value add. Those people can actually move on to do back to the innovation and doing those higher level components. So the, >>so the business case is really about, uh, I mean primarily I would imagine about labor costs, right? It labor costs were very labor intensive. We're doing stuff that doesn't necessarily add differentiation and value to the business. You're shifting that to other tasks, right? Yeah. And so the >>patients are really the overall cost of the infrastructure, what it takes to maintain the infrastructure. And that's broken up into kind of two components. One of it is typical power, physical location, a building, all those kinds of things. And then the people that do the automations that take care of that right at the lower level. The third level is as you continue to get, um, sort of, uh, process in automation going forward, the people capability that actually maintains the applications becomes easier because you can actually extend those capabilities out into the application. Then you require fewer people to actually do the typical day to day things, whether it's DBS, et cetera like that. So it kind of becomes a continuous stream. There's various elements of the business case. You could sort of start with just the pure infrastructure cost and then get some of the, um, process and automations going forward and then actually go that even further. >>Right? And then as organizations, as a CIO, one of the questions I always have is where do you want to end on this? And they say, well, what are you talking about? Right? It's really, you're, you're on it, you're on a journey, you're on a transformation. I go, this is the big boy, big girl conversation, right? Do you want to have an organization that actually, uh, is, stays the same from the head count standpoint? Are you trying to look to a partner to do the, where are you trying to get in your operating model? What is your company trying to get you to look at? Right? Because all those inflection points, it takes a different step in the cloud journey. So as an advisor, right, as a trusted advisor, I asked those herbs are half a dozen or so questions I would kind of walk your organization through on sort of a cloud strategy and I'll pick the path that kind of works with them. And if they want to go to a managed service provider at the end, we would actually prepare someone, either bring the partner in or have an associate department. We've heard it off too, but we put the right pieces in place to make sure that that business cake works >>well. That's interesting. That's a really important point because a lot of customers would say, I don't want to reduce head count. I want to, I'm starving for people. I want to retrain people. You know, some companies may want to say, Hey, okay, I got to reduce head count. It's a mandate. But, but most, at least in these boom times are saying, I want to shift. So by point to the business cases, if you're not going to cut people, then you have to have those people be more productive. And so the, the example that you gave in terms of making the application developers more productive as is relevant, and I want to explain this is that, for example, very simple example. You're, you're, I'm inferring you're going to be able to compress the time to value. You're gonna reduce your, lower your break even, you know, accelerate the time to positive cash flow if you will. That's an example of a value component to the business and part of the business case. The people look at that and is that absolutely, absolutely. >>That's what it is. Definitely the business case and when he call it the, you know, when you get your rate of return, right. Um, the more that we can compress that. And I would say back to the conversation we had earlier about elevate and some of the partnerships we have Deloitte around that, a lot of that is to actually come up with enough capabilities that we can actually take the business case and actually reduce that and have special other things we can do for our customers. We're on financing and things like that to make it easier for them. Right. We have options to make customers and actually help that business case. Some of the business cases we've seen our entire it organization saving 30 plus percent or if you multiply that on a, you know, a large fortune 100 that may have a billion dollar budget, that's real money. >>Yeah. And okay, yes, no doubt. But then when you translate that into the business impact, like you talked about the it impact, but if you look at the business impact now it becomes telephone numbers. And actually CFOs oftentimes just don't even believe it. But it's true because if you can make the entire organization just, you know, a half a percentage point more productive and you got a hundred thousand employees, I mean that is, that overwhelms actually the it business case. >>Yeah. And that's where that back to the sort of the steps in the business case is on the business and application side is making those folks actually more productive in the business case and saving them and adding, you know, whether it's a financial services and you're getting, um, an application out to market that actually generates revenue. Right. So that's, it's sort of the trickle effect. So when I look at it, I definitely look at it from a, it all the way through business. I am a technically a business architect that does it pretty damn good. >>Yeah. And it enables that sort of business transformation. How do you, let's talk about this notion of continuous improvement. How are people thinking about that? Um, cause you're talking a lot about just sort of self-funding, um, and, and, and self progressing in a sort of an organic entity that you're describing. How are people I >>think about that? Yeah. And I would say they're kind of a little bit older map. Right. Um, but I would say that the goal is what we're trying to embed back to the operating model we want to really embed is, you know, sort of the concept of the cloud center of excellence in as part of that at the end you have to have a set of functionality to have folks that's constantly looking at the applications and or services of the different cloud providers. A capability you have across the board. Everyone's got a multicloud environment, right? How do they take those services they're probably already paying for anyways. And as the components get released, how can you continually put little pieces in there and do little micro releases. Quarterly are, sorry, weekly, you know, every month versus a big bang twice a year. Right? Those little automation pieces continually add innovation in smaller chunks. >>And that's really the goal of cloud computing. And you know, as you can actually break it up, it's no longer the big bang theory. Right. And I love that concept, embedding that, whether you actually have a partner with some of the stuff that we're doing that actually we embed what we call like a day two services that that's what it is to support them. But Austin constantly look for different ways to include capabilities that were just released to add value on an ongoing basis. You don't have to go, Hey, great, that capability came out. It will be on next year's release. No, it could be next week. It could be next month. Right. >>Well, so the outcomes should be you be dramatically lowering costs, really accelerating your time to value. It really is what you're describing and we've been talking about in terms of the autonomous, you know, enterprise. It's really a prerequisite for scale, isn't it? >>It is. Absolutely right, and so when we use the term autonomous enterprise too, I love that because that's actually the term I've been using for a few years. Even before Larry started talking about the autonomous database, I talk about that environment of constantly look at an a cloud capability and everything that you can put from a machine earlier into AI under basically basically a bit let it run itself. The more that you can do that, the higher the value can you put those people off in a higher level tasks, right? That's been going on every provider for awhile. Oracle just has the capability now within the database that takes it to the next level, right? So we still are the only organization with that put that on top of our gen two cloud where all that is built in. Um, as part of it going forward, that's where we have the upper level really at the enterprise computing level, right? We can, we can work at all types of workload, but where we are niches is really those big enterprise workloads. Cause that's where we started from data enterprise. >>I didn't want to make it a technology discussion. But you said the only, only organization, you mean the only technology company with that autonomous database capabilities, is that correct, sir? Yes. Okay. So I know others sort of talk about it, but you know, Oracle I think talks about it more forcefully. We'll dig into that and uh, and report back. Mike, thanks so much for coming on the cube. Really appreciate it. Good stuff. Thank you very much. All right, and thank you for watching. We're right back with our next guest. You watching the cube. We're here in Chicago covering the rebirth of Oracle consulting. I'm Dave Volante. We'll be right back.

Published Date : May 8 2020

SUMMARY :

empowering the autonomous enterprise brought to you by Oracle consulting. so the goal is to actually take the capabilities of both organizations. First of all, what attracted you to come to Oracle consulting? Yeah, all of the underpinning the gen two cloud is really, if you look at the gen one, cloud was purely just an infrastructure layer. So the goal with that is, is if you have something that takes mundane And so the the people capability that actually maintains the applications becomes easier because you can actually extend And then as organizations, as a CIO, one of the questions I always have is where do you want And so the, the example that you gave in terms of making the application Definitely the business case and when he call it the, you know, when you get your rate of return, right. that into the business impact, like you talked about the it impact, you know, whether it's a financial services and you're getting, um, an application out to market that actually generates revenue. entity that you're describing. center of excellence in as part of that at the end you have to have a set of functionality to have folks that's And I love that concept, embedding that, whether you actually have a partner with some Well, so the outcomes should be you be dramatically lowering costs, really accelerating your time The more that you can do that, the higher the value can you put those people off in a higher level tasks, But you said the only, only organization, you mean the only technology company with that autonomous

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VolantePERSON

0.99+

OracleORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

MikePERSON

0.99+

Mike OwensPERSON

0.99+

ChicagoLOCATION

0.99+

Palo AltoLOCATION

0.99+

LarryPERSON

0.99+

90%QUANTITY

0.99+

half a dozenQUANTITY

0.99+

OneQUANTITY

0.99+

next weekDATE

0.99+

twoQUANTITY

0.99+

85%QUANTITY

0.99+

30 plus percentQUANTITY

0.99+

third levelQUANTITY

0.99+

next monthDATE

0.99+

oneQUANTITY

0.99+

next yearDATE

0.99+

both organizationsQUANTITY

0.99+

three years agoDATE

0.98+

FirstQUANTITY

0.98+

two componentsQUANTITY

0.97+

twice a yearQUANTITY

0.97+

NA ConsultingORGANIZATION

0.97+

hundred percentQUANTITY

0.96+

BostonLOCATION

0.96+

todayDATE

0.94+

gen oneQUANTITY

0.94+

GVPORGANIZATION

0.93+

billion dollarQUANTITY

0.91+

last yearDATE

0.9+

a hundred thousand employeesQUANTITY

0.89+

Gen twoQUANTITY

0.89+

last three yearsDATE

0.85+

100QUANTITY

0.84+

gen twoQUANTITY

0.81+

two servicesQUANTITY

0.81+

Advisory ServicesORGANIZATION

0.79+

Oracle consultingORGANIZATION

0.76+

three effortQUANTITY

0.76+

every monthQUANTITY

0.73+

past several yearsDATE

0.71+

AustinORGANIZATION

0.68+

dayQUANTITY

0.67+

Oracle OpenWorldEVENT

0.66+

DBSTITLE

0.65+

half aQUANTITY

0.59+

QuarterlyQUANTITY

0.51+

elevateTITLE

0.26+

The Value of Oracle’s Gen 2 Cloud Infrastructure + Oracle Consulting


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, it's theCUBE! Covering empowering the autonomous enterprise. Brought to you by ORACLE Consulting. >> Back to theCUBE everybody, this is Dave Vellante. We've been covering the transformation of ORACLE Consulting, and really it's rebirth, and I'm here with Chris Fox, who's the Group Vice President for Enterprise Cloud Architects and Chief Technologist for the North America Tech Cloud at ORACLE. Chris, thanks so much for coming on theCUBE. >> Thanks Dave, glad to be here. >> So, I love this title. I mean, years ago, there was no such thing as a cloud architect. Certainly there were chief technologists, but so, you are really, those are your peeps, is that right? >> That's right, that's right. That's really my team and I, that's all we do. So, our focus is really helping our customers take this journey from when they were on-premise to really transforming with cloud, and when we think about cloud, really, for us, it's a combination. It's our hybrid cloud, which happens to be on-premise, and then, of course, the true public cloud, like most people are familiar with. So, very exciting journey and, frankly, I've seen just a lot of success for our customers. You know, Dave, what I think we're seeing at ORACLE though, because we're so connected with SaaS, and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been, you know, servicing us for 20 years, and then the cloud needed developers. So, what my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So, if we think of, like, a customer outcome like I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud-native site from eCommerce, it could touch, essentially, maybe a traditional application that used to be on-prem that's now on the cloud, and then it might even use a new SaaS application, maybe, for maybe a permit process or delivery vehicle and scheduling. So, what my team does, we actually connect all three. So, what I always mention to my team and all of our customers, we have to be able to service all three of those constituents and really think about process flows. So, I take the cloud-native developer, we help them become efficient. We take the person who's been running that traditional application and we help them become more efficient, and then we have the SaaS applications, which are now rolling out new features on a quarterly basis and it's a whole new delivery model, but the real key is connecting all three of these into a business process flow that makes the customer's life much more efficient. People always say, you know, Chris, we want to get out of the data center, we're going zero data center, and I always say, well, how are you going to handle that back office stuff? Right? The stuff that's really big, it's cranky, doesn't handle just, you know, instances dying or things going away too easily. It needs predictable performance, it needs scale, it absolutely needs security, and ultimately, you know, a lot of these applications truly have relied on an ORACLE database. The ORACLE database has its own specific characteristics that it needs to run really well. So, we actually looked at the cloud and we said, let's take the first generation clouds, which are doing great, but let's add the features that specifically, a lot of times, the ORACLE workload needed in order to run very well and in a cost effective manner. So, that's what we mean when we say last mover advantage. We said, let's take the best of the clouds that are out there today, let's look at the workloads that, frankly, ORACLE runs and has been running for years, what our customers needed, and then let's build those features right into this next version of the cloud which can service the enterprise. So, our goal, honestly, which is interesting, is even that first discussion we had about cloud-native and legacy applications and also the new SaaS applications, we built a cloud that handles all three use cases at scale, resiliently, in a very secure manner, and I don't know of any other cloud that's handling those three use cases all in, we'll call it the same tendency for us at ORACLE. >> My question is why was it important for ORACLE, and is it important for ORACLE and its customers, to participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? What does that give you from a strategic advantage standpoint and what does that do for your customer? >> Yeah, great question. So, the number one reason why we needed to have all three was that we have so many customers who, today, are in a data center. They're running a lot of our workloads on-premise and they absolutely are trying to find a better way to deliver lower-cost services to their customers and so we couldn't just say, let's just, everyone needs to just become net new, everyone just needs to ditch the old and go just to brand-new alone. Too hard, too expensive, at times. So we said, you know, let's give us customers the ultimate amount of choice. So, let's even go back again to that developer conversation in SaaS. If you didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional application, we'll call it PeopleSoft or JD Edwards or E-Business Suite or even, there's some massive applications that are running on the ORACLE cloud right now that are custom applications built on the ORACLE database. What they want is they said, give me the lowest cost but yet predictable performance IaaS. I'll run my apps tier on this. Number two, give me a platform service for database, 'cause frankly, I don't really want to run your database, like, with all the menial effort. I want someone to automate patching, scale up and down, and all these types of features like the cloud should have given us. And then number three, I do want SaaS over time. So, we spend a lot of time with our customers really saying, how do I take this traditional application, run it on IaaS and PaaS, and then number two, let's modernize it at scale. Maybe I want to start peeling off functionality and running them as cloud-native services right alongside, right? That's something, again, that we're doing at scale and other people are having a hard time running these traditional workloads on-prem in the cloud. The second part is they say, you know, I've got this legacy traditional ERP. It's been servicing me well, or maybe a supply chain system. Ultimately I want to get out of this. How do I get to SaaS? And we say, okay, here's the way to do this. First, bring it to the cloud, run it on IaaS and PaaS, and then selectively, I call it cloud slicing, take a piece of functionality and put it into SaaS. We're helping customers move to the cloud at scale. We're helping 'em do it at their rate, with whatever level of change they want, and when they are ready for SaaS, we're ready for them. >> And how does autonomous fit into this whole architecture? Thank you, by the way, for that description. I mean, it's nuanced but it's important. I'm sure you're having this conversation with a lot of cloud architects and chief technologists. They want to know this stuff, and they want to know how it works. And then, obviously, we'll talk about what the business impact is, but talk about autonomous and where that fit. >> So, the autonomous database, what we've done is really taken a look at all the runtime operations of an ORACLE database, so tuning, patching, securing, all these different features, and what we've done is taken the best of the ORACLE database, the best of something called Exadata, right, which we run on the cloud, which really helps a lot of our customers, and then we've wrapped it with a set of automation and security tools to help it really manage itself, tune itself, patch itself, scale up and down independent between computant storage. So, why that's important though is that it really, our goal is to help people run the ORACLE database as they have for years but with far less effort, and then even not only far less effort, hopefully, you know, a machine plus man, kind of the equation we always talk about is man plus machine is greater than man alone. So, being assisted by artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers with far less cost. Our hope and goal is that people have been running ORACLE databases. How can we help them do it with far less effort, and maybe spend more time on what the data can do for the organization, right? Improve customer experience, etc. Versus maybe, like, how do I spin up (breaks up). >> So, let's talk about the business impact. So, you go into customers, you talk to the cloud architects, the chief technologists, you pass that test. Now you got to deliver the business impact. Where does ORACLE Consulting fit with regard to that? And maybe you could talk about where you guys want to take this thing. >> Yeah, absolutely. I mean, the cloud is great set of technologies, but where ORACLE Consulting is really helping us deliver is in the outcome. One of the things, I think, that's been fantastic working with the ORACLE Consulting team is that, you know, cloud is new. For a lot of customers who've been running these environments for a number of years, there's always some fear and a little bit of trepidation saying, how do I learn this new cloud? I mean, the workloads we're talking about, Dave, are like tier zero, tier one, tier two and, you know, all the way up to DEV and TEST and DR. ORACLE Consulting does really couple of things in particular. Number one, they start with the end in mind, and number two that they start to do, is they really help implement these systems and there's a lot of different assurances that we have that we're going to get it done on time and better be under budget, 'cause ultimately, again, that's something that's really paramount for us. And then the third part of it, a lot of times it's runbooks, right? We actually don't want to just live in our customers' environments. We want to help them understand how to run this new system, so in training and change management, a lot of times ORACLE Consulting is helping with runbooks. We usually will, after doing it the first time, we'll sit back and let the customer do it the next few times and essentially help them through the process, and our goal at that point is to leave. Only if the customer wants us to, but ultimately our goal is to implement it, get it to go live on time, and then help the customer learn this journey to the cloud. And without them, frankly, I think these systems are sometimes too complex and difficult to do on your own maybe the first time, especially 'cause like I say, they're closing the books. They might be running your entire supply chain. They run your entire HR system or whatever they might be. Too important to leave to chance. So, they really help us with helping the customer become live and become very confident and skilled 'cause they can do it themselves. >> Well Chris, we've covered the gamut. Loved the conversation. We'll have to leave it right there, but thanks so much for coming on theCUBE and sharing your insights. Great stuff. >> Absolutely, thanks Dave, and thanks for having me on. >> All right, you're welcome, and thank you for watching everybody. This is Dave Vellante for theCUBE. We are covering the ORACLE of North America Consulting transformation and its rebirth in this digital event. Keep it right there, we'll be right back.

Published Date : Apr 28 2020

SUMMARY :

Brought to you by ORACLE Consulting. and I'm here with Chris Fox, So, I love this title. and then we have the SaaS applications, and go just to brand-new alone. and they want to know how it works. and machine learning to perform the business impact. and our goal at that point is to leave. and sharing your insights. and thanks for having me on. and thank you for watching everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

Chris FoxPERSON

0.99+

Palo AltoLOCATION

0.99+

ORACLE ConsultingORGANIZATION

0.99+

OracleORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

second partQUANTITY

0.99+

ORACLE ConsultingORGANIZATION

0.99+

FirstQUANTITY

0.99+

Oracle ConsultingORGANIZATION

0.99+

ORACLEORGANIZATION

0.99+

three use casesQUANTITY

0.99+

BostonLOCATION

0.99+

third partQUANTITY

0.99+

first timeQUANTITY

0.99+

first discussionQUANTITY

0.98+

OneQUANTITY

0.98+

ORACLETITLE

0.98+

threeQUANTITY

0.98+

zeroQUANTITY

0.97+

first generationQUANTITY

0.96+

twoQUANTITY

0.96+

theCUBEORGANIZATION

0.96+

North America ConsultingORGANIZATION

0.96+

todayDATE

0.95+

oneQUANTITY

0.95+

JD EdwardsORGANIZATION

0.93+

North America Tech CloudORGANIZATION

0.91+

IaaSTITLE

0.9+

Number twoQUANTITY

0.87+

PaaSTITLE

0.84+

years agoDATE

0.8+

ExadataORGANIZATION

0.8+

E-Business SuiteTITLE

0.79+

number threeQUANTITY

0.74+

tier zeroOTHER

0.68+

SaaSTITLE

0.67+

tier oneOTHER

0.67+

Gen 2QUANTITY

0.66+

yearsQUANTITY

0.63+

CloudORGANIZATION

0.62+

tier twoOTHER

0.61+

PeopleSoftORGANIZATION

0.59+

DEVORGANIZATION

0.58+

number twoQUANTITY

0.51+

runbooksTITLE

0.45+

Infrastructure Led Transformation


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, it's theCUBE covering, empowering the Autonomous Enterprise brought to you by Oracle Consulting. >> Welcome back everybody to this special presentation of theCUBE where we're covering The Rebirth of Oracle Consulting. It's a digital event where we're going out, we're extracting the signal from the nose. We happen today to be in Chicago which is obviously, the center of the country, a lot of big customers here, a lot of consultants and consulting organizations here, a lot of expertise. Mike Owens is here, he's a group V.P. for cloud advisory and a general manager of Oracle Elevate. Mike, thanks for coming on theCUBE. >> Hi, I appreciate it, I'm glad to be here. >> So I got to ask you, Elevate in your title, what is Oracle Elevate? >> Yeah, Oracle Elevate was actually announced Oracle Open World last year, and it's the partnership that we really had to actually take our scale the next level. So it's actually did it with Deloitte Consulting. So the goal is to actually take the capabilities of both organizations, Deloitte really has functional capabilities and expertise with an oracle practice, and obviously, Oracle has Oracle technical expertise. The combination of the two really allows us to scale, provide, what I call the one plus one equals three effort for customers. >> Now, you've got a decent timeline or observation over the past several years. I think you joined three years ago? >> Yeah. >> You were at some brand name companies. First of all, what attracted you to come to Oracle Consulting? >> Yeah, absolutely. So Oracle was in the point where they were doing a lot of stuff around on-prem, on-premise software. The old ERP type stuff, they were doing cloud, they sort of had to had this sort of transformational moment. I was asked to come in on Oracle Consulting in the early days and say, hey look, we're trying to transform the organization from on-prem consulting over to cloud consulting, come in and help us with the stuff that you've worked from your prior two cloud companies and help us really move the organization forward and look at things differently. So it's definitely been a journey over the last three years. I've taken it from really 85 percent of, the 90 percent of our revenue around on-prem type of engagements to now actually split in the organization being dedicated 100 percent on Cloud, which is a huge transformation in the last three years. >> What really, what's the underpinning of Gen 2 cloud? Can you give us sort of the bumper sticker on that? >> Yeah, all about the underpinning the Gen 2 cloud is really, if you look at the gen 1 cloud was purely just an infrastructure layer. Gen 2 is really based on a segmenting security which is a huge problem out in the marketplace. >> Mm-hmm. >> So we actually have a sort of a world-class where we take segment security outside of the actual environment itself, it's completely segment which is awesome, right? But then they also, when you actually move it forward, the capability of the entire thing is built on sort of the Autonomous Enterprise or autonomous capabilities, everything is sort of self-healing, self-funding, or not, sorry, self-healing and self-aware, that continually moves it forward. So, the goal with that is, is if you have something that takes mundane tasks back to that you have people that are no longer doing those capabilities today. So the underpinning of that, and what that allows you to do, is actually take that business case and you reduce that because you're no longer having a bunch of people do things that are no value add. Those people can actually move on to do it back to that innovation and doing those higher level components. >> So the business case is really about, I mean, primarily, I would imagine about labor cost, right? I.T., labor costs that we're very labor intensive, we're doing stuff that doesn't necessarily add differentiation of value to the business, you're shifting that to other tasks, right? >> Yeah, so the big components are really the overall cost of the infrastructure, what it takes to maintain the infrastructure and that's broken up into kind of two components. One of it is typical power, physical location, a building, all those kinds of things, and then the people that do the automations that take care of that at the lower level. The third level is, as you continue to sort of process and automation going forward, the people capability that actually maintains the applications becomes easier because you can actually extend those capabilities out into the application, then require fewer people to actually do the typical day-to-day things whether it's DBAs, et cetera, like that. So it kind of becomes a continuous stream. There's various elements of the business case, you could sort of start with just the pure infrastructure cost and then get some of the process and automations going forward and then actually go that even further. And then as organizations as a CIO, one of the questions I always have is, where do you want to end on this? And they say, well what are you talking about? It is really-- >> Dave: We're never done! >> You're on a journey, you're on a transformation, I go, this is the big-boy, big-girl conversation. Do you want to have an organization that actually, it stays the same from a headcount standpoint? Are you trying to look to a partner to do the... Were you trying to get in your operating model? What is your company trying to get you to look at, right? Because all those inflection points takes a different step in the cloud journey. So as the advisor, as the trusted advisor, I ask those half a dozen or so questions, I would kind of walk your organization through on sort of a cloud strategy and I'll pick the path, to kind of works with them and if they want to go to a managed service provider at the end, we would actually prepare someone either bring the partner in or have associated partner we put it off to. But we put the right pieces in place to make sure that that business cake works. >> Well that's interesting, that's a really important point because a lot of customers would say, I don't want to reduce head count, I'm starving for people, I want to retrain people. You know, some companies may want to say, hey, okay, I got to reduced headcount, it's a mandate. But most, at least in these boom times are saying, I want to shift. So my point to the business case is, if you're not going to, you know, cut people, then you have to have those people be more productive. >> Correct. >> The example that you gave in terms of making the application developers more productive is relevant. And I want to explain this, is that, for example, very simple example, I'm inferring you're going to be able to compress the time to value, you're going to reduce, lower your break even, you know, accelerate the time to positive cash flow, if you will. >> Absolutely. That's an example of a value component to the business, and part of the business case. Do people look at that and is that real? >> Absolutely, that's what it is. Definitely, the business case and when you call the... You know, when you get your rate of return, right? >> Mm-hmm. >> The more that we can compress that, and I would say back to the conversation we had earlier about Elevate and some of the partnerships we have with Deloitte around that, a lot of that is to actually come up with enough capabilities that we can actually take the business case and actually reduce that and have special other things we can do for our customers around financing and things like that to make it easier for them. We have options to make customers and actually help that business case. Some of the business cases we've seen are entire I.T. organization saving 30 plus percent. Well, if you multiply that on a, you know, a large Fortune 100 that may have a billion dollar budget, that's real money. >> Okay, yes, no doubt. But then, when you translate that into the business impact, like you talked about the I.T. impact, but if you look at the business impact now it becomes telephone numbers. And actually the CFOs often times just don't even believe it, but it's true. >> Yes. >> Because if you can make the entire organization just you know, a half a percentage point more productive and you got 100,000 employees, I mean, that is, that overwhelms, actually, the I.T. business case. >> Yeah, and that's where back to sort of the steps in the business case is on the business and application side is making those folks actually more productive in the business case and saving them, and adding, you know, whether it's a financial services getting an application out to market that actually generates revenue. So that's, it's sort of the trickle effect. So when I look at it, I definitely look at it from a I.T. all the way through business. I am technically a business architect that does I.T. pretty damn good. >> Yeah, and I.T enables that sort of business transformation. >> Absolutely. >> How do you... Let's talk about this notion of continuous improvement. How are people thinking about that, 'cause you're talking a lot about just sort of self-funding, and self-progressing, sort of an organic entity that you're describing. How are people thinking about that? >> Yeah, I would say they're kind of a little bit all over the map. But I would say that the goal is what we're trying to embed back to the operating model, what we want to really embed is sort of a concept of the cloud set of excellence in as part of that at the end, you have to have a set of functionality of folks that's constantly looking at the applications and or services of the different cloud providers, their capabilities you have across the board, everyone's got to multicloud environment. How do they take those services they're probably already paying for anyways, and as the components get released, how can you continually put little pieces in there and do little micro-releases quarterly? I'm sorry, weekly? You know, every month versus a big bang twice a year. Those little automation pieces continually add innovation in smaller chunks and that's really the goal of cloud computing, you know, is you can actually break it up, it's no longer the big bang theory. And I love that concept, embedding that, whether you actually have a partner with some of the stuff that we're doing that actually embed, what we call, like a day two services that that's what it is, it's to support them but us constantly, look for different ways to include capabilities that were just released, to add value on an ongoing basis. You don't have to go, hey, they're great, that capability came out, it'll be on next year's release. No, it could be next week, it could be next month. >> Well, so the outcome should be dramatically lowering costs, really accelerating your time to value. It really is, what you're describing and we've been talking about in terms of the Autonomous, you know, Enterprise. Is really a prerequisite for scale, isn't it? >> It is, absolutely. And so, when we use the term Autonomous Enterprise too, I love that because that's actually the term I've been using for a few years even before Larry started talking about the autonomous database, I talk about that environment of constantly looking at a cloud capability and everything that you can put from the machine earlier into A.I., under to basically let it run itself. The more that you can do that, the higher the value, and you can put those people off into higher level tasks. That's been going on every provider for a while. Oracle just has the capability now within the database that takes it to the next level. So we still are the only organization with that, put that on top of our Gen 2 cloud where all that is built in, as part of it going forward. That's where we have the upper level really at the enterprise computing level. We can work on all types of workload but where we are niches, is really those big enterprise workloads 'cause that's where we started from data enterprise. >> I don't want to make it a technology discussion. We said they're the only organization, you mean the only technology company with that autonomous database capabilities, is that correct? >> Yes, sir, yes. >> Okay, so I know others sort of talk about it, but, you know, Oracle I think talks about it more forcefully? >> Yes. >> We'll dig into that and report back. Mike, thanks so much for coming on theCUBE. Really, I appreciate it, good stuff. >> Anytime, thank you very much. >> All right, and thank you for watching. We're right back with our next guest, you're watching theCUBE. We're here in Chicago covering, The Rebirth of Oracle Consulting. I'm Dave Vellante, we'll be right back.

Published Date : Apr 28 2020

SUMMARY :

brought to you by Oracle Consulting. center of the country, I'm glad to be here. So the goal is to actually over the past several years. First of all, what attracted you in the last three years. Yeah, all about the of the actual environment itself, So the business case is really about, of the business case, So as the advisor, as the trusted advisor, So my point to the business case is, accelerate the time to positive cash flow, and part of the business case. Definitely, the business a lot of that is to actually come up that into the business impact, the I.T. business case. in the business case is on the business Yeah, and I.T enables that sort of that you're describing. in as part of that at the end, in terms of the Autonomous, The more that you can do capabilities, is that correct? We'll dig into that and report back. All right, and thank you for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavePERSON

0.99+

OracleORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

Mike OwensPERSON

0.99+

MikePERSON

0.99+

ChicagoLOCATION

0.99+

90 percentQUANTITY

0.99+

Palo AltoLOCATION

0.99+

Oracle ConsultingORGANIZATION

0.99+

LarryPERSON

0.99+

BostonLOCATION

0.99+

85 percentQUANTITY

0.99+

100 percentQUANTITY

0.99+

half a dozenQUANTITY

0.99+

100,000 employeesQUANTITY

0.99+

next weekDATE

0.99+

Deloitte ConsultingORGANIZATION

0.99+

twoQUANTITY

0.99+

OneQUANTITY

0.99+

ElevateORGANIZATION

0.99+

next yearDATE

0.99+

next monthDATE

0.99+

third levelQUANTITY

0.99+

both organizationsQUANTITY

0.99+

oneQUANTITY

0.98+

30 plus percentQUANTITY

0.97+

Oracle ElevateORGANIZATION

0.97+

Oracle Open WorldEVENT

0.97+

FirstQUANTITY

0.97+

two componentsQUANTITY

0.96+

twice a yearQUANTITY

0.96+

three years agoDATE

0.96+

todayDATE

0.95+

two cloud companiesQUANTITY

0.95+

last yearDATE

0.93+

theCUBEORGANIZATION

0.93+

billion dollarQUANTITY

0.91+

last three yearsDATE

0.89+

two servicesQUANTITY

0.83+

past several yearsDATE

0.76+

three effortQUANTITY

0.72+

dayQUANTITY

0.71+

Gen 2OTHER

0.68+

Fortune 100ORGANIZATION

0.68+

Gen 2QUANTITY

0.66+

GenOTHER

0.61+

gen 1QUANTITY

0.6+

half a percentageQUANTITY

0.59+

ElevateTITLE

0.58+

2OTHER

0.56+

I.T.ORGANIZATION

0.54+

every monthQUANTITY

0.53+

GenQUANTITY

0.51+

2QUANTITY

0.48+

OracleEVENT

0.42+

What's Next for Converged Infrastructure


 

[Music] [Applause] [Music] [Applause] [Music] hi I'm Stu minimun with wiki bond and welcome to another wiki bond the cube digital community event this one sponsored by Dell EMC of course it's a big week in the industry VMware is having their big European show in Barcelona VMworld and while we are not there in person we have some news that we want to dig into with Dell EMC so like all of our digital community events we're gonna have about 25 minutes of video and then afterwards we're going to have a crowd chat we're gonna have a panel where you have the opportunity to dig in ask your questions give us your viewpoint and talk about everything that's going on so it's important to pay attention think about what questions participate in the crowd chat afterward and thanks so much for joining us talk about the business issues of the day to help us frame this discussion I'm happy to welcome back to the program Pete manka who's the senior vice president with converged infrastructure and solutions engineering at Dell MC Pete great to see you great see you Tuesday all right so Pete converged infrastructures come a long way you and I have a lot of history in this space you know more than a decade now we've been in here so but from a customer standpoint you know this has matured a lot I wouldn't want you to start out give us the customer perspective you know what was convergent restrictor designed to do how is it living up to that and you know what's the state of it today sure well as you said we've got a long history in this and ten years ago we started this business to really simplify IT operations for our customers and we tried to remove the silos between storage compute and networking management and we're doing that we created this market called converged infrastructure by converging the management of those three siloed operations in doing so we added a tremendous amount of value for our customers fast forward now over the years earlier this year we come up with a product that the BX block 1000 that allows us to scale considerably greater within a single environment adding more value to our customer we're very customer driven at Dell EMC as you know and so we talked to our customers again and said what else do you want what else do you want and they pushed us for more automation in more monitoring support for the product and that's really what we're here to talk about today is how we get from simplifying IT operations for customers through allowing scale architectures to eventually automating the customers environment for them yeah when you talk about simplification that the the industry has really been really galvanized gotten really excited at hyper-converged infrastructure and I hear simple that's kind of what HCI is gonna do Dell of course has both converged and hyper-converged we've talked a lot as to how they both fit maybe now you know give give us the update as to you know the relevance of CI today while HCI is still continuing to grow really that sure yeah HDI is a hot market obviously and it is growing fast and customers should be excited about HDI because it's a great solution right it enables the customers get an application up and running very quickly and it's great for scale out architectures you want to add symmetric type nodes and skill oh you're at your application your architecture it's great for that but like all architectures it doesn't fit all solutions or all problems for the customers and there's a place for CI and there's a place for HCI the end you think about HCI versus CI CI is great for asymmetrically scaling architectures you want to have more storage more networking more memory inside your servers more compute you can do that through a CI portfolio and for customers who need that asymmetrical scaling for customers who need high availability very efficient scale type storage environments scale of compute environments you can do that through a CI platform much more efficiently than you can through other platforms in the market alright Pete you mentioned that there was announcement earlier in the year that the VX block 1000 so for those that don't have hauled of history like us that followed from the V block of the BX block and now the 1000 helped remind us what was different about this from things in the past sure when we first started out in the conversion structure business we had blocks that were specific to storage configurations if you wanted a unity or v-max you had to buy a specific model of our of our VX block product line that's great but we realize customers and customers told us they wanted a mix environment they wanted to have a multi-use environment in their block so we created the VX block 1000 announced in February and it allows you to mix and match your storage sand bar along with your compute environment and scales out at a much greater capacity than we could through the original block design so and we're providing the customer a much larger footprint managed by within a single block but also a choice allowing them to have multiple application configurations within the same block all right so people now what what's Del DMC doing to bring converged infrastructure for it even more how are we expanding you know what it's gonna do for customers and the problems they're looking to solve yeah right so again we went back to our customers that said ok tell us your experience with block you tell us what you like tell us what you don't like and they love the product it's been a very successful product they said we want more automation we want more monitoring you want the ability to see what's happening as well as automate workflows and procedures that we have to do to get our workloads up and running quicker and more automated fashion so what we're gonna talk about today is how we're going to do that we're going to provide more automation capabilities and the ability to monitor through our VM work you realize suite toolset alright great Pete I appreciate you helping kind of lay the groundwork we're gonna be back in a quick second one of your peers from Dell MC to dig into the product so stay with us we'll be back right after this this quick break [Music] vx block system 1000 simplifies IT accelerates the pace of innovation and reduces operating costs storage compute networking and virtualization components are all unified in a single system transforming operations and delivering better business outcomes faster this is achieved by five foundational pillars that set Dell EMC apart as the leading data center solutions provider each VX blocks system 1000 is engineered manufactured managed sustained and supported as one welcome back joining me to dig into this announcement is Dan Mita who's the vice president of converged infrastructure engineering at Dell EMC damn thanks for joining us thanks for having me all right so Pete kind of teased out of what we're doing here talked about what we've been building on for the last ten years in the converging infrastructure industry please elaborate you know what this is and shuttle from there yeah absolutely so to your point we know customers have been buying VX blocks and V blocks for the last ten years and there's lots of good reasons behind all of that we also know that customers been asking us for better monitoring better reporting and more orchestration capabilities we this announcement we think we're meeting those challenges so there's three things that I'd like to talk about one is we're gonna help customers raise the bar around awareness of what's going on within the environment we'll do that through health checks and dashboards performance dashboarding real-time alerting for the first time the second thing we'll talk about is we talked about a different level of automation than we've ever had before when it comes to orchestration we'll be introducing the ability to set up the services necessary to run orchestrated workflows and then our intention is to bring to market those engineered workflows and lastly would be you know analytics deeper analytics for customers that want to go even further into why their system is drifted from a known good state we're gonna give them the capabilities to see that great so Dan I think back from the earliest days that you know Vblock was always architected to you know transform the way operations are done what really differentiates this you know how important is there are things like the analytics of you're doing yeah sure so you're right today our customers use element managers to do most of that what this tool will allow them to do is kind of abstract a lot of the complexity folk in the element managers themselves if you think about an example where our customer wants to provision an ESXi host add it to a cluster and you say a Power Max bulan we know there's about a dozen manual steps to do that it cuts across four element managers and that also means you're going to be touching your administrators across compute network storage and virtualization with this single tool that will guide you first by checking the environment taking you through an orderly set of questions or inputs and then lastly validating the environment we know that we're going to help customers eliminate any undue harm that might do to an environment but we're also gonna save them time effort and money by getting it done quicker ok so Dan it sounds like there's a new suite of software explain it exactly what is it and how do all these pieces fit together yeah so there's three pieces in this week foundational is what we call the X blocks central so the X Box central is going to go out mandatory with all new VX blocks we're also going to make it available to our customers running older 300 500 and 700 family the X blocks and we'll provide a migration path for customers that are using vision today that's the tool that's going to allow them to do that performance health and RCM compliance dashboarding as well as do metrics based in real-time alerting one loved one step up from that one layer up from that is what we call the X block orchestration so this this product is being built underneath the V realize operations or excuse me orchestration tool and it's essentially like I said it's going to provide those all of those tools for setting up the services to run the workflows and then we'll provide those workflows so that example that I gave just a minute ago about provisioning that host will have a workflow from that right out of the gate ok so you mentioned the the vir ops thing you know VMware has always been a you know a very important piece of the whole stack there's yeah be in front of everything in the product line while you're announcing it this week at you know vmworld your and you know explain a little bit more that integration between the VMware pieces so you mentioned V Rob's and that's the third piece in this suite right so that is that it's going to provide us the dashboarding to provide all of that detailed analytics so if you think about it we're using V realized opera orchestra ssin as a workflow engine we're using V ROPS for that intelligent insight into the operations as a framework for the things that we're doing but essentially what we've given customers at this point is a framework for a cloud management or a cloud operations model sitting on top of a converged infrastructure alright Dan thanks for explaining all that now we're gonna throw it over to a customer to really hear what they think of this announcement when we started to talk about the needs to innovate within business technology and move forward with the business we knew we had to advance our technology offerings standardize our data center and help bring all our technology to current date vs block allowed us to do that in one purchase and also allowed us to basically bring our entire data center ten years forward with one step the benefits we've seen from the X block from my side of the house I now have that sleep at night capability because I have full high availability I have industry-leading technology the performance is there their applications are now more available we now have a platform where we can modernize our entire system we can add blades we can add storage we can add networking as we need it out of the box all knowing that it's been engineered and architected to work together it has literally set it and forget it for us we go about our daily business and now we've transitioned from a maintenance time set and a maintenance mindset to now we can participate in meetings to help drive business innovation help drive digital transformation within our company and really be that true IT strategic partner the business is looking for with the implementation of VX blocks central upcoming we should be able to get a better idea of what's going on in our VX block through one dashboard we're very sensitive about the number of dashboards we try to view do the whole death bi dashboard situation especially for a small team we really believe yes block central is going to be beneficial for us to have a quick health overview of our entire unit encompassing all components as we discussed additional features coming out for the VX block one of the more interesting ones for me was to see the integration of VMware's be realized product into the VX block most importantly focused around orchestration and analytics that's something that we don't do a lot of right now but as our company continues to grow and we continue to expand our VX block into additional offerings I can see that being beneficial especially for our small team being able to you know or orchestrate and automate kind of daily tasks that we do now may benefit our team in the future and then the analytics piece as we continue to be a almost a service provider for our business partners having that analytic information available to us could be very beneficial from a from a cost revenue standpoint for us to show kind of the return on investment for our company one of the things that we kind of look forward to that the opportunities of VX block is going to give us given the feature set that's coming out is the ability to use automation for some of our daily business tasks that maybe is something as simple as moving a virtual machine from one host to another that seems pretty mundane at this point but as our company grows and workloads get more complex having the automation availability to be able to do that and have VMware do that on its own it's going to benefit our team always love hearing from customers I'm Peter Burris here in our Palo Alto studios let's also hear from a very important partner in this overall announcement that's VMware we've got OJ Singh who's a senior vice president and general manager the cloud management business unit at VMware with us AJ welcome to the cube thank you Peter of that to be here so Archie we've been hearing a lot of great new technology about you know converged infrastructure and how you do better automation and how you do better you know discovery and whatnot associated with it but these technologies been for around for a while and VMware has been a crucial partner of this journey for quite some time give us a little bit about the history absolutely you know this is a as you rightly pointed a long history with a VMware and Dell EMC goes back over a decade ago I started with Vblock in those days and we literally defined the converged infrastructure market at that point and and this partnership has continued to evolve and so this announcement we are really excited to be here you know to continue to announce our joint solutions to our common customers you know in this whole VX blocks 1000 along with the vitalife suite well the VX block Hardware foundation with VMware software foundation was one of the first places where customers actually started building what we now call private clouds tell us a little bit about how that technology came together and how that vision came together and how your customers have been responding to this combinations partnership for a while absolutely if you think about it from a customer standpoint they love the fact that it is a pre engineered solution and you know they have to put less effort and doing the lifecycle management maintenance of the solution so as part of kind of making it a pre engineered solution what we've done is you know made it such that the integrations between the VX block and visualize are out of the box so we put some critical components you know are of course the vSphere and NSX in there but in addition to that for the virial I set we have vro Orchestrator already built in there we have a special management pack that gets into detail dashboards that are related to the hardware associated with the X block also pre integrated in there so that if via ops runs in there it'll automatically kind of figure that as a dashboard out and can configure them and then finally we have VRA or you know an industry-leading automation platform that allows you self-service and literally build a private cloud on top of the X block so the VX central software has been letting or is now allows a customer to make better use of VMware yes similarly some of the new advancements that you're making within VMware are going to help VX bar customers get more out of their devices as well tell us a little bit about some of the recent announcements you've made that are very complimentary absolutely you know to some extent you know the V realized journey has been a journey about at the end of the day in enabling our customers to set up a self-managed private cloud and do large extent we're heading in the direction of what we say self-driving operations using machine learning technologies and all of that so in that kind of direction in that vision if you may we've actually now released with a great integration between VRA and via ops that for the first time closes the loop between the two solutions so that you can start to do intelligent workload placement right depending upon if I'm trying to optimize for cost I'm trying to optimize for tier of service you know whether it's bronze silver gold tier service I'm trying to optimize for software license management you know Oracle license is only going on Oracle tier etcetera this closed-loop with policy ensures you do that and that's the first step in this direction of self-driving that's a very important direction because customers are gonna try to build more complex systems based on or support more complex applications without at the same time seeing that complexity show up in the administration side now that leaves the last question I have because ultimately the two of you are working to make together to make customers successful so tell us a little bit about how your track record your history and your direction of working together in support in service to customers is going and where you think it's gonna go absolutely so we continue to work very closely in partnership and as partners we are committed to support our customers through thick and thin you know to make sure that they can have these engineered pre-engineered clouds set up so they can get the benefits of these clouds lower cost to serve you know in terms of highly efficient workload the fact as much as possible in the you know let me tell about of hardware that's available and at the same time the automation and the self-service that enables the agility so the development teams can build software quickly I think provision software really fast so those are the kind of benefits lower cost agility but in partnership jointly serving our customers RJ Singh senior vice president general manager of the VMware cloud management business unit thanks again for beyond the cube thank you Peter glad to be here Stu back to you all right thanks Peter for sharing that VMware perspective to help understand a little bit more some of the customer implications we're back with Dan and Pete Pete we talked about there's new management there's a few different software packages is this exclusively for the new generation of VX block 1000 or you know who the existing customers will be able to use this sure I mean obviously advanced management features are important to all of our customers so we specifically designed the Xbox central to run both on existing VX block customers and of course in our new VX blocks that were a lot of the factor as well alright so Dan we've talked about the progress we've made the the you know great maturation in these solutions set what's next what customers expect and what should we be looking for from Dellums in the future so this the thing with us is always data center operations simplification if you think about it what we're introducing today is all about simplifying and provisioning and management of the existing system within the system we've heard also from customers what they look for us next to do is to try to improve the upgrade process simplify that as well so we've already got some development efforts working on that we'll be excited and news for later this year or early next year janna follow-up went dance that we always talked to our customers about what they're looking for in addition to more automation and we're monitoring support they want to go to consume their resources in a more agile environment cloud like a farm and even on-premises so that combined with the be realized suite of products we're going to be providing more cloud live experience to our customers for their yeah walks in the future alright Pete and Dan thank you so much for sharing this news we're gonna now turn it over to the community so you've heard about the announcement we've been talking for quite a long time at wiki bond about how automation and tools are gonna hopefully help make your job easier so want you to dig in ask the questions what do you like what do you want to see more of and so everybody let's growl chat great

Published Date : Nov 6 2018

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Dan MitaPERSON

0.99+

DanPERSON

0.99+

Peter BurrisPERSON

0.99+

VMwareORGANIZATION

0.99+

FebruaryDATE

0.99+

PetePERSON

0.99+

three piecesQUANTITY

0.99+

Pete PetePERSON

0.99+

DellORGANIZATION

0.99+

Dell EMCORGANIZATION

0.99+

Dell EMCORGANIZATION

0.99+

twoQUANTITY

0.99+

third pieceQUANTITY

0.99+

two solutionsQUANTITY

0.99+

OJ SinghPERSON

0.99+

PeterPERSON

0.99+

RJ SinghPERSON

0.99+

first stepQUANTITY

0.99+

ten yearsQUANTITY

0.99+

ArchiePERSON

0.99+

jannaPERSON

0.99+

TuesdayDATE

0.99+

three thingsQUANTITY

0.99+

first timeQUANTITY

0.99+

next yearDATE

0.98+

ESXiTITLE

0.98+

todayDATE

0.98+

BX block 1000COMMERCIAL_ITEM

0.98+

VX blocksCOMMERCIAL_ITEM

0.98+

X blockORGANIZATION

0.98+

second thingQUANTITY

0.98+

five foundational pillarsQUANTITY

0.97+

VX blocks system 1000COMMERCIAL_ITEM

0.97+

HCITITLE

0.97+

one stepQUANTITY

0.97+

this weekDATE

0.96+

VblockORGANIZATION

0.96+

OracleORGANIZATION

0.96+

a minute agoDATE

0.96+

X blockTITLE

0.96+

oneQUANTITY

0.96+

ten years agoDATE

0.96+

bothQUANTITY

0.95+

single toolQUANTITY

0.95+

later this yearDATE

0.95+

V blocksCOMMERCIAL_ITEM

0.95+

one purchaseQUANTITY

0.95+

this weekDATE

0.95+

about 25 minutesQUANTITY

0.95+

single blockQUANTITY

0.94+

VXTITLE

0.94+

Pete mankaPERSON

0.94+

firstQUANTITY

0.93+

more than a decadeQUANTITY

0.92+

vmworldORGANIZATION

0.91+

HCIORGANIZATION

0.9+

singleQUANTITY

0.9+

one layerQUANTITY

0.89+

three siloedQUANTITY

0.89+

V ROPSTITLE

0.89+

VX blockTITLE

0.88+

VX blocks 1000COMMERCIAL_ITEM

0.88+

eachQUANTITY

0.88+

Infrastructure For Big Data Workloads


 

>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi, everybody, welcome to this special CUBE Conversation. You know, big data workloads have evolved, and the infrastructure that runs big data workloads is also evolving. Big data, AI, other emerging workloads need infrastructure that can keep up. Welcome to this special CUBE Conversation with Patrick Osborne, who's the vice president and GM of big data and secondary storage at Hewlett Packard Enterprise, @patrick_osborne. Great to see you again, thanks for coming on. >> Great, love to be back here. >> As I said up front, big data's changing. It's evolving, and the infrastructure has to also evolve. What are you seeing, Patrick, and what's HPE seeing in terms of the market forces right now driving big data and analytics? >> Well, some of the things that we see in the data center, there is a continuous move to move from bare metal to virtualized. Everyone's on that train. To containerization of existing apps, your apps of record, business, mission-critical apps. But really, what a lot of folks are doing right now is adding additional services to those applications, those data sets, so, new ways to interact, new apps. A lot of those are being developed with a lot of techniques that revolve around big data and analytics. We're definitely seeing the pressure to modernize what you have on-prem today, but you know, you can't sit there and be static. You gotta provide new services around what you're doing for your customers. A lot of those are coming in the form of this Mode 2 type of application development. >> One of the things that we're seeing, everybody talks about digital transformation. It's the hot buzzword of the day. To us, digital means data first. Presumably, you're seeing that. Are organizations organizing around their data, and what does that mean for infrastructure? >> Yeah, absolutely. We see a lot of folks employing not only technology to do that. They're doing organizational techniques, so, peak teams. You know, bringing together a lot of different functions. Also, too, organizing around the data has become very different right now, that you've got data out on the edge, right? It's coming into the core. A lot of folks are moving some of their edge to the cloud, or even their core to the cloud. You gotta make a lot of decisions and be able to organize around a pretty complex set of places, physical and virtual, where your data's gonna lie. >> There's a lot of talk, too, about the data pipeline. The data pipeline used to be, you had an enterprise data warehouse, and the pipeline was, you'd go through a few people that would build some cubes and then they'd hand off a bunch of reports. The data pipeline, it's getting much more complex. You've got the edge coming in, you've got, you know, core. You've got the cloud, which can be on-prem or public cloud. Talk about the evolution of the data pipeline and what that means for infrastructure and big data workloads. >> For a lot of our customers, and we've got a pretty interesting business here at HPE. We do a lot with the Intelligent Edge, so, our Edgeline servers in Aruba, where a a lot of the data is sitting outside of the traditional data center. Then we have what's going on in the core, which, for a lot of customers, they are moving from either traditional EDW, right, or even Hadoop 1.0 if they started that transformation five to seven years ago, to, a lot of things are happening now in real time, or a combination thereof. The data types are pretty dynamic. Some of that is always getting processed out on the edge. Results are getting sent back to the core. We're also seeing a lot of folks move to real-time data analytics, or some people call it fast data. That sits in your core data center, so utilizing things like Kafka and Spark. A lot of the techniques for persistent storage are brand new. What it boils down to is, it's an opportunity, but it's also very complex for our customers. >> What about some of the technical trends behind what's going on with big data? I mean, you've got sprawl, with both data sprawl, you've got workload sprawl. You got developers that are dealing with a lot of complex tooling. What are you guys seeing there, in terms of the big mega-trends? >> We have, as you know, HPE has quite a few customers in the mid-range in enterprise segments. We have some customers that are very tech-forward. A lot of those customers are moving from this, you know, Hadoop 1.0, Hadoop 2.0 system to a set of essentially mixed workloads that are very multi-tenant. We see customers that have, essentially, a mix of batch-oriented workloads. Now they're introducing these streaming type of workloads to folks who are bringing in things like TensorFlow and GPGPUs, and they're trying to apply some of the techniques of AI and ML into those clusters. What we're seeing right now is that that is causing a lot of complexity, not only in the way you do your apps, but the number of applications and the number of tenants who use that data. It's getting used all day long for various different, so now what we're seeing is it's grown up. It started as an opportunity, a science project, the POC. Now it's business-critical. Becoming, now, it's very mission-critical for a lot of the services that drives. >> Am I correct that those diverse workloads used to require a bespoke set of infrastructure that was very siloed? I'm inferring that technology today will allow you to bring those workloads together on a single platform. Is that correct? >> A couple of things that we offer, and we've been helping customers to get off the complexity train, but provide them flexibility and elasticity is, a lot of the workloads that we did in the past were either very vertically-focused and integrated. One app server, networking, storage, to, you know, the beginning of the analytics phase was really around symmetrical clusters and scaling them out. Now we've got a very rich and diverse set of components and infrastructure that can essentially allow a customer to make a data lake that's very scalable. Compute, storage-oriented nodes, GPU-oriented nodes, so it's very flexible and helps us, helps the customers take complexity out of their environment. >> In thinking about, when you talk to customers, what are they struggling with, specifically as it relates to infrastructure? Again, we talked about tooling. I mean, Hadoop is well-known for the complexity of the tooling. But specifically from an infrastructure standpoint, what are the big complaints that you hear? >> A couple things that we hear is that my budget's flat for the next year or couple years, right? We talked earlier in the conversation about, I have to modernize, virtualize, containerizing my existing apps, that means I have to introduce new services as well with a very different type of DevOps, you know, mode of operations. That's all with the existing staff, right? That's the number one issue that we hear from the customers. Anything that we can do to help increase the velocity of deployment through automation. We hear now, frankly, the battle is for whether I'm gonna run these type of workloads on-prem versus off-prem. We have a set of technology as well as services, enabling services with Pointnext. You remember the acquisition we made around cloud technology partners to right-place where those workloads are gonna go and become like a broker in that conversation and assist customers to make that transition and then, ultimately, give them an elastic platform that's gonna scale for the diverse set of workloads that's well-known, sized, easy to deploy. >> As you get all this data, and the data's, you know, Hadoop, it sorta blew up the data model. Said, "Okay, we'll leave the data where it is, "we'll bring the compute there." You had a lot of skunk works projects growing. What about governance, security, compliance? As you have data sprawl, how are customers handling that challenge? Is it a challenge? >> Yeah, it certainly is a challenge. I mean, we've gone through it just recently with, you know, GDPR is implemented. You gotta think about how that's gonna fit into your workflow, and certainly security. The big thing that we see, certainly, is around if the data's residing outside of your traditional data center, that's a big issue. For us, when we have Edgeline servers, certainly a lot of things are coming in over wireless, there's a big buildout in advent of 5G coming out. That certainly is an area that customers are very concerned about in terms of who has their data, who has access to it, how can you tag it, how can you make sure it's secure. That's a big part of what we're trying to provide here at HPE. >> What specifically is HPE doing to address these problems? Products, services, partnerships, maybe you could talk about that a little bit. Maybe even start with, you know, what's your philosophy on infrastructure for big data and AI workloads? >> I mean, for us, we've over the last two years have really concentrated on essentially two areas. We have the Intelligent Edge, which is, certainly, it's been enabled by fantastic growth with our Aruba products in the networks in space and our Edgeline systems, so, being able to take that type of compute and get it as far out to the edge as possible. The other piece of it is around making hybrid IT simple, right? In that area, we wanna provide a very flexible, yet easy-to-deploy set of infrastructure for big data and AI workloads. We have this concept of the Elastic Platform for Analytics. It helps customers deploy that for a whole myriad of requirements. Very compute-oriented, storage-oriented, GPUs, cold and warm data lakes, for that matter. And the third area, what we've really focused on is the ecosystem that we bring to our customers as a portfolio company is evolving rapidly. As you know, in this big data and analytics workload space, the software development portion of it is super dynamic. If we can bring a vetted, well-known ecosystem to our customers as part of a solution with advisory services, that's definitely one of the key pieces that our customers love to come to HP for. >> What about partnerships around things like containers and simplifying the developer experience? >> I mean, we've been pretty public about some of our efforts in this area around OneSphere, and some of these, the models around, certainly, advisory services in this area with some recent acquisitions. For us, it's all about automation, and then we wanna be able to provide that experience to the customers, whether they want to develop those apps and deploy on-prem. You know, we love that. I think you guys tag it as true private cloud. But we know that the reality is, most people are embracing very quickly a hybrid cloud model. Given the ability to take those apps, develop them, put them on-prem, run them off-prem is pretty key for OneSphere. >> I remember Antonio Neri, when you guys announced Apollo, and you had the astronaut there. Antonio was just a lowly GM and VP at the time, and now he's, of course, CEO. Who knows what's in the future? But Apollo, generally at the time, it was like, okay, this is a high-performance computing system. We've talked about those worlds, HPC and big data coming together. Where does a system like Apollo fit in this world of big data workloads? >> Yeah, so we have a very wide product line for Apollo that helps, you know, some of them are very tailored to specific workloads. If you take a look at the way that people are deploying these infrastructures now, multi-tenant with many different workloads. We allow for some compute-focused systems, like the Apollo 2000. We have very balanced systems, the Apollo 4200, that allow a very good mix of CPU, memory, and now customers are certainly moving to flash and storage-class memory for these type of workloads. And then, Apollo 6500 were some of the newer systems that we have. Big memory footprint, NVIDIA GPUs allowing you to do very high calculations rates for AI and ML workloads. We take that and we aggregate that together. We've made some recent acquisitions, like Plexxi, for example. A big part of this is around simplification of the networking experience. You can probably see into the future of automation of the networking level, automation of the compute and storage level, and then having a very large and scalable data lake for customers' data repositories. Object, file, HTFS, some pretty interesting trends in that space. >> Yeah, I'm actually really super excited about the Plexxi acquisition. I think it's because flash, it used to be the bottleneck was the spinning disk, flash pushes the bottleneck largely to the network. Plexxi gonna allow you guys to scale, and I think actually leapfrog some of the other hyperconverged players that are out there. So, super excited to see what you guys do with that acquisition. It sounds like your focus is on optimizing the design for I/O. I'm sure flash fits in there as well. >> And that's a huge accelerator for, even when you take a look at our storage business, right? So, 3PAR, Nimble, All-Flash, certainly moving to NVMe and storage-class memory for acceleration of other types of big data databases. Even though we're talking about Hadoop today, right now, certainly SAP HANA, scale-out databases, Oracle, SQL, all these things play a part in the customer's infrastructure. >> Okay, so you were talking before about, a little bit about GPUs. What is this HPE Elastic Platform for big data analytics? What's that all about? >> I mean, we have a lot of the sizing and scalability falls on the shoulders of our customers in this space, especially in some of these new areas. What we've done is, we have, it's a product/a concept, and what we do is we have this, it's called the Elastic Platform for Analytics. It allows, with all those different components that I rattled off, all great systems in of their own, but when it comes to very complex multi-tenant workloads, what we do is try to take the mystery out of that for our customers, to be able to deploy that cookie-cutter module. We're even gonna get to a place pretty soon where we're able to offer that as a consumption-based service so you don't have to choose for an elastic type of acquisition experience between on-prem and off-prem. We're gonna provide that as well. It's not only a set of products. It's reference architectures. We do a lot of sizing with our partners. The Hortonworks, CloudEra's, MapR's, and a lot of the things that are out in the open source world. It's pretty good. >> We've been covering big data, as you know, for a long, long time. The early days of big data was like, "Oh, this is great, "we're just gonna put white boxes out there "and off the shelf storage!" Well, that changed as big data got, workloads became more enterprise, mainstream, they needed to be enterprise-ready. But my question to you is, okay, I hear you. You got products, you got services, you got perspectives, a philosophy. Obviously, you wanna sell some stuff. What has HPE done internally with regard to big data? How have you transformed your own business? >> For us, we wanna provide a really rich experience, not just products. To do that, you need to provide a set of services and automation, and what we've done is, with products and solutions like InfoSight, we've been able to, we call it AI for the Data Center, or certainly, the tagline of predictive analytics is something that Nimble's brought to the table for a long time. To provide that level of services, InfoSight, predictive analytics, AI for the Data Center, we're running our own big data infrastructure. It started a number of years ago even on our 3PAR platforms and other products, where we had scale-up databases. We moved and transitioned to batch-oriented Hadoop. Now we're fully embedded with real-time streaming analytics that come in every day, all day long, from our customers and telemetry. We're using AI and ML techniques to not only improve on what we've done that's certainly automating for the support experience, and making it easy to manage the platforms, but now introducing things like learning, automation engines, the recommendation engines for various things for our customers to take, essentially, the hands-on approach of managing the products and automate it and put into the products. So, for us, we've gone through a multi-phase, multi-year transition that's brought in things like Kafka and Spark and Elasticsearch. We're using all these techniques in our system to provide new services for our customers as well. >> Okay, great. You're practitioners, you got some street cred. >> Absolutely. >> Can I come back on InfoSight for a minute? It came through an acquisition of Nimble. It seems to us that you're a little bit ahead, and maybe you say a lot a bit ahead of the competition with regard to that capability. How do you see it? Where do you see InfoSight being applied across the portfolio, and how much of a lead do you think you have on competitors? >> I'm paranoid, so I don't think we ever have a good enough lead, right? You always gotta stay grinding on that front. But we think we have a really good product. You know, it speaks for itself. A lot of the customers love it. We've applied it to 3PAR, for example, so we came out with some, we have VMVision for a 3PAR that's based on InfoSight. We've got some things in the works for other product lines that are imminent pretty soon. You can think about what we've done for Nimble and 3PAR, we can apply similar type of logic to Elastic Platform for Analytics, like running at that type of cluster scale to automate a number of items that are pretty pedantic for the customers to manage. There's a lot of work going on within HPE to scale that as a service that we provide with most of our products. >> Okay, so where can I get more information on your big data offerings and what you guys are doing in that space? >> Yeah, so, we have, you can always go to hp.com/bigdata. We've got some really great information out there. We're in our run-up to our big end user event that we do every June in Las Vegas. It's HPE Discover. We have about 15,000 of our customers and trusted partners there, and we'll be doing a number of talks. I'm doing some work there with a British telecom. We'll give some great talks. Those'll be available online virtually, so you'll hear about not only what we're doing with our own InfoSight and big data services, but how other customers like BTE and 21st Century Fox and other folks are applying some of these techniques and making a big difference for their business as well. >> That's June 19th to the 21st. It's at the Sands Convention Center in between the Palazzo and the Venetian, so it's a good conference. Definitely check that out live if you can, or if not, you can all watch online. Excellent, Patrick, thanks so much for coming on and sharing with us this big data evolution. We'll be watching. >> Yeah, absolutely. >> And thank you for watcihing, everybody. We'll see you next time. This is Dave Vellante for theCUBE. (fast techno music)

Published Date : Jun 12 2018

SUMMARY :

From the SiliconANGLE media office and the infrastructure that in terms of the market forces right now to modernize what you have on-prem today, One of the things that we're seeing, of their edge to the cloud, of the data pipeline A lot of the techniques What about some of the technical trends for a lot of the services that drives. Am I correct that a lot of the workloads for the complexity of the tooling. You remember the acquisition we made the data where it is, is around if the data's residing outside Maybe even start with, you know, of the Elastic Platform for Analytics. Given the ability to take those apps, GM and VP at the time, automation of the compute So, super excited to see what you guys do in the customer's infrastructure. Okay, so you were talking before about, and a lot of the things But my question to you and automate it and put into the products. you got some street cred. bit ahead of the competition for the customers to manage. that we do every June in Las Vegas. Definitely check that out live if you can, We'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PatrickPERSON

0.99+

Dave VellantePERSON

0.99+

ArubaLOCATION

0.99+

AntonioPERSON

0.99+

BTEORGANIZATION

0.99+

Patrick OsbornePERSON

0.99+

HPEORGANIZATION

0.99+

June 19thDATE

0.99+

Antonio NeriPERSON

0.99+

Las VegasLOCATION

0.99+

PointnextORGANIZATION

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

NVIDIAORGANIZATION

0.99+

third areaQUANTITY

0.99+

21st Century FoxORGANIZATION

0.99+

Apollo 4200COMMERCIAL_ITEM

0.99+

@patrick_osbornePERSON

0.99+

Apollo 6500COMMERCIAL_ITEM

0.99+

InfoSightORGANIZATION

0.99+

MapRORGANIZATION

0.99+

Sands Convention CenterLOCATION

0.99+

Boston, MassachusettsLOCATION

0.98+

Apollo 2000COMMERCIAL_ITEM

0.98+

CloudEraORGANIZATION

0.98+

HPORGANIZATION

0.98+

NimbleORGANIZATION

0.98+

SparkTITLE

0.98+

SAP HANATITLE

0.98+

next yearDATE

0.98+

GDPRTITLE

0.98+

One appQUANTITY

0.98+

VenetianLOCATION

0.98+

two areasQUANTITY

0.98+

todayDATE

0.98+

hp.com/bigdataOTHER

0.97+

oneQUANTITY

0.97+

HortonworksORGANIZATION

0.97+

Mode 2OTHER

0.96+

single platformQUANTITY

0.96+

SQLTITLE

0.96+

OneQUANTITY

0.96+

21stDATE

0.96+

Elastic PlatformTITLE

0.95+

3PARTITLE

0.95+

Hadoop 1.0TITLE

0.94+

seven years agoDATE

0.93+

CUBE ConversationEVENT

0.93+

PalazzoLOCATION

0.93+

HadoopTITLE

0.92+

KafkaTITLE

0.92+

Hadoop 2.0TITLE

0.91+

ElasticsearchTITLE

0.9+

PlexxiORGANIZATION

0.87+

ApolloORGANIZATION

0.87+

of years agoDATE

0.86+

Elastic Platform for AnalyticsTITLE

0.85+

OracleORGANIZATION

0.83+

TensorFlowTITLE

0.82+

EdgelineORGANIZATION

0.82+

Intelligent EdgeORGANIZATION

0.81+

about 15,000 ofQUANTITY

0.78+

one issueQUANTITY

0.77+

fiveDATE

0.74+

HPE DiscoverORGANIZATION

0.74+

both dataQUANTITY

0.73+

dataORGANIZATION

0.73+

yearsDATE

0.72+

SiliconANGLELOCATION

0.71+

EDWTITLE

0.71+

EdgelineCOMMERCIAL_ITEM

0.71+

HPETITLE

0.7+

OneSphereORGANIZATION

0.68+

coupleQUANTITY

0.64+

3PARORGANIZATION

0.63+

Trey Layton | The Future of Converged Infrastructure


 

>> We're back with Trey Layton, who's the senior vice president and CTO of Converged at Dell EMC. Trey, it's always a pleasure, good to see you. >> Dave, good to see you as well. >> We're at eight years into Vblock. Take us back to the converged infrastructure early days. What problems were you trying to solve with CI? >> Well, one of the problems with IT in general is it's been hard, and one of the reasons why it's been hard is all the variability that customers consume, and how do you integrate all that variability in a sustaining manner to maintain the assets so it can support the business? The thing that we've learned is, the original recipe that we had for Vblock was to go at and solve that very problem. We have referred to that as lifecycle. Manage the lifecycle services of the data center assets that you're deploying. We have created some great intellectual property, some great innovation around helping minimize the complexity associated with managing the lifecycle of a very complex integration by way of one of the largest data center assets that people operate in their environments. >> So, yeah, thousands and thousands of customers. They're telling you lifecycle management is critical, but what are they doing? They're shifting their labor resource to more strategic activities? Is that what's going on? >> Well, there's so much variation and complexity in just maintaining the different integration points that they're spending an inordinate amount of their time, a lot of nights and weekends, on understanding and figuring out which software combinations, which configuration combinations that need to operate. What we do as an organization and have done since inception is, we manage that complexity for them. We deliver them an outcome-based architecture that is pre-integrated, and we sustain that integration over its life, so they spend less time doing that and letting the experts who actually build the components focus on maintaining those integrations. >> As an analyst, I always looked at converged infrastructure as an evolutionary trend, bringing together storage servers, networking, bespoke components. My question is, where's the innovation underneath converged infrastructure? >> I would say innovation is in two areas. We're blessed with a lot of technology innovations that come from our partner and our own companies, Dell EMC and Cisco. Cisco produces wonderful innovations in the space of networking compute in the context of Vblock. Dell EMC, storage innovations, data protection, et cetera. We harmonize all of these very complex integration in a manner where an organization can put those advanced integrations into solving business problems immediately. There's two vectors of innovation. There are the technology components that we're acquiring to solve business problems, and there's the method in which we integrate them to get to the business of solving problems. >> Okay, let's get into the announcement. What are you announcing, what's new, why should we care? >> The announcement is, we are announcing the VxBlock 1000. The interesting thing about Vblocks over the years is they have been individual systems architectures. A compute technology integrated with a particular storage architecture would produce a model of Vblock. With VxBlock 1000, we're actually introducing an architecture that provides a full gamut of array optionality for customers. Both blade and rack server options for customers on the UCS compute side, and before, we would integrate data protection technologies as an extension or an add-on into the architecture. Data protection is native to the offer. In addition to that, unstructured data storage. So, being able to include unstructured data into the architecture as one singular architecture, as opposed to buying individualized systems. >> Okay, so you're just further simplifying the underlying infrastructure, which is going to save me even more time, is that right? >> Producing a standard which can adapt to virtually any use case that a customer has in a data center environment, giving them the ability to expand and grow that architecture as their workload dictates in their environment, as opposed to buying a system to accommodate one workload, buying another system to accommodate another workload. This is breaking the barriers of traditional CI and moving it forward so that we can create an adaptive architecture that can accommodate not only the technologies available today, but the technologies on the horizon tomorrow. >> Okay, so it's workload diversity, which means greater asset leverage from that underlying infrastructure. >> Trey: Absolutely. >> Can you give us some examples? How do you envision customers using this? >> I would talk specifically about customers that we have today, and when they deploy, have deployed Vblocks in the past. We've done wonderful by building architectures that accommodate, or they're tailor-made for certain types of workloads. A customer environment would end up acquiring a Vblock model 700 to accommodate an SAP workload, for example. They would acquire a Vblock 300 or 500 to accommodate a VI workload. And then, as those workloads would grow, they would grow those individualized systems. What it did was, it created islands of stranded resource and capacity. Vblock 1000 is about bringing all those capabilities into a singular architecture where you can grow the resources based on pools. As your workload shifts in your environment, you can reallocate resources to accommodate the needs of that workload, as opposed to worrying about stranded capacity in the architecture. >> Where do you go from here with the architecture? Can you share with us, to the extent that you can, a little roadmap? Give us a vision as to how you see this playing out over the next several years. >> Well, one of the reasons why we did this was to simplify and make it easier to operate these very complex architectures that everyone's consuming around the world. Vblock has always been about simplifying complex technologies in the data center. There are a lot of innovations on the horizon. NVMe, for example. Next-generation compute platforms. There are new-generation fabric services that are merging. VxBlock 1000 is the place at which you will see all of these technologies introduce, and our customers won't have to wait on new models of Vblock to consume those technologies. They will be resident in them upon their availability to the market. >> The buzzword from the vendor community is "Futureproof," but you're saying you'll be able to, if you buy today, you'll be able to bring in things like NVMe and these new technologies down the road? >> The architecture inherently supports the idea of adapting to new technologies as they emerge, and will consume those integrations as a part of the architectural standard footprint for the life of the architecture. >> All right, excellent. Trey, thanks very much for that overview. Cisco, obviously, a huge partner of yours, with this whole initiative, many, many years. A lot of people have questioned where that goes, so we have a segment from Cisco Live. Stu Miniman's out there. Let's break to Stu, and then we'll come back and pick it up from there. Thanks for watching.

Published Date : Feb 18 2018

SUMMARY :

Trey, it's always a pleasure, good to see you. What problems were you trying to solve with CI? and one of the reasons why it's been hard to more strategic activities? and letting the experts who actually build the components as an evolutionary trend, in the space of networking compute in the context of Vblock. Okay, let's get into the announcement. as an extension or an add-on into the architecture. and moving it forward so that we can create from that underlying infrastructure. in the architecture. over the next several years. There are a lot of innovations on the horizon. for the life of the architecture. Let's break to Stu,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CiscoORGANIZATION

0.99+

Trey LaytonPERSON

0.99+

thousandsQUANTITY

0.99+

DavePERSON

0.99+

StuPERSON

0.99+

TreyPERSON

0.99+

Vblock 300COMMERCIAL_ITEM

0.99+

Vblock 1000COMMERCIAL_ITEM

0.99+

eight yearsQUANTITY

0.99+

tomorrowDATE

0.99+

Stu MinimanPERSON

0.99+

two areasQUANTITY

0.99+

oneQUANTITY

0.99+

500COMMERCIAL_ITEM

0.98+

BothQUANTITY

0.98+

Dell EMCORGANIZATION

0.98+

two vectorsQUANTITY

0.98+

Vblock model 700COMMERCIAL_ITEM

0.96+

todayDATE

0.96+

VblockORGANIZATION

0.85+

VblockCOMMERCIAL_ITEM

0.85+

InfrastructureTITLE

0.78+

Cisco LiveORGANIZATION

0.77+

VblocksCOMMERCIAL_ITEM

0.72+

one workloadQUANTITY

0.68+

one singular architectureQUANTITY

0.65+

VxBlockCOMMERCIAL_ITEM

0.64+

VxBlock 1000COMMERCIAL_ITEM

0.61+

1000TITLE

0.56+

ConvergedORGANIZATION

0.52+

UCSORGANIZATION

0.49+

ofTITLE

0.49+

VblockTITLE

0.46+

several yearsDATE

0.46+

SAPORGANIZATION

0.45+

Future of Converged infrastructure


 

>> Announcer: From the SiliconANGLE Media Office, in Boston, Massachusetts, it's The Cube. Now, here's your host, Dave Vellante. >> Hello everyone welcome to this special presentation, The Future of Converged Infrastructure, my name is David Vellante, and I'll be your host, for this event where the focus is on Dell EMC's converged infrastructure announcement. Nearly a decade ago, modern converged infrastructure really came to the floor in the marketplace, and what you had is compute, storage, and network brought together in a single managed entity. And when you talk to IT people, the impact was roughly a 30 to 50% total cost of ownership reduction, really depending on a number of factors. How much virtualization they had achieved, how complex their existing processees were, how much they could save on database and other software licenses and maintenance, but roughly that 30 to 50% range. Fast forward to 2018 and you're looking at a multibillion dollar market for converged infrastructure. Jeff Boudreau is here, he's the President of the Dell EMC Storage Division, Jeff thanks for coming on. >> Thank you for having me. >> You're welcome. So we're going to set up this announcement let me go through the agenda. Jeff and I are going to give an overview of the announcement and then we're going to go to Trey Layton, who's the Chief Technology Officer of the converged infrastructure group at Dell EMC. He's going to focus on the architecture, and some of the announcement details. And then, we're going to go to Cisco Live to a pre-recorded session that we did in Barcelona, and get the Cisco perspective, and then Jeff and I will come back to wrap it up. We also, you might notice we have a crowd chat going on, so underneath this video stream you can ask questions, you got to log in with LinkedIn, Twitter, or Facebook, I prefer Twitter, kind of an ask me anything crowd chat. We have analysts on, Stu Miniman is hosting that call. We're going to talk about what this announcement is all about, what the customer issues are that are being addressed by this announcement. So Jeff, let's get into it. From your perspective, what's the state of converged infrastructure today? >> Great question. I'm really bullish on CI, in regards to what converged infrastructure and kind of the way the market's going. We see continued interest in the growth of the market of our customers. Driven by the need for simplicity, agility, elasticity of those on-prem resources. Dell EMC pioneered the CI market several years ago, with the simple premise of simplify IT, and our focus and commitment to our customers has not changed of simplifying IT. As our customers continue to seek for new ways to simplify and consolidate infrastructure, we expect more and more of our customers to embrace CI, as a fast and easy way to modernize their infrastructure, and transform IT. >> You talk about transformation, we do a lot of events, and everybody's talking about digital transformation, and IT transformation, what role does converged infrastructure play in those types of transformations, maybe you could give us an example? >> Sure, so first I'd say our results speak for themselves. As I said we pioneered the CI industry, as the market leader, we enabled thousands of customers worldwide to drive business transformation and digital transformation. And when I speak to customers specifically, converged infrastructure is not just about the infrastructure, it's about the operating model, and how they simplify IT. I'd say two of the biggest areas of impact that customers highlight to me, are really about the acceleration of application delivery, and then the other big one is around the increase in operational efficiencies allowing customers to free up resources, to reinvest however they see fit. >> Now since the early days of converged infrastructure Cisco has been a big partner of yours, you guys were kind of quasi-exclusive for awhile, they went out and sought other partners, you went out and sought other partners, a lot of people have questions about that relationship, what's your perspective on that relationship. >> So our partnership with Cisco is strong as ever. We're proud of this category we've created together. We've been on this journey for a long time we've been working together, and that partnership will continue as we go forward. In full transparency there are of course some topics where we disagree, just like any normal relationship we have disagreements, an example of that would be HCI, but in the CI space our partnership is as strong as ever. We'll have thousands of customers between the two of us, that we will continue to invest and innovate together on. And I think later in this broadcast you're going to hear directly from Cisco on that, so we're both doubling down on the partnership, and we're both committed to CI. >> I want to ask you about leadership generally, and then specifically as it relates to converged infrastructure and hyper converged. My question is this, hyper converged is booming, it's a high growth market. I sometimes joke that Dell EMC is now your leader in the Gartner Magic Quadrants, 101 Gartner Magic Quadrants out of the 99. They're just leading everything and I think both the CI and the HCI categories, what's your take, is CI still relevant? >> First I'd say it's great to come from a leadership position so I thank you for bringing that up, I think it's really important. As Micheal talks about being the essential infrastructure company, that's huge for us as Dell Technologies, so we're really proud of that and we want to lean into that strength. Now on HCI vs CI, to me it's an AND world. Everybody wants to get stock that's in either or, to me it's about the AND story. All our customers are going on a journey, in regards to how they transform their businesses. But at the end of the day, if I took my macro view, and took a step back, it's about the data. The data's the critical asset. The good news for me and for our team is data always continues to grow, and is growing at an amazing rate. And as that critical asset, customers are really kind of thinking about a modern data strategy as they drive foreword. And as part of that, they're looking at how to store, protect, secure, analyze, move that data, really unleashing that data to provide value back to their businesses. So with all of that, not all data is going to be created equal, as part of that, as they build out those strategies, it's going to be a journey, in regards to how they do it. And if that's software defined, vs purpose built arrays, vs converged, or hyper converged, or even cloud, those deployment models, we, Dell EMC, and Dell Technologies want to be that strategic partner, that trusted advisor to help them on that journey. >> Alright Jeff, thanks for helping me with the setup. I want to ask you to hang around a little bit. >> Jeff: Sure. >> We're going to go to a video, and then we're going to bring back Trey Layton, talk about the architecture so keep it right there, we'll be right back. >> Announcer: Dell EMC has long been number one in converged infrastructure, providing technology that simplifies all aspects of IT, and enables you to achieve better business outcomes, faster, and we continue to lead through constant innovation. Introducing, the VxBlock System 1000, the next generation of converged infrastructure from Dell EMC. Featuring enhanced life cycle management, and a broad choice of technologies, to support a vast array of applications and resources. From general purpose to mission critical, big data to specialized workloads, VxBlock 1000 is the industry's first converged infrastructure system, with the flexible data services, power, and capacity to handle all data center workloads, giving you the ultimate in business agility, data center efficiency, and operational simplicity. Including best-of-breed storage and data protection from Dell EMC, and computer networking from Cisco. (orchestral music) Converged in one system, these technologies enable you to flexibly adapt resources to your evolving application's needs, pool resources to maximize utilization and increase ROI, deliver a turnkey system in lifecycle assurance experience, that frees you to focus on innovation. Four times storage types, two times compute types, and six times faster updates, and VME ready, and future proof for extreme performance. VxBlock 1000, the number one in converged now all-in-one system. Learn more about Dell EMC VxBlock 1000, at DellEMC.com/VxBlock. >> We're back with Trey Layton who's the Senior Vice President and CTO of converged at Dell EMC. Trey it's always a pleasure, good to see you. >> Dave, good to see you as well. >> So we're eight years into Vblock, take us back to the converged infrastructure early days, what problems were you trying to solve with CI. >> Well one of the problems with IT in general is it's been hard, and one of the reasons why it's been hard is all the variability that customers consume. And how do you integrate all that variability in a sustaining manner, to maintain the assets so it can support the business. And, the thing that we've learned is, the original recipe that we had for Vblock, was to go at and solve that very problem. We have referred to that as life cycle. Manage the life cycle services of the biggest inner assets that you're deploying. And we have created some great intellectual property, some great innovation around helping minimize the complexity associated with managing the life cycle of a very complex integration, by way of, one of the largest data center assets that people operate in their environment. >> So you got thousands and thousands of customers telling you life cycle management is critical. They're shifting their labor resource to more strategic activities, is that what's going on? Well there's so much variation and complexity in just maintaining the different integration points, that they're spending an inordinate amount of their time, a lot of nights and weekends, on understanding and figuring out which software combinations, which configuration combinations you need to operate. What we do as an organization, and have done since inception is, we manage that complexity for them. We delivery them an outcome based architecture that is pre-integrated, and we sustain that integration over it's life, so they spend less time doing that, and letting the experts who actually build the components focus on maintaining those integrations. >> So as an analyst I always looked at converged infrastructure as an evolutionary trend, bringing together storage, servers, networking, bespoke components. So my question is, where's the innovation underneath converged infrastructure. >> So I would say the innovation is in two areas. We're blessed with a lot of technology innovations that come from our partner, and our own companies, Dell EMC and Cisco. Cisco produces wonderful innovations in the space of networking compute, in the context of Vblock. Dell EMC, storage innovations, data protection, et cetera. We harmonize all of these very complex integrations in a manner where an organization can put those advanced integrations into solving business problems immediately. So there's two vectors of innovation. There are the technology components that we are acquiring, to solve business problems, and there's the method at which we integrate them, to get to the business of solving problems. >> Okay, let's get into the announcement. What are you announcing, what's new, why should we care. >> We are announcing the VxBlock 1000, and the interesting thing about Vblocks over the years, is they have been individual system architectures. So a compute technology, integrated with a particular storage architecture, would produce a model of Vblock. With VxBlock 1000, we're actually introducing an architecture that provides a full gamut of array optionality for customers. Both blade and rack server options, for customers on the UCS compute side, and before we would integrate data protection technologies as an extension or an add-on into the architecture, data protection is native to the offer. In addition to that, unstructured data storage. So being able to include unstructured data into the architecture as one singular architecture, as opposed to buying individualized systems. >> Okay, so you're just further simplifying the underlying infrastructure which is going to save me even more time? >> Producing a standard which can adapt to virtually any use case that a customer has in a data center environment. Giving them the ability to expand and grow that architecture, as their workload dictates, in their environment, as opposed to buying a system to accommodate one workload, buying another system to accommodate another workload, this is kind of breaking the barriers of traditional CI, and moving it foreword so that we can create an adaptive architecture, that can accommodate not only the technologies available today, but the technologies on the horizon tomorrow. >> Okay so it's workload diversity, which means greater asset leverage from that underlying infrastructure. >> Trey: Absolutely. >> Can you give us some examples, how do you envision customers using this? >> So I would talk specifically about customers that we have today. And when they deploy, or have deployed Vblocks in the past. We've done wonderful by building architectures that accommodate, or they're tailor made for certain types of workloads. And so a customer environment would end up acquiring a Vblock model 700, to accommodate an SAP workload for example. They would acquire a Vblock 300, or 500 to accommodate a VDI workload. And then as those workloads would grow, they would grow those individualized systems. What it did was, it created islands of stranded resource capacities. Vblock 1000 is about bringing all those capabilities into a singular architecture, where you can grow the resources based on pools. And so as your work load shifts in your environment, you can reallocate resources to accommodate the needs of that workload, as opposed to worrying about stranded capacity in the architecture. >> Okay where do you go from here with the architecture, can you share with us, to the extent that you can, a little roadmap, give us a vision as to how you see this playing out over the next several years. >> Well, one of the reasons why we did this was to simplify, and make it easier to operate, these very complex architectures that everyone's consuming around the world. Vblock has always been about simplifying complex technologies in the data center. There are a lot of innovations on the horizon in VME, for example, next generation compute platforms. There are new generation fabric services, that are emerging. VxBlock 1000 is the place at which you will see all of these technologies introduced, and our customers won't have to wait on new models of Vblock to consume those technologies, they will be resident in them upon their availability to the market. >> The buzz word from the vendor community is future proof, but your saying, you'll be able to, if you buy today, you'll be able to bring in things like NVME and these new technologies down the road. >> The architecture inherently supports the idea of adapting to new technologies as they emerge, and will consume those integrations, as a part of the architectural standard footprint, for the life of the architecture. >> Alright excellent Trey, thanks very much for that overview. Cisco obviously a huge partner of yours, with this whole initiative, many many years. A lot of people have questioned where that goes, so we have a segment from Cisco Live, Stu Miniman is out there, let's break to Stu, then we'll come back and pick it up from there. Thanks for watching. >> Thanks Dave, I'm Stu Miniman, and we're here at Cisco Live 2018 in Barcelona, Spain. Happy to be joined on the program by Nigel Moulton the EMEA CTO of Dell EMC, and Siva Sivakumar, who's the Senior Director of Data Center Solutions at Cisco, gentlemen, thanks so much for joining me. >> Thanks Stu. >> Looking at the long partnership of Dell and Cisco, Siva, talk about the partnership first. >> Absolutely. If you look back in time, when we launched UCS, the very first major partnership we brought, and the converged infrastructure we brought to the market was Vblock, it really set the trend for how customers should consume compute, network, and storage together. And we continue to deliver world class technologies on both sides and the partnership continues to thrive as we see tremendous adoption from our customers. So we are here, several years down, still a very vibrant partnership in trying to get the best product for the customers. >> Nigel would love to get your perspective. >> Siva's right I think I'd add, it defined a market, if you think what true conversion infrastructure is, it's different, and we're going to discuss some more about that as we go through. The UCS fabric is unique, in the way that it ties a network fabric to a compute fabric, and when you bring those technologies together, and converge them, and you have a partnership like Cisco, you have a partnership with us, yeah it's going to be a fantastic result for the market because the market moves on, and I think, VxBlock actually helped us achieve that. >> Alright so Siva we understand there's billions of reasons why Cisco and Dell would want to keep this partnership going, but talk about from an innovation standpoint, there's the new VxBlock 1000, what's new, talk about what's the innovation here. >> Absolutely. If you look at the VxBlock perspective, the 1000 perspective, first of all it simplifies an extremely fast successful product to the next level. It simplifies the storage options, and it provides a seamless way to consume those technologies. From a Cisco perspective, as you know we are in our fifth generation of UCS platform, continues to be a world class platform, leading blade service in the industry. But we also bring the innovation of rack mount servers, as well as 40 gig fabric, larger scale, fiber channel technology as well. As we bring our compute, network, as well as a sound fabric technology together, with world class storage portfolio, and then simplify that for a single pane of glass consumption model. That's absolutely the highest level of innovation you're going to find. >> Nigel, I think back in the early days the joke was you could have a Vblock anyway you want, as long as it's black. Obviously a lot of diversity in product line, but what's new and different here, how does this impact new customers and existing customers. >> I think there's a couple of things to pick up on, what Trey said, what Siva said. So the simplification piece, the way in which we do release certification matrix, the way in which you combine a single software image to manage these multiple discreet components, that is greatly simplified in VxBlock 1000. Secondly you remove a model number, because historically you're right, you bought a three series, a five series, and a seven series, and that sort of defined the architecture. This is now a system wide architecture. So those technologies that you might of thought of as being discreet before, or integrated at an RCM level that was perhaps a little complex for some people, that's now dramatically simplified. So those are two things that I think we amplify, one is the simplification and two, you're removing a model number and moving to a system wide architecture. >> Want to give you both the opportunity, gives us a little bit, what's the future when you talk about the 1000 system, future innovations, new use cases. >> Sure, I think if you look at the way enterprise are consuming, the demand for more powerful systems that'll bring together more consolidation, and also address the extensive data center migration opportunities we see, is very critical, that means the customers are really looking at whether it is a in-memory database that scales to, much larger scale than before, or large scale cluster databases, or even newer workloads for that matter, the appetite for a larger system, and the need to have it in the market, continues to grow. We see a huge install base of our customers, as well as new customers looking at options in the market, truly realize, the strength of the portfolio that each one of us brings to the table, and bringing the best-of-breed, whether it is today, or in the future from an innovation standpoint, this is absolutely the way that we are approaching building our partnership and building new solutions here. >> Nigel, when you're talking to customers out there, are they coming saying, I'm going to need this for a couple of months, I mean this is an investment they're making for a couple years, why is this a partnership built to last. >> An enterprise class customer certainly is looking for a technology that's synonymous with reliability, availability, performance. And if you look at what VxBlock has traditionally done and what the 1000 offers, you see that. But Siva's right, these application architectures are going to change. So if you can make an investment in a technology set now that keeps the premise of reliability, availability, and performance to you today, but when you look at future application architectures around high capacity memory, adjacent to a high performance CPU, you're almost in a position where you are preparing the ground for what that application architecture will need, and the investments that people make in the VxBlock system with the UCS power underneath at the compute layer, it's significant, because it lays out a very clear path to how you will integrate future application architectures with existing application architectures. >> Nigel Moulton, Siva Sivakumar, thank you so much for joining, talking about the partnership and the future. >> Siva: Thank you. >> Nigel: Pleasure. >> Sending back to Dave in the US, thanks so much for watching The Cube from Cisco Live Barcelona. >> Thank you. >> Okay thanks Stu, we're back here with Jeff Boudreau. We talked a little bit earlier about the history of conversion infrastructure, some of the impacts that we've seen in IT transformations, Trey took us through the architecture with some of the announcement details, and of course we heard from Cisco, was a lot of fun in Barcelona. Jeff bring it home, what are the take aways. >> Some of the key take aways I have is just I want to make sure everybody knows Dell EMC's continued commitment to modernizing infrastructure for conversion infrastructure. In addition to that was have a strong partnership with Cisco as you heard from me and you also heard from Cisco, that we both continue to invest and innovate in these spaces. In addition to that we're going to continue our leadership in CI, this is critical, and it's extremely important to Dell, and EMC, and Dell EMC's Cisco relationship. And then lastly, that we're going to continue to deliver on our customer promise to simplify IT. >> Okay great, thank you very much for participating here. >> I appreciate it. >> Now we're going to go into the crowd chat, again, it's an ask me anything. What make Dell EMC so special, what about security, how are the organizations affected by converged infrastructure, there's still a lot of, roll your own going on. There's a price to pay for all this integration, how is that price justified, can you offset that with TCO. So let's get into that, what are the other business impacts, go auth in with Twitter, LinkedIn, or Facebook, Twitter is my preferred. Let's get into it thanks for watching everybody, we'll see you in the crowd chat. >> I want IT to be dial tone service, where it's always available for our providers to access. To me, that is why IT exists. So our strategy at the hardware and software level is to ruthlessly standardize leverage in a converged platform technology. We want to create IT almost like a vending machine, where a user steps up to our vending machine, they select the product they want, they put in their cost center, and within seconds that product is delivered to that end user. And we really need to start running IT like a business. Currently we have a VxBlock that we will run our University of Vermont Medical Center epic install on. Having good performance while the provider is within that epic system is key to our foundation of IT. Having the ability to combine the compute, network, and storage in one aspect in one upgrade, where each component is aligned and regression tested from a Dell Technology perspective, really makes it easy as an IT individual to do an upgrade once or twice a year versus continually trying to keep each component of that infrastructure footprint upgraded and aligned. I was very impressed with the VxBlock 1000 from Dell Technologies, specifically a few aspects of it that really intrigued me. With the VxBlock 1000, we now have the ability to mix and match technologies within that frame. We love the way the RCM process works, from a converged perspective, the ability to bring the compute, the storage, and network together, and trust that Dell Technologies is going to upgrade all those components in a seamless manner, really makes it easier from an IT professional to continue to focus on what's really important to our organization, provider and patient outcomes.

Published Date : Feb 13 2018

SUMMARY :

Announcer: From the SiliconANGLE Media Office, Jeff Boudreau is here, he's the President of the Jeff and I are going to give an overview of the announcement and our focus and commitment to our customers as the market leader, we enabled Now since the early days of converged infrastructure but in the CI space our partnership is as strong as ever. both the CI and the HCI categories, But at the end of the day, if I took my macro view, I want to ask you to hang around a little bit. talk about the architecture so keep it right there, and capacity to handle all data center workloads, Trey it's always a pleasure, good to see you. what problems were you trying to solve with CI. and one of the reasons why it's been hard is all the and letting the experts who actually build the components So as an analyst I always looked at converged There are the technology components that we are acquiring, Okay, let's get into the announcement. and the interesting thing about and moving it foreword so that we can create from that underlying infrastructure. stranded capacity in the architecture. playing out over the next several years. There are a lot of innovations on the horizon in VME, and these new technologies down the road. for the life of the architecture. let's break to Stu, Nigel Moulton the EMEA CTO of Dell EMC, Siva, talk about the partnership first. and the converged infrastructure and when you bring those technologies together, Alright so Siva we understand That's absolutely the highest level of innovation you could have a Vblock anyway you want, and that sort of defined the architecture. Want to give you both the opportunity, and the need to have it in the market, continues to grow. I'm going to need this for a couple of months, and performance to you today, talking about the partnership and the future. Sending back to Dave in the US, and of course we heard from Cisco, Some of the key take aways I have is just I want to make how is that price justified, can you offset that with TCO. from a converged perspective, the ability to bring the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CiscoORGANIZATION

0.99+

David VellantePERSON

0.99+

JeffPERSON

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

Trey LaytonPERSON

0.99+

Nigel MoultonPERSON

0.99+

Jeff BoudreauPERSON

0.99+

DellORGANIZATION

0.99+

Siva SivakumarPERSON

0.99+

BarcelonaLOCATION

0.99+

thousandsQUANTITY

0.99+

NigelPERSON

0.99+

twoQUANTITY

0.99+

EMCORGANIZATION

0.99+

SivaPERSON

0.99+

TreyPERSON

0.99+

40 gigQUANTITY

0.99+

2018DATE

0.99+

MichealPERSON

0.99+

Stu MinimanPERSON

0.99+

30QUANTITY

0.99+

Dell EMCORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

StuPERSON

0.99+

two thingsQUANTITY

0.99+

LinkedInORGANIZATION

0.99+

billionsQUANTITY

0.99+

two areasQUANTITY

0.99+

The CubeTITLE

0.99+

VxBlock 1000COMMERCIAL_ITEM

0.99+

six timesQUANTITY

0.99+

University of Vermont Medical CenterORGANIZATION

0.99+

Vblock 1000COMMERCIAL_ITEM

0.99+

Barcelona, SpainLOCATION

0.99+

two timesQUANTITY

0.99+

Vblock 300COMMERCIAL_ITEM

0.99+

Boston, MassachusettsLOCATION

0.99+

two vectorsQUANTITY

0.99+

USLOCATION

0.99+

tomorrowDATE

0.99+

VxBlock System 1000COMMERCIAL_ITEM

0.99+

TwitterORGANIZATION

0.99+

bothQUANTITY

0.99+

500COMMERCIAL_ITEM

0.99+

Action Item | Converged & Hyper Converged Infrastructure


 

Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (electronic music) Every week, we bring together the Wikibon research team and we present the action items that we believe are most crucial for users to focus on against very important topics. This week, I'm joined by George Gilbert, David Floyer, here in the Cube studios in Palo Alto. And on the phone we have Ralph Phinos, Dave Vellante, and Jim Kobielus. Thank you guys, thank you team for being part of today's conversation. What were going to talk about today in Action Item is the notion of what we're calling enterprise hyperscale. Now we're going to take a route to get there that touches upon many important issues, but fundamentally the question is, at what point should enterprises choose to deploy their own hardware at scale to support applications that will have a consequential business impact on their shareholder, customer, and employee value? Now to kick us off here, 'cause this is a very complex topic, and it involves a lot of different elements, David Floyer, first question to you. What is the core challenge that enterprises face today as they think about build, buy, or rent across this increasingly mushed hardware continuum, or system continuum? >> So the biggest challenge from the traditional way that enterprises have put together systems is that the cost and the time to manage these systems is going up and up. And as we go from just systems of record, to with analytic systems being mainly in bash mode, towards systems of intelligence where the real-time analytics are going to combine in with the systems of record. So the complexity of the systems and the software layers are getting more and more complicated. And it takes more and more time and effort and elapsed time to keep things current. >> Why is it that not everybody can do this, David? Is there a fundamental economic reason to play here? >> Well, if you take systems, and build them yourself and put them together yourself, you'll always end up with the cheapest system. The issue is that the cost of maintaining those systems, and even more, the elapsed time cost of maintaining those systems, the time to value to putting in new releases, etc., has been extending. And there comes a time when that cost of delaying implementing new systems overwhelms the cost that you can save in the hardware itself. >> So there's some scale efficiencies in thinking about integration from a time standpoint. Dave Vellante, we've been looking at this for quite some time, and we think about true private could, for example. But if you would, kind of give us that core dynamic in simple terms between what is valuable to the business, what isn't valuable to the business, and the different options between renting and buying your cells, what is that kind of core dynamic at play? >> OK, so as we talked about a lot in our true private cloud research, hyper-converged systems are an attempt to substantially mimic public cloud environments on-prem. And this creates this bifurcated buying dynamic that I think is worth exploring a little bit. The big cloud players, as everybody talks about, have lots of engineers running around, they have skill, and they have time. So they'll spend time to build proprietary technologies and use their roll-your-own components to automate processes. In other words, they'll spend time to save money. And this is essentially the hyperscale as a form of their R&D, and they have an end-year lead, whatever it's five, six, four years on the enterprise. And that's not likely to change, that dynamic. The enterprise buyers, on the other hand, they don't have the resources, they're stretched thin, so they'll spend money to save time. So enterprises they want to cut labor costs, and shift useless IT labor to so-called vendor R&D. To wit, our forecasts show that about $150 billion is going to come out of low-value IT operations over the next ten years, and will shift to integrated products. >> So ultimately we end up seeing the vendors effectively capturing a lot of that spend that otherwise had been internally. Now this raises a new dynamic, when we think about this, David Floyer, in that there are still vendors that have to return something to their shareholders. There's this increased recognition that businesses or enterprises want this cloud experience, but not everybody is able to offer it, and we end up then with some really loosely-defined definitions. What's the continuum of where systems are today, from traditional all the way out to cloud, what does that look like? >> So a useful way of looking at it is to see what has happened over time and where we think it's going. We started with separate systems completely. Converged systems then came in, where the vendor put them together and reduced the time a little bit to value. But really the maintenance was still a responsibility of-- >> [Peter] But what was brought together? >> [David F] It was the traditional arrays, it was the servers-- >> Racks, power supplies-- >> All of that stuff put together, and delivered as a package. The next level up was so-called hyper-converged, where certainly some of the hyperconverged vendors went and put in software for each layer, software for the storage layer, software for the networking layer, put in more management. But a lot of vendors really took hyperconverged as being the old stuff with a few extra flavors. >> So they literally virtualized those underlying hardware sources, got some new efficiencies and economies. >> That's right, so they software virtualized each of those components. When you look at the cloud vendors, just skipping one there, they have gone hyperscale. And they have put in, as Dave spoke earlier, they have put in all of their software to make that hyperscale work. What we think in the middle of that is enterprise hyperscale, which is coming in, where you have the what we call service end. We have that storage capability, we have the networking capability, and the CPU capabilities, all separated, able to be scaled in whatever direction is required, and any processor to be able to get at any data through that network, with very, very little overhead. And it's software for the storage, it's software and firmware for the networking. The processor is relieved of all that processing. We think that architecture is going to mimic what the hyperscale have. But the vendors now have an opportunity of putting in the software to emulate that cloud experience, and take away from the people who want on-site equipment, take away all of the work that's necessary to keep that software stack up to date. The vendors are going to maintain that software stack as high as they can go. >> So David, is this theory, or are there practical examples of this happening today? >> Oh, absolutely, there are practical examples of those happening. There are practical examples at the lower levels, with people like Micron and SolidScale. That's at a technology level, when we're talking about hyperscale-- Well if you're looking at it from a practical point of view, ARCOL have put it into the marketplace. ARCOL cloud on-premises, ARCOL converged systems, where they are taking the responsibility of maintaining all of the software, all the way up to the database stack. And in the future, probably beyond that, towards the ARCOL applications as well. So they're taking that approach, putting it in, and arguing, persuasively, that the customer should focus on time to value as opposed to cost of just the hardware. >> Well we can also look at SaaS vendors right, who many of the have come off of infrastructure as a service, deployed their own enterprise hyperscale, increasingly starting to utilize some of this hyperscale componentry, as a basis for building things out. Now one of the key reasons why we want to do this, and George I'll turn it to you, is because as David mentioned earlier, the idea is we want to bring analytics and operations more closely together to improve automation, augmentation, other types of workloads. What is it about that effort that's encouraging this kind of adoption of these new approaches? >> [George] Well databases typically make great leaps forward when we have changes in the underlying trade-offs or relative price performance of compute storage and networking. What we're talking about with hyperscale, I guess either on-prem or the cloud version, is that we can build scale out that databases can support without having to be rewritten, so that they work just the way they did on tightly-coupled symmetric multiprocessors, shared memory. And so now they can go from a few nodes, or half a dozen nodes, or even say a dozen nodes, to thousands. And as David's research has pointed out, they have latency to get to memory in any node from any node in five microseconds. So building up from that, the point is we can now build databases that really do have the horsepower to handle the analytics to inform the transactions in the same database. Or, if you do separate them, because you don't want to touch a current system of record, you have a very powerful analytic system that can apply more data and do richer analytics to inform a decision in the form of a transaction, than you could with traditional architectures. >> So it's the data that's driving the need for a data-rich system that's architected in the context of data needs, that's driving a lot of this change. Now, David Floyer, we've talked about data tiering. We've talked about the notion of primary, secondary, and tertiary data. Without revisiting that entirely, what is it about this notion of enterprise hyperconverge that's going to make it easier to naturally place data where it belongs in the infrastructure? >> Well underlying this is that moving data is extremely expensive, so you want to, where possible, move the processing to the data itself. The origin of that data may be at the edge, for example, in IOT. It may be in a large central headquarters. It may be in the cloud, it may be operational data, end-user data, for people using their phones, which is available from the cloud. So there are multiple sources. So you want to place the processing as close to that data as possible so that you have the least cost of both moving it, and you have the lowest latency. And that's particularly important when you've got systems of intelligence where you want to combine the two. >> So Jim Kobielus, it seems as though there's a compelling case to be made here to focus on time, time to value, time to deploy, on the one hand, as well as another aspect of time, the time associated with latency, the time associated with reducing path length, and optimizing for path length. Which again has a scale impact. What are developers thinking? Are developers actually going to move the market to these kinds of solutions, or are they going to try to do something different? >> I think what developers will do is that they will begin to move the market towards hyperconverged systems. Much of the development that's going on now is for artificial intelligence, deep learning, and so forth, where you're building applications that have an increasing degree of autonomy, being able to make decisions based on system of record data, system of engagement data, system of insight data, in real time. What that increasingly requires, Peter, is a development platform that combines those different types of data bases, or data stores, and also combines the processing for deep learning, machine learning, and so forth. On devices that are increasingly tinier and tinier, and embedded in mobile devices and what not. So what I'm talking about here is an architecture for development where developers are going to say, I want to be able to develop it in the cloud, I'm going to need to. 'Cause we have huge teams of specialists who are building and training and deploying and iterating these in a cloud environment, a centralized modeling context, but then deploying their results of their work down to the smallest systems where these models will need to run, if not autonomously, in some loosely-coupled fashion with tier two and tier three systems, which will also be hyperconverged. And each of those systems in each of those tiers will need a self-similar data fabric, and an AI processing fabric. So what developers are saying is, I want to be able to take it and model it, and deploy it to these increasingly nano-scopic devices at the edge, and I need each of those components at every tier to have the same capabilities and hyperconverged form factors, essentially. >> For hyperscale, so here's where we are, guys. Where we are is that there are compelling economic reasons why we're going to see this notion of enterprise hyperscale emerge. It appears that the workloads are encouraging that. Developers seem to be moving towards adopting these technologies. But there's another group that we haven't talked about. Dave Vellante, the computing industry is not a simple go-to-market model. There's a lot of reasons why channels, partnerships, etc. are so complex. How are they going to weigh in on this change? >> [Dave Vellante] Well the cloud clearly is having an impact on the channel. I mean if you look at sort of the channel guys, you got the sort of box sellers, which still comprises most of the channel. You got more solution orientation, and then increasingly, you know, the developers are becoming a form of a channel. And I think the channel still has a lot of influence over how customers buy, and I think one of the reasons that people buy roll-your-own still, and it's somewhat artificial, is that the channel oftentimes prefers it that way. It's more complicated, and as their margins get squeezed, the channel players can maintain services, on top of those roll-your-own components. So I think buyers got to be careful, and they got to make sure that their service provider's motivations align with, you know, their desired outcomes, and they're not doing the roll-your-own bespoke approach for the wrong reasons. >> Yeah, and we've seen that a fair amount as we've talked to senior IT folks, that there's a clear misalignment, often, between what's being pushed from a technology standpoint and what the application actually requires, and that's one of the reasons why this question is so rich and so important. But Ralph Phinos, kind of sum up, when you think about some of these issues as they pertain to where to make investments, how to make investments. From our perspective, is there a relatively simple approach to thinking this through, and understanding how best to put your money to get the most value out of the technologies that you choose? (static hissing) Alright, I think we've lost Ralph there, so I'll try to answer the question myself. (chuckles) (David laughs) So here's how we would look at it, and David Floyer, help me out and see if you disagree with me. But at the end of the day, what we're looking for is we're suggesting to customers that have a cost orientation should worry a little bit less about risk, a little bit less about flexibility, and they can manage how that cost happens. And the goal is to try to reduce the cost as fast as possible, and not worry so much about the future options that they'll face in terms of how to reduce future types of cost out. And so that might push them more towards this public hyperscale approach. But for companies that are thinking in terms of revenue, that have to ensure that their systems are able to respond to competitive pressures, customer needs, that are increasingly worried about buying future options with today's technology choices. That there's a scale, but that's the group that's going to start looking more at the enterprise hyperscale. Clearly that's where SAS players are. Yeah. And then the question is and what requires further research is, where's that break point going to be? So if I'm looking at this from an automation, from a revenue standpoint, then I need a little bit greater visibility in where that break point's going to be between controlling my own destiny, with the technology that's crucial to my business, versus not having to deal with the near-term costs associated with doing the integration myself. But this time to value, I want to return to this time to value. >> [David] It's time to value that is the crucial thing here, isn't it? >> [Peter] Time to value now, and time to future value. >> And time to future value, yes. What is the consequence of doing everything yourself is that the time to put in new releases, the time to put in patches, the time to make your system secure, is increasingly high. And the more that you integrate systems into systems of intelligence, with the analytics and the systems of record, the more you start to integrate, the more complex the total environment, the more difficult it's going to be for people to manage that themselves. So in that environment, you would be pushing towards getting systems where the vendor is doing as much of that integration as they can-- And that's where they get the economies from. The vendors get the economies of scale because they can feed back into the system faster than anybody else. Rather than taking a snowflake approach, they're taking a volume approach, and they can feed back for example artificial intelligence in operational efficiency, in security. There's many, many opportunities for vendors to push down into the marketplace those findings. And those vendors can be cloud vendors as well. If you look at Microsoft, they can push down into their Azure Stack what they're finding in terms of artificial intelligence and in terms of capabilities. They can push those down into the enterprises themselves. So the more that they can go up the stack into the database layers, maybe even into the application layers, the higher they can go, the lower the cost, the lower the time to value will be for them to deploy applications using that. >> Alright, so we've very quickly got some great observations on this important dynamic. It's time for action items. So Jim Kobielus, let me start with you. What's the action item for this whole notion of hyperscale? Action items, Jim Kobielus. >> Yeah, the action item for hyperscale is to consider the degree of convergence you require at the lowest level of the system, the edge device. How much of that needs to be converged down to a commoditized component that can be flexible enough that you can develop a wide range of applications on top of that-- >> Excellent, hold on, OK. George Gilbert, action item. >> Really quickly you have to determine, are you going to keep your legacy system of record database, and add like an analytic database on a hyperscale infrastructure, so that you're not doing a heart and lung transplant on an existing system. If you can do that and you can manage the latency between the existing database and culling to the analytic database, that's great. Then there's little disruption. Otherwise you have to consider integrating the analytics into a hyperscale-ready legacy database. >> David Vellante, action item. >> Tasks like LUN management, and server provisioning, and just generally infrastructure management, and non-strategic. So as fast as possible, shift your "IT labor resources" up the stack toward more strategic initiatives, whether they're digital initiatives, data orientation, and other value-producing activities. >> David Floyer, action item. >> Well I was just about to say what Dave Vellante just said. So let me focus a little bit more on a step in order to get to that position. >> So Dave Floyer, action item. (David laughs) >> So the action item that I would choose would be that you have to know what your costs are, and you have to be able to, as senior management, look at those objectively and say, "What is my return on spending all of "this money and making the system operate?" The more that you can reduce the complexity, buy in, converge systems, hyperconverge systems, hyperscale systems, that are going to put that responsibility onto the vendors themselves, the better position you're going to be to really add value to the bottom line of applications that really can start to use all of this capability, advanced analytics that's coming into the marketplace. >> So I'm going to add an action item before I do a quick summary. And I'm just going to insert it. My action item, the relationship that you have with your vendors is going to change. It used to be focused on procurement and reducing the cost of acquisition. Increasingly, for those high-value, high-performing, revenue-producing, differentiating applications, it's going to be strategic vendor management. That implies a whole different range of activities. And companies that are going to build their business with technology and digital are going to have to move to a new relationship management framework. Alright, so let's summarize today's action item meeting. First of I want to thank very much George Gilbert, David Floyer, here in the studio with me. David Vellante, Ralph Phinos, Jim Kobielus on the phone. Today we talked about enterprise hyperscale. This is part of a continuum that we see happening, because the economics of technology are continuing to assert themselves in the marketplace, and it's having a significant range of impacts on all venues. When we think about scale economies, we typically think about how many chips we're going to stamp out, or how many copies of an operating system is going to be produced, and that still obtains, and it's very important. But increasingly users have to focus their attention to how we're going to generate economies out of the IT labor that's necessary to keep the digital businesses running. If we can shift some of those labor costs to other players, then we want to support those technology sets that embed those labor costs directly in the form of technology. So over the next few years, we're going to see the emergence of what we're calling enterprise hyperscale that embeds labor costs directly into hyperscale packaging, so that companies can focus more on generating revenue out of technology, and spend less time on the integration of work. The implications of that is that the traditional buying process of trying to economize on the time to purchase, the time to get access to the piece parts, is going to give way to a broader perspective on time to ultimate value of the application or of the outcome that we seek. And that's going to have a number of implications that CIOs have to worry about. From an external standpoint, it's going to mean valuing technology differently, valuing packaging differently. It means less of a focus on the underlying hardware, more of a focus on this common set of capabilities that allow us to converge applications. So whereas converge technology talked about converging hardware, enterprise hyperscale increasingly is about converging applications against common data, so that we can run more complex and interesting workloads and revenue-producing workloads, without scaling the labor and management costs of those workloads. A second key issue is, we have to step back and acknowledge that sometimes the way products go to market, and our outcomes or our desires, do not align. That there is the residual reality in the marketplace that large numbers of channel partners and vendors have an incentive to try to push more complex technologies that require more integration, because it creates a greater need for them and creates margin opportunities. So ensure that as you try to achieve this notion of converged applications and not converged infrastructure necessarily, that you are working with a partner who follows that basic program. And the last thing is I noted a second ago, that that is going to require a new approach to thinking about strategic vendor management. For the last 30 years, we've done a phenomenal job of taking cost out of technology, by focusing on procurement and trying to drive every single dime out of a purchase that we possibly could. Even if we didn't know what that was going to mean from an ongoing maintenance and integration and risk-cost standpoint, what we need to think about now is what will be the cost to the outcome. And not only this outcome, but because we're worried about digital business, future outcomes, that are predicated on today's decisions. So the whole concept here is, from a relationship management standpoint, the idea of what relationship is going to provide us the best time to value today, and streams of time to value in the future. And we have to build our relationships around that. So once again I want to thank the team. This is Peter Burris. Thanks again for participating or listening to the Action Item. From the Cube studios in Palo Alto, California, see you next week. (electronic music)

Published Date : Nov 10 2017

SUMMARY :

And on the phone we have Ralph Phinos, is that the cost and the time to The issue is that the cost of maintaining those systems, and the different options between renting and buying So they'll spend time to build proprietary What's the continuum of where systems are today, But really the maintenance was still a responsibility of-- the old stuff with a few extra flavors. So they literally virtualized those underlying putting in the software to emulate that cloud experience, and arguing, persuasively, that the customer the idea is we want to bring analytics and operations build databases that really do have the horsepower So it's the data that's driving the need for as possible so that you have the least cost the market to these kinds of solutions, in the cloud, I'm going to need to. It appears that the workloads are encouraging that. and they got to make sure that their service provider's And the goal is to try to reduce the cost is that the time to put in new releases, What's the action item for this whole notion of hyperscale? Yeah, the action item for hyperscale is to George Gilbert, action item. culling to the analytic database, that's great. So as fast as possible, shift your "IT labor resources" a step in order to get to that position. So Dave Floyer, action item. hyperscale systems, that are going to put that economize on the time to purchase,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
David FloyerPERSON

0.99+

DavidPERSON

0.99+

George GilbertPERSON

0.99+

Peter BurrisPERSON

0.99+

Jim KobielusPERSON

0.99+

Ralph PhinosPERSON

0.99+

Dave FloyerPERSON

0.99+

Dave VellantePERSON

0.99+

David VellantePERSON

0.99+

DavePERSON

0.99+

MicrosoftORGANIZATION

0.99+

GeorgePERSON

0.99+

PeterPERSON

0.99+

WikibonORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

RalphPERSON

0.99+

David FPERSON

0.99+

fiveQUANTITY

0.99+

sixQUANTITY

0.99+

thousandsQUANTITY

0.99+

eachQUANTITY

0.99+

next weekDATE

0.99+

TodayDATE

0.99+

todayDATE

0.99+

oneQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

about $150 billionQUANTITY

0.99+

This weekDATE

0.99+

each layerQUANTITY

0.99+

twoQUANTITY

0.99+

ARCOLORGANIZATION

0.99+

first questionQUANTITY

0.99+

FirstQUANTITY

0.99+

bothQUANTITY

0.98+

four yearsQUANTITY

0.98+

five microsecondsQUANTITY

0.98+

a dozen nodesQUANTITY

0.98+

second key issueQUANTITY

0.98+

half a dozen nodesQUANTITY

0.97+

Azure StackTITLE

0.89+

MicronORGANIZATION

0.84+

last 30 yearsDATE

0.8+

Cube studiosORGANIZATION

0.79+

SASORGANIZATION

0.76+

CubeORGANIZATION

0.74+

singleQUANTITY

0.72+

Action ItemORGANIZATION

0.68+

second agoDATE

0.67+

next few yearsDATE

0.64+

threeOTHER

0.61+

nextQUANTITY

0.58+

tier twoOTHER

0.56+

Luis Ceze & Anna Connolly, OctoML | AWS Startup Showcase S3 E1


 

(soft music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. AI and Machine Learning: Top Startups Building Foundational Model Infrastructure. This is season 3, episode 1 of the ongoing series covering the exciting stuff from the AWS ecosystem, talking about machine learning and AI. I'm your host, John Furrier and today we are excited to be joined by Luis Ceze who's the CEO of OctoML and Anna Connolly, VP of customer success and experience OctoML. Great to have you on again, Luis. Anna, thanks for coming on. Appreciate it. >> Thank you, John. It's great to be here. >> Thanks for having us. >> I love the company. We had a CUBE conversation about this. You guys are really addressing how to run foundational models faster for less. And this is like the key theme. But before we get into it, this is a hot trend, but let's explain what you guys do. Can you set the narrative of what the company's about, why it was founded, what's your North Star and your mission? >> Yeah, so John, our mission is to make AI sustainable and accessible for everyone. And what we offer customers is, you know, a way of taking their models into production in the most efficient way possible by automating the process of getting a model and optimizing it for a variety of hardware and making cost-effective. So better, faster, cheaper model deployment. >> You know, the big trend here is AI. Everyone's seeing the ChatGPT, kind of the shot heard around the world. The BingAI and this fiasco and the ongoing experimentation. People are into it, and I think the business impact is clear. I haven't seen this in all of my career in the technology industry of this kind of inflection point. And every senior leader I talk to is rethinking about how to rebuild their business with AI because now the large language models have come in, these foundational models are here, they can see value in their data. This is a 10 year journey in the big data world. Now it's impacting that, and everyone's rebuilding their company around this idea of being AI first 'cause they see ways to eliminate things and make things more efficient. And so now they telling 'em to go do it. And they're like, what do we do? So what do you guys think? Can you explain what is this wave of AI and why is it happening, why now, and what should people pay attention to? What does it mean to them? >> Yeah, I mean, it's pretty clear by now that AI can do amazing things that captures people's imaginations. And also now can show things that are really impactful in businesses, right? So what people have the opportunity to do today is to either train their own model that adds value to their business or find open models out there that can do very valuable things to them. So the next step really is how do you take that model and put it into production in a cost-effective way so that the business can actually get value out of it, right? >> Anna, what's your take? Because customers are there, you're there to make 'em successful, you got the new secret weapon for their business. >> Yeah, I think we just see a lot of companies struggle to get from a trained model into a model that is deployed in a cost-effective way that actually makes sense for the application they're building. I think that's a huge challenge we see today, kind of across the board across all of our customers. >> Well, I see this, everyone asking the same question. I have data, I want to get value out of it. I got to get these big models, I got to train it. What's it going to cost? So I think there's a reality of, okay, I got to do it. Then no one has any visibility on what it costs. When they get into it, this is going to break the bank. So I have to ask you guys, the cost of training these models is on everyone's mind. OctoML, your company's focus on the cost side of it as well as the efficiency side of running these models in production. Why are the production costs such a concern and where specifically are people looking at it and why did it get here? >> Yeah, so training costs get a lot of attention because normally a large number, but we shouldn't forget that it's a large, typically one time upfront cost that customers pay. But, you know, when the model is put into production, the cost grows directly with model usage and you actually want your model to be used because it's adding value, right? So, you know, the question that a customer faces is, you know, they have a model, they have a trained model and now what? So how much would it cost to run in production, right? And now without the big wave in generative AI, which rightfully is getting a lot of attention because of the amazing things that it can do. It's important for us to keep in mind that generative AI models like ChatGPT are huge, expensive energy hogs. They cost a lot to run, right? And given that model usage growth directly, model cost grows directly with usage, what you want to do is make sure that once you put a model into production, you have the best cost structure possible so that you're not surprised when it's gets popular, right? So let me give you an example. So if you have a model that costs, say 1 to $2 million to train, but then it costs about one to two cents per session to use it, right? So if you have a million active users, even if they use just once a day, it's 10 to $20,000 a day to operate that model in production. And that very, very quickly, you know, get beyond what you paid to train it. >> Anna, these aren't small numbers, and it's cost to train and cost to operate, it kind of reminds me of when the cloud came around and the data center versus cloud options. Like, wait a minute, one, it costs a ton of cash to deploy, and then running it. This is kind of a similar dynamic. What are you seeing? >> Yeah, absolutely. I think we are going to see increasingly the cost and production outpacing the costs and training by a lot. I mean, people talk about training costs now because that's what they're confronting now because people are so focused on getting models performant enough to even use in an application. And now that we have them and they're that capable, we're really going to start to see production costs go up a lot. >> Yeah, Luis, if you don't mind, I know this might be a little bit of a tangent, but, you know, training's super important. I get that. That's what people are doing now, but then there's the deployment side of production. Where do people get caught up and miss the boat or misconfigure? What's the gotcha? Where's the trip wire or so to speak? Where do people mess up on the cost side? What do they do? Is it they don't think about it, they tie it to proprietary hardware? What's the issue? >> Yeah, several things, right? So without getting really technical, which, you know, I might get into, you know, you have to understand relationship between performance, you know, both in terms of latency and throughput and cost, right? So reducing latency is important because you improve responsiveness of the model. But it's really important to keep in mind that it often leads diminishing returns. Below a certain latency, making it faster won't make a measurable difference in experience, but it's going to cost a lot more. So understanding that is important. Now, if you care more about throughputs, which is the time it takes for you to, you know, units per period of time, you care about time to solution, we should think about this throughput per dollar. And understand what you want is the highest throughput per dollar, which may come at the cost of higher latency, which you're not going to care about, right? So, and the reality here, John, is that, you know, humans and especially folks in this space want to have the latest and greatest hardware. And often they commit a lot of money to get access to them and have to commit upfront before they understand the needs that their models have, right? So common mistake here, one is not spending time to understand what you really need, and then two, over-committing and using more hardware than you actually need. And not giving yourself enough freedom to get your workload to move around to the more cost-effective choice, right? So this is just a metaphoric choice. And then another thing that's important here too is making a model run faster on the hardware directly translates to lower cost, right? So, but it takes a lot of engineers, you need to think of ways of producing very efficient versions of your model for the target hardware that you're going to use. >> Anna, what's the customer angle here? Because price performance has been around for a long time, people get that, but now latency and throughput, that's key because we're starting to see this in apps. I mean, there's an end user piece. I even seeing it on the infrastructure side where they're taking a heavy lifting away from operational costs. So you got, you know, application specific to the user and/or top of the stack, and then you got actually being used in operations where they want both. >> Yeah, absolutely. Maybe I can illustrate this with a quick story with the customer that we had recently been working with. So this customer is planning to run kind of a transformer based model for tech generation at super high scale on Nvidia T4 GPU, so kind of a commodity GPU. And the scale was so high that they would've been paying hundreds of thousands of dollars in cloud costs per year just to serve this model alone. You know, one of many models in their application stack. So we worked with this team to optimize our model and then benchmark across several possible targets. So that matching the hardware that Luis was just talking about, including the newer kind of Nvidia A10 GPUs. And what they found during this process was pretty interesting. First, the team was able to shave a quarter of their spend just by using better optimization techniques on the T4, the older hardware. But actually moving to a newer GPU would allow them to serve this model in a sub two milliseconds latency, so super fast, which was able to unlock an entirely new kind of user experience. So they were able to kind of change the value they're delivering in their application just because they were able to move to this new hardware easily. So they ultimately decided to plan their deployment on the more expensive A10 because of this, but because of the hardware specific optimizations that we helped them with, they managed to even, you know, bring costs down from what they had originally planned. And so if you extend this kind of example to everything that's happening with generative AI, I think the story we just talked about was super relevant, but the scale can be even higher, you know, it can be tenfold that. We were recently conducting kind of this internal study using GPT-J as a proxy to illustrate the experience of just a company trying to use one of these large language models with an example scenario of creating a chatbot to help job seekers prepare for interviews. So if you imagine kind of a conservative usage scenario where the model generates just 3000 words per user per day, which is, you know, pretty conservative for how people are interacting with these models. It costs 5 cents a session and if you're a company and your app goes viral, so from, you know, beginning of the year there's nobody, at the end of the year there's a million daily active active users in that year alone, going from zero to a million. You'll be spending about $6 million a year, which is pretty unmanageable. That's crazy, right? >> Yeah. >> For a company or a product that's just launching. So I think, you know, for us we see the real way to make these kind of advancements accessible and sustainable, as we said is to bring down cost to serve using these techniques. >> That's a great story and I think that illustrates this idea that deployment cost can vary from situation to situation, from model to model and that the efficiency is so strong with this new wave, it eliminates heavy lifting, creates more efficiency, automates intellect. I mean, this is the trend, this is radical, this is going to increase. So the cost could go from nominal to millions, literally, potentially. So, this is what customers are doing. Yeah, that's a great story. What makes sense on a financial, is there a cost of ownership? Is there a pattern for best practice for training? What do you guys advise cuz this is a lot of time and money involved in all potential, you know, good scenarios of upside. But you can get over your skis as they say, and be successful and be out of business if you don't manage it. I mean, that's what people are talking about, right? >> Yeah, absolutely. I think, you know, we see kind of three main vectors to reduce cost. I think one is make your deployment process easier overall, so that your engineering effort to even get your app running goes down. Two, would be get more from the compute you're already paying for, you're already paying, you know, for your instances in the cloud, but can you do more with that? And then three would be shop around for lower cost hardware to match your use case. So on the first one, I think making the deployment easier overall, there's a lot of manual work that goes into benchmarking, optimizing and packaging models for deployment. And because the performance of machine learning models can be really hardware dependent, you have to go through this process for each target you want to consider running your model on. And this is hard, you know, we see that every day. But for teams who want to incorporate some of these large language models into their applications, it might be desirable because licensing a model from a large vendor like OpenAI can leave you, you know, over provision, kind of paying for capabilities you don't need in your application or can lock you into them and you lose flexibility. So we have a customer whose team actually prepares models for deployment in a SaaS application that many of us use every day. And they told us recently that without kind of an automated benchmarking and experimentation platform, they were spending several days each to benchmark a single model on a single hardware type. So this is really, you know, manually intensive and then getting more from the compute you're already paying for. We do see customers who leave money on the table by running models that haven't been optimized specifically for the hardware target they're using, like Luis was mentioning. And for some teams they just don't have the time to go through an optimization process and for others they might lack kind of specialized expertise and this is something we can bring. And then on shopping around for different hardware types, we really see a huge variation in model performance across hardware, not just CPU vs. GPU, which is, you know, what people normally think of. But across CPU vendors themselves, high memory instances and across cloud providers even. So the best strategy here is for teams to really be able to, we say, look before you leap by running real world benchmarking and not just simulations or predictions to find the best software, hardware combination for their workload. >> Yeah. You guys sound like you have a very impressive customer base deploying large language models. Where would you categorize your current customer base? And as you look out, as you guys are growing, you have new customers coming in, take me through the progression. Take me through the profile of some of your customers you have now, size, are they hyperscalers, are they big app folks, are they kicking the tires? And then as people are out there scratching heads, I got to get in this game, what's their psychology like? Are they coming in with specific problems or do they have specific orientation point of view about what they want to do? Can you share some data around what you're seeing? >> Yeah, I think, you know, we have customers that kind of range across the spectrum of sophistication from teams that basically don't have MLOps expertise in their company at all. And so they're really looking for us to kind of give a full service, how should I do everything from, you know, optimization, find the hardware, prepare for deployment. And then we have teams that, you know, maybe already have their serving and hosting infrastructure up and ready and they already have models in production and they're really just looking to, you know, take the extra juice out of the hardware and just do really specific on that optimization piece. I think one place where we're doing a lot more work now is kind of in the developer tooling, you know, model selection space. And that's kind of an area that we're creating more tools for, particularly within the PyTorch ecosystem to bring kind of this power earlier in the development cycle so that as people are grabbing a model off the shelf, they can, you know, see how it might perform and use that to inform their development process. >> Luis, what's the big, I like this idea of picking the models because isn't that like going to the market and picking the best model for your data? It's like, you know, it's like, isn't there a certain approaches? What's your view on this? 'Cause this is where everyone, I think it's going to be a land rush for this and I want to get your thoughts. >> For sure, yeah. So, you know, I guess I'll start with saying the one main takeaway that we got from the GPT-J study is that, you know, having a different understanding of what your model's compute and memory requirements are, very quickly, early on helps with the much smarter AI model deployments, right? So, and in fact, you know, Anna just touched on this, but I want to, you know, make sure that it's clear that OctoML is putting that power into user's hands right now. So in partnership with AWS, we are launching this new PyTorch native profiler that allows you with a single, you know, one line, you know, code decorator allows you to see how your code runs on a variety of different hardware after accelerations. So it gives you very clear, you know, data on how you should think about your model deployments. And this ties back to choices of models. So like, if you have a set of choices that are equally good of models in terms of functionality and you want to understand after acceleration how are you going to deploy, how much they're going to cost or what are the options using a automated process of making a decision is really, really useful. And in fact, so I think these events can get early access to this by signing up for the Octopods, you know, this is exclusive group for insiders here, so you can go to OctoML.ai/pods to sign up. >> So that Octopod, is that a program? What is that, is that access to code? Is that a beta, what is that? Explain, take a minute and explain Octopod. >> I think the Octopod would be a group of people who is interested in experiencing this functionality. So it is the friends and users of OctoML that would be the Octopod. And then yes, after you sign up, we would provide you essentially the tool in code form for you to try out in your own. I mean, part of the benefit of this is that it happens in your own local environment and you're in control of everything kind of within the workflow that developers are already using to create and begin putting these models into their applications. So it would all be within your control. >> Got it. I think the big question I have for you is when do you, when does that one of your customers know they need to call you? What's their environment look like? What are they struggling with? What are the conversations they might be having on their side of the fence? If anyone's watching this, they're like, "Hey, you know what, I've got my team, we have a lot of data. Do we have our own language model or do I use someone else's?" There's a lot of this, I will say discovery going on around what to do, what path to take, what does that customer look like, if someone's listening, when do they know to call you guys, OctoML? >> Well, I mean the most obvious one is that you have a significant spend on AI/ML, come and talk to us, you know, putting AIML into production. So that's the clear one. In fact, just this morning I was talking to someone who is in life sciences space and is having, you know, 15 to $20 million a year cloud related to AI/ML deployment is a clear, it's a pretty clear match right there, right? So that's on the cost side. But I also want to emphasize something that Anna said earlier that, you know, the hardware and software complexity involved in putting model into production is really high. So we've been able to abstract that away, offering a clean automation flow enables one, to experiment early on, you know, how models would run and get them to production. And then two, once they are into production, gives you an automated flow to continuously updating your model and taking advantage of all this acceleration and ability to run the model on the right hardware. So anyways, let's say one then is cost, you know, you have significant cost and then two, you have an automation needs. And Anna please compliment that. >> Yeah, Anna you can please- >> Yeah, I think that's exactly right. Maybe the other time is when you are expecting a big scale up in serving your application, right? You're launching a new feature, you expect to get a lot of usage or, and you want to kind of anticipate maybe your CTO, your CIO, whoever pays your cloud bills is going to come after you, right? And so they want to know, you know, what's the return on putting this model essentially into my application stack? Am I going to, is the usage going to match what I'm paying for it? And then you can understand that. >> So you guys have a lot of the early adopters, they got big data teams, they're pushed in the production, they want to get a little QA, test the waters, understand, use your technology to figure it out. Is there any cases where people have gone into production, they have to pull it out? It's like the old lemon laws with your car, you buy a car and oh my god, it's not the way I wanted it. I mean, I can imagine the early people through the wall, so to speak, in the wave here are going to be bloody in the sense that they've gone in and tried stuff and get stuck with huge bills. Are you seeing that? Are people pulling stuff out of production and redeploying? Or I can imagine that if I had a bad deployment, I'd want to refactor that or actually replatform that. Do you see that too? >> Definitely after a sticker shock, yes, your customers will come and make sure that, you know, the sticker shock won't happen again. >> Yeah. >> But then there's another more thorough aspect here that I think we likely touched on, be worth elaborating a bit more is just how are you going to scale in a way that's feasible depending on the allocation that you get, right? So as we mentioned several times here, you know, model deployment is so hardware dependent and so complex that you tend to get a model for a hardware choice and then you want to scale that specific type of instance. But what if, when you want to scale because suddenly luckily got popular and, you know, you want to scale it up and then you don't have that instance anymore. So how do you live with whatever you have at that moment is something that we see customers needing as well. You know, so in fact, ideally what we want is customers to not think about what kind of specific instances they want. What they want is to know what their models need. Say, they know the SLA and then find a set of hybrid targets and instances that hit the SLA whenever they're also scaling, they're going to scale with more freedom, right? Instead of having to wait for AWS to give them more specific allocation for a specific instance. What if you could live with other types of hardware and scale up in a more free way, right? So that's another thing that we see customers, you know, like they need more freedom to be able to scale with whatever is available. >> Anna, you touched on this with the business model impact to that 6 million cost, if that goes out of control, there's a business model aspect and there's a technical operation aspect to the cost side too. You want to be mindful of riding the wave in a good way, but not getting over your skis. So that brings up the point around, you know, confidence, right? And teamwork. Because if you're in production, there's probably a team behind it. Talk about the team aspect of your customers. I mean, they're dedicated, they go put stuff into production, they're developers, there're data. What's in it for them? Are they getting better, are they in the beach, you know, reading the book. Are they, you know, are there easy street for them? What's the customer benefit to the teams? >> Yeah, absolutely. With just a few clicks of a button, you're in production, right? That's the dream. So yeah, I mean I think that, you know, we illustrated it before a little bit. I think the automated kind of benchmarking and optimization process, like when you think about the effort it takes to get that data by hand, which is what people are doing today, they just don't do it. So they're making decisions without the best information because it's, you know, there just isn't the bandwidth to get the information that they need to make the best decision and then know exactly how to deploy it. So I think it's actually bringing kind of a new insight and capability to these teams that they didn't have before. And then maybe another aspect on the team side is that it's making the hand-off of the models from the data science teams to the model deployment teams more seamless. So we have, you know, we have seen in the past that this kind of transition point is the place where there are a lot of hiccups, right? The data science team will give a model to the production team and it'll be too slow for the application or it'll be too expensive to run and it has to go back and be changed and kind of this loop. And so, you know, with the PyTorch profiler that Luis was talking about, and then also, you know, the other ways we do optimization that kind of prevents that hand-off problem from happening. >> Luis and Anna, you guys have a great company. Final couple minutes left. Talk about the company, the people there, what's the culture like, you know, if Intel has Moore's law, which is, you know, doubling the performance in few years, what's the culture like there? Is it, you know, more throughput, better pricing? Explain what's going on with the company and put a plug in. Luis, we'll start with you. >> Yeah, absolutely. I'm extremely proud of the team that we built here. You know, we have a people first culture, you know, very, very collaborative and folks, we all have a shared mission here of making AI more accessible and sustainable. We have a very diverse team in terms of backgrounds and life stories, you know, to do what we do here, we need a team that has expertise in software engineering, in machine learning, in computer architecture. Even though we don't build chips, we need to understand how they work, right? So, and then, you know, the fact that we have this, this very really, really varied set of backgrounds makes the environment, you know, it's say very exciting to learn more about, you know, assistance end-to-end. But also makes it for a very interesting, you know, work environment, right? So people have different backgrounds, different stories. Some of them went to grad school, others, you know, were in intelligence agencies and now are working here, you know. So we have a really interesting set of people and, you know, life is too short not to work with interesting humans. You know, that's something that I like to think about, you know. >> I'm sure your off-site meetings are a lot of fun, people talking about computer architectures, silicon advances, the next GPU, the big data models coming in. Anna, what's your take? What's the culture like? What's the company vibe and what are you guys looking to do? What's the customer success pattern? What's up? >> Yeah, absolutely. I mean, I, you know, second all of the great things that Luis just said about the team. I think one that I, an additional one that I'd really like to underscore is kind of this customer obsession, to use a term you all know well. And focus on the end users and really making the experiences that we're bringing to our user who are developers really, you know, useful and valuable for them. And so I think, you know, all of these tools that we're trying to put in the hands of users, the industry and the market is changing so rapidly that our products across the board, you know, all of the companies that, you know, are part of the showcase today, we're all evolving them so quickly and we can only do that kind of really hand in glove with our users. So that would be another thing I'd emphasize. >> I think the change dynamic, the power dynamics of this industry is just the beginning. I'm very bullish that this is going to be probably one of the biggest inflection points in history of the computer industry because of all the dynamics of the confluence of all the forces, which you mentioned some of them, I mean PC, you know, interoperability within internetworking and you got, you know, the web and then mobile. Now we have this, I mean, I wouldn't even put social media even in the close to this. Like, this is like, changes user experience, changes infrastructure. There's going to be massive accelerations in performance on the hardware side from AWS's of the world and cloud and you got the edge and more data. This is really what big data was going to look like. This is the beginning. Final question, what do you guys see going forward in the future? >> Well, it's undeniable that machine learning and AI models are becoming an integral part of an interesting application today, right? So, and the clear trends here are, you know, more and more competitional needs for these models because they're only getting more and more powerful. And then two, you know, seeing the complexity of the infrastructure where they run, you know, just considering the cloud, there's like a wide variety of choices there, right? So being able to live with that and making the most out of it in a way that does not require, you know, an impossible to find team is something that's pretty clear. So the need for automation, abstracting with the complexity is definitely here. And we are seeing this, you know, trends are that you also see models starting to move to the edge as well. So it's clear that we're seeing, we are going to live in a world where there's no large models living in the cloud. And then, you know, edge models that talk to these models in the cloud to form, you know, an end-to-end truly intelligent application. >> Anna? >> Yeah, I think, you know, our, Luis said it at the beginning. Our vision is to make AI sustainable and accessible. And I think as this technology just expands in every company and every team, that's going to happen kind of on its own. And we're here to help support that. And I think you can't do that without tools like those like OctoML. >> I think it's going to be an error of massive invention, creativity, a lot of the format heavy lifting is going to allow the talented people to automate their intellect. I mean, this is really kind of what we see going on. And Luis, thank you so much. Anna, thanks for coming on this segment. Thanks for coming on theCUBE and being part of the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023

SUMMARY :

Great to have you on again, Luis. It's great to be here. but let's explain what you guys do. And what we offer customers is, you know, So what do you guys think? so that the business you got the new secret kind of across the board So I have to ask you guys, And that very, very quickly, you know, and the data center versus cloud options. And now that we have them but, you know, training's super important. John, is that, you know, humans and then you got actually managed to even, you know, So I think, you know, for us we see in all potential, you know, And this is hard, you know, And as you look out, as And then we have teams that, you know, and picking the best model for your data? from the GPT-J study is that, you know, What is that, is that access to code? And then yes, after you sign up, to call you guys, OctoML? come and talk to us, you know, And so they want to know, you know, So you guys have a lot make sure that, you know, we see customers, you know, What's the customer benefit to the teams? and then also, you know, what's the culture like, you know, So, and then, you know, and what are you guys looking to do? all of the companies that, you know, I mean PC, you know, in the cloud to form, you know, And I think you can't And Luis, thank you so much.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AnnaPERSON

0.99+

Anna ConnollyPERSON

0.99+

John FurrierPERSON

0.99+

LuisPERSON

0.99+

Luis CezePERSON

0.99+

JohnPERSON

0.99+

1QUANTITY

0.99+

10QUANTITY

0.99+

15QUANTITY

0.99+

AWSORGANIZATION

0.99+

10 yearQUANTITY

0.99+

6 millionQUANTITY

0.99+

zeroQUANTITY

0.99+

IntelORGANIZATION

0.99+

threeQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

FirstQUANTITY

0.99+

OctoMLORGANIZATION

0.99+

twoQUANTITY

0.99+

millionsQUANTITY

0.99+

todayDATE

0.99+

TwoQUANTITY

0.99+

$2 millionQUANTITY

0.98+

3000 wordsQUANTITY

0.98+

one lineQUANTITY

0.98+

A10COMMERCIAL_ITEM

0.98+

OctoMLTITLE

0.98+

oneQUANTITY

0.98+

three main vectorsQUANTITY

0.97+

hundreds of thousands of dollarsQUANTITY

0.97+

bothQUANTITY

0.97+

CUBEORGANIZATION

0.97+

T4COMMERCIAL_ITEM

0.97+

one timeQUANTITY

0.97+

first oneQUANTITY

0.96+

two centsQUANTITY

0.96+

GPT-JORGANIZATION

0.96+

single modelQUANTITY

0.95+

a minuteQUANTITY

0.95+

about $6 million a yearQUANTITY

0.95+

once a dayQUANTITY

0.95+

$20,000 a dayQUANTITY

0.95+

a millionQUANTITY

0.94+

theCUBEORGANIZATION

0.93+

OctopodTITLE

0.93+

this morningDATE

0.93+

first cultureQUANTITY

0.92+

$20 million a yearQUANTITY

0.92+

AWS Startup ShowcaseEVENT

0.9+

North StarORGANIZATION

0.9+

Dell Technologies MWC 2023 Exclusive Booth Tour with David Nicholson


 

>> And I'm here at Dell's Presence at MWC with vice president of marketing for telecom and Edge Computing, Aaron Chaisson. Aaron, how's it going? >> Doing great. How's it going today, Dave? >> It's going pretty well. Pretty excited about what you've got going here and I'm looking forward to getting the tour. You ready to take a closer look? >> Ready to do it. Let's go take a look! For us in the telecom ecosystem, it's really all about how we bring together the different players that are innovating across the industry to drive value for our CSP customers. So, it starts really, for us, at the ecosystem layer, bringing partners, bringing telecommunication providers, bringing (stutters) a bunch of different technologies together to innovate together to drive new value. So Paul, take us a little bit through what we're doing to- to develop and bring in these partnerships and develop our ecosystem. >> Uh, sure. Thank you Aaron. Uh, you know, one of the things that we've been focusing on, you know, Dell is really working with many players in the open telecom ecosystem. Network equipment providers, independent software vendors, and the communication service providers. And, you know, through our lines of business or open telecom ecosystem labs, what we want to do is bring 'em together into a community with the goal of really being able to accelerate open innovation and, uh, open solutions into the market. And that's what this community is really about, is being able to, you know, have those communications, develop those collaborations whether it's through, you know, sharing information online, having webinars dedicated to sharing Dell information, whether it's our next generation hardware portfolio we announced here at the show, our use case directory, our- how we're dealing with new service opportunities, but as well as the community to share, too, which I think is an exciting way for us to be able to, you know- what is the knowledge thing? As well as activities at other events that we have coming up. So really the key thing I think about, the- the open telecom ecosystem community, it's collaboration and accelerating the open industry forward. >> So- So Aaron, if I'm hearing this correctly you're saying that you can't just say, "Hey, we're open", and throw a bunch of parts in a box and have it work? >> No, we've got to work together to integrate these pieces to be able to deliver value, and, you know, we opened up a- (stutters) in our open ecosystem labs, we started a- a self-certification process a couple of months back. We've already had 13 partners go through that, we've got 16 more in the pipeline. Everything you see in this entire booth has been innovated and worked with partnerships from Intel to Microsoft to, uh, to (stutters) Wind River and Red Hat and others. You go all the way around the booth, everything here has partnerships at its core. And why don't we go to the next section here where we're going to be showing how we're pulling that all together in our open ecosystems labs to drive that innovation? >> So Aaron, you talked about the kinds of validation and testing that goes on, so that you can prove out an open stack to deliver the same kinds of reliability and performance and availability that we expect from a wireless network. But in the opens- in the open world, uh, what are we looking at here? >> Yeah absolutely. So one of the- one of the challenges to a very big, broad open ecosystem is the complexity of integrating, deploying, and managing these, especially at telecom scale. You're not talking about thousands of servers in one site, you're talking about one server in thousands of sites. So how do you deploy that predictable stack and then also manage that at scale? I'm going to show you two places where we're talkin' about that. So, this is actually representing an area that we've been innovating in recently around creating an integrated infrastructure and virtualization stack for the telecom industry. We've been doing this for years in IT with VxBlocks and VxRails and others. Here what you see is we got, uh, Dell hardware infrastructure, we've got, uh, an open platform for virtualization providers, in this case we've created an infrastructure block for Red Hat to be able to supply an infrastructure for core operations and Packet Cores for telecoms. On the other side of this, you can actually see what we're doing with Wind River to drive innovation around RAN and being able to simplify RAN- vRAN and O-RAN deployments. >> What does that virtualization look like? Are we talking about, uh, traditional virtual machines with OSs, or is this containerized cloud native? What does it look like? >> Yeah, it's actually both, so it can support, uh, virtual, uh-uh, software as well as containerized software, so we leverage the (indistinct) distributions for these to be able to deploy, you know, cloud native applications, be able to modernize how they're deploying these applications across the telecom network. So in this case with Red Hat, uh, (stutters) leveraging OpenShift in order to support containerized apps in your Packet Core environments. >> So what are- what are some of the kinds of things that you can do once you have infrastructure like this deployed? >> Yeah, I mean by- by partnering broadly across the ecosystem with VMware, with Red Hat, uh, with- with Wind River and with others, it gives them the ability to be able to deploy the right virtualization software in their network for the types of applications they're deploying. They might want to use Red Hat in their core, they may want to use Wind River in their RAM, they may want to use, uh, Microsoft or VMware for their- for their Edge workloads, and we allow them to be able to deploy all those, but centrally manage those with a common user interface and a common set of APIs. >> Okay, well I'm dying to understand the link between this and the Lego city that the viewers can't see, yet, but it's behind me. Let's take a look. >> So let's take a look at the Lego city that shows how we not deploy just one of these, but dozens or hundreds of these at scale across a cityscape. >> So Aaron, I know we're not in Copenhagen. What's all the Lego about? >> Yeah, so the Lego city here is to show- and, uh, really there's multiple points of Presence across an entire Metro area that we want to be able to manage if we're a telecom provider. We just talked about one infrastructure block. What if I wanted to deploy dozens of these across the city to be able to manage my network, to be able to manage, uh, uh- to be able to deploy private mobility potentially out into a customer enterprise environment, and be able to manage all of these, uh, very simply and easily from a common interface? >> So it's interesting. Now I think I understand why you are VP of marketing for both telecom and Edge. Just heard- just heard a lot about Edge and I can imagine a lot of internet of things, things, hooked up at that Edge. >> Yeah, so why don't we actually go over to another area? We're actually going to show you how one small microbrewery (stutters) in one of our cities nearby, uh, (stutters) my hometown in Massachusetts is actually using this technology to go from more of an analyzed- analog world to digitizing their business to be able to brew better beer. >> So Aaron, you bring me to a brewery. What do we have- what do we have going on here? >> Yeah, so, actually (stutters) about- about a year ago or so, I- I was able to get my team to come together finally after COVID to be able to meet each other and have a nice team event. One of those nights, we went out to dinner at a- at a brewery called "Exhibit 'A'" in Massachusetts, and they actually gave us a tour of their facilities and showed us how they actually go through the process of brewing beer. What we saw as we were going through it, interestingly, was that everything was analog. They literally had people with pen and paper walking around checking time and temperature and the process of brewing the beer, and they weren't asking for help, but we actually saw an opportunity where what we're doing to help businesses digitize what they're doing in their manufacturing floor can actually help them optimize how they build whatever product they're building, in this case it was beer. >> Hey Warren, good to meet you! What do we have goin' on? >> Yeah, it's all right. So yeah, basically what we did is we took some of their assets in the, uh, brewery that were completely manually monitored. People were literally walking around the floor with clipboards, writing down values. And we censorized the asset, in this case fermentation tanks and we measured the, uh, pressure and the temperature, which in fermentation are very key to monitor those, because if they get out of range the entire batch of beer can go bad or you don't get the consistency from batch to batch if you don't tightly monitor those. So we censorized the fermentation tank, brought that into an industrial I/O network, and then brought that into a Dell gateway which is connected 5G up to the cloud, which then that data comes to a tablet or a phone, which they, rather than being out on the floor and monitor it, can look at this data remotely at any time. >> So I'm not sure the exact date, the first time we have evidence of beer being brewed by humanity... >> Yep. >> But I know it's thousands of years ago. So it's taken that long to get to the point where someone had to come along, namely Dell, to actually digitally transform the beer business. Is this sort of proof that if you can digitally transform this, you can digitally transform anything? >> Absolutely. You name it, anything that's being manufactured, sold, uh, uh, taken care of, (stutters) any business out there that's looking to be able to be modernize and deliver better service to their customers can benefit from technologies like this. >> So we've taken a look at the ecosystem, the way that you validate architectures, we've seen an example of that kind of open architecture. Now we've seen a real world use case. Do you want to take a look a little deeper under the covers and see what's powering all of this? >> We just this week announced a new line of servers that power Edge and RAN use cases, and I want to introduce Mike to kind of take us through what we've been working on and really what the power of what this providing. >> Hey Mike, welcome to theCube. >> Oh, glad- glad to be here. So, what I'd really like to talk about are the three new XR series servers that we just announced last week and we're showing here at Mobile World Congress. They are all short depth, ruggedized, uh, very environmentally tolerant, and able to withstand, you know, high temperatures, high humidities, and really be deployed to places where traditional data center servers just can't handle, you know, due to one fact or another, whether it's depth or the temperature. And so, the first one I'd like to show you is the XR7620. This is, uh, 450 millimeters deep, it's designed for, uh, high levels of acceleration so it can support up to 2-300 watt, uh, GPUs. But what I really want to show you over here, especially for Mobile World Congress, is our new XR8000. The XR8000 is based on Intel's latest Sapphire Rapids technology, and this is- happens to be one of the first, uh, EE boost processors that is out, and basically what it is (stutters) an embedded accelerator that makes, uh, the- the processing of vRAN loads very, uh, very efficient. And so they're actually projecting a, uh, 3x improvement, uh, of processing per watt over the previous generation of processors. This particular unit is also sledded. It's very much like, uh, today's traditional baseband unit, so it's something that is designed for low TCO and easy maintenance in the field. This is the frew. When anything fails, you'll pull one out, you pop a new one in, it comes back into service, and the- the, uh, you know, your radio is- is, uh, minimally disrupted. >> Yeah, would you describe this as quantitative and qualitative in terms of the kinds of performance gains that these underlying units are delivering to us? I mean, this really kind of changes the game, doesn't it? It's not just about more, is it about different also in terms of what we can do? >> Well we are (stutters) to his point, we are able to bring in new accelerator technologies. Not only are we doing it with the Intel, uh, uh, uh, of the vRAN boost technologies, but also (stutters) we can bring it, too, but there's another booth here where we're actually working with our own accelerator cards and other accelerator cards from our partners across the industry to be able to deliver the price and performance capabilities required by a vRAN or an O-RAN deployment in the network. So it's not- it's not just the chip technology, it's the integration and the innovation we're doing with others, as well as, of course, the unique power cooling capabilities that Dell provides in our servers that really makes these the most efficient way of being able to power a network. >> Any final thoughts recapping the whole picture here? >> Yeah, I mean I would just say if anybody's, uh, i- is still here in Mobile World Congress, wants to come and learn what we're doing, I only showed you a small section of the demos we've got here. We've got 13 demos across on 8th floor here. Uh, for those of you who want to talk to us (stutters) and have meetings with us, we've got 13 meeting rooms back there, over 500 costumer partner meetings this week, we've got some whisper suites for those of you who want to come and talk to us but we're innovating on going forward. So, you know, there's a lot that we're doing, we're really excited, there's a ton of passion at this event, and, uh, we're really excited about where the industry is going and our role in it. >> 'Preciate the tour, Aaron. Thanks Mike. >> Mike: Thank you! >> Well, for theCube... Again, Dave Nicholson here. Thanks for joining us on this tour of Dell's Presence here at MWC 2023.

Published Date : Mar 1 2023

SUMMARY :

with vice president of marketing for it going today, Dave? to getting the tour. the industry to drive value and the communication service providers. to be able to deliver value, and availability that we one of the challenges to a to be able to deploy, you know, the ecosystem with and the Lego city that the the Lego city that shows how What's all the Lego about? Yeah, so the Lego city here is to show- think I understand why you are to be able to brew better beer. So Aaron, you bring me to and temperature and the process to batch if you don't So I'm not sure the to get to the point that's looking to be able to the way that you validate architectures, to kind of take us through and really be deployed to the industry to be able to come and talk to us but we're 'Preciate the tour, Aaron. Thanks for joining us on this

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AaronPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Dave NicholsonPERSON

0.99+

Aaron ChaissonPERSON

0.99+

PaulPERSON

0.99+

MassachusettsLOCATION

0.99+

MikePERSON

0.99+

CopenhagenLOCATION

0.99+

WarrenPERSON

0.99+

13 partnersQUANTITY

0.99+

David NicholsonPERSON

0.99+

13 demosQUANTITY

0.99+

450 millimetersQUANTITY

0.99+

DavePERSON

0.99+

DellORGANIZATION

0.99+

last weekDATE

0.99+

two placesQUANTITY

0.99+

XR7620COMMERCIAL_ITEM

0.99+

one siteQUANTITY

0.99+

XR8000COMMERCIAL_ITEM

0.99+

dozensQUANTITY

0.99+

LegoORGANIZATION

0.99+

8th floorQUANTITY

0.99+

IntelORGANIZATION

0.99+

EdgeORGANIZATION

0.98+

this weekDATE

0.98+

bothQUANTITY

0.98+

todayDATE

0.98+

first timeQUANTITY

0.98+

threeQUANTITY

0.98+

Wind RiverORGANIZATION

0.98+

hundredsQUANTITY

0.98+

13 meeting roomsQUANTITY

0.98+

thousands of years agoDATE

0.97+

thousands of serversQUANTITY

0.97+

oneQUANTITY

0.97+

Wind RiverORGANIZATION

0.97+

OpenShiftTITLE

0.97+

Red HatORGANIZATION

0.97+

Red HatTITLE

0.97+

one serverQUANTITY

0.96+

3xQUANTITY

0.96+

Red HatTITLE

0.96+

Mobile World CongressEVENT

0.95+

OneQUANTITY

0.94+

firstQUANTITY

0.94+

Mobile World CongressEVENT

0.93+

16 moreQUANTITY

0.93+

first oneQUANTITY

0.92+

EdgeTITLE

0.92+

over 500 costumer partner meetingsQUANTITY

0.92+

dozens of theseQUANTITY

0.9+

MWC 2023EVENT

0.88+

thousands of sitesQUANTITY

0.88+

about a year agoDATE

0.87+

Sapphire RapidsOTHER

0.87+

RAN- vRANTITLE

0.87+

one small microbreweryQUANTITY

0.86+

Edge ComputingORGANIZATION

0.86+

Wind RiverTITLE

0.83+

one infrastructure blockQUANTITY

0.82+

up to 2-300 wattQUANTITY

0.82+

RANTITLE

0.81+

VMwareORGANIZATION

0.8+