How to Make a Data Fabric "Smart": A Technical Demo With Jess Jowdy


 

(inspirational music) (music ends) >> Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of Healthcare Field Engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities, including data exploration, business intelligence, natural language processing, and machine learning directly within the fabric makes it faster and easier for organizations to gain new insights and power intelligent, predictive, and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries, from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare-focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about. That will be in our third segment, but for now let's turn it over to Jess. Jess, good to see you. >> Hi, yeah, thank you so much for having me. And so for this demo, we're really going to be bucketing these features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements, and analysis. And so we'll see that throughout the demo as we go. So without further ado, let's just go ahead and jump into this demo, and you'll see my screen pop up here. I actually like to start at the end of the demo. So I like to begin by illustrating what an end user's going to see, and don't mind the screen 'cause I gave you a little sneak peek of what's about to happen. But essentially what I'm going to be doing is using Postman to simulate a call from an external application. So we talked about being in the healthcare industry. 
This could be, for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care or some other kind of application. So we might be pulling information in this case from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software, or adverse reaction warnings from a clinical risk grouping application, and so much more. So I'm really going to be simulating a patient logging in on their phone and retrieving this information through this Postman call. So what I'm going to do is I'm just going to hit send, I've already preloaded everything here, and I'm going to be looking for information where the last name of this patient is Simmons, and their medical record number or their patient identifier in the system is 32345. And so as you can see, I have this single JSON payload that showed up here of, just, relevant clinical information for my patient whose last name is Simmons, all within a single response. So fantastic, right? Typically though, when we see responses that look like this there is an assumption that this service is interacting with a single backend system, and that single backend system is in charge of packaging that information up and returning it back to this caller. But in a smart data fabric architecture, we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background. What you're looking at here is our mission control center for our smart data fabric. On the left we have our APIs that allow users to interact with particular services. On the right we have our connections to our different data silos. 
And in the middle here, we have our data fabric coordinator, which is going to be in charge of this refinement and analysis, those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service, and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases. And to that end, we do also support full lifecycle API management within this platform. When you're dealing with APIs, I always like to make a little shout-out on this, that you really want to make sure you have a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In this IRIS platform, which we're talking about today, we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what context. >> Can I just interrupt you for a second, Jess? >> Yeah, please. >> So you were showing on the left-hand side of the demo a couple of APIs. I presume that can be a very long list. I mean, what do you see as typical? >> I mean, you could have hundreds of these APIs depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here. >> So my question is, obviously security is critical in the healthcare industry, and API security is a really hot topic these days. How do you deal with that? >> Yeah, and I think API security is interesting 'cause it can happen at so many layers. So, there's interactions with the API itself. So can I even see this API and leverage it? 
And then within an API call, you then have to deal with, all right, which endpoints or what kind of interactions within that API am I allowed to do? What data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So, the way that we handle that is, like I said, same thing at different layers. There is access to a particular API, which can happen within the IRIS product, and also we see it happening with an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So, that role-based access control becomes very important in assigning, you know, roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of security. >> And that's been designed in, it's not a bolt-on, as they like to say. >> Absolutely. >> Okay, can we get into collect now? >> Of course, we're going to move on to the collection piece at this point in time, which involves pulling information from each of my different data silos to create an overall aggregated record. So commonly, each data source requires a different method for establishing connections and collecting this information. So for instance, interactions with an EMR may require leveraging a standard healthcare messaging format like FHIR. Interactions with a homegrown enterprise data warehouse, for instance, may use SQL. Cloud-based solutions managed by a vendor may only allow you to use web service calls to pull data. So it's really important that the data fabric platform that you're using has the flexibility to connect to all of these different systems and applications. And I'm about to log out, so I'm going to (chuckles) keep my session going here. 
So therefore it's incredibly important that your data fabric has the flexibility to connect to all these different kinds of applications and data sources, and all these different kinds of formats, over all of these different kinds of protocols. So let's think back on our example here. I had four different applications that I was requesting information from to create that payload that we saw initially. Those are listed here under this operations section. So these are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is that it has an embedded interoperability platform. So there are all of these native adapters that can support these common connections that we see for different kinds of applications. So using REST, or SOAP, or SQL, or FTP, regardless of that protocol, there's an adapter to help you work with that. And we also think of the types of formats that we typically see data coming in as: in healthcare we have HL7, we have FHIR, we have CCDs across the industry; JSON is, you know, really hitting the market strong now, and XML payloads, flat files. We need to be able to handle all of these different kinds of formats over these different kinds of protocols. So to illustrate that, if I click through these, when I select a particular connection on the right-side panel, I'm going to see the different settings that are associated with that particular connection that allow me to collect information back into my smart data fabric. In this scenario, my connection to my ChartScript application in this example communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application, I'm using a SQL-based connection. When I'm connecting to my EMR, I'm leveraging a standard healthcare messaging format known as FHIR, which is a REST-based protocol. And then when I'm working with my health record management system, I'm leveraging a standard HTTP adapter. 
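As a rough sketch of the mapping just walked through, each downstream system pairs with a turnkey adapter by protocol. The registry and adapter names below are illustrative stand-ins, not actual IRIS adapter classes:

```python
# Hypothetical registry mirroring the four demo connections; the adapter
# names are illustrative, not real IRIS adapter classes.
CONNECTIONS = {
    "ChartScript": "SOAP",              # medical transcription system
    "ClinicalRiskGrouping": "SQL",      # clinical risk grouping application
    "EMR": "REST",                      # FHIR over REST
    "HealthRecordManagement": "HTTP",   # standard HTTP adapter
}

def adapter_for(source: str) -> str:
    """Resolve the (hypothetical) turnkey adapter for a source's protocol."""
    return CONNECTIONS[source] + "Adapter"

print(adapter_for("EMR"))  # RESTAdapter
```

The point of the registry shape is that adding a fifth source means adding one entry, not new plumbing.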
So you can see how we can be flexible when dealing with these different kinds of applications and systems. And then it becomes important to be able to validate that you've established those connections correctly, and to be able to do it in a reliable and quick way. Because if you think about it, you could have hundreds of these different kinds of applications built out, and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications and put in, for instance, my patient's last name and their MRN, and make sure that I'm actually getting data back from that system. So it's a nice little sanity check as we're building out that data fabric to ensure that we're able to establish these connections appropriately. So turnkey adapters are fantastic, as you can see we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So why don't we go one step further here and talk about doing something custom or doing something innovative. And so it's important for users to have the ability to develop and go beyond what's an out-of-the-box or black-box approach, to be able to develop things that are specific to their data fabric, or specific to their particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you not only get an opportunity to view how we're establishing these connections or how we're building out these processes, but you have the opportunity to inject your own kind of processing, your own kinds of pipelines, into this. So as an example, you can leverage any number of different programming languages right within this pipeline. And so I went ahead and I injected Python. So Python is a very up-and-coming language, right? We see more and more developers turning towards Python to do their development. 
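A custom step of the kind described here, injected Python layered over a turnkey adapter call, might look roughly like this. The `invoke` interface, the field names, and the stub class are all hypothetical; this is a sketch of the pattern, not the actual IRIS API:

```python
# Sketch: organization-specific Python injected into the pipeline, wrapping an
# out-of-the-box adapter call. Interface and field names are hypothetical.
def fetch_and_refine(adapter, last_name: str, mrn: str) -> dict:
    raw = adapter.invoke({"lastName": last_name, "mrn": mrn})  # turnkey call
    wanted = ("patientId", "clinicalNotes", "riskScore")       # custom filter
    return {k: raw[k] for k in wanted if k in raw}

class StubAdapter:
    """Stand-in for a turnkey adapter so the sketch runs on its own."""
    def invoke(self, params: dict) -> dict:
        return {"patientId": params["mrn"], "clinicalNotes": [], "vendorExtra": 1}

print(fetch_and_refine(StubAdapter(), "Simmons", "32345"))
# {'patientId': '32345', 'clinicalNotes': []}
```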
So it's important that your data fabric supports those kinds of developers and users that have standardized on these kinds of programming languages. This particular script here, as you can see, actually calls out to our turnkey adapters. So we see a combination of out-of-the-box code that is provided in this data fabric platform from IRIS, combined with organization-specific or user-specific customizations that are included in this Python method. So it's a nice little combination of how do we bring the developer experience in and mix it with out-of-the-box capabilities that we can provide in a smart data fabric. >> Wow. >> Yeah, I'll pause. (laughs) >> It's a lot here. You know, actually- >> I can pause. >> If I could, if we just want to sort of play that back. So we went through the connect and the collect phase. >> Yes, we're going into refine. So it's a good place to stop. >> So before we get there, so we heard a lot about fine-grained security, which is crucial. We heard a lot about different data types, multiple formats. You've got, you know, the ability to bring in different dev tools. We heard about FHIR, which of course is big in healthcare. And that's the standard, and then SQL for traditional kind of structured data, and then web services like HTTP, you mentioned. And so you have a rich collection of capabilities within this single platform. >> Absolutely. And I think that's really important when you're dealing with a smart data fabric, because what you're effectively doing is consolidating all of your processing, all of your collection, into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges. So you've got to pack that platform with as many of these features as you can to consolidate that processing. >> All right, so now we're going into refinement. >> We're going into refinement. Exciting. (chuckles) So how do we actually do refinement? Where does refinement happen? 
And how does this whole thing end up being performant? Well, the key to all of that is this SDF coordinator, which stands for Smart Data Fabric coordinator. And what this particular process is doing is essentially orchestrating all of these calls to all of these different downstream systems. It's collecting that information, it's aggregating it, and it's refining it into that single payload that we saw get returned to the user. So really this coordinator is the main event when it comes to our data fabric. And in the IRIS platform, we actually allow users to build these coordinators using web-based tool sets to make it intuitive. So we can take a sneak peek at what that looks like. And as you can see, it follows a flowchart-like structure. So there's a start, there is an end, and then there are these different arrows that point to different activities throughout the business process. And so there are all these different actions that are being taken within our coordinator. You can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have the sync call at the end that is in charge of essentially making sure that all of those responses come back before we package them together and send them out. So this becomes really crucial when we're creating that data fabric. And you know, this is a very simple data fabric example where we're just grabbing data and we're consolidating it together. But you can have really complex orchestrators and coordinators that do any number of different things. So for instance, I could inject SQL logic into this, or SQL code; I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. 
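The scatter-gather shape described here, parallel calls to each source followed by a sync step that waits for every response before packaging one payload, can be sketched with plain asyncio. Everything below is simulated; only the idea of four downstream systems comes from the demo:

```python
import asyncio

# Toy version of the coordinator's fan-out: each downstream call is simulated
# with a short sleep; gather() plays the role of the sync step.
async def call_source(name: str) -> tuple:
    await asyncio.sleep(0.01)  # stands in for a real adapter round trip
    return name, {"from": name, "data": "..."}

async def coordinate(mrn: str) -> dict:
    systems = ["EMR", "ChartScript", "RiskGrouping", "HealthRecords"]
    # The "sync" barrier: wait for every response before packaging them.
    results = await asyncio.gather(*(call_source(s) for s in systems))
    return {"mrn": mrn, "sources": dict(results)}  # one refined payload

payload = asyncio.run(coordinate("32345"))
print(sorted(payload["sources"]))
```

A production coordinator would add the conditional logic, looping, and error trapping mentioned above around each call.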
So like I said, we have a very simple process here that's just calling out, grabbing all those different data elements from all those different data sources, and consolidating them. We'll look back at this coordinator in a second when we make this data fabric a bit smarter and start introducing that analytics piece to it. So this is in charge of the refinement. And so at this point in time we've looked at connections, collections, and refinements. And just to summarize what we've seen, 'cause I always like to go back and take a look at everything that we've seen: we have our initial API connection, we have our connections to our individual data sources, and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload. As you can imagine, there's a lot going on behind the scenes of a smart data fabric, right? There are all these different processes that are interacting. So it's really important that your smart data fabric platform has really good traceability, really good logging, 'cause you need to be able to know, you know, if there was an issue, where did that issue happen, in which connected process, and how did it affect the other processes that are related to it? In IRIS, we have this concept called a visual trace. And what our clients use this for is basically to be able to step through the entire history of a request, from when it initially came into the smart data fabric to when data was sent back out from that smart data fabric. So I didn't record the time, but I bet if you had recorded the time, you'd see it was this time that we sent that request in, and you can see my patient's name and their medical record number here, and you can see that that instigated four different calls to four different systems, and they're represented by these arrows going out. 
So we sent something to ChartScript, to our health record management system, to our clinical risk grouping application, and to my EMR through its FHIR server. So every outbound application gets a request, and we pull back all of those individual pieces of information from all of those different systems, and we bundle them together. And for my FHIR lovers, here's our FHIR bundle that we got back from our FHIR server. So this is a really good way of being able to validate that I am appropriately grabbing the data from all these different applications and then ultimately consolidating it into one payload. Now, we change this into a JSON format before we deliver it, but this is those data elements brought together. And this screen would also be used for being able to see things like error trapping, or errors that were thrown, alerts, warnings; developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one-stop shop for understanding what's happening behind the scenes with your data fabric. >> Sure, who did what, when, and where, what did the machine do, what went wrong, and where did that go wrong? Right at your fingertips. >> Right. And I'm a visual person, so a bunch of log files to me is not the most helpful, while being able to see that this happened at this time in this location gives me the understanding I need to actually troubleshoot a problem. >> This business orchestration piece, can you say a little bit more about that? How are people using it? What's the business impact of the business orchestration? >> The business orchestration, especially in the smart data fabric, is really that crucial part of being able to create a smart data fabric. So think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data, right? 
It's bringing data in, it's analyzing that information, it's transforming data that's in a format your consumer's otherwise not going to understand, and it's doing any additional injection of custom logic. So really your coordinator, or that orchestrator that sits in the middle, is the brains behind your smart data fabric. >> And this is available today? It all works? >> It's all available today. Yeah, it all works. And we have a number of clients that are using this technology to support these kinds of use cases. >> Awesome demo. Anything else you want to show us? >> Well, we can keep going. I have a lot to say, but really this is our data fabric. The core competency of IRIS is making it smart, right? So I won't spend too much time on this, but essentially if we go back to our coordinator here, we can see here's that original pipeline that we saw, where we're pulling data from all these different systems and we're collecting it and we're sending it out. But then we see two more activities at the end here, which involve getting a readmission prediction and then returning a prediction. So we can not only deliver data back as part of a smart data fabric, but we can also deliver insights back to users and consumers based on data that we've aggregated as part of a smart data fabric. So in this scenario, we're actually taking all that data that we just looked at and running it through a machine learning model that exists within the smart data fabric pipeline, producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days, which is a typical problem that we see in the healthcare space. So what's really exciting about what we're doing in the IRIS world is we're bringing analytics close to the data with integrated ML. So in this scenario we're actually creating the model, training the model, and then executing the model directly within the IRIS platform. 
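The create/train/execute flow outlined here maps onto IntegratedML-style SQL statements. The statement shapes below follow InterSystems IntegratedML's SQL-like syntax, but the table (`Encounters`) and target column (`WillReadmit30d`) are hypothetical names chosen for illustration:

```python
# SQL-like statements in the spirit of InterSystems IntegratedML; the schema
# names here are hypothetical, not taken from the demo.
def readmission_statements(model: str = "ReadmissionModel") -> list:
    return [
        f"CREATE MODEL {model} PREDICTING (WillReadmit30d) FROM Encounters",
        f"TRAIN MODEL {model}",
        f"SELECT PatientId, PREDICT({model}) AS ReadmitRisk FROM Encounters",
    ]

for stmt in readmission_statements():
    print(stmt)
```

In a real deployment these would be executed against the IRIS SQL engine, so the model lives and runs next to the data.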
So there's no shuffling of data, there are no external connections to make this happen. And it doesn't really require having a PhD in data science to understand how to do that. It leverages really basic, SQL-like syntax to be able to construct and execute these predictions. So, it's going one step further than the traditional data fabric example to introduce this ability to deliver actionable insights to our users based on the data that we've brought together. >> Well, that readmission probability is huge, right? Because it directly affects the cost for the provider and the patient, you know. So if you can anticipate the probability of readmission and either do things at that moment or, you know, as an outpatient perhaps, to minimize the probability, then that's huge. That drops right to the bottom line. >> Absolutely. And that really brings us from that data fabric to that smart data fabric at the end of the day, which is what makes this so exciting. >> Awesome demo. >> Thank you! >> Jess, are you cool if people want to get in touch with you? Can they do that? >> Oh yes, absolutely. So you can find me on LinkedIn, Jessica Jowdy, and we'd love to hear from you. I always love talking about this topic, so we'd be happy to engage on that. >> Great stuff. Thank you, Jessica, appreciate it. >> Thank you so much. >> Okay, don't go away, because in the next segment we're going to dig into the use cases where data fabric is driving business value. Stay right there. (inspirational music) (music fades)

Published: Feb 22, 2023



How to Make a Data Fabric "Smart": A Technical Demo With Jess Jowdy


 

>> Okay, so now that we've heard Scott talk about smart data fabrics, it's time to see this in action. Right now we're joined by Jess Jowdy, who's the manager of Healthcare Field Engineering at InterSystems. She's going to give a demo of how smart data fabrics actually work, and she's going to show how embedding a wide range of analytics capabilities including data exploration, business intelligence natural language processing, and machine learning directly within the fabric, makes it faster and easier for organizations to gain new insights and power intelligence, predictive and prescriptive services and applications. Now, according to InterSystems, smart data fabrics are applicable across many industries from financial services to supply chain to healthcare and more. Jess today is going to be speaking through the lens of a healthcare focused demo. Don't worry, Joe Lichtenberg will get into some of the other use cases that you're probably interested in hearing about. That will be in our third segment, but for now let's turn it over to Jess. Jess, good to see you. >> Hi. Yeah, thank you so much for having me. And so for this demo we're really going to be bucketing these features of a smart data fabric into four different segments. We're going to be dealing with connections, collections, refinements and analysis. And so we'll see that throughout the demo as we go. So without further ado, let's just go ahead and jump into this demo and you'll see my screen pop up here. I actually like to start at the end of the demo. So I like to begin by illustrating what an end user's going to see and don't mind the screen 'cause I gave you a little sneak peek of what's about to happen. But essentially what I'm going to be doing is using Postman to simulate a call from an external application. So we talked about being in the healthcare industry. 
This could be for instance, a mobile application that a patient is using to view an aggregated summary of information across that patient's continuity of care or some other kind of application. So we might be pulling information in this case from an electronic medical record. We might be grabbing clinical history from that. We might be grabbing clinical notes from a medical transcription software or adverse reaction warnings from a clinical risk grouping application and so much more. So I'm really going to be assimilating a patient logging on in on their phone and retrieving this information through this Postman call. So what I'm going to do is I'm just going to hit send, I've already preloaded everything here and I'm going to be looking for information where the last name of this patient is Simmons and their medical record number their patient identifier in the system is 32345. And so as you can see I have this single JSON payload that showed up here of just relevant clinical information for my patient whose last name is Simmons all within a single response. So fantastic, right? Typically though when we see responses that look like this there is an assumption that this service is interacting with a single backend system and that single backend system is in charge of packaging that information up and returning it back to this caller. But in a smart data fabric architecture we're able to expand the scope to handle information across different, in this case, clinical applications. So how did this actually happen? Let's peel back another layer and really take a look at what happened in the background. What you're looking at here is our mission control center for our smart data fabric. On the left we have our APIs that allow users to interact with particular services. On the right we have our connections to our different data silos. 
And in the middle here we have our data fabric coordinator which is going to be in charge of this refinement and analysis those key pieces of our smart data fabric. So let's look back and think about the example we just showed. I received an inbound request for information for a patient whose last name is Simmons. My end user is requesting to connect to that service and that's happening here at my patient data retrieval API location. Users can define any number of different services and APIs depending on their use cases. And to that end we do also support full lifecycle API management within this platform. When you're dealing with APIs I always like to make a little shout out on this that you really want to make sure you have enough like a granular enough security model to handle and limit which APIs and which services a consumer can interact with. In this IRIS platform, which we're talking about today we have a very granular role-based security model that allows you to handle that, but it's really important in a smart data fabric to consider who's accessing your data and in what contact. >> Can I just interrupt you for a second? >> Yeah, please. >> So you were showing on the left hand side of the demo a couple of APIs. I presume that can be a very long list. I mean, what do you see as typical? >> I mean you can have hundreds of these APIs depending on what services an organization is serving up for their consumers. So yeah, we've seen hundreds of these services listed here. >> So my question is, obviously security is critical in the healthcare industry and API securities are really hot topic these days. How do you deal with that? >> Yeah, and I think API security is interesting 'cause it can happen at so many layers. So there's interactions with the API itself. So can I even see this API and leverage it? And then within an API call, you then have to deal with all right, which end points or what kind of interactions within that API am I allowed to do? 
What data am I getting back? And with healthcare data, the whole idea of consent to see certain pieces of data is critical. So the way that we handle that is, like I said, same thing at different layers. There is access to a particular API, which can happen within the IRIS product and also we see it happening with an API management layer, which has become a really hot topic with a lot of organizations. And then when it comes to data security, that really happens under the hood within your smart data fabric. So that role-based access control becomes very important in assigning, you know, roles and permissions to certain pieces of information. Getting that granular becomes the cornerstone of security. >> And that's been designed in, >> Absolutely, yes. it's not a bolt-on as they like to say. Okay, can we get into collect now? >> Of course, we're going to move on to the collection piece at this point in time, which involves pulling information from each of my different data silos to create an overall aggregated record. So commonly each data source requires a different method for establishing connections and collecting this information. So for instance, interactions with an EMR may require leveraging a standard healthcare messaging format like FIRE, interactions with a homegrown enterprise data warehouse for instance may use SQL for a cloud-based solutions managed by a vendor. They may only allow you to use web service calls to pull data. So it's really important that your data fabric platform that you're using has the flexibility to connect to all of these different systems and and applications. And I'm about to log out so I'm going to keep my session going here. So therefore it's incredibly important that your data fabric has the flexibility to connect to all these different kinds of applications and data sources and all these different kinds of formats and over all of these different kinds of protocols. So let's think back on our example here. 
I had four different applications that I was requesting information from to create that payload that we saw initially. Those are listed here under this operations section. So these are going out and connecting to downstream systems to pull information into my smart data fabric. What's great about the IRIS platform is that it has an embedded interoperability platform. So there are all of these native adapters that can support the common connections that we see for different kinds of applications. Whether you're using REST or SOAP or SQL or FTP, regardless of the protocol, there's an adapter to help you work with that. And we also think of the types of formats that we typically see data coming in as: in healthcare we have HL7, we have FHIR, we have CCDs across the industry. JSON is, you know, really hitting the market strong now, and there are XML payloads, flat files. We need to be able to handle all of these different kinds of formats over these different kinds of protocols. So to illustrate that, if I click through these, when I select a particular connection, on the right side panel I'm going to see the different settings associated with that particular connection that allow me to collect information back into my smart data fabric. In this scenario, my connection to my chart script application communicates over a SOAP connection. When I'm grabbing information from my clinical risk grouping application, I'm using a SQL-based connection. When I'm connecting to my EMR, I'm leveraging a standard healthcare messaging format known as FHIR, which is a REST-based protocol. And then when I'm working with my health record management system, I'm leveraging a standard HTTP adapter. So you can see how we can be flexible when dealing with these different kinds of applications and systems. And then it becomes important to be able to validate that you've established those connections correctly, and to be able to do it in a reliable and quick way.
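The "one adapter per protocol" idea just described can be pictured as a small dispatch table. The handlers below are stubs standing in for IRIS's native adapters, and the endpoints and system names are placeholders mirroring the demo, not real configuration.

```python
# Hypothetical sketch: each downstream system registers the protocol it
# speaks, and a dispatcher routes collection through the matching adapter.
def soap_fetch(cfg, query): return {"via": "SOAP", "endpoint": cfg["url"], **query}
def sql_fetch(cfg, query):  return {"via": "SQL", "dsn": cfg["dsn"], **query}
def fhir_fetch(cfg, query): return {"via": "FHIR/REST", "base": cfg["base"], **query}
def http_fetch(cfg, query): return {"via": "HTTP", "url": cfg["url"], **query}

CONNECTIONS = {
    "chart_script":   {"protocol": "soap", "url": "https://example.org/charts"},
    "clinical_risk":  {"protocol": "sql",  "dsn": "risk-warehouse"},
    "emr":            {"protocol": "fhir", "base": "https://example.org/fhir"},
    "health_records": {"protocol": "http", "url": "https://example.org/hrm"},
}

ADAPTERS = {"soap": soap_fetch, "sql": sql_fetch,
            "fhir": fhir_fetch, "http": http_fetch}

def collect(source, query):
    """Look up the source's connection settings and use the right adapter."""
    cfg = CONNECTIONS[source]
    return ADAPTERS[cfg["protocol"]](cfg, query)

print(collect("emr", {"last_name": "Simmons"})["via"])  # FHIR/REST
```

Adding a new downstream system then becomes a registration step (one entry in `CONNECTIONS`) rather than new plumbing code, which is the property the turnkey adapters provide.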
Because if you think about it, you could have hundreds of these different kinds of applications built out, and you want to make sure that you're maintaining and understanding those connections. So I can actually go ahead and test one of these applications, put in, for instance, my patient's last name and their MRN, and make sure that I'm actually getting data back from that system. So it's a nice little sanity check as we're building out the data fabric to ensure that we're able to establish these connections appropriately. So turnkey adapters are fantastic, and as you can see we're leveraging them all here, but sometimes these connections are going to require going one step further and building something really specific for an application. So why don't we go one step further here and talk about doing something custom or doing something innovative. It's important for users to have the ability to develop and go beyond an out-of-the-box or black-box approach, to be able to develop things that are specific to their data fabric or specific to a particular connection. In this scenario, the IRIS data platform gives users access to the entire underlying code base. So you not only get an opportunity to view how we're establishing these connections or how we're building out these processes, but you have the opportunity to inject your own kind of processing, your own kinds of pipelines, into this. As an example, you can leverage any number of different programming languages right within this pipeline, and so I went ahead and I injected Python. Python is a very up-and-coming language, right? We see more and more developers turning towards Python to do their development. So it's important that your data fabric supports those kinds of developers and users that have standardized on these kinds of programming languages. This particular script here, as you can see, actually calls out to our turnkey adapters.
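An injected Python step of the kind described here, platform-provided fetch plus organization-specific refinement, might look roughly like this. `turnkey_fetch` is a stand-in for the real adapter call, and the MRN, field names, and normalization logic are invented for illustration.

```python
# Hypothetical sketch of a user-written Python step inside a fabric pipeline.
def turnkey_fetch(last_name, mrn):
    # Stand-in for the out-of-the-box adapter call the demo's script makes.
    return {"name": last_name.upper(), "mrn": mrn, "dob": "1970-01-01"}

def custom_patient_step(last_name, mrn):
    """Combine the turnkey fetch with organization-specific refinement."""
    record = turnkey_fetch(last_name, mrn)
    # Custom logic a team might inject: normalize casing, tag provenance.
    record["name"] = record["name"].title()
    record["source"] = "emr"
    return record

print(custom_patient_step("simmons", "32345"))
```

The pattern to notice is the mix: the platform supplies the connection, and the injected method wraps it with whatever processing is specific to the organization.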
So we see a combination of out-of-the-box code that is provided in this data fabric platform from IRIS, combined with organization-specific or user-specific customizations that are included in this Python method. So it's a nice little combination of how we bring the developer experience in and mix it with out-of-the-box capabilities that we can provide in a smart data fabric. >> Wow. >> Yeah, I'll pause. >> It's a lot here. You know, actually, if I could, >> I can pause. >> I just want to sort of play that back. So we went through the connect and the collect phase. >> And the collect, yes, we're going into refine. So it's a good place to stop. >> Yeah, so before we get there, we heard a lot about fine-grained security, which is crucial. We heard a lot about different data types, multiple formats. You've got, you know, the ability to bring in different dev tools. We heard about FHIR, which of course is big in healthcare. >> Absolutely. >> And that's the standard, and then SQL for traditional kind of structured data, and then web services like HTTP, you mentioned. And so you have a rich collection of capabilities within this single platform. >> Absolutely, and I think that's really important when you're dealing with a smart data fabric, because what you're effectively doing is consolidating all of your processing, all of your collection, into a single platform. So that platform needs to be able to handle any number of different kinds of scenarios and technical challenges. So you've got to pack that platform with as many of these features as you can to consolidate that processing. >> All right, so now we're going into refine. >> We're going into refinement, exciting. So how do we actually do refinement? Where does refinement happen, and how does this whole thing end up being performant? Well, the key to all of that is this SDF coordinator, which stands for smart data fabric coordinator.
And what this particular process is doing is essentially orchestrating all of these calls to all of these different downstream systems. It's collecting that information, it's aggregating it, and it's refining it into that single payload that we saw get returned to the user. So really this coordinator is the main event when it comes to our data fabric. And in the IRIS platform, we actually allow users to build these coordinators using web-based tool sets to make it intuitive. So we can take a sneak peek at what that looks like, and as you can see, it follows a flowchart-like structure. So there's a start, there is an end, and then there are these different arrows that point to different activities throughout the business process. And so there are all these different actions being taken within our coordinator. You can see an action for each of the calls to each of our different data sources to go retrieve information. And then we also have the sync call at the end that is in charge of essentially making sure that all of those responses come back before we package them together and send them out. So this becomes really crucial when we're creating that data fabric. And you know, this is a very simple data fabric example where we're just grabbing data and we're consolidating it together. But you can have really complex orchestrators and coordinators that do any number of different things. So for instance, I could inject SQL logic into this, I can have conditional logic, I can do looping, I can do error trapping and handling. So we're talking about a whole number of different features that can be included in this coordinator. So like I said, we have a really very simple process here that's just calling out, grabbing all those different data elements from all those different data sources, and consolidating it.
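A toy version of this coordinator pattern, parallel calls to each source, a sync step that waits for all responses, then refinement into one payload, can be sketched with Python's asyncio. The source names and latencies here are invented stand-ins for the real adapters; IRIS builds this visually, not in application code.

```python
import asyncio

# Toy coordinator: fan out one call per downstream system, synchronize on
# all of them coming back, then aggregate into a single payload.
async def fetch(source, delay):
    await asyncio.sleep(delay)            # simulated network latency
    return source, {"data": f"{source} results"}

async def coordinate(patient):
    sources = {"chart_script": 0.03, "clinical_risk": 0.01,
               "emr_fhir": 0.02, "health_records": 0.01}
    # Launch all four calls concurrently; gather() is the "sync" step that
    # waits until every response has arrived.
    results = await asyncio.gather(*(fetch(s, d) for s, d in sources.items()))
    # Refinement: bundle everything into one payload for the caller.
    return {"patient": patient, **dict(results)}

payload = asyncio.run(coordinate({"last_name": "Simmons"}))
print(sorted(payload))  # the four source keys plus the patient key
```

Because the calls run concurrently, the end-to-end latency tracks the slowest source rather than the sum of all of them, which is what makes this style of orchestration performant.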
We'll look back at this coordinator in a second, when we make this data fabric a bit smarter and start introducing that analytics piece to it. So this is in charge of the refinement. And so at this point in time we've looked at connections, collections, and refinements. And just to summarize what we've seen, 'cause I always like to go back and take a look at everything that we've seen: we have our initial API connection, we have our connections to our individual data sources, and we have our coordinators there in the middle that are in charge of collecting the data and refining it into a single payload. As you can imagine, there's a lot going on behind the scenes of a smart data fabric, right? There are all these different processes that are interacting. So it's really important that your smart data fabric platform has really good traceability and really good logging, 'cause you need to be able to know, you know, if there was an issue, where did that issue happen, in which connected process, and how did it affect the other processes that are related to it. In IRIS, we have this concept called a visual trace. And what our clients use this for is basically to be able to step through the entire history of a request, from when it initially came into the smart data fabric to when data was sent back out from that smart data fabric. So I didn't record the time, but I bet if you recorded the time, it was this time that we sent that request in. And you can see my patient's name and their medical record number here, and you can see that that instigated four different calls to four different systems, represented by these arrows going out. So we sent something to chart script, to our health record management system, to our clinical risk grouping application, and into my EMR through their FHIR server.
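The visual-trace idea, an ordered record of every hop a request takes through the fabric, can be approximated in a few lines. This is an editor's sketch of the concept, not the IRIS feature itself; IRIS renders the history graphically, while here we simply keep an ordered event list.

```python
import time

# Minimal stand-in for a per-request trace: record every inbound and
# outbound hop so a failure can be pinned to a specific step.
class Trace:
    def __init__(self, request_id):
        self.request_id = request_id
        self.events = []

    def record(self, component, direction, detail=""):
        self.events.append((time.time(), component, direction, detail))

trace = Trace("req-001")
trace.record("api", "in", "patient lookup: Simmons")
for system in ("chart_script", "health_records", "clinical_risk", "emr_fhir"):
    trace.record(system, "out")   # one outbound call per downstream system
    trace.record(system, "in")    # ...and its response coming back
trace.record("api", "out", "consolidated JSON payload")

print(len(trace.events))  # 10 events: 1 in, 4 round trips, 1 out
```

Errors, warnings, and developer log statements would simply be more event types in the same ordered stream, which is what makes a single trace view a one-stop shop for troubleshooting.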
So every outbound application gets a request, and we pull back all of those individual pieces of information from all of those different systems and bundle them together. And for my FHIR lovers, here's our FHIR bundle that we got back from our FHIR server. So this is a really good way of being able to validate that I am appropriately grabbing the data from all these different applications and then ultimately consolidating it into one payload. Now, we change this into a JSON format before we deliver it, but this is those data elements brought together. And this screen would also be used for being able to see things like error trapping, errors that were thrown, alerts, warnings; developers might put log statements in just to validate that certain pieces of code are executing. So this really becomes the one-stop shop for understanding what's happening behind the scenes with your data fabric. >> So you can see who did what, when, and where, what did the machine do, what went wrong, and where did that go wrong? >> Exactly. >> Right at your fingertips. >> Right, and I'm a visual person, so a bunch of log files to me is not the most helpful. Being able to see that this happened at this time in this location gives me the understanding I need to actually troubleshoot a problem. >> This business orchestration piece, can you say a little bit more about that? How are people using it? What's the business impact of the business orchestration? >> The business orchestration, especially in the smart data fabric, is really that crucial part of being able to create a smart data fabric. So think of your business orchestrator as doing the heavy lifting of any kind of processing that involves data, right? It's bringing data in, it's analyzing that information, it's transforming that data into a format that your consumer is going to understand, and it's doing any additional injection of custom logic.
So really your coordinator or that orchestrator that sits in the middle is the brains behind your smart data fabric. >> And this is available today? This all works? >> It's all available today. Yeah, it all works. And we have a number of clients that are using this technology to support these kinds of use cases. >> Awesome demo. Anything else you want to show us? >> Well we can keep going. 'Cause right now, I mean we can, oh, we're at 18 minutes. God help us. You can cut some of this. (laughs) I have a lot to say, but really this is our data fabric. The core competency of IRIS is making it smart, right? So I won't spend too much time on this but essentially if we go back to our coordinator here we can see here's that original that pipeline that we saw where we're pulling data from all these different systems and we're collecting it and we're sending it out. But then we see two more at the end here which involves getting a readmission prediction and then returning a prediction. So we can not only deliver data back as part of a smart data fabric but we can also deliver insights back to users and consumers based on data that we've aggregated as part of a smart data fabric. So in this scenario, we're actually taking all that data that we just looked at and we're running it through a machine learning model that exists within the smart data fabric pipeline and producing a readmission score to determine if this particular patient is at risk for readmission within the next 30 days. Which is a typical problem that we see in the healthcare space. So what's really exciting about what we're doing in the IRIS world is we're bringing analytics close to the data with integrated ML. So in this scenario we're actually creating the model, training the model, and then executing the model directly within the IRIS platform. So there's no shuffling of data, there's no external connections to make this happen. 
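For readers curious what "SQL-like syntax" means here, the statements below follow the general shape of InterSystems' IntegratedML lifecycle (create, train, predict). The table and column names are invented for illustration, and the exact syntax should be checked against the product documentation; the Python wrapper is just a convenient way to compose the statements.

```python
# Illustrative sketch of the SQL-like, in-platform ML lifecycle described
# above. A real deployment would execute these statements against IRIS;
# here we only compose them, with hypothetical table and column names.
def readmission_model_sql(model="ReadmissionModel",
                          table="Encounters", target="WillReadmit30d"):
    return [
        f"CREATE MODEL {model} PREDICTING ({target}) FROM {table}",
        f"TRAIN MODEL {model}",
        f"SELECT PatientId, PREDICT({model}) AS ReadmitRisk FROM {table}",
    ]

for stmt in readmission_model_sql():
    print(stmt)
```

The notable design point is that definition, training, and scoring all happen where the aggregated data already lives, which is why no data shuffling or external connection is needed.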
And it doesn't really require having a PhD in data science to understand how to do that. It leverages really basic, SQL-like syntax to construct and execute these predictions. So it's going one step further than the traditional data fabric example to introduce this ability to deliver actionable insights to our users based on the data that we've brought together. >> Well, that readmission probability is huge. >> Yes. >> Right, because it directly affects the cost for the provider and the patient, you know. So if you can anticipate the probability of readmission, and either do things at that moment or, you know, as an outpatient perhaps, to minimize the probability, then that's huge. That drops right to the bottom line. >> Absolutely, absolutely. And that really brings us from that data fabric to that smart data fabric at the end of the day, which is what makes this so exciting. >> Awesome demo. >> Thank you. >> Fantastic. Are you cool if people want to get in touch with you? >> Oh yes, absolutely. You can find me on LinkedIn, Jessica Jowdy, and we'd love to hear from you. I always love talking about this topic, so I'd be happy to engage on that. >> Great stuff, thank you Jess, appreciate it. >> Thank you so much. >> Okay, don't go away, because in the next segment we're going to dig into the use cases where data fabric is driving business value. Stay right there.

Published Date : Feb 15 2023



A Day in the Life of Data with the HPE Ezmeral Data Fabric


 

>> Welcome, everyone, to A Day in the Life of Data with the HPE Ezmeral Data Fabric. The session is being recorded and will be available for replay at a later time, when you want to come back and view it again. Feel free to add any questions that you have into the chat; Chad and I will be more than willing to answer your questions. And now let me turn it over to Jimmy Bates. >> Thanks. Let me go ahead and share my screen here and we'll get started. Hey everyone. Once again, my name is Jimmy Bates. I'm a director of solutions architecture here for HPE Ezmeral in the Americas. Today I'd like to walk you through a journey: how our everyday life is evolving, how everything about our world continues to grow more connected, and how here at HPE we support the data that represents that digital evolution for our customers with the HPE Ezmeral Data Fabric. To start with, let's define that term, data. The concept of data can be simplified to a record of life's events. No matter if it's personal, professional, or mechanical in nature, data is just records that represent and describe what has happened, what is happening, or what we think will happen. And it turns out the more complete a record we have of these events, the easier it is to figure out what comes next. I like to refer to that as the omnipotence protocol. Let's look at this from the personal perspective of two very different people. Let me introduce you to James. He's a native citizen of the digital world, and he's been a career professional in the IT world for years. He's always on, always connected. He loves to get all the information he needs on a smartphone. He works constantly with analytics. He predicts what his customers need, what they want, where they are, and how best to reach them. He's fully embraced the use of data in his life. This is Sueska. She's a bit of an opposite to James.
She hasn't yet immigrated to our digital world. She's been dealing with the changes that are prevalent in our times, and she started a new business that gives her customers the option of expressing their personalities in the masks they wear. She wants to make sure her customers can upload images, logos, and designs so she can deliver a customized mask that brightens their interactions with others while keeping them safe as they go about their day. But she needs a crash course in the digital journey. She's recently, as most of us have, transitioned from an office culture to a work-from-home culture, and she wants to continue to grow that revenue venture on the side. At the core of these personalities is a journey that is representative of a common challenge we're all facing today. Our world has been steadily shrinking as our ability to reach out to one another has steadily increased. We're all on that journey together: to know more about what is happening, to be connected to what our business is doing, to be instantly responsive to our customer needs, and to deliver personalized service to every individual. At Ezmeral, we see this across every industry: the challenge of providing tailored experiences to potential customers in a connected world; of providing constant information on the deliveries we've requested, or an easier commute to our destination; of changing inventories to just-in-time arrival for our fabrications; of identifying quality issues in real time to alter the production of each product so it's tailored to the request of the end user; of delivering energy in smarter, more efficient ways, without injury, while protecting the environment; and of identifying emerging medical threats and delivering personalized treatments safely. And at the core of all of these changes, all of these different industries, is data.
If you look at the major technology trends, they've been evolving down this path for some time now. We're well into our cloud journey. The mobile platform world is now just part of our core strategies. IoT is feeding constant streams of data, often over those mobile platforms, and the edge is increasingly just part of our core. All of this, combined with the massive amounts of data that's becoming available through it, is driving autonomous solutions with machine learning and AI. This is just one aspect of the data journey that we're on, but for success, it's got to be paired with action. When we take a look at James and Sueska, we can start to see, with the investments in those actions, how they're realizing their goals. Services efforts are focused on delivering new data-driven applications in new ways that are smaller in nature and rapidly iterate, to respond to the digital needs of our new world; on containerization, to deploy and manage those apps anywhere in our connected world, where they need to be secure; on real-time streaming architectures from the beginning, to allow for continual interactions with our changing customer demands; and all of this, especially in our current environment, while running cost reduction initiatives. This is just the current world that our solutions must live in. With that framework in mind, I'd like to take the remainder of our time and walk through some of the use cases where we at HPE have helped organizations through this journey with the Ezmeral data fabric. Let's start with what's happening in the mobile world. In fact, the HPE Ezmeral data fabric is being used by a number of companies to provide infinitely personalized experiences.
In this case, it could be James, it could be Sueska, it could be anyone that opens up their smartphone in the morning, quickly checking what's transpiring in the world with a selection of curated, relevant articles, images, and videos provided by data-driven algorithmic workloads. All that data, the logs, the recommendations, and the delivery of those recommendations are handled by a variety of companies using HPE Ezmeral software, which provides a very personalized experience for our users. In addition, other companies monitor the service quality of those mobile devices to ensure optimized connectivity as users move throughout their day. The same is true for digital communication, for video communication, what we're doing right now, especially in these days when it's our primary method of connecting as we deal with limited physical engagements. There's been a clear spike in the usage of these types of services. HPE Ezmeral is helping a number of these companies deliver on real-time telemetry analysis, predicting demand, monitoring latency and user experience, and analyzing in real time, responding with autonomous adjustments to maintain pleasant experiences for all participants involved. Another area where the HPE Ezmeral data fabric is playing a crucial role is in the daily experience inside our automobiles. We invest a lot of ourselves in our cars. We expect tailored experiences that help us stay safe and connected as we move from one destination to another. In the areas of autonomous driving and the connected car, a number of major car companies in the world are using our data fabric to take autonomous driving to the next level, effectively collecting all data from sensors and cameras and then feeding that back into a global data fabric, so that the engineers that develop cars can train the next generation of driving algorithms that make our driving experience safer and more autonomous going forward.
Now let's take a look at a different mode of travel. The airline industry is being impacted very differently today from the car companies. With our software, we help airlines, travel agencies, and even us as consumers deal with pricing calculations and challenges. With air traffic services, we deliver services around route predictions, on-time arrivals, weather patterns, and tagging and tracking luggage. We help people with flight connections and with figuring out the best options for their travel. We collect mountains of data, secure it in a global data fabric, and provide it back in an analyzed form. With it, this stressed industry can gain some very interesting insights and provide competitive offerings and better services to us as travelers. This is also true for powering biometrics at scale. We work with the biggest biometrics databases in the world, providing the back end for their enormous biometric authentication pursuit. Just to give you a rough idea, biometric authentication is done with a number of different data points: fingerprints, iris scans, numerous facial features. All of these data points are captured for every individual and uploaded into the database, such that when a user requests services, their biometrics can be pulled and validated in seconds. From a scale perspective, they're onboarding 1 million people a day, more than 200 million a year, with a hundred percent business continuity and the option to multi-master a global data fabric as needed, ensuring that users will have no issues in securely accessing their pension payouts, medical services, or whatever other types of services they may be guaranteed. Pivoting to a very different industry: even agriculture is being impacted in digital ways. Using the HPE Ezmeral data fabric, we help farmers become more digital.
We help them predict weather patterns and optimize seed production. We even help seed producers create custom seed for very specific weather and ground conditions. We combine all of these things to help optimize production and ensure we can feed future generations. In some cases, all of these data sources collected at the edge can be provided back to insurance companies to help farmers issue claims when micro weather patterns affect farms in negative ways. We all benefit from optimized farming, and the HPE Ezmeral data fabric is there to assist in that journey. We provide the framework and the workload guidance to collect relevant data, analyze it, and optimize food production. Our customers demonstrate that the agricultural industry is most definitely immigrating to our digital world. Now that we've got the food, we need to ship it, along with everything else, all over the world. Ezmeral software can be found in action in many of the largest logistics companies in the world. I mean, just tracking things with greater efficiency can lead to astounding insights. What flights and ships did the package take? What hands held it along its journey? What weather conditions did it encounter? What customs office did it go through, and how much of it is requested and being delivered? This, along with hundreds of other telemetry points, can be used to provide very accurate trade and economic predictions around what's going on with trade in the world. These data sets are being used very intensively to understand economic conditions and plan for the consequences of future events. We also help answer more basic questions for shipping containers, like: where is my container located? Is my container still on the correct ship? Surprisingly, this helps cut down on those pesky little events like lost containers.
It's, it's the never ending patterns found with other patterns that none of it can be fully understood unless the micro is maintained in context to the macro. You can't really understand these small patterns unless you maintain that overall understanding of the entire DNA structure to help the HVS mold data fabric can be found across every aspect of the medical field. Most recently was there providing the software framework to collect genomic sequencing, landing it in the data fabric, empowering connected availability for analysis to predict and find patterns of significance to shorten the effort it takes to identify those potential triggers and make things like vaccines become becoming available. In record time. >>Data is about people at HPE asthma. We keep people connected all around the world. We do this in a variety of ways. We we've already looked at several of the ways that that happens. We help you find data. You need, we help you get from point a to point B. We help make sure those birthday gifts show up on time. Some other interesting ways we connect people via recipes, through social platforms and online services. We help people connect to that new recipe that is unexpected, but may just be the kind of thing you need for dinner tonight at HPDs where we provide our customers with the power to deliver services that are tailored to the individual from edge to core, from containers to cloud. Many of the services you encounter everyday are delivered to you through an HV as oral global data fabric. You may not see it, but we're there in the morning in the morning when you get up and we're there in the evening. Um, when you wind down, um, at HPE as role, we make data globally available across everywhere that your business needs to go. Um, I'd like to thank everyone, uh, for the time that you've given us today. And I'd like to turn it back over and open up the floor for questions at this time, >>Jimmy, here's a question. 
What are the ways consumers can get started with HPS >>The fabric? Well, um, uh, there's several ways to get started, right? We, we, uh, first off we have software available that you can download that there's extensive documentation and use cases posted on our website. Um, uh, we have services that we offer, like, um, assessment services that can come in and help you assess the, the data challenges that you're having, whether you're, you're just dealing with a scale issue, a security issue, or trying to migrate to a more containerized approach. We have a services to help you come in, assess that aspect. Um, we have a getting started bundles, um, and we have, um, so there's all kinds of services that, that help you get started on your journey. So what >>Does a typical first deployment look like? >>Well, that's, that's a very, very interesting question. Um, a typical first deployment, it really kind of varies depending on where you're at in the material. Are you James? Are you, um, um, Cisco, right? It really depends on, on where you're at in your journey. Um, but a typical deployment, um, is, is, is involved. Uh, we, we like to come in, we we'd like to do workshops, really understand your specific challenges and problems so that we can determine what solutions are best for you. Um, that to take a look at when we kind of settle on that we, we, um, the first deployment, uh, is, um, there's typically, um, a deployment of, uh, a, uh, a service offering, um, w with a software to kind of get you started along the way we kind of bundle that aspect. Um, as you move forward, if you're more mature and you already have existing container solutions, you already have existing, large scale data aspects of it. Um, it's really about the specific use case of your current problem that you're dealing with. Um, every solution, um, is tailored towards the individual challenges and problems that, that each one of us are facing. >>I break, they mentioned as part of the asthma family. 
So how does data fabric pair with the other solutions within Ezmeral? >> Well, I like to say there are three main areas from a software standpoint; when you count some of our offerings with the GreenLake solution, there are really four main areas with Ezmeral. There's the data fabric offering, which is really focused on delivering data at scale for AI/ML workloads, for big data workloads, for containerized workloads. There is the Ezmeral Container Platform, which solves a lot of the same problems but is focused more on compute delivery in a hundred percent Kubernetes environment. We also have security offerings, which, in this containerized world, help you with the different aspects of securing those applications, so that when containerized applications move from one framework or one infrastructure to another, the security goes with those applications and they can operate in a zero trust environment. And of course, all of this has the option of being available to you as a service, including the hardware, through some of our GreenLake offerings. So those are the areas that pair with the HPE data fabric when you look at the entire Ezmeral portfolio. >> Well, thanks, Jimmy, really appreciate it. That's all the questions we have right now. Is there anything that you'd like to close with? >> You know, I'm honored to be here at HPE. I really find it amazing as we work with our customers, solving some really challenging problems that are core to their business. It's always an interesting day in the office, because every problem is different; every problem is tailored to the specific challenges that our customers face.
What we went over today covers a lot of the general areas and the general concepts; we're all on this journey together, but the devil's always in the details. It's about understanding the specific challenges in your organization, and Ezmeral software is designed to help adapt and empower the growth of your company, so that you're focused on your business rather than the complexity of delivering services across this connected world. That's what Ezmeral takes off your plate, so that you don't have to worry about it. It just works, and you can focus on the things that impact your business more directly. >> Okay. Well, we really thank everyone for coming today, and hope you got an idea of how data fabric can begin to help your business with its analytics. Thank you for coming. Thanks.

Published Date : Mar 17 2021


Ethernet Storage Fabric with Mellanox


 

(light music) >> Hi, I'm Stu Miniman here at theCUBE studio in Palo Alto in the center of Silicon Valley. Happy to welcome back first of all a many time guest at theCUBE, Kevin Deierling with Mellanox, and also someone I've known for many years, but the first time we've actually gotten under the lights in front of the cameras, Marty Lans with Hewlett-Packard Enterprise. Here to talk a lot about networking today and not just networking but storage networking. So, you know, kind of one of the dark corners of the IT world that... There's those of us that have known each other for decades it seems. And, but you know, pretty critical to a lot of what goes on in the environment. Kevin, you know, let's start with you. You know, we've caught up with Mellanox a bunch. Obviously we do a lot of video with HPE. We'll be at the Discover show in Europe coming soon. But why'd you bring Marty along to talk about some of this stuff? >> Yeah, so HPE has been a long-time partner of Mellanox. We're really not necessarily known as a storage networking company, but in fact we're in a ton of storage platforms with our InfiniBand. So, we have super-high quality reliability. We're built into the major storage platforms in the world and Enterprise Appliances, and now with this new work that we're doing with Marty's team and HPE, we're really building what we consider to be the first Ethernet storage fabric that will scale out what we've done in other worlds with dedicated storage platforms. >> Okay, Marty, before we get into some of the things you're doing with Mellanox, tell us a little bit about your role, how you fit inside Hewlett-Packard Enterprise as it's made up today. >> I'm responsible for storage networking, or the connectivity for storage as well as our interoperability. So if you think about it, it's a very broad category from a role perspective. We have a lot of challenges with all the new types of storage technologies today. And that's where Mellanox gets to come in. 
So just elaborate a little bit. What products do you have? NICs and host bus adapters, switches, what falls under your purview? >> Pretty much everything, everything you just mentioned. We carry all the traditional storage connectivity products: Fibre Channel switches, adapters, optics, cables, pretty much the whole ecosystem. >> So what we're talking about is the Ethernet storage fabric. So can one of you set it up for us, as to what that term means? And we talked about Fibre Channel. Fibre Channel is a bespoke network designed for storage, a lot of times run by storage people or storage networking people underneath that umbrella. What's happening with the Ethernet side? >> Yeah, I think when you look at the traditional SAN network it was Fibre Channel, and the metrics that people evaluate that on are performance, and reliability, and intelligence, storage intelligence. Today when you look at all those metrics, Ethernet actually wins. So we can get three times the performance for 1/3 the price. Everything is built in in terms of all of the new protocols like NVMe over Fabrics, which is a new one that's coming. Obviously iSCSI. And taking some of the things that we do in terms of intelligence, like RDMA, with RoCE (RDMA over Converged Ethernet), that's what really enables NVMe over Fabrics. We have that end-to-end supply of switches, adapters, and cables. And working with HPE, we can bring all of the benefits of the platform that they have and all of the software to that world. Suddenly you've got something that's unmatched with Ethernet. And that's the Ethernet storage fabric. >> So Marty, one of the things I've said a bunch over the last couple of years is nothing ever dies. But Fibre Channel, it's dead, right? Isn't that what this means?
Why don't you help us a little bit with the nuance of what you're seeing, what customers are asking, and of course there are certain administrators that are like, I know it, I love it, I'm going to keep buying it for years. >> I guess Fibre Channel's still alive. It's doing very well. I think from a primary storage perspective, I mean that's where Fibre Channel is used, right? Today's storage has a lot of different technologies. And I like to look at this in a couple of ways. One, you look at the evolution of media. We went from tape to disk, and now we're going from disk to Flash, and Flash to NVMe. And now we have things like performance and latency requirements that weren't there before. And the bottleneck has moved from the storage array to the network. So having a network that adds as little latency as possible is really the issue at stake. We have latency road maps; we don't have performance road maps from a storage perspective. So that's the big one. >> Kevin, I'm sure you want to comment on some of the latency piece. That's Mellanox's legacy. >> So with some of the things we're doing now, NVMe over Fabrics, we're adding 10 microseconds of latency. So you've got an NVMe Flash drive. When it was spinning rust, and it took 10 milliseconds, who cared what the network added? Today you really care. We're down to the tens of microseconds to access an NVMe Flash drive. When you move it out of the box, now you need to network it. And that's what we really do, is allow you to access NVMe over Fabrics and iSCSI and iSER and things like that in a remote box, and you're adding less than 10 microseconds of latency. It's incredible. >> Yeah, Marty, I think back. Even 10 years ago, there was a lot of times, okay, do I want InfiniBand, do I want Ethernet, do I want Fibre Channel? And there were more political implications than there were technical, architectural implications. I said five years ago, the storage protocol wars are dead.
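Kevin's latency figures lend themselves to a quick sanity check. A back-of-the-envelope sketch (not from the interview itself; the access times are round numbers taken from the figures quoted above):

```python
# Why 10 us of fabric latency didn't matter for disks but matters for NVMe.
# Round figures from the discussion: ~10 ms for a spinning disk access,
# tens of microseconds for local NVMe flash, ~10 us added by the fabric.

DISK_ACCESS_US = 10_000   # ~10 ms spinning disk access
NVME_ACCESS_US = 20       # tens of microseconds for local NVMe flash
FABRIC_OVERHEAD_US = 10   # latency added by NVMe over Fabrics

def overhead_pct(media_us: float, network_us: float) -> float:
    """Network latency as a percentage of the local media access time."""
    return 100.0 * network_us / media_us

print(f"disk era: +{overhead_pct(DISK_ACCESS_US, FABRIC_OVERHEAD_US):.1f}% overhead")
print(f"NVMe era: +{overhead_pct(NVME_ACCESS_US, FABRIC_OVERHEAD_US):.1f}% overhead")
```

On these assumed numbers the fabric adds a negligible fraction of a disk access but a large fraction of an NVMe access, which is exactly the "who cared what the network added... today you really care" point.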
That being said, that doesn't mean we're done sorting those out. What do you hear from customers? Any more nuance you want to give on that piece? Architecturally, right, Ethernet can do it all today, right? >> Sure, yeah, it is. So I think those challenges are still there. You still have that... you mentioned political, and I think that's something that's still going to be there for quite some time. The nice thing we did with Mellanox, and what we did in our own technology for storage connectivity, is we innovated in an area that really hadn't been innovated and was ripe for innovation. So creating an environment that gives the storage network administrator the same capabilities you get in Fibre Channel, we can do on an Ethernet network today. >> And Marty, one of the things. When we get a partnership announcement like this, bring us inside. Talk to us about what engineering is being done. How is this more than just sticking a lovely new logo on it? What development, what's HPE been bringing to this offering? >> So first, when we started, before we got to the Ethernet side, we built something called Smart SAN. It's automation and orchestration for Fibre Channel networks. And that was a big success. What we did after that was look at it from the Ethernet perspective. We said, why can't we do it there? It's in-band, it's real-time access, and it gives you the ability to automate and orchestrate all the nuances of what makes Ethernet hard, so the Ethernet capabilities behave much like a Fibre Channel network. So this is a four- to five-year development cycle that we're in, in terms of developing these products. And sitting down with Mellanox, this is not just a marketing relationship. There is a lot of engineering development work that we've done with Mellanox to storage-optimize their products. To make them specifically designed to handle storage traffic. >> Kevin, it's interesting.
I think back to, let's say, the other big Ethernet company. When they got into Fibre Channel, they learned a lot from the storage side that they drove into some of their Ethernet products. So you kind of see learning going back and forth. It's a small industry we have here. What did HPE bring to the table, and more importantly, what's the latest as to what makes the Ethernet storage fabrics... What's going to move the needle on some of that storage adoption? >> I think the key thing is, as Marty said, if you look at it you've got to be familiar with all of the same things. You need to provide the same level of protection. So whether you're using data center bridging to have a lossless network. We have zero packet loss switches, which means that our switches don't drop packets in the cases where you've actually oversubscribed a network. We can actually push back; we can use PFC, we can use ECN. All of that, and on top of that, what's happened is the look and feel to be able to manage things just like it's Fibre Channel. So all that intelligence that HPE has invested in so much over the years is now being brought to bear on Ethernet. One of the big things we see is in the cloud, people have already moved to a converged network where you're seeing compute and networking and storage all on the same fabric. And really that's Ethernet. And so what we're doing now is bringing all of those capabilities to the enterprise. So we think that 15 or 20 years ago there was really no choice. Fibre Channel was absolutely the right choice. Now we're really trying to make it as easy as possible to make that enterprise transformation to be cloud-like. >> It's funny. Marty, you and I worked for EMC back when that storage network was being designed. Architecturally, those of us who have been in networking since before Fibre Channel, we would have loved to do it with Ethernet, but there were limitations with CPU, the network itself. It would have been nice.
But fast forward, it was like, Flash had been around for a long time before, oh wait, now it's ready for enterprise. Now it feels like Ethernet has gone through a lot of that journey. You're welcome to comment on that. But the question I want to have from the storage side, we're going through so many changes. HPE has a very large portfolio, a number of acquisitions as well as many things HPE's doing. We talked about NVMe, NVMe over Fabric, we talked about hyper-converge, we talked about scale-out NAS. Networking is not trivial when it comes to building out distributed architectures. And of course storage has very particular requirements when it comes to network. So what are you hearing from your customers from the storage side of the business? How does HPE pull those pieces together and how does this Ethernet storage fabric fit into it? >> I mentioned it earlier. We talked about the primary array being Fibre Channel. If you take a look at where storage has gone, you talk about the cloud, you talk about all these big data, now you've got secondary storage, you've got hyper-converged storage, you've got NAS scale-out, you've got object. I mean, you go on and on. And all these different storage technologies are representing almost 80% of all the data that's out there. Most of that data, or all that data, now that I think about it, is connected by Ethernet. Now what's interesting is, from our perspective, is that we have a purview of all that capability. I see that challenge that customers are having. And the problem that these customers are finding is they go through the first layer of the challenges which is the storage capabilities they need in these storage technologies. And then they get to the next layer that says oh, by the way, the network isn't that great. And so this is where we saw an opportunity to create something that created the same category of capabilities as you got in your primary to the rest of the storage technologies. 
They're already using Ethernet. It's a great opportunity to provide another dedicated network that does connectivity for all those other types of storage devices, including primary. >> Is there anything along the management of these types of environments? How similar is it, how much retraining do you need to do? Your customers are probably going to manage both for a while. >> From a usability perspective, it's quite easy, I think customers are going to find. We use Fibre Channel as the lowest common denominator; the Ethernet network has to meet those kinds of requirements. So what we did was replicate that capability throughout the rest. With our automation and orchestration capabilities, from a customer perspective it's really a hands-off kind of solution. It's really nice. >> The other piece is... Kevin, how's the application portfolio changing? You mentioned a little bit some of those really specific latencies that we have. What are you seeing from customers on the application portfolio? David Floyer from Wikibon has been talking for a long time about HPC becoming mainstream in the enterprise, which seems to pull all of these pieces together. >> That's Mellanox's heritage. We came from the InfiniBand world with HPC. We're really good at building giant supercomputers. And the cloud looks very much like that. And when you talk about things like big data, and Hadoop, and Spark, all of these activities for analytics, all these workloads. So it's not just the traditional enterprise database workloads that need the performance, but all of these new data-intensive ones. And Marty really talked about the two different elements. One was the faster media, and the second was just the breadth of the offering. So it's not just primary block storage anymore. You're talking about object storage, and file storage, and hyper-converged systems.
We're seeing all of that come into play here with the M-series switches that we're introducing with HPE. What's happening now is you've got a virtualized, containerized world that's using massive amounts of data on superfast storage media. And it needs the network to support that. All of the accelerations that we've built into our adapters, all of the smarts that we're building into the switches, and taking all of this management framework and automation that HPE's delivering, we've got a really nice solution together. >> Excellent. One thing I love when we talk networking here, with the containerized world and serverless, some of this stuff, is trying to explain it in a way that people can understand. Marty, the M-series is actual boxes. There's actually physical... You can buy the software, and everything critically important. Walk us through the product line, what sets it apart from what you've done before, and what makes up the product line there. >> A lot of compliments to Mellanox and the way they've designed their products. First and foremost, I'd like to call out that they have a smaller product that we're working with, from an ASIC perspective. It's the 2100 series. It's nice because it's a half-width box. It allows you to get full redundancy in a single 1U tray, if you want to think about it that way. From a real estate perspective it's really nice. And it's extremely powerful. So with that solution, you have the power and the cost savings, being able to do what many different networks can do at three times the cost, in a very small form factor. That's very nice. And with the software that we do, we talked about what kind of automation we have. It's all the basic stuff that you'd imagine, like the discovery, the diagnostics; all the things that are manual in an Ethernet world, we provide automated in a storage environment.
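Earlier, Kevin described zero packet loss switches that push back on the sender with PFC and ECN instead of dropping under oversubscription. A toy queue model (purely illustrative; the parameters are invented, and this is not Mellanox or HPE code) shows the difference between tail-drop and pause-based backpressure:

```python
# Toy model of a switch queue under oversubscription.
# Lossy mode: excess packets are tail-dropped when the queue is full.
# Lossless mode: a PFC-style pause holds the sender back instead of dropping.

def run(arrivals_per_tick, drain_per_tick, queue_cap, ticks, lossless):
    queue = sent = dropped = paused_ticks = 0
    for _ in range(ticks):
        if lossless and queue + arrivals_per_tick > queue_cap:
            paused_ticks += 1  # pause frame: sender backs off this tick
        else:
            admitted = min(arrivals_per_tick, queue_cap - queue)
            dropped += arrivals_per_tick - admitted  # tail drop (lossy only)
            queue += admitted
        drained = min(queue, drain_per_tick)
        queue -= drained
        sent += drained
    return sent, dropped, paused_ticks

# Oversubscribed link: 3 packets arrive per tick, only 2 can drain.
lossy = run(3, 2, queue_cap=4, ticks=100, lossless=False)
pfc = run(3, 2, queue_cap=4, ticks=100, lossless=True)
print("lossy    (sent, dropped, paused):", lossy)
print("lossless (sent, dropped, paused):", pfc)
```

In the lossless run nothing is ever dropped; the cost is paid in pause ticks, which is the trade real PFC makes so that storage traffic never has to retransmit.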
I remember it took a decade for 10-gig to go from standards to most customer doing now. It wasn't just 40 and 100, but we've got 25 and 50 in there. So all of them, are there interoperability concerns? Any things that you want to say, yes this, or not ready for that? >> I'll say that the market has diverged on many different speeds and feeds. So we do support all of them in the technology. Even from a storage perspective, some of our platforms support 25 gig, some will support 40 gig. So with a solution, we can do one, we can do 10, 25, 40, 50, 100. What's nice is it gives you, regardless of what technology you're using you have the capability to use the technology. >> Kevin, I want to give you the opportunity. What are you hearing from the customers these days? What are the pain points? It used to be some of those speeds and feeds. Wait around, when can I do the upgrade? It's something that's a massive thing that we have to undertake from the backbone all the way through. So are we moving faster? I know we all talk, it's agility and speed, but how about the network? Is it keeping up? >> Yeah, I think we are keeping up. The thing we hear from customers is about efficiency of using their platform. So whether it's the server or the storage. And the network they don't want to be in the way. So you don't want to have stranded assets with an NVMe drive stuck inside of a server that's run at 10% and you've got another unit that's at 100% and needs more. And really that's what this disk aggregation and software-defined storage is all about is taking advantage and getting the most out of the infrastructure that you've invested in. One NVMe drive can saturate a 25-gig link. So we have people that are saying give me more bandwidth, give me more bandwidth. So we can saturate with 24 drives, 600-gig links. The bandwidth is incredible, and we're able to deliver that with zero packet loss technologies. So really that's what people are asking for. 
There's more data being generated and processed and analyzed to do efficient business models, new business models. And they don't want to worry about the network. They want it to configure itself automatically, and just work and not be the bottleneck. And we can do that. >> Marty, can you up-level for us a little bit here? When I think about HPE, it comes pre-configured, I know. That's what I've known HPE for. Of course HP for most of my career. Even back in some of the earliest jobs, it's like well, rack comes fully configured. Everything's in it. When I look at this announcement, HPE, server, storage, network, some of your pieces. What's important about this? How does this fit in to the overall picture? >> Customers are used to having that service level from us. Delivering those kind of solutions. And this is no different. We saw a lot of challenges with all these different types of networks. The network being the challenge with these new types of storage technologies. So having these solutions brought to you in the way that we've done with the primary storage array I think is going to make customers pretty happy about it. >> Kevin, want to give me the final word? What should we look for in this announcement? Any last things that we haven't covered? And what should we look for for the rest of 2017? >> I think as Marty said, this is a beginning. We have a strong relationship with HPE on the adapter side, on the cables, on the switches. Also on the synergy platform that we've done the switch for that as well. So 25, 50, 100-gig is here today. With shipping we're really saying 25 is the new 10. Because this faster storage needs faster networks and we're here to deliver. I think, pay attention, we're going to do some new things. There's lots of innovation coming. >> Kevin Deierling, Marty Lans, thanks so much for bringing us the update. And thank you for watching theCUBE. I'm Stu Miniman. (light music)

Published Date : Sep 25 2017


Jas Tremblay, Broadcom


 

[Music] >> For decades, the technology industry marched to the cadence of Moore's law. It was a familiar pattern: system OEMs would design in the next generation of Intel microprocessors every couple of years or so, maybe bump up the memory ranges periodically, and the supporting hardware would kind of go along for the ride, upgrading its performance and bandwidth. System designers might beef up the cache, maybe throw some more spinning disk spindles at the equation to create a balanced environment. This was pretty predictable and consistent, and reasonably straightforward compared to today's challenges. This has all changed. The confluence of cloud, distributed global networks, the diversity of applications, AI, machine learning, and the massive growth of data outside of the data center requires new architectures to keep up. As we've reported, the traditional Moore's law curve is flattening, and along with that we've seen new packages with alternative processors like GPUs, NPUs, accelerators and the like, and the rising importance of supporting hardware to offload tasks like storage and security. And it's created a massive challenge to connect all these components together, the storage, the memories, and all of the enabling hardware, and do so securely, at very low latency, at scale, and of course cost-effectively. This is the topic of today's segment: the shift from a world that is CPU-centric to one where the connectivity of the various hardware components is where much of the innovation is occurring. And to talk about that, there is no company who knows more about this topic than Broadcom. With us today is Jas Tremblay, who is general manager of the Data Center Solutions Group at Broadcom. Jas, welcome to theCUBE. >> Hey Dave, thanks for having me, really appreciate it. >> Yeah, you bet. Now, Broadcom is a company that a lot of people might not know about, but the vast majority of the internet traffic flows through Broadcom products, like pretty much all of it. It's a company with
trailing 12-month revenues of nearly $29 billion and a $240 billion market cap. Jas, what else should people know about Broadcom? >> Well, 99% of the internet traffic goes through Broadcom silicon or devices, and I think what people are not often aware of is how broad it is. It starts with the devices, phones and tablets, that use our Wi-Fi technology or RF filters, and then those connect to access points, either at home, at work, or public access points, using our Wi-Fi technology. And if you're working from home, you're using a residential or broadband gateway, and that uses Broadcom technology also. From there you go to access networks, core networks, and eventually you'll work your way into the data center, all connected by Broadcom. So really, we're at the heart of enabling this connectivity ecosystem, and we're at the core of it. We're a technology company; we invest about five billion dollars a year in R&D, and as you were saying, last year we achieved $27.5 billion of revenue. Our mission is really to connect the ecosystem, to enable, as you said, this transformation around the data-centric world. >> So talk about your scope of responsibility. What's your role, generally, and specifically with storage? >> I've been with the company for 16 years, and I head up the Data Center Solutions Group, which includes three product franchises: PCIe fabrics, storage connectivity, and Broadcom Ethernet NICs. So my charter, and my team's charter, is really server connectivity inside the data center. >> And what specifically is Broadcom doing in storage, Jas? >> It's been quite a journey. Over the past eight years we've made a series of acquisitions and built up a pretty impressive storage portfolio. This first started with LSI, and that's where I came from; the team here came from LSI, which had two product franchises around storage. The first one was server connectivity: HBAs, RAID, expanders for SSDs and HDDs. The second product group was actually chips that go inside the hard drives, so SoCs
and preamps so that was an acquisition that we made and actually that's how i came into the broadcom group through lsi the next acquisition we made was plx the industry's leader in pcie fabrics they've been doing pcie switches for about 15 years we acquired the company and really saw an acceleration in the requirements for nvme attach and ai ml fabrics very specialized low latency fabrics after that we acquired a large system and software company brocade and dave if you recall brocade they're the market leader in fiber channel switching this is where if you're a financial or government institution you want to build a mission critical ultra secure really best in class storage network following brocade acquisition we acquired mulx that is now the number one provider of fibre channel adapters inside servers and the last acquisition for this puzzle was was actually broadcom where avago acquired broadcom and took on the broadcom name and there we acquired um ethernet switching capabilities and ethernet adapters that go into storage servers or external storage systems so with all this it's been quite the journey to build up this portfolio uh we're number one in each of these storage product categories and we now have four divisions that are focused on storage connectivity you know that's quite remarkable when you think about i mean i know all these companies that you were talking about and they were they were very quality companies but they were kind of bespoke and the fact that you had the vision to kind of connect the dots and now take responsibility for that integration we're going to talk about what that means in terms of competitive advantage but but i wonder if we could zoom out and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so so important like what are the trends that are driving that shift that we talked about earlier from a cpu-centric world the one that's connectivity-centric i think at broadcom 
we recognize the importance of storage and storage connectivity and if you look at data centers whether it be private public cloud or hybrid data centers they're getting inundated with data if you look at the digital universe it's growing at about 23 keger day so over a course of four to five years you're doubling the amount of new information and that poses really two key challenges for the infrastructure the first one is you have to take all this data and for a good chunk of it you have to store it be able to access it and protect it the second challenge is you actually have to go and analyze and process this data and doing this at scale that's the uh the key challenge and what we're seeing these data centers uh getting a tsunami of data and historically they've been cpu-centric architectures and what that means is the cpu is at the heart of the data center and a lot of the workloads are processed by software running on the cpu we believe that we're currently transforming the architecture from cpu centric to connectivity centric and what we mean by connectivity centric is you architect your data center thinking about the connectivity first and the goal of the connectivity is to use all the components inside the data center the memory the spinning media the flash storage the networking the specialized accelerators the fpga all these elements and use them for what they're best at to process all this data and the goal uh dave is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers so it's really a shift from cpu centric to uh bringing in more specialized components and architecting the the connectivity inside the data center to help we think that's a really important part okay so you have this need for connectivity at scale you mentioned and you're dealing with massive massive amounts of data i mean we're going to look back the last decade and say oh that you've seen nothing compared to to 
the when we get to 2030. but you at the same time you have to control costs so what are the technical challenges to achieving that vision so it's really challenging it's it's not that complex to build up faster bigger solution if you have no cost or or power budget and really the key challenges that our team is facing working with customers is first i'd say it's architectural challenges so we would all like to have one fabric that i think to connect all the devices and bring us all the characteristics that we need but the reality is we can't we can't do that so you need distinct fabrics inside the data center and you need them to work together you'll need an ethernet backbone in some cases you'll need a fiber channel network in some cases you'll need a small fabric for thousands or hundreds of thousands of hdds you will need pcie fabrics for aiml servers and and one of the key architectural challenges is which fabric do you use when and how do you develop these fabrics to meet their purpose-built needs the that's one thing the second architectural challenge dave is what i challenge my team with is example how do i double bandwidth while reducing net power double bandwidth reducing that power how do i take a storage controller and increase the iops by 10x and will i locate only 50 more power budget so that equation requires tremendous uh innovation and that's really what we focus on and power is becoming more and more important in that equation so you've got decisions from an architecture perspective as to which fabric to use you've got this architectural challenge around we need to innovate and do things smarter better to drive down power while delivering more performance then if you take those things together the problem statement becomes more complex so you've had these silicon devices with complex firmware on them that need to interoperate with multiple devices they're getting more and more complex so there's execution challenges and what we need to do and what 
we're investing to do is shift left quality so to do these complex devices they come out tight time to market with high quality and one of the key things that we've invested in is emulation of the environment before you tape out your silicon so effectively taking the application software running it on an emulation environment making sure that works running your tests before you tape out and that ensures quality silicon so it's it's challenging but the team loves challenges and that's kind of what we're facing on one hand architectural challenges on the other hand a new level of execution challenges so you're you're compressing the time to final tape out you know versus maybe traditional techniques and then you mentioned architecture my am i right jazz that you're essentially from an architectural standpoint trying to minimize the because your latency is so important you're trying to minimize the amount of data that you have to move around and actually bringing you know compute to the data is that the right way to think about it well i think there's multiple parts of the problem one of them is you need to do more data transactions example data protection with rate algorithms we need to do millions of transactions per second and the only way to achieve this with the minimal power impact is to hardware accelerate these that's one piece of investment the other investment is um you're absolutely right dave so it's shuffling the data around the data center so in the data center in some cases you need to have multiple pieces of the puzzle multiple ingredients processing the same data at the same time and you need advanced methodologies to share the data and avoid moving it all over the data center so that's another big piece of investment that we're focused on okay yeah so let's let's stay on that because i see this as disruptive you talk about spending five billion dollars you know a year in r d um and talk a little bit more about the disruptive technologies or the 
supportive technologies that you're introducing specifically to support this vision so let's break it down in a couple couple big industry problems that our team is focused on so the first one is i'll take an enterprise workload database if you want the fastest running database you want to utilize local storage nvme based drives and you need to protect that data and raid is the mechanism of choice to protect your data in local environments and there what we need to do is really just do the transactions a lot faster historically the storage has been a bit of a bottleneck in these types of applications for example our newest generation product we're doubling the bandwidth increasing iops by 4x but more importantly we're accelerating rate rebuilds by 50x and that's important dave if you are using a database in some cases you limit the size of that database based on how fast you can do those rebuilds so this 50x acceleration and rebuilds is something we're getting a lot of good feedback on for customers the the last metric we're really focused on is right latency so how fast can the cpu send the right to the storage connectivity subsystem and commit it to drives and we're improving that by 60x generation of regeneration so we're talking fully loaded latency 10 microseconds so from an enterprise workload it's about data protection much much faster using nvme drives that's one big problem the other one is if you're um if you look at dave youtube facebook tick tock the amount of user generated content specifically video content that they're producing on an hour-by-hour basis is mind-boggling and the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drive to store and access all that data in a very reliable way so there we're leading the industry in the transition to 24 gig sas and multi-actuator drives third big problem is around aiml servers so these are some of the highest performance servers that they 
basically need super low latency connectivity between gp gpus networking nvme drives cpus and orchestrate that all together and the fabric of choice for that is pcie fabric so here we're talking about 150 nanosecond latency in a pcie fabric fully non-blocking very reliable and here we're helping the industry transition from pca gen 4 to pcie gen 5. and the last piece is okay i've got a aiml server i have a storage system with hard drives or a storage server in the enterprise space all these devices systems need to be connected to the ethernet backbone and my team is heavily investing in ethernet mix transitioning to 100 gig 200 gig 400 gig and putting capabilities optimized for uh for storage workloads so those are kind of the four big things that we're focused on at the industry level from a connectivity perspective dave yeah and that makes a lot of sense and really resonates uh particularly as we have that shift from a cpu centric to a connectivity center because the other thing you said i mean you talk about 50x raid rebuild times you know one thing a couple of things you know in storage is if you ask the question what happens when something goes wrong because it's all about recovery you can't lose data and the other thing you mentioned is write latency which has always been you know the problem okay reads i can read out of cash but ultimately you've got to get it to where it's persisted so some real technical challenges there that you guys are dealing with absolutely dave yeah and uh these these are these are the type of problems that gets the engineers excited give them really tough tough technical problems to go solve i wonder if we could take a couple of examples or an example of scaling with a large customer for instance obviously hyperscalers or take a company like dell i mean they're a big company big customer take us through that so i we use the word scale a lot at broadcom we work with some of the industry leaders and data centers and oems and scale 
means different things to them so example if i'm working with a hyperscaler that is getting inundated with data and they need half a million storage controllers to store all that that data well their scale problem is can you deliver and dave you know how much of a hot topic that is these days so they need a partner that can scale from a delivery perspective but if i take a company like example dell that's very focused on storage from storage servers their acquisition of emc they have a very broad portfolio of data center storage offerings and scale to them from a broadcom from a connected by broadcom perspective means that you need to have the investment scale to meet their end-to-end requirements all the way from a low end storage connectivity solution for booting a server all the way up to a very high-end all-flash array or high-density hdd system so they want a company a partner that can invest and has a scale to invest to meet their end-to-end requirements second thing is their different products are unique and have different requirements and you need to adapt your collaboration model for example some products within dell portfolio might say i just want a storage adapter plug it in the operating system will automatically recognize it i need this turnkey i want to do minimal investment this is not an area of high differentiation for me at the other end of the spectrum they may have applications where they want deep integration with their management and our silicon tools so that they can deliver the highest quality highest performance to their customers so they need a partner that can scale from an r d investment perspective from a silicon software and hardware perspective but they also need a company that can scale from support and business model perspective and give them the flexibility that their end customers need so dell is a great company to work with we have a long lasting relationship with them and the relationship is very deep in some areas example 
server storage and it's also quite broad they they are adopters of the vast majority of our storage uh connectivity products well and i imagine it was i want to talk about the uniqueness of broadcom and again it's i'm in awe of the fact that somebody had the vision you guys your team uh obviously your ceo was one of the visionaries in the industry had the sense to to look out and say okay we can put these these pieces together so i would imagine a company like dell you've they're able to consolidate their their vendor their supplier base and push you for integration in an innovation in innovation how unique is that is the broadcom model what's compelling you know to your customers about that model so i think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest so you know if you look at some of the things we talked about from a scale perspective how data centers throughout the world are getting inundated with data dave they need help and we need to equip them with cutting edge technology to increase performance drive down power improve reliability so so they need partners that in each of the the product categories that they you partner with them on you know we can invest with scale so that's i think one of the first things the second thing is if you look at this connectivity-centric data center you need multiple types of fabric and whether it be cloud customers or large oems they are organizing themselves to be able to look at things holistically they're no longer product company they're very data center architecture companies and um so it's good for them to have a partner that can look across product groups across division says okay this is the innovation we need to bring to market these are the problems we need to go solve and they really appreciate that and i think the last thing is a flexible business model within example my division we're we offer different business models different engagement and 
collaboration models with technology but there's another division that if you want to innovate at the silicon level and build custom silicon for you like many of the hyper scalers or other companies are doing that division is just focus on that so i i feel like broadcom is unique from a storage perspective its ability to innovate breadth a portfolio and the flexibility in the collaboration model to help our customers solve their customers problems so you're saying you can deal with merchant products slash open products or you can do highly you know high customization where does software differentiation fit into this model so it's it's actually one of the most important elements uh i think a lot of our customers take it for granted that will take care of the silicon we'll anticipate the requirements and deliver the performance that they need but from a software firmware driver utilities that is where a lot of differentiation lies some cases will offer an sdk model where you know customers can build their entire applications on top of that in some cases they want a complete turnkey solution where you take technology integrate it into server and the operating system recognizes it and you have out of box drivers from broadcom so we need to offer them that flexibility because you know their needs are quite quite broad there so last question what's the future of the business look like to jazz tremblay give us your point of view on that well uh it's it's fun i gotta tell you dave we're having uh we're having a great time we've got i've got a great team they're uh the world's experts on storage connectivity and working with them is a pleasure and we've got a rich great set of customers that are giving us cool problems to go solve and we're excited about it so i think this is really you know with the acceleration of all this digital transformation that we're seeing we're excited we're having fun and i think there's a lot of problems to be solved and we also have a 
responsibility i think the ecosystem in the industry is counting on our team to deliver the innovation from a storage connectivity perspective and i'll tell you dave we're having fun it's great but we take that responsibility uh pretty seriously jazz great stuff i really appreciate you laying all that out very important role you guys are playing you have a really unique perspective thank you thank you dave and thank you for watching this is dave vellante for the cube and we'll see you next time you

Published Date : May 5 2022



Jas Tremblay, Broadcom


 

(upbeat music) >> For decades the technology industry had marched to the cadence of Moore's law. It was a familiar pattern. System OEMs would design in the next generation of Intel microprocessors every couple of years or so, maybe bump up the memory periodically, and the supporting hardware would kind of go along for the ride, upgrading its performance and bandwidth. System designers might then beef up the cache, maybe throwing some more spinning disk spindles at the equation to create a balanced environment. And this was a pretty predictable and consistent pattern, and reasonably straightforward compared to today's challenges. This has all changed. The confluence of cloud, distributed global networks, the diversity of applications, AI, machine learning and the massive growth of data outside of the data center requires new architectures to keep up. As we've reported, the traditional Moore's Law curve is flattening. And along with that we've seen new packages with alternative processors like GPUs, NPUs, accelerators and the like, and the rising importance of supporting hardware to offload tasks like storage and security. And it's created a massive challenge to connect all these components together, the storage, the memories and all of the enabling hardware, and do so securely, at very low latency, at scale and of course, cost effectively. This is the topic of today's segment: the shift from a world that is CPU centric to one where the connectivity of the various hardware components is where much of the innovation is occurring. And to talk about that, there is no company who knows more about this topic than Broadcom. And with us today is Jas Tremblay, who is general manager of the data center solutions group at Broadcom. Jas, welcome to theCUBE. >> Hey Dave, thanks for having me, really appreciate it. >> Yeah, you bet. Now Broadcom is a company that a lot of people might not know about. 
I mean, but the vast majority of the internet traffic flows through Broadcom products. (chuckles) Like pretty much all of it. It's a company with trailing 12-month revenues of nearly $29 billion and a $240 billion market cap. Jas, what else should people know about Broadcom? >> Well, Dave, 99% of the internet traffic goes through Broadcom silicon or devices. And I think what people are not often aware of is how broad it is. It starts with the devices, phones and tablets that use our Wi-Fi technology or RF filters. And then those connect to access points either at home, at work or public access points using our Wi-Fi technology. And if you're working from home, you're using a residential or broadband gateway and that uses Broadcom technology also. From there you go to access networks, core networks and eventually you'll work your way into the data center, all connected by Broadcom. So really we're at the heart of enabling this connectivity ecosystem, and we're at the core of it, we're a technology company. We invest about $5 billion a year in R&D. And as you were saying, last year we achieved $27.5 billion of revenue. And our mission is really to connect the ecosystem to enable what you said, this transformation around the data-centric world. >> So talk about your scope of responsibility. What's your role generally and specifically with storage? >> So I've been with the company for 16 years and I head up the data center solutions group, which includes three product franchises: PCIe fabrics, storage connectivity and Broadcom Ethernet NICs. So my charter, my team's charter, is really server connectivity inside the data center. >> And what specifically is Broadcom doing in storage, Jas? >> So it's been quite a journey. Over the past eight years we've made a series of acquisitions and built up a pretty impressive storage portfolio. This first started with LSI, and that's where I came from. And the team here came from LSI, which had two product franchises around storage. 
The first one was server connectivity: HBAs, RAID, expanders for SSDs and HDDs. The second product group was actually chips that go inside the hard drives, so SoCs and preamps. So that was an acquisition that we made, and actually that's how I came into the Broadcom group, through LSI. The next acquisition we made was PLX, the industry's leader in PCIe fabrics. They'd been doing PCIe switches for about 15 years. We acquired the company and really saw an acceleration in the requirements for NVMe attach and AI/ML fabrics, very specialized, low latency fabrics. After that, we acquired a large system and software company, Brocade, and Dave, if you recall, Brocade is the market leader in Fibre Channel switching. This is where, if you're a financial or government institution, you want to build a mission critical, ultra secure, really best in class storage network. Following the Brocade acquisition we acquired Emulex, which is now the number one provider of Fibre Channel adapters inside servers. And the last acquisition for this puzzle was actually Broadcom, where Avago acquired Broadcom and took on the Broadcom name. And there we acquired Ethernet switching capabilities and Ethernet adapters that go into storage servers or external storage systems. So with all this, it's been quite the journey to build up this portfolio. We're number one in each of these storage product categories. And we now have four divisions that are focused on storage connectivity. >> That's quite remarkable when you think about it. I mean, I know all these companies that you were talking about, and they were very high-quality companies, but they were kind of bespoke, and you had the vision to connect the dots and now take responsibility for that integration. We're going to talk about what that means in terms of competitive advantage, but I wonder if we could zoom out and maybe you could talk about the key storage challenges and elaborate a little bit on why connectivity is now so important. 
Like what are the trends that are driving that shift that we talked about earlier, from a CPU centric world to one that's connectivity centric? >> I think at Broadcom, we recognize the importance of storage and storage connectivity. And if you look at data centers, whether it be private, public cloud or hybrid data centers, they're getting inundated with data. If you look at the digital universe, it's growing at about a 23% CAGR, so over the course of four to five years you're doubling the amount of new information, and that poses really two key challenges for the infrastructure. The first one is you have to take all this data and, for a good chunk of it, you have to store it, be able to access it and protect it. The second challenge is you actually have to go and analyze and process this data, and doing this at scale, that's the key challenge. And what we're seeing is these data centers getting a tsunami of data. And historically they've been CPU centric architectures. And what that means is the CPU is at the heart of the data center, and a lot of the workloads are processed by software running on the CPU. We believe that we're currently transforming the architecture from CPU centric to connectivity centric. And what we mean by connectivity centric is you architect your data center thinking about the connectivity first. And the goal of the connectivity is to use all the components inside the data center, the memory, the spinning media, the flash storage, the networking, the specialized accelerators, the FPGAs, all these elements, and use them for what they're best at to process all this data. And the goal, Dave, is really to drive down power and deliver the performance so that we can achieve all the innovation we want inside the data centers. So it's really a shift from CPU centric to bringing in more specialized components and architecting the connectivity inside the data center to help. We think that's a really important part. 
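Jas's doubling figure is just compound growth arithmetic. A quick sanity check (the 23% CAGR is the rate cited in the interview; the script itself is mine):

```python
import math

def doubling_time_years(cagr: float) -> float:
    """Years for a quantity to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + cagr)

# At roughly a 23% CAGR the digital universe doubles in a little over
# three years, in the same ballpark as the four-to-five-year figure quoted.
print(round(doubling_time_years(0.23), 1))  # ~3.3
```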
>> So you have this need for connectivity at scale, you mentioned, and you're dealing with massive, massive amounts of data. I mean, we're going to look back at the last decade and say, oh, you've seen nothing compared to when we get to 2030, but at the same time you have to control costs. So what are the technical challenges to achieving that vision? >> So it's really challenging. It's not that complex to build a faster, bigger solution if you have no cost or power budget. And really the key challenges that our team is facing working with customers, the first, I'd say, is architectural challenges. So we would all like to have one fabric that aims to connect all the devices and bring us all the characteristics that we need. But the reality is, we can't do that. So you need distinct fabrics inside the data center, and you need them to work together. You'll need an Ethernet backbone. In some cases, you'll need a Fibre Channel network. In some cases, you'll need a small fabric for thousands or hundreds of thousands of HDDs. You will need PCIe fabrics for AI/ML servers. And one of the key architectural challenges is which fabric do you use when, and how do you develop these fabrics to meet their purpose-built needs. That's one thing. The second architectural challenge, Dave, is what I challenge my team with. For example, how do I double bandwidth while reducing net power? Double bandwidth, reducing net power. How do I take a storage controller and increase the IOPS by 10X while allocating only 50% more power budget? So that equation requires tremendous innovation, and that's really what we focus on, and power is becoming more and more important in that equation. So you've got decisions from an architecture perspective as to which fabric to use. You've got this architectural challenge around we need to innovate and do things smarter, better, to drive down power while delivering more performance. 
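The "double bandwidth while reducing power" and "10X IOPS in a 1.5X power budget" targets are really performance-per-watt equations. A trivial sketch of that arithmetic (the 0.9X power figure is my illustrative assumption; the interview only says "reducing net power"):

```python
def perf_per_watt_gain(perf_mult: float, power_mult: float) -> float:
    """Efficiency gain when performance scales by perf_mult and power
    consumption scales by power_mult versus the prior generation."""
    return perf_mult / power_mult

# Doubling bandwidth at, say, 0.9x the power (illustrative number):
print(round(perf_per_watt_gain(2.0, 0.9), 2))   # 2.22x bandwidth per watt
# 10x the IOPS within a 1.5x power budget:
print(round(perf_per_watt_gain(10.0, 1.5), 2))  # 6.67x IOPS per watt
```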
Then if you take those things together, the problem statement becomes more complex. So you've had these silicon devices with complex firmware on them that need to interoperate with multiple devices. They're getting more and more complex. So there are execution challenges, and what we need to do, and what we're investing to do, is shift quality left, so that these complex devices come to market on time with high quality. And one of the key things, Dave, that we've invested in is emulation of the environment before you tape out your silicon. So effectively taking the application software, running it on an emulation environment, making sure that works, running your tests before you tape out, and that ensures quality silicon. So it's challenging, but the team loves challenges. And that's kind of what we're facing: on one hand architectural challenges, on the other hand a new level of execution challenges. >> So you're compressing the time to final tape out versus maybe traditional techniques. And then, you mentioned architecture, am I right, Jas, that you're essentially, from an architectural standpoint, trying to minimize the... 'cause your latency's so important, you're trying to minimize the amount of data that you have to move around and actually bringing compute to the data. Is that the right way to think about it? >> Well, I think there are multiple parts of the problem. One of them is you need to do more data transactions, for example data protection with RAID algorithms. We need to do millions of transactions per second, and the only way to achieve this with minimal power impact is to hardware accelerate these. That's one piece of investment. The other investment is, you're absolutely right, Dave. So it's shuffling the data around the data center. 
So in the data center, in some cases you need to have multiple pieces of the puzzle, multiple ingredients, processing the same data at the same time, and you need advanced methodologies to share the data and avoid moving it all over the data center. So that's another big piece of investment that we're focused on. >> So let's stay on that, because I see this as disruptive. You talk about spending $5 billion a year in R&D; talk a little bit more about the disruptive technologies, or the supporting technologies, that you're introducing specifically to support this vision. >> So let's break it down into a couple of big industry problems that our team is focused on. The first one, I'll take an enterprise workload database. If you want the fastest-running database, you want to utilize local storage and NVMe-based drives, and you need to protect that data. And RAID is the mechanism of choice to protect your data in local environments. And there, what we need to do is really just do the transactions a lot faster. Historically the storage has been a bit of a bottleneck in these types of applications. So for example, in our newest generation product, we're doubling the bandwidth and increasing IOPS by 4X, but more importantly we're accelerating RAID rebuilds by 50X. And that's important, Dave: if you are using a database, in some cases you limit the size of that database based on how fast you can do those rebuilds. So this 50X acceleration in rebuilds is something we're getting a lot of good feedback on from customers. The last metric we're really focused on is write latency. So how fast can the CPU send the write to the storage connectivity subsystem and commit it to drives? And we're improving that by 60X generation over generation. So we're talking fully loaded latency of 10 microseconds. So for an enterprise workload it's about data protection, much, much faster, using NVMe drives. That's one big problem. 
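The rebuild and latency figures above can be put in back-of-the-envelope terms. The 50X rebuild speed-up and the 60X / 10-microsecond latency numbers come from the conversation; the baseline drive capacity and rebuild rate below are assumptions chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope math for the quoted rebuild and latency improvements.
def rebuild_hours(capacity_tb, rebuild_rate_mb_s):
    """Time to rebuild a drive: capacity (TB -> MB) over the sustained rate."""
    return capacity_tb * 1e6 / rebuild_rate_mb_s / 3600

baseline_hours = rebuild_hours(16, 100)   # assumed: 16 TB drive at 100 MB/s
accelerated_hours = baseline_hours / 50   # with the quoted 50X acceleration

# A 60X write-latency improvement landing at 10 microseconds implies a
# fully loaded baseline of roughly 600 microseconds.
baseline_latency_us = 10 * 60
```

Under these assumptions a multi-day-scale rebuild window collapses to under an hour, which is why the rebuild rate, not capacity, often caps how large a database customers are willing to run on a single array.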
The other one is, if you look, Dave, at YouTube, Facebook, TikTok, the amount of user-generated content, specifically video content, that they're producing on an hour-by-hour basis is mind-boggling. And the hyperscale customers are really counting on us to help them scale the connectivity of hundreds of thousands of hard drives to store and access all that data in a very reliable way. So there we're leading the industry in the transition to 24 gig SAS and multi-actuator drives. The third big problem is around AI/ML servers. These are some of the highest-performance servers; they basically need super-low-latency connectivity between GPUs, networking, NVMe drives, and CPUs, and to orchestrate that all together. And the fabric of choice for that is a PCIe fabric. So here, we're talking about 115 nanosecond latency in a PCIe fabric, fully nonblocking, very reliable. And here we're helping the industry transition from PCIe Gen 4 to PCIe Gen 5. And the last piece is, okay, I've got an AI/ML server, I have a storage system with hard drives, or a storage server in the enterprise space. All these devices and systems need to be connected to the Ethernet backbone. And my team is heavily investing in Ethernet NICs, transitioning to 100 gig, 200 gig, 400 gig, and putting in capabilities optimized for storage workloads. So those are kind of the four big things that we're focused on at the industry level from a connectivity perspective, Dave. >> And that makes a lot of sense and really resonates, particularly as we have that shift from a CPU-centric to a connectivity-centric world. And the other thing you said, I mean, you're talking about 50X RAID rebuild times. A couple of things you know in storage: if you ask the question, what happens when something goes wrong? 'Cause it's all about recovery; you can't lose data. And the other thing you mentioned is write latency, which has always been the problem. Okay, reads, I can read out of cache, but ultimately you've got to get it to where it's persisted. 
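The PCIe Gen 4 to Gen 5 transition mentioned above is a straight doubling of link rate, which a bit of arithmetic makes visible. This sketch uses the standard PCIe figures (16 GT/s and 32 GT/s per lane, 128b/130b encoding); it is illustrative only.

```python
# Rough link-rate arithmetic for the PCIe fabric transition mentioned above.
def pcie_gb_per_s(gt_per_s, lanes):
    """Usable GB/s: raw transfer rate x 128b/130b encoding efficiency / 8 bits."""
    return gt_per_s * (128 / 130) / 8 * lanes

gen4_x16 = pcie_gb_per_s(16, 16)  # ~31.5 GB/s for a Gen 4 x16 link
gen5_x16 = pcie_gb_per_s(32, 16)  # ~63.0 GB/s: doubled, generation over generation
```

The same doubling pattern shows up on the Ethernet side of the backbone (100 to 200 to 400 gig), which is why the fabric, rather than the endpoints, keeps becoming the pacing item.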
So some real technical challenges there that you guys are dealing with. >> Absolutely, Dave. And these are the types of problems that get the engineers excited. Give them really tough technical problems to go solve. >> I wonder if we could take a couple of examples, or an example of scaling with a large customer, for instance, obviously hyperscalers, or take a company like Dell. I mean, they're a big company, a big customer. Take us through that. >> So we use the word scale a lot at Broadcom. We work with some of the industry leaders in data centers and OEMs, and scale means different things to them. So for example, if I'm working with a hyperscaler that is getting inundated with data, and they need half a million storage controllers to store all that data, well, their scale problem is: can you deliver? And Dave, you know how much of a hot topic that is these days. So they need a partner that can scale from a delivery perspective. But if I take a company like, for example, Dell, that's very focused on storage, from storage servers to their acquisition of EMC, they have a very broad portfolio of data center storage offerings, and scale to them, from a connected-by-Broadcom perspective, means that you need to have the investment scale to meet their end-to-end requirements, all the way from a low-end storage connectivity solution for booting a server, all the way up to a very high-end all-flash array or high-density HDD system. So they want a partner that can invest, and has the scale to invest, to meet their end-to-end requirements. The second thing is, their different products are unique and have different requirements, and you need to adapt your collaboration model. So for example, some products within the Dell portfolio might say: I just want a storage adapter, plug it in, the operating system will automatically recognize it. I need this turnkey. I want to do minimal investment; it's not an area of high differentiation for me. 
At the other end of the spectrum, they may have applications where they want deep integration with their management and our silicon tools, so that they can deliver the highest quality and highest performance to their customers. So they need a partner that can scale from an R&D investment perspective, from a silicon, software, and hardware perspective, but they also need a company that can scale from a support and business-model perspective and give them the flexibility that their end customers need. So Dell is a great company to work with. We have a long-lasting relationship with them, and the relationship is very deep in some areas, for example server storage, and is also quite broad: they are adopters of the vast majority of our storage connectivity products. >> Well, I want to talk about the uniqueness of Broadcom. Again, I'm in awe of the fact that somebody had the vision. You guys, your team, obviously your CEO, one of the visionaries of the industry, had the sense to look out and say, okay, we can put these pieces together. So I would imagine a company like Dell is able to consolidate their vendor and supplier base and push you for integration and innovation. How unique is the Broadcom model? What's compelling to your customers about that model? >> So I think what's unique from a storage perspective is the breadth of the portfolio and also the scale at which we can invest. So if you look at some of the things we talked about from a scale perspective, how data centers throughout the world are getting inundated with data, Dave, they need help. And we need to equip them with cutting-edge technology to increase performance, drive down power, and improve reliability. So they need partners where, in each of the product categories that you partner with them on, you can invest with scale. So that's, I think, one of the first things. The second thing is, if you look at this connectivity-centric data center, you need multiple types of fabric. 
And whether it be cloud customers or large OEMs, they are organizing themselves to be able to look at things holistically. They're no longer product companies; they're very much data center architecture companies. And so it's good for them to have a partner that can look across product groups and divisions and say, okay, this is the innovation we need to bring to market, these are the problems we need to go solve, and they really appreciate that. And I think the last thing is a flexible business model. For example, within my division we offer different business models, different engagement and collaboration models with technology. But there's another division that, if you want to innovate at the silicon level and build custom silicon, as many of the hyperscalers or other companies are doing, is focused on just that. So I feel like Broadcom is unique from a storage perspective in its ability to innovate, the breadth of its portfolio, and the flexibility in the collaboration model to help our customers solve their customers' problems. >> So you're saying you can deal with merchant products, slash open products, or you can do high customization. Where does software differentiation fit into this model? >> So it's actually one of the most important elements. I think a lot of our customers take it for granted that we'll take care of the silicon, we'll anticipate the requirements and deliver the performance that they need, but the software, firmware, drivers, and utilities are where a lot of differentiation lies. In some cases we'll offer an SDK model where customers can build their entire applications on top of that. In some cases they want a complete turnkey solution, where you take the technology, integrate it into a server, the operating system recognizes it, and you have out-of-box drivers from Broadcom. So we need to offer them that flexibility, because their needs are quite broad there. >> So last question: what does the future of the business look like to Jas Tremblay? 
Give us your point of view on that. >> Well, it's fun. I've got to tell you, Dave, we're having a great time. I've got a great team; they're the world's experts on storage connectivity, and working with them is a pleasure. And we've got a rich, great set of customers that are giving us cool problems to go solve, and we're excited about it. So I think, with the acceleration of all this digital transformation that we're seeing, we're excited, we're having fun, and I think there are a lot of problems to be solved. And we also have a responsibility. I think the ecosystem and the industry are counting on our team to deliver the innovation from a storage connectivity perspective. And I'll tell you, Dave, we're having fun. It's great, but we take that responsibility pretty seriously. >> Jas, great stuff. I really appreciate you laying all that out. Very important role you guys are playing. You have a really unique perspective. Thank you. >> Thank you, Dave. >> And thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time.

Published Date : Apr 28 2022



Vikas Ratna and James Leach, Cisco | Simplifying Hybrid Cloud


 

(upbeat music) >> Welcome back to theCUBE special presentation, Simplifying Hybrid Cloud, brought to you by Cisco. We're here with Vikas Ratna, who's the director of product management for UCS at Cisco, and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers. And as organizations hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS, helping them deal with these problems? >> Well, you know, first I think that's a great question, and, you know, a customer-centric view is the approach we've taken from day one, right? So I think that if you look at the challenges that we're solving for, that our customers are facing, you could break them into a few broader buckets. The first would definitely be applications, right? That's where the rubber meets the proverbial road with the customer, and I would say that what we're seeing is that the challenges customers are facing within applications come from the way that applications have evolved. So what we're seeing now is more data-centric applications, for example. Those require that we're able to move and process large datasets really in real time. And the other aspect of applications that I think poses some challenges for our customers would be the fact that they're changing so quickly. So the application that exists today, or on the day that they make a purchase of infrastructure to support that application, is most likely changing so much more rapidly than the infrastructure can keep up with today. 
So that creates some challenges around, you know, how do I build the infrastructure? How do I rightsize it without overprovisioning, for example? But there's also a need for some flexibility around life cycles, and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a rack server model for some applications because of the needs that these data-centric applications have, and that creates a lot of opportunity for siloing infrastructure. And those silos in turn create multiple operating models within a data center environment that, again, drive a lot of complexity. So that complexity is definitely the enemy here. And then finally, I think, life cycles. We're seeing this democratization of processing, if you will, right? So it's no longer just CPU-focused: we have GPUs, we have FPGAs, we have things that are being done in storage and in the fabrics that stitch them together, all changing rapidly and with very different life cycles. So when those life cycles don't align, a lot of our customers see a challenge in how they can manage these different life cycles and still make a purchase, without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So that is kind of the other bucket. And then finally, I think management is huge, right? So management, at its core, is really rightsized for our customers, and gives them the most value, when it meets the mark around scale and scope. You know, back in 2009 we weren't meeting that mark in the industry, and UCS came about and took the management outside the chassis, right? 
We put it at the top of the rack, and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and having to stitch things together poses a huge challenge for some of our customers. So there are tools for all of those operational pieces that touch the application and touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Dave: All right. >> So our customers don't really enjoy being in the business of building their own tools, so that creates a huge challenge, and one where I think they really crave that full hybrid cloud stack that has that application visibility but can also reach down into the infrastructure. >> Right. You know, Jim, I said in my open that you guys, Cisco, had sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade, which is really important, 'cause you touched on a lot of things: these data-intensive workloads, alternative processors to meet those needs, and the whole cloud operating model, and hybrid cloud has really changed. So how is it going with the X-Series? You made a big splash last year; what's the reception been in the field? >> Actually, it's been great. You know, we're finding that customers can absolutely relate to our UCS X-Series story. I think the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us those problem areas, those areas that we could solve for the customer that actually add significant value. 
So, you know, since we brought UCS to market back in 2009, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is that X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake, both in terms of the number of customers but also, more importantly, the number of workloads that our customers are running, and the types of workloads are growing, right? So we're growing this modular segment that exists, not just bringing customers onto a new product, but actually bringing them into the product in the way that we had envisioned, which is one infrastructure that can run any application seamlessly. So we're really excited to be growing this modular segment. I think the other piece, you know, is how we judge ourselves, not just within Cisco but also within the industry. And right now, as a great example, our competitors have taken swings and misses over the past five years at this kind of new, next architecture, and we're seeing a tremendous amount of growth, faster than any of our competitors have seen when they announced something that was new to this space. So I think that the ground-up work that we did is really paying off, and what we're also seeing is that it's not really a leapfrog game as it may have been in the past. X-Series is out in front today, and we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers, and we're pretty excited that we're seeing the results as well. So as our competitors hit walls, I think we're executing on the plan that we laid out back in June, when we launched X-Series to the world. 
And, you know, as we continue to do that, we're seeing, again, tremendous uptake from our customers. >> So thank you for that, Jim. So, Vikas, I was on Twitter just today, actually, talking about the gravitational pull: you've got the public clouds pulling CXOs one way, and, you know, on-prem folks pulling the other way, and hybrid cloud. So organizations are struggling with a lot of different systems and architectures and ways to do things. And I said that what they're trying to do is abstract all that complexity away, and they need infrastructure to support that, and I think your stated aim is really to try to help with that confusion with the X-Series, right? I mean, so how so? Can you explain that? >> Sure. And that's right, that's the context that you built up right there, Dave. If you walk into an enterprise data center, you'll see a plethora of compute systems spread all across, because every application has its unique needs, and hence you find drive-dense systems, memory-dense systems, GPU-dense systems, core-dense systems, and a variety of form factors: 1U, 2U, 4U. And every one of them typically comes with, you know, a variety of adapters and cables and so forth. This creates the siloness of resources: the fabric sprawl, the adapter sprawl, the power and cooling implications, the rack space challenges. And above all, the multiple management planes that they come with, which make it very difficult for IT to have one common center of policy and enforce it all across the firmware, software, and so forth. And then think about upgrade challenges: the siloness makes it even more complex, as these go through upgrade cycles of their own. As a result, we observe quite a few of our customers really seeing a slowness in their agility, and a high burden in the overall cost of ownership. This is where, with the X-Series powered by Intersight, we have one simple goal. 
We want to make sure our customers get out of those complexities, become more agile, and lower their cost of ownership. And we are delivering that by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on a single infrastructure: an infrastructure which removes the siloness of form factors, an infrastructure which reduces the rack footprint that is required, an infrastructure where the power and cooling budgets are lower. Second, we want to simplify by delivering a cloud operating model, where they can create a policy once across compute, network, and storage, and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrades, and any platform evolution that they're going to go through in the next two to three years. So the focus is on just driving down the complexity and lowering their cost of ownership. >> Oh, that's key. Less friction is always a good thing. Now, of course, Vikas, we heard from the HyperFlex guys earlier; they had news. Not to be outdone, you have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave, and we are now introducing three innovations on X-Series with the goal of three things. First, expand the supported workloads on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing support for the GPU node, using a cableless and very unique X Fabric architecture. This is the most elegant design to add GPUs to the compute node in the modular form factor. Thereby our customers can now power any AI/ML workload, or any workload that needs many more GPUs. 
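The "create a policy once, deploy it all across" model Vikas describes can be sketched in a few lines. To be clear, this is not the Intersight API: the policy fields and the `apply_policy` helper below are hypothetical, chosen only to show the desired-state idea of one policy document stamped uniformly onto many endpoints.

```python
# Hedged illustration of a desired-state policy model (hypothetical names).
policy = {
    "firmware": "5.0(1b)",
    "bios": {"numa": True, "perf_profile": "enterprise"},
    "vnic": {"mtu": 9000, "qos_class": "gold"},
}

def apply_policy(nodes, policy):
    """Apply the same desired state to every node; return how many drifted."""
    drifted = sum(1 for node in nodes if node.get("config") != policy)
    for node in nodes:
        node["config"] = dict(policy)  # idempotent: re-applying is a no-op
    return drifted

nodes = [{"name": "blade-1"}, {"name": "blade-2", "config": {"firmware": "4.2"}}]
first_pass = apply_policy(nodes, policy)   # both nodes drifted from policy
second_pass = apply_policy(nodes, policy)  # now in sync, nothing to change
```

The design point is that policy is authored once and enforcement is uniform, which is what removes the per-silo management planes described above.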
Second, we are bringing GPUs right onto the compute node, and thereby our customers can now fire up accelerated VDI workloads, for example. And third, which is what, you know, we are extremely proud of, is that we are innovating again by introducing the fifth generation of our very popular Unified Fabric technology. With the increased bandwidth that it brings, coupled with the local drive capacities and densities that we have on the compute node, our customers can now fire up big data workloads, HCI workloads, SDS workloads: all these workloads that have historically not lived in the modular form factor can now be run there and benefit from the architectural advantages that we have. Second, with the announcement of the fifth generation fabric, we become the only vendor to finally enable 100 Gig end-to-end single-port bandwidth, and there are multiples of those coming in there. And we are working very closely with our CI partners to deliver the benefit of this performance, through our Cisco Validated Designs, to our CI franchise. And third, the innovations in the fifth-gen fabric will again allow our customers to have fewer physical adapters, be it Ethernet adapters, Fibre Channel adapters, or other storage adapters, coupled with a reduction in cables. So we are very, very excited about these three big announcements that we are making in this release. >> Great, a lot there. You guys have been busy, so thank you for that, Vikas. So, Jim, you talked a little bit about the momentum that you have, customers are adopting. What problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction. 
So what Vikas mentioned, I think, is really important, right? Those pieces that we just announced really enhance that story and really move, again, to the next level of taking advantage of some of this problem-solving for our customers. You know, I think Vikas mentioned accelerated VDI; that's a great example. These are where customers need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the market in every single way, right? We have the highest compute config density that we can offer across the very top-end configurations of CPUs, and a lot of room to grow. We have the premier cloud-based management, the hybrid cloud suite, in the industry, right? So check there. We have the flexible GPU accelerators that Vikas just talked about, which we're announcing both on the system and also adding additional ones through the use of the X Fabric, which is really, really critical to this launch as well. And, you know, I think finally the fifth generation of the Fabric Interconnect, Virtual Interface Card, and Intelligent Fabric Module go hand in hand in creating this 100 Gig end-to-end bandwidth story, so that we can move a lot of data. Again, having all this performance is only as good as what we can get in and out of it, right? So giving customers the ability to manage it anywhere, to get the bandwidth that they need, to get accelerators that are flexible and fit exactly their needs: this is huge, right? It solves a lot of the problems we can tick off right away. 
With the infrastructure, as I mentioned, X Fabric is really critical here because it opens a lot of doors. You know, we're talking about GPUs today, but in the future there are other elements that we can disaggregate, like the GPUs, that solve these life cycle misalignment issues. They solve issues around form factor limitations. Like we do for GPUs, we can do that with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing; this is something that customers can really get value out of, day one. And then finally, I think, the future readiness here. You know, we avoid saying "future-proof" because we're kind of embracing the future here. We know that not only are the GPUs going to evolve; the CPUs are going to evolve, the drives and storage modules are going to evolve. All of these things are changing very rapidly, the fabric that stitches them together is critical, and we know that we're just on the edge of some of the developments that are coming with CXL, and with some of the PCI Express changes that are coming in the very near future, so we're ready to go. X, and the X Fabric, is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that they want to buy into something like X-Series that has all the operational benefits, but at the same time they have to have the comfort of knowing that they're protected against being locked out of some technology that's coming in the future, right? We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. 
So we're really excited about the pieces today, and I think it goes a long way towards continuing to tell the customer-benefit story that X-Series brings. And, you know, stay tuned, because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale, and the management piece is key there. We just have time for one more question. Vikas, give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely, Dave. As we talked about, and James also hinted, this is a future-ready architecture. A lot of the focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, with no rip and replace. And there we are looking into enabling the customer's journey as they transition from PCIe Gen 4 to Gen 5 to Gen 6 without rip and replace, as they embrace CXL without rip and replace, as they embrace the newer paradigms of computing through disaggregated memory, disaggregated PCIe, or NVMe-based dense drives, and so forth. We are also looking forward to the X Fabric next generation, which will allow dynamic assignment of GPUs anywhere within the chassis, and much more. So this is again all about focusing on the innovation that will make enterprise data center operations a lot simpler and drive down the TCO, by keeping them covered not only for today but also for the future. So that's where some of the focus is, Dave. >> Okay, thank you guys, we'll leave it there. In a moment I'll have some closing thoughts. (bright upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution, in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general-purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change, don't they? You can count on that. 
The cloud operating model has bled into on-premises locations, and is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality, and it supports much more diverse and data-intensive workloads and alternative compute modes. It's one where flexibility is the watchword, enabling change, attacking complexity, and bringing a management capability that allows for granular management of resources at massive scale. I hope you've enjoyed this special presentation, remember all these videos are available on demand at thecube.net, and if you want to learn more please click on the information link. Thanks for watching Simplifying Hybrid Cloud brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well, and we'll see you next time. (upbeat music)

Published Date : Mar 23 2022

Cisco: Simplifying Hybrid Cloud


 

>> The introduction of the modern public cloud in the mid 2000s permanently changed the way we think about IT. At the heart of it, the cloud operating model attacked one of the biggest problems in enterprise infrastructure, human labor costs. More than half of IT budgets were spent on people, and much of that effort added little or no differentiable value to the business. The automation of provisioning, management, recovery, optimization, and decommissioning of infrastructure resources has gone mainstream as organizations demand a cloud-like model across all their application infrastructure, irrespective of its physical location. This has not only cut cost, but it's also improved quality and reduced human error. Hello everyone, my name is Dave Vellante and welcome to Simplifying Hybrid Cloud, made possible by Cisco. Today, we're going to explore Hybrid Cloud as an operating model for organizations. Now the definition of cloud is expanding. Cloud is no longer an abstract set of remote services, you know, somewhere out in the clouds. No, it's an operating model that spans public cloud, on-premises infrastructure, and it's also moving to edge locations. This trend is happening at massive scale, while at the same time preserving granular control of resources. It's an entirely new game where IT managers must think differently to deal with this complexity. And the environment is constantly changing. The growth and diversity of applications continues. And now, we're living in a world where the workforce is remote. Hybrid work is now a permanent state and will be the dominant model. In fact, a recent survey of CIOs by Enterprise Technology Research, ETR, indicates that organizations expect 36% of their workers will be operating in a hybrid mode, splitting time between remote work and in-office environments. This puts added pressure on the application infrastructure required to support these workers. 
The underlying technology must be more dynamic and adaptable to accommodate constant change. So the challenge for IT managers is ensuring that modern applications can be run with a cloud-like experience that spans on-prem, public cloud, and edge locations. This is the future of IT. Now today, we have three segments where we're going to dig into these issues and trends surrounding Hybrid Cloud. First up is DD Dasgupta, who will set the stage and share with us how Cisco is approaching this challenge. Next, we're going to hear from Manish Agarwal and Darren Williams, who will help us unpack HyperFlex, which is Cisco's hyperconverged infrastructure offering. And finally, our third segment will drill into Unified Compute. More than a decade ago, Cisco pioneered the concept of bringing together compute with networking in a single offering. Cisco, frankly, changed the legacy server market with UCS, Unified Compute System. The X-Series is Cisco's next generation architecture for the coming decade and we'll explore how it fits into the world of Hybrid Cloud, and its role in simplifying the complexity that we just discussed. So, thanks for being here. Let's go. (upbeat music playing) Okay, let's start things off. DD Dasgupta is back on theCUBE to talk about how we're going to simplify Hybrid Cloud complexity. DD welcome, good to see you again. >> Hey Dave, thanks for having me. Good to see you again. >> Yeah, our pleasure. Look, let's start with the big picture. Talk about the trends you're seeing from your customers. >> Well, I think first off, every customer these days is a public cloud customer. They do have their on-premise data centers, but every customer is looking to move workloads and adopt new services, cloud native services, from the public cloud. I think that's one of the big things that we're seeing. While that is happening, we're also seeing a pretty dramatic evolution of the application landscape itself. 
You've got, you know, bare metal applications, you always have virtualized applications, and then most modern applications are containerized, and, you know, managed by Kubernetes. So I think we're seeing a big change in the application landscape as well. And, probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves, it's triggering a change in the IT organizations and the development organizations, and sort of not only how they work within their organizations, but how they work across all of these different organizations. So I think those are some of the big things that I hear about when I talk to customers. >> Well, so it's interesting. I often say Cisco kind of changed the game in server and compute when it developed the original UCS. And you remember there were organizational considerations back then, bringing together the server team and the networking team and of course the storage team as well. And now you mentioned Kubernetes, that is a total game changer with regard to the whole application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context? >> No, I think you're right Dave, back to the origins of UCS, and, you know, why did a networking company build a server? Well, we just enabled it with the best networking technologies so it would do compute better. And now, we're doing something similar on the software side, actually the management software for our hyperconverged, for our, you know, rack servers, for our blade servers. And, you know, we've been on this journey for about four years. The software is called Intersight, and, you know, we started out with Intersight being just the element manager, the management software for Cisco's compute and hyperconverged devices. 
But then we've evolved it over the last few years because we believe that a customer shouldn't have to manage a separate piece of software to manage the underlying hardware, and then a separate tool to connect it to a public cloud, and then a third tool to do optimization, workload optimization or performance optimization, or cost optimization, a fourth tool to now manage, you know, Kubernetes, and not just in one cluster, one cloud, but multi-cluster, multi-cloud. They should not have to have a fifth tool that goes into observability. And I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that manages their infrastructure. But it's also the platform that enables the simplicity of hybrid cloud operations, automation. It's the same platform you can use to manage the Kubernetes infrastructure, Kubernetes clusters, I mean, whether it's on-prem or in a cloud. So, overall that's the strategy. Bring it to a single platform, and platform is a loaded word, we'll get into that a little bit, you know, in this conversation, but that's the overall strategy, simplify. >> Well, you know, you brought up platform. I like to say platform beats products, but you know, there was a day, and you could still point to some examples today in the IT industry where, hey, another tool, we can monetize that. And another one to solve a different problem, we can monetize that. And so, tell me more about how Intersight came about. You obviously sat back, you saw what your customers were going through, you said, "We can do better." So tell us the story there. >> Yeah, absolutely. So, look, it started with, you know, three or four guys getting in a room and saying, "Look, we've had this, you know, management software, UCS Manager, UCS Director." And these are just Cisco's management softwares for our own platforms. And every company has their own flavor. 
We said, we took on this bold goal of, when we rewrite this or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service. Or we're going to create a SaaS offering. Because the infrastructure built by us, whether it's on networking or compute, or the hyperconverged software, how do our customers use it? Well, they use it to write and run their applications, their SaaS services. Every customer, every company today is a software company. They live and die by how their applications work or don't. And so, we were like, "We want to eat our own dog food here," right? We want to deliver this as a SaaS offering. And so that's how it started, we've been on this journey for about four years, tens of thousands of customers. But it was a pretty big, bold ambition 'cause you know, the big change with SaaS, as you're familiar Dave, is the job of now managing this piece of software is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities, and we've learned so much along the way, whether it's around scalability, reliability, working with our own company's security organizations on what can or cannot be in a SaaS service. So again, it's been a wonderful journey, but I wanted to point out, we are in some ways eating our own dog food 'cause we built a SaaS application that helps other companies deliver their SaaS applications. >> So Cisco, I look at Cisco's business model and I compare it, of course, to other companies in the infrastructure business, and you're obviously a very profitable company, you're a large company, you're growing faster than most of the traditional competitors. And so that means that you have more to invest. 
You can afford things like, you know, stock buybacks, and you can invest in R&D. You don't have to make those hard trade-offs that a lot of your competitors have to make, so-- >> You got to have a talk with my boss on the whole investment. >> Yeah, right. It's never enough, right? Never enough. But speaking of R&D and innovations that you're introducing, I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud, the operations there, improve flexibility, and things around Cloud Native initiatives as well? >> Absolutely, absolutely. Well, look, I think one of the fundamentals where we're kind of philosophically different from a lot of others that I see in the industry is, we don't need to build everything ourselves, we don't. I just need to create a damn good platform with really good platform services, whether it's, you know, around searchability, whether it's around logging, whether it's around, you know, access control, multi-tenancy. I need to create a really good platform, and make it open. I do not need to go on a shopping spree to buy 17 and 1/2 companies and then figure out how to stitch it all together. 'Cause it's almost impossible. And if it's impossible for us as a vendor, it's three times more difficult for the customer who then has to consume it. So that was the philosophical difference in how we went about building Intersight. We've created a hardened platform that's always on, okay? And then the magic starts happening. Then you get partners, whether it is, you know, infrastructure partners, like, you know, some of our storage partners like NetApp or Pure, or, you know, others who want their converged infrastructures also to be managed, or other SaaS offerings and software vendors who have now become partners. Like we did not write Terraform, you know, but we partnered with Hashi, and now, you know, Terraform services are available on the Intersight platform. 
We did not write all the algorithms for workload optimization between a public cloud and on-prem. We partnered with a company called Turbonomic, and so that's now an offering on the Intersight platform. So that's where we're philosophically different, in sort of, you know, how we have gone about this. And it actually dovetails well into some of the new things that I want to talk about today that we're announcing on the Intersight platform, where we're actually announcing the ability to attach and be able to manage Kubernetes clusters which are not on-prem. They're actually on AWS, on Azure, soon coming on Google Cloud, on GKE as well. So it really doesn't matter. We're not telling a customer, if you're comfortable building your applications and running Kubernetes clusters, you know, in AWS or Azure, stay there. But in terms of monitoring, managing it, you can use Intersight, and since you're using it on-prem, you can use that same piece of software to manage Kubernetes clusters in a public cloud. Or even manage VMs in an EC2 instance. So. >> Yeah so, the fact that you could, you mentioned Pure Storage, NetApp, so Intersight can manage that infrastructure. I remember the Hashi deal and it caught my attention. I mean, of course a lot of companies want to partner with Cisco 'cause you've got such a strong ecosystem, but I thought that was an interesting move, Turbonomic you mentioned. And now you're saying Kubernetes in the public cloud. So a lot different than it was 10 years ago. So my last question is, how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, and it was kind of a tug of war there. We see these two worlds coming together. How will that evolve over the next few years? >> Well, I think it's the evolution of the model, and I really look at Cloud, you know, 2.0 or 3.0, depending on, you know, how you're counting. 
But I think one thing has become very clear. Again, we've been eating our own dog food, I mean, Intersight is a hybrid cloud SaaS application. So we've learned some of these lessons ourselves. One thing is for sure, the customers are looking for a consistent model, whether it's on the edge, in the colo, public cloud, on-prem data center, it doesn't matter. They're looking for a consistent model for operations, for governance, for upgrades, for reliability. They're looking for a consistent operating model. What (indistinct) tells me is I think there's going to be a rise of more custom clouds. It's still going to be hybrid, so applications will want to reside wherever it makes most sense for them, which is obviously near the data, 'cause you know, data is the most expensive thing. So it's going to be co-located with the data; if the data goes on the edge, it will be on the edge, colo, public cloud, doesn't matter. But you're basically going to see more custom clouds, more industry specific clouds, you know, whether it's for finance, or transportation, or retail, industry specific. I think sovereignty is going to play a huge role. You know, today, if you look at the cloud providers, there's a handful of, you know, American and Chinese companies that leave the rest of the world out when it comes to making, you know, good digital citizens of their people, and you know, whether it's data latency, data gravity, data sovereignty, I think that's going to play a huge role. Sovereignty's going to play a huge role. And the distributed cloud, also called Edge, is going to be the next frontier. And so, that's where we are trying to line up our strategy. And if I had to sum it up in one sentence, it's really, your cloud, your way. Every customer is on a different journey, they will have their choice of, like, workloads, data, you know, upgrade and reliability concerns. That's really what we are trying to enable for our customers. >> You know, I think I agree with you on the custom clouds. 
And I think what you're seeing is, you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers, they're connecting their on-prem to their public cloud. They're doing that across clouds, and they're looking for companies like Cisco to do the hard work, and give me an infrastructure layer that I can build value on top of. 'Cause I'm going to take my financial services business to my cloud model, or my healthcare business. I don't want to mess around with, I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco and your R&D to do that. Do you buy that? >> Absolutely. I think again, it goes back to what I was talking about with platform. You got to give the world a solid, open, flexible platform. And flexible in terms of the technology, flexible in how they want to consume it. Some of our customers are fine with the SaaS, you know, software. But if I talk to, you know, my friends in the federal team, no, that does not work. And so, how they want to consume it, they want to, you know, (indistinct) you know, sovereignty we talked about. So, I think, you know, the job for an infrastructure vendor like ourselves is to give the world an open platform, give them the knobs, give them the right API toolkit. But the last thing I will mention is, you know, there's still a place for innovation in hardware. And I think some of my colleagues are going to get into some of those, you know, details, whether it's on our X-Series, you know, platform or HyperFlex, but it's really, it's going to be software defined, it's a SaaS service, and then, you know, give the world an open, rock-solid platform. >> Got to run on something. All right, thanks DD, always a pleasure to have you on theCUBE, great to see you. >> Thanks for having me. >> You're welcome. 
In a moment, I'll be back to dig into hyperconverged infrastructure, and where HyperFlex fits, and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today. >> It used to be all your infrastructure was managed here. But things got more complex and distributed, and now IT operations need to be managed everywhere. But what if you could manage everywhere from somewhere? One scalable place that brings together your teams, technology, and operations. Both on-prem and in the cloud. One automated place that provides full stack visibility to help you optimize performance and stay ahead of problems. One secure place where everyone can work better, faster, and seamlessly together. That's the Cisco Intersight cloud operations platform. The time saving, cost reducing, risk managing solution for your whole IT environment, now and into the future of this ever-changing world of IT. (upbeat music) >> With me now are Manish Agarwal, senior director of product management for HyperFlex at Cisco, @flash4all, number four, I love that, on Twitter. And Darren Williams, the director of business development and sales for Cisco, @MrHyperFlex on Twitter. Thanks guys. Hey, we're going to talk about some news in HyperFlex, and what role it plays in accelerating the hybrid cloud journey. Gentlemen, welcome to theCUBE, good to see you. >> Thanks a lot Dave. >> Thanks Dave. >> All right Darren, let's start with you. So, for a hybrid cloud, you've got to have an on-prem connection, right? So, you've got to have basically a private cloud. What are your thoughts on that? >> Yeah, we agree. You can't have a hybrid cloud without that on-prem element. And you've got to have a strong foundation in terms of how you set up the whole benefit of the cloud model you're building, in terms of what you want to try and get back from the cloud. You need a strong foundation. Hyperconvergence provides that. 
We see more and more customers requiring a private cloud, and they're building it with hyperconvergence, in particular HyperFlex. Now to make all that work, they need a good, strong cloud operations model to be able to connect both the private and the public. And that's where we look at Intersight. We've got a solution around that to be able to connect that around a SaaS offering. That delivers simplified operations, gives them optimization, and also automation to bring both private and public together in that hybrid world. >> Darren, let's stay with you for a minute. When you talk to your customers, what are they thinking these days when it comes to implementing hyperconverged infrastructure in both the enterprise and at the edge, what are they trying to achieve? >> So there's many things they're trying to achieve, probably the most brutal honesty is they're trying to save money, that's probably the quickest answer. But I think they're trying to look in terms of simplicity, how can they remove layers of components they've had before in their infrastructure? We see obviously the collapsing of storage and storage networking into hyperconvergence. And we've got customers that have saved 80% by doing that collapse into a hyperconverged infrastructure, away from their three-tier infrastructure. Also about scalability, they don't know the end game. So they're looking at how they can size for what they know now, and how they can grow that with hyperconvergence very easily. It's one of the major factors and benefits of hyperconvergence. They also obviously need performance, and consistent performance. They don't want to compromise performance around their virtual machines when they want to run multiple workloads. They need that consistency all the way through. And then probably one of the biggest ones around the simplicity model is the management layer, ease of management. 
To make it easier for their operations, yeah, we've got customers that have told us they've saved 50% of costs in their operations model on deploying HyperFlex. Also around the time savings: they make massive time savings which they can reinvest in their infrastructure and their operations teams in being able to innovate and go forward. And then I think probably one of the biggest pieces we've seen as people move away from three-tier architecture is the deployment elements. And the ease of deployment gets easy with hyperconverged, especially with Edge. Edge is a major key use case for us. And what our customers want to do is get the benefit of a data center at the edge, without, A, the big investment. They don't want to compromise in performance, and they want that simplicity in both management and deployment. And we've seen our analysts' recommendations around what their readers are telling them in terms of how management and deployment are key for our IT operations teams, and how much they're actually saving by deploying Edge and taking the burden away when they deploy hyperconvergence. And as I said, the savings element is the key bit, and again, not always, but obviously there are case studies about public cloud being quite expensive at times, over time, for the wrong workloads. So by bringing them back, people can make savings. And we again have customers that have made 50% savings over three years compared to their public cloud usage. So, I'd say that's the key things that customers are looking for. Yeah. >> Great, thank you for that Darren. Manish, we have some hard news, you've been working a lot on evolving the HyperFlex line. What's the big news that you've just announced? >> Yeah, thanks Dave. So there are several things that we are announcing today. The first one is a new offer called HyperFlex Express. This is, you know, Cisco Intersight-led and Cisco Intersight-managed eight HyperFlex configurations. 
These, we feel, are the fastest path to hybrid cloud. The second is we are expanding our server portfolio by adding support for HX on AMD Rack, UCS AMD Rack. And the third is a new capability that we are introducing, that we are calling local containerized witness. And let me take a minute to explain what this is. This is a pretty nifty capability to optimize for Edge environments. So, you know, this leverages Cisco's ubiquitous presence with the networking, you know, products that we have in environments worldwide. The smallest HyperFlex configuration that we have is a 2-node configuration, which is primarily used in Edge environments. Think of, you know, a backroom in a departmental store or an oil rig, or it might even be a smaller data center somewhere around the globe. For these 2-node configurations, there is always a need for a third entity; the, you know, industry term for that is either a witness or an arbitrator. We had that for HyperFlex as well. And the problem that customers face is where you host this witness. It cannot be on the cluster, because the job of the witness is, when the infrastructure is going down, it basically breaks the tie, sort of arbitrates which node gets to survive. So it needs to be outside of the cluster. But finding infrastructure to actually host this is a problem, especially in the Edge environments where these are resource-constrained environments. So what we've done is we've taken that witness and we've converted it into a container form factor. And then qualified a very large slew of Cisco networking products that we have, right from ISR, ASR, Nexus, Catalyst, industrial routers, even a Raspberry Pi, that can host this witness. Eliminating the need for you to find yet another piece of infrastructure, or do any, you know, care and feeding of that infrastructure. You can host it on something that already exists in the environment. So those are the three things that we are announcing today. 
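The witness Manish describes is the standard tiebreaker for two-node clusters: when the two nodes lose sight of each other, whichever node can still reach the witness keeps serving I/O, and a node that can reach neither its peer nor the witness must stop. Here is a minimal sketch of that arbitration logic; all names are hypothetical and this is illustrative only, not HyperFlex's actual implementation:

```python
# Illustrative two-node cluster arbitration with an external witness.
# Hypothetical names; a sketch of the general technique, not product code.

def arbitrate(a_sees_b: bool, a_sees_witness: bool, b_sees_witness: bool) -> set:
    """Return the set of nodes allowed to keep serving I/O.

    With only two nodes, a network partition is indistinguishable from
    a peer failure, so the witness breaks the tie: a node that can still
    reach the witness may survive; a node that can reach neither its peer
    nor the witness halts, so the cluster can never split its brain.
    """
    if a_sees_b:
        # Nodes see each other: the cluster is intact, no arbitration needed.
        return {"A", "B"}
    survivors = set()
    if a_sees_witness:
        survivors.add("A")
    if b_sees_witness:
        survivors.add("B")
    if len(survivors) == 2:
        # Both reach the witness but not each other (a true split): the
        # witness grants its lock to exactly one requester. Here we pick
        # "A" deterministically; a real witness grants first-come.
        return {"A"}
    return survivors  # one survivor, or an empty set meaning halt everything
```

The key property is that at most one side of a partition ever survives, which is exactly why the witness has to sit outside the cluster, on something like the routers or Raspberry Pi mentioned above.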
>> So I want to ask you about HyperFlex Express. You know, obviously the whole demand and supply chain is out of whack. You know, global supply chain issues are in the news, everybody's dealing with it. Can you expand on that a little bit more? Can HyperFlex Express help customers respond to some of these issues? >> Yeah indeed Dave. You know, the primary motivation for HyperFlex Express was indeed an idea that, you know, one of the folks on my team had, which was to build a set of HyperFlex configurations that would have a shorter lead time. But as we were brainstorming, we were actually able to tag on multiple other things and make sure that, you know, there is something in it for our customers, for sales, as well as our partners. So for example, you know, for our customers, we've been able to dramatically simplify the configuration and the install for HyperFlex Express. These are still HyperFlex configurations, and you would, at the end of it, get a HyperFlex cluster. But the path to that cluster is much, much simplified. Second is that we've added in flexibility where you can now deploy these, these are data center configurations, but you can deploy these with or without fabric interconnects, meaning you can deploy with your existing top of rack. We've also, you know, added an attractive price point for these, and of course, you know, these will have better lead times because we've made sure that, you know, we are using components that we have clear line of sight to from our supply perspective. For partners and sales, this represents a high-velocity sales motion, a faster turnaround time, and a frictionless sales motion for our distributors. This is actually a set of disty-friendly configurations, which they would find very easy to stock, and with a quick turnaround time, this would be very attractive for the distys as well. 
It's interesting Manish, I'm looking at some fresh survey data, more than 70% of the customers that were surveyed, this is the ETR survey again, we mentioned 'em at the top. More than 70% said they had difficulty procuring server hardware, and networking was also a huge problem. So that's encouraging. What about, Manish, AMD? That's new for HyperFlex. What's that going to give customers that they couldn't get before? >> Yeah Dave, so, you know, in the short time that we've had UCS AMD Rack support, we've had several record-breaking benchmark results that we've published. So it's a powerful platform with a lot of performance in it. And HyperFlex, you know, the differentiator that we've had from day one is that it has the industry-leading storage performance. So with this, we are going to get the fastest compute together with the fastest storage. And this, we are hoping, will basically unlock, you know, an unprecedented level of performance and efficiency, but also unlock several new workloads that were previously locked out from the hyperconverged experience. >> Yeah, cool. So Darren, can you give us an idea as to how HyperFlex is doing in the field? >> Sure, absolutely. So, both Manish and I have been involved right from the start, even before it was called HyperFlex, and we've had a great journey. And it's very exciting to see where we are taking it, and where we've been with the technology. So we have over 5,000 customers worldwide, and we're currently growing faster year over year than the market. The majority of our customers are repeat buyers, which is always a good sign in terms of coming back when they've proved the technology and are comfortable with the technology. They're repeat buyers for expanded capacity, putting more workloads on. They're using different use cases on there. And from an Edge perspective, more numbers of sites. So really good endorsement of the technology. 
We get used across all verticals, all segments, to house mission critical applications, as well as the traditional virtual server infrastructures. And we are the lifeblood of those mission critical customers. I think one big example, and I apologize for the worldwide audience, but this resonates with the American audience, is the Super Bowl. So, the SoFi stadium that housed the Super Bowl actually has Cisco HyperFlex running all the management services for the entire stadium, for digital signage, 4K video distribution, and it's completely cashless. So, if that were to break during the Super Bowl, that would've been a big news article. But it ran perfectly. In the design of the solution, we were able to collapse down nearly 200 servers into a few nodes, across a few racks, and have 120 virtual machines running the whole stadium, without missing a heartbeat. And that is mission critical for you to run the Super Bowl, and not be on the front of the press afterwards for the wrong reasons, that's a win for us. So we really are happy with HyperFlex, where it's going, what it's doing, and some of the use cases we're getting involved in, very, very exciting. >> Hey, come on Darren, it's Super Bowl, NFL, that's international now. And-- >> Thing is, I follow NFL. >> The NFL's invading London, of course, I see the picture, the real football over your shoulder. But, last question for Manish. Give us a little roadmap, what's the future hold for HyperFlex? >> Yeah. So, you know, as Darren said, both Darren and I have been involved with HyperFlex since the beginning. But I think the best is yet to come. There are three main pillars for HyperFlex. One is, Intersight is central to our strategy. It provides, you know, a lot of customer benefit from a single pane of glass management. But we are going to take this beyond the lifecycle management and element management for HyperFlex that is integrated into Intersight today. 
We are going to take it beyond that and start delivering customer value on the dimensions of AI Ops, because Intersight really provides us an ideal platform to gather stats from all the clusters across the globe, do AI/ML and do some predictive analysis with that, and return back, you know, customer-valued, actionable insights. So that is one. The second is to expand the HyperFlex portfolio, go beyond UCS to third party server platforms, and newer UCS server platforms as well. But the highlight there is one that I'm really, really excited about, and I think there is a lot of potential in terms of the number of customers we can help, and that is HX on X-Series. X-Series is another thing that we are, you know, announcing a bunch of capabilities on in this particular launch. But HX on X-Series will have that by the end of this calendar year. And that should unlock, with the flexibility of X-Series of hosting a multitude of workloads and the simplicity of HyperFlex. We're hoping that will bring a lot of benefits to new workloads that were locked out previously. And then the last thing is the HyperFlex data platform. This is the heart of the offering today. And, you'll see the HyperFlex data platform itself is a distributed architecture, a unique distributed architecture, primarily where we get our, you know, record-breaking performance from. You'll see it become more scalable, more resilient, and we'll optimize it for, you know, containerized workloads, meaning it'll get container-granular management capabilities, and optimize for public cloud. So those are some things that we, the team, are busy working on, and we should see that come to fruition. I'm hoping that we'll be back at this forum maybe before the end of the year, and talking about some of these newer capabilities. >> That's great. Thank you very much for that, okay guys, we got to leave it there.
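As a concrete illustration of the predictive-analysis idea Manish describes, here is a minimal sketch of the kind of check an AI Ops pipeline might run over pooled cluster telemetry: flag samples that stray from a trailing baseline before they become outages. The metric, window, and threshold are hypothetical illustrations only, not Intersight's actual API or algorithm.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=5, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the trailing window -- a toy stand-in for the predictive
    analysis an AI Ops pipeline might run on cluster telemetry
    (latency, IOPS, capacity growth). Window/threshold are assumptions."""
    flagged = []
    for i in range(window, len(samples)):
        trailing = samples[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady write latency (ms) with one spike a dashboard scan could miss.
latency_ms = [1.1, 1.2, 1.0, 1.1, 1.3, 1.2, 1.1, 9.8, 1.2, 1.1]
print(flag_anomalies(latency_ms))  # → [7], the index of the spike
```

A real pipeline would of course aggregate across thousands of clusters and use richer models, but the shape is the same: baseline, deviation, actionable alert.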
And you know, Manish was talking about the HX on X-Series, that's huge, customers are going to love that, and it's a great transition 'cause in a moment, I'll be back with Vikas Ratna and Jim Leach, and we're going to dig into X-Series. Some real serious engineering went into this platform, and we're going to explore what it all means. You're watching Simplifying Hybrid Cloud on theCUBE, your leader in enterprise tech coverage. >> The power is here, and here, but also here. And definitely here. Anywhere you need the full force and power of your infrastructure hyperconverged. It's like having thousands of data centers wherever you need them, powering applications anywhere they live, but managed from the cloud. So you can automate everything from here. (upbeat music) Cisco HyperFlex goes anywhere. Cisco, the bridge to possible. (upbeat music) >> Welcome back to theCUBE's special presentation, Simplifying Hybrid Cloud, brought to you by Cisco. We're here with Vikas Ratna, who's the director of product management for UCS at Cisco, and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers, and as organizations hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS, helping them deal with these problems? >> Well, you know, first I think that's a, you know, that's a great question. And you know, a customer centric view is the way that we've taken, is kind of the approach we've taken from day one. Right? So I think that if you look at the challenges that we're solving for that our customers are facing, you could break them into just a few kind of broader buckets. The first would definitely be applications, right?
That's the, that's where the rubber meets your proverbial road with the customer. And I would say that, you know, what we're seeing is, the challenges customers are facing within applications come from the way that applications have evolved. So what we're seeing now is more data centric applications, for example. Those require that we, you know, are able to move and process large data sets really in real time. And the other aspect of applications I think to give our customers, you know, some pause, some challenges, would be around the fact that they're changing so quickly. So the application that exists today, or the day that they, you know, make a purchase of infrastructure to be able to support that application, that application is most likely changing so much more rapidly than the infrastructure can keep up with today. So, that creates some challenges around, you know, how do I build the infrastructure? How do I right size it without over provisioning, for example? But also, there's a need for some flexibility around life cycle and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from, like, a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a rack server model for some applications because of these needs that these data centric applications have, and that creates a lot of, you know, opportunity for siloing the infrastructure. And those silos in turn create multiple operating models within, you know, a data center environment that, you know, again, drive a lot of complexity. So that complexity is definitely the enemy here. And then finally, I think life cycles. We're seeing this democratization of processing if you will, right?
So it's no longer just CPU focused, we have GPU, we have FPGA, we have, you know, things that are being done in storage and the fabrics that stitch them together that are all changing rapidly and have very different life cycles. So, when those life cycles don't align, a lot of our customers see a challenge in how they can manage these different life cycles and still make a purchase without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So, that is, you know, kind of the other bucket. And then finally, I think management is huge, right? So management, you know, at its core is really right-sized for our customers and gives them the most value when it meets the mark around scale and scope. You know, back in 2009, we weren't meeting that mark in the industry, and UCS came about and took management outside the chassis, right? We put it at the top of the rack, and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and, you know, having to stitch things together for some of our customers poses a huge challenge. So there are tools for all of those operational pieces that touch the application, that touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Right. >> So our customers, you know, don't really enjoy being in the business of, you know, building their own tools, so that creates a huge challenge. And one where I think that they really crave that full hybrid cloud stack that has that application visibility but also can reach down into the infrastructure. >> Right.
You know Jim, I said in my open that you guys, Cisco, sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade, which is really important 'cause you touched on a lot of things, these data intensive workloads, alternative processors to sort of meet those needs. The whole cloud operating model and hybrid cloud has really changed. So, how's it going with the X-Series? You made a big splash last year, what's the reception been in the field? >> Actually, it's been great. You know, we're finding that customers can absolutely relate to our, you know, UCS X-Series story. I think that, you know, the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us really those problem areas, those areas that we could solve for the customer that actually add, you know, significant value. So, you know, since we brought UCS to market back in 2009, you know, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake. We're seeing it, you know, both in terms of, you know, the number of customers, but also more importantly, the number of workloads that our customers are using, and the types of workloads are growing, right? So we're growing this modular segment that exists, not just, you know, bringing customers onto a new product, but we're actually bringing them into the product in the way that we had envisioned, which is one infrastructure that can run any application and do it seamlessly. So we're really excited to be growing this modular segment. I think the other piece, you know, where we judge ourselves is, you know, sort of not just within Cisco, but also within the industry.
And I think right now is a, you know, a great example, you know, our competitors have taken kind of swings and misses over the past five years at this, at a, you know, kind of the new next architecture. And, we're seeing a tremendous amount of growth, even faster than any of our competitors have seen when they announced something that was new to this space. So, I think that the ground up work that we did is really paying off. And I think that what we're also seeing is it's not really a leapfrog game, as it may have been in the past. X-Series is out in front today, and, you know, we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers and, you know, we're pretty excited that we're seeing the results as well. So, as our competitors hit walls, I think we're, you know, we're executing on the plan that we laid out back in June when we launched X-Series to the world. And, you know, as we continue to do that, we're seeing, you know, again, tremendous uptake from our customers. >> So thank you for that, Jim. So Vikas, I was just on Twitter just today actually, talking about the gravitational pull, you've got the public clouds pulling CXOs one way and, you know, on-prem folks pulling the other way, and hybrid cloud. So, organizations are struggling with a lot of different systems and architectures and ways to do things. And I said that what they're trying to do is abstract all that complexity away, and they need infrastructure to support that. And I think your stated aim is really to try to help with that confusion with the X-Series, right? I mean, so how so? Can you explain that? >> Sure. And, that's right, the context that you built up right there, Dave. If you walk into an enterprise data center, you'll see a plethora of compute systems spread all across.
Because, every application has its unique needs, and, hence you find drive-dense systems, memory-dense systems, GPU-dense systems, core-dense systems, and a variety of form factors, 1U, 2U, 4U, and, every one of them typically comes with, you know, a variety of adapters and cables and so forth. This creates the siloing of resources. The fabric is (indistinct), the adapter is (indistinct). The power and cooling implications. The rack, you know, faces challenges. And, above all, the multiple management planes that they come with, which make it very difficult for IT to have one common central policy, and enforce it all across the firmware and software and so forth. And then the siloing makes the upgrade challenges even more complex, as these systems go through upgrade processes of their own. As a result, we observe quite a few of our customers, you know, really seeing a slowness in that agility, and a high burden in the overall cost of ownership. This is where, with the X-Series powered by Intersight, we have one simple goal. We want to make sure our customers get out of those complexities. They become more agile, and drive lower TCOs. And we are delivering it by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on a single infrastructure. An infrastructure which removes the siloing of form factors. An infrastructure which reduces the rack footprint that is required. An infrastructure where power and cooling budgets are lower. Second, we want to simplify by delivering a cloud operating model, where they can create the policy once across compute, network, and storage, and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrade and any platform evolution that they're going to go through in the next two, three years.
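The "create the policy once and deploy it all across" operating model Vikas describes can be pictured with a small sketch: a single policy document is rendered into per-domain profiles, rather than being re-entered in each silo. Every field name below is a hypothetical illustration for this sketch, not Intersight's actual schema or API.

```python
# Toy model of "define once, deploy everywhere": one policy document
# is applied to every target domain (compute, network, storage)
# instead of being maintained separately per silo.
# All field names are hypothetical illustrations.

POLICY = {
    "name": "prod-baseline",
    "firmware": "5.2(1)",
    "boot_order": ["m2-raid", "pxe"],
    "ntp_servers": ["10.0.0.10", "10.0.0.11"],
}

def deploy(policy, targets):
    """Render one profile per target from a single source policy."""
    return [{"target": t, **policy} for t in targets]

profiles = deploy(POLICY, ["compute", "network", "storage"])
for p in profiles:
    print(p["target"], "->", p["name"], p["firmware"])
```

The design point is that a firmware or NTP change is made in one place and every rendered profile picks it up, which is what drives the drift and upgrade pain out of siloed infrastructure.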
So that's where the focus is, on just driving down the simplicity, lowering down their TCOs. >> Oh, that's key, less friction is always a good thing. Now, of course, Vikas, we heard from the HyperFlex guys earlier, they had news. Not to be outdone, you have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave. And we are now introducing three innovations on X-Series with the goal of three things. First, expand the supported workloads on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing the support for the GPU node using a cableless and very unique X-Fabric architecture. This is the most elegant design to add the GPUs to the compute node in the modular form factor. Thereby, our customers can now power AI/ML workloads, or any workload that needs many more GPUs. Second, we are bringing GPUs right onto the compute node, and thereby our customers can now fire up the accelerated VDI workloads, for example. And third, which is what, you know, we are extremely proud about, is we are innovating again by introducing the fifth generation of our very popular unified fabric technology. With the increased bandwidth that it brings in, coupled with the local drive capacity and densities that we have on the compute node, our customers can now fire up the big data workloads, the FCI workloads, the SDS workloads. All these workloads that have historically not lived in the modular form factor can be run over there and benefit from the architectural benefits that we have.
Second, with the announcement of the fifth generation fabric, we've become the only vendor to now finally enable 100 gig end to end single port bandwidth, and there are multiple of those that are coming in there. And we are working very closely with our CI partners to deliver the benefit of this performance through our Cisco Validated Designs to our CI franchise. And third, the innovations in the fifth gen fabric will again allow our customers to have fewer physical adapters, be it ethernet adapters, be it Fibre Channel adapters, or be it the other storage adapters. Those are reduced down, coupled with the reduction in cables. So very, very excited about these three big announcements that we are making in this month's release. >> Great, a lot there, you guys have been busy, so thank you for that, Vikas. So, Jim, you talked a little bit about the momentum that you have, customers are adopting, what problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction. So what Vikas mentioned I think is really important, right? Those pieces that we just announced really enhance that story and really move, again, to the next level of taking advantage of some of this, you know, problem solving for our customers. You know, if you look at, you know, I think Vikas mentioned accelerated VDI. That's a great example. These are where customers, you know, they need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the market in every single way, right?
We have the highest compute config density that we can offer across, you know, the very top end configurations of CPUs, and a lot of room to grow. We have, you know, the premier cloud based management, you know, hybrid cloud suite in the industry, right? So check there. We have the flexible GPU accelerators that Vikas just talked about, that we're announcing both on the system and also adding additional ones through the use of the X-Fabric, which is really, really critical to this launch as well. And, you know, I think finally, the fifth generation of fabric interconnect and virtual interface card, and intelligent fabric module, go hand in hand in creating this 100 gig end to end bandwidth story, that we can move a lot of data. Again, you know, having all this performance is only as good as what we can get in and out of it, right? So giving customers the ability to manage it anywhere, to be able to get the bandwidth that they need, to be able to get the accelerators that are flexible, that fit exactly their needs, this is huge, right? This solves a lot of the problems we can tick off right away. With the infrastructure, as I mentioned, X-Fabric is really critical here because it opens a lot of doors. You know, we're talking about GPUs today, but in the future, there are other elements that we can disaggregate, like the GPUs, that solve these life cycle mismatch issues. They solve issues around the form factor limitations. As it does for GPUs, we can do that with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing, this is something that customers can really get value out of day one. And then finally, I think the, you know, the future readiness here, you know, we avoid saying future proof because we're kind of embracing the future here.
We know that not only are the GPUs going to evolve, the CPUs are going to evolve, the drives, you know, the storage modules are going to evolve. All of these things are changing very rapidly. The fabric that stitches them together is critical, and we know that we're just on the edge of some of the developments that are coming with CXL, with some of the PCI Express changes that are coming in the very near future, so we're ready to go. And the X-Fabric is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that, you know, they want to buy into something like X-Series that has all the operational benefits, but at the same time, they have to have the comfort in knowing that they're protected against being locked out of some technology that's coming in the future, right? We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. So, you know, we're really excited about the pieces today, and, I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings, and, you know, again, you know, stay tuned because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale, and the management piece is key there. Just have time for one more question, Vikas. Give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely, Dave. As we talked about, and as Jim also hinted, this is a future ready architecture. A lot of the focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, no rip and replace. And, there we are looking into enabling the customer's journey as they transition from PCI generation four, to five, to six without rip and replace, as they embrace CXL without rip and replace.
As they embrace the newer paradigm of computing through disaggregated memory, disaggregated PCIe, or NVMe based dense drives, and so forth. We are also looking forward to the X-Fabric next generation, which will allow dynamic assignment of GPUs anywhere within the chassis, and much more. So this is again, all about focusing on the innovation that will make enterprise data center operations a lot simpler, and drive down the TCO by keeping them covered not only for today, but also for the future. So that's where some of the focus is, Dave. >> Okay. Thank you guys, we'll leave it there. In a moment, I'll have some closing thoughts. (upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution, in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change, don't they? You can count on that. The cloud operating model has bled into on-premises locations and is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality. And it supports much more diverse and data intensive workloads and alternative compute modes. It's one where flexibility is a watchword, enabling change, attacking complexity, and bringing a management capability that allows for granular management of resources at massive scale. I hope you've enjoyed this special presentation. Remember, all these videos are available on demand at thecube.net. And if you want to learn more, please click on the information link. Thanks for watching Simplifying Hybrid Cloud, brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well, and we'll see you next time. (upbeat music)
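As a back-of-the-envelope illustration of why the 100 gig end-to-end bandwidth discussed above matters for data centric workloads: moving a large data set over a 100 Gb/s link versus a 25 Gb/s link is a simple arithmetic exercise. The dataset size and the ~90% usable-line-rate figure below are assumptions for illustration, not measured numbers.

```python
def transfer_seconds(dataset_tb, link_gbps, efficiency=0.9):
    """Time to move a dataset over a link, assuming roughly 90% of
    line rate is usable (an assumption, not a measured figure)."""
    bits = dataset_tb * 8 * 10**12            # TB -> bits (decimal units)
    return bits / (link_gbps * 10**9 * efficiency)

# A hypothetical 10 TB dataset at two common link speeds.
for gbps in (25, 100):
    t = transfer_seconds(10, gbps)
    print(f"{gbps} Gb/s: {t / 60:.1f} minutes")
```

The ratio is linear, so quadrupling port bandwidth cuts bulk data movement time to a quarter, which is the practical payoff of an end-to-end (not just edge) 100 gig path.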

Published Date : Mar 22 2022


Vikas Ratna and James Leach, Cisco


 

>>Mm. >>Welcome back to the Cube. Special presentation. Simplifying Hybrid Cloud Brought to You by Cisco We're here with Vegas Rattana, who's the director of product management for you? CSS Cisco and James Leach, who was director of business development at Cisco. Gents, welcome back to the Cube. Good to see you again. >>Hey, thanks for having us. >>Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers and as organisations that they hit the pavement for their hybrid cloud journeys, one of the most common challenges that they face. What are they telling you? How is Cisco specifically UCS helping them deal with these problems? >>Well, you know, first, I think that's a That's a great question. And, you know, the customer centric view is is the way that we've taken. Um, it's kind of the approach we've taken from Day one, right? So I think that if you look at the challenges that we're solving for their customers are facing, you could break them into just a few kind of broader buckets. The first would definitely be applications, right? That's the That's where the rubber meets your proverbial road. Um, with the customer. And I would say that you know, what we're seeing is the challenges customers are facing within applications come from the way that applications have evolved. So what we're seeing now is more data centric applications. For example, um, those require that we are able to move, um, and process large datasets really in real time. Um, and the other aspect of application, I think, to give our customers kind of some pose some challenges would be around the fact that they're changing so quickly. So the application that exists today or the day that they make a purchase of infrastructure to be able to support that application. That application is most likely changing so much more rapidly than the infrastructure can't keep up with today. 
So, um, that creates some some challenges around. How do I build the infrastructure? How do I write? Size it without over provisioning, for example. But also there's a need for some flexibility around life cycle and planting those purchase cycles based on the life cycle of the different hardware elements and within the infrastructure, which I think is the second bucket of challenges. We see customers who are being forced to move away from the like a modular or blade approach, which offers a lot of operational and consolidation benefits. And they have to move to something like, um, Iraq server model for some applications because of these needs that these data centric applications have. And that creates a lot of opportunity for silo going. The infrastructure and those silos, in turn, create multiple operating models within the A data centre environment that, you know, again drive a lot of complexity. So that complexity is definitely the the enemy here. Um, and then finally, I think life cycles. We're seeing this democratisation of of processing, if you will, right, so it's no longer just CPU focus. We have GPU. We have F p g A. We have things that are being done in storage and the fabrics that stitch them together that are all changing rapidly and have very different life cycles. So when those life cycles don't align for a lot of our customers, they see a challenge in how they can can manage this these different life cycles and still make a purchase without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So that is a kind of the other bucket. And then finally, I think management is huge, right? So management at its core is really right size for for our customers and give them the most value when it when it meets the mark around scale and scope. Um, back in 2000 and nine, we weren't meeting that mark in the industry and UCS came about and took management outside the chassis, right? 
We put at the top of the rack, and that works great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, Right? So we're talking about hybrid cloud world that has to manage across data centres across clouds. And, um, you know, having to stitch things together for some of our customers poses a huge challenge. So there are tools for all of those those operational pieces that that touched the application that touched the infrastructure. But they're not the same tool. They tend to be, um, disparate tools that have to be put together. So our customers, you know, don't really enjoy being in the business of building their own tools. So, um, so that creates a huge challenge. And one where I think that they really crave that full hybrid cloud stack that has that application visibility but also can reach down into the infrastructure. >>Right? You know, Jim, I said in my my Open that you guys, Cisco sort of changed the server game with the original UCS. But the X Series is the next generation, the generation of the next decade, which is really important cause you touched on a lot of things. These data intensive workloads, alternative processors to sort of meet those needs. The whole cloud operating model and hybrid cloud has really changed. So how's it going with the X Series? You made a big splash last year. What's the reception been in the field? >>Actually, it's been great. Um, you know, we're finding that customers can absolutely relate to our UCS X series story. Um, I think that the main reason they relate to it as they helped create it, right, it was their feedback and their partnership that they gave us Really, those problem areas, those, uh, those areas that we could solve for the customer that actually add significant value. So, you know, since we brought you see s to market back in 2000 and nine, we had this unique architectural, um uh, paradigm that we created. 
And I think that created a product which was the fastest in Cisco history. Um, in terms of growth, Um, what we're seeing now is X series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake. We're seeing, uh, both in terms of the number of customers. But also, more importantly, the number of workloads that our customers are using and the types of workloads are growing. Right? So we're growing this modular segment that exists not just, um, you know, bringing customers onto a new product, But we're actually bringing them into the product in the way that we had envisioned, which is one infrastructure that can run any application and do it seamlessly. So we're really excited to be growing this modular segment. Um, I think the other piece, you know that, you know, we judge ourselves is, you know, sort of not just within Cisco, but also within the industry and I think right now is a You know, a great example. Our competitors have taken kind of swings and misses over the past five years at this, um, at a kind of a new next architecture, and we're seeing a tremendous amount of growth even faster than any any of our competitors have seen. When they announced something, um, that was new to this space. So I think that the ground up work that we did is really paying off. Um, and I think that what we're also seeing is it's not really a leapfrog game, Um, as it may have been in the past, Um, X series is out in front today, and we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers, and we're pretty excited that we're seeing the results as well. So as our competitors hit walls, I think we're you know, we're executing on the plan that we laid out back in June when we launched that series to the world. And, uh, you know, as we as we continue to do that, um, we're seeing, you know, again tremendous uptake from our customers. 
>> So thank you for that, Jim. So Vikas, I was on Twitter just today, actually, talking about the gravitational pull. You've got the public clouds pulling CXOs one way, and you've got the on-prem folks pulling the other way, and hybrid cloud. So organizations are struggling with a lot of different systems and architectures and ways to do things. And I said that what they're trying to do is abstract all that complexity away, and they need infrastructure to support that. And I think your stated aim is really to try to help with that confusion with the X-Series, right? So can you explain that? >> Sure. And that's the right context you built up right there, Dave. If you walk into an enterprise data center, you see a platform of compute systems spread all across, because every application has its unique needs. Hence you find drive-heavy systems, memory-heavy systems, compute-heavy systems, GPU-heavy systems, and a variety of form factors. And every one of them typically comes with a variety of adapters and cables and so forth, which just creates silos of resources: the fabric sprawl, the power and cooling implications, the rack space challenges, and above all the multiple management planes they come with, which make it very difficult for IT to have one common set of policies and enforce them all across, across the firmware and software and so forth. And then the upgrade challenges make it even more complex, as these systems go through upgrade cadences of their own. As a result, we observe quite a few of our customers really seeing a slowness in their agility and a high burden in their overall cost of ownership. This is where X-Series, powered by Intersight, comes in. We have one simple goal: we want to make sure our customers get out of those complexities, become more agile, and drive lower TCO. And we are delivering it by doing three things.
Three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on a single infrastructure: an infrastructure which removes the silos of form factor, an infrastructure which reduces the rack footprint that is required, an infrastructure where power and cooling better serve the load. Second, we want to simplify it by delivering a cloud operating model, where they can create the policy once across compute, network, and storage, and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrades and any platform evolution they are going to go through in the next two to three years. So that's where the focus is: driving up the simplicity and driving down the TCO. >> That's key. Less friction is always a good thing. Now, of course, we heard from the HyperFlex guys earlier; they had news. Not to be outdone, you have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave, and we are now introducing three innovations on X-Series with the goal of three things. First, expand the supported workloads on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables. To that end, three new innovations are coming in. First, we are introducing support for the GPU node using a cableless and very unique X-Fabric architecture. This is the most elegant design to add GPUs to the compute node in the modular form factor. Thereby our customers can now power any AI/ML workload, or any workload that needs a larger number of GPUs.
Second, we are bringing GPUs right onto the compute node, and thereby our customers can now fire up accelerated VDI workloads, for example. And third, which is what we are extremely proud about: we are innovating again by introducing the fifth generation of our very popular unified fabric technology. With the increased bandwidth that it brings, coupled with the local drive capacity and density that we have on the compute node, our customers can now fire up the big data workloads, the HCI workloads, all these workloads that have historically not lived in the modular form factor; they can be run there and benefit from the architectural advantages that we have. Also, with the announcement of the fifth-generation fabric, we become the only vendor to enable 100 gig end-to-end single-port bandwidth, and the multiples of those that are coming in there. And we are working very closely with our partners to deliver the benefit of this performance through our Cisco Validated Design franchise. And third, the innovations in the fifth-gen fabric again allow our customers to have fewer physical adapters, be it the Ethernet adapters or the storage adapters; they reduce those down, coupled with a reduction in the cables. So we are very, very excited about these three big announcements that we are making as part of this launch. >> A lot there. You guys have been busy. So thank you for that, Vikas. So, Jim, you talked a little bit about the momentum that you have; customers are adopting. What problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction.
So what Vikas mentioned, I think, is really important, right? Those pieces that we just announced really enhance that story and really move us again to the next level of taking advantage of some of this problem-solving for our customers. I think Vikas mentioned accelerated VDI; that's a great example. These are cases where customers need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series. Right now, we're hitting the mark in every single way. We have the highest compute config density that we can offer across the very top-end configurations of CPUs, and a lot of room to grow. We have the premier cloud-based management, hybrid cloud suite, in the industry, so check there. We have the flexible GPU accelerators that Vikas just talked about, which we're announcing both on the system and also through additional ones via the X-Fabric, which is really, really critical to this launch as well. And finally, the fifth generation of fabric interconnect and virtual interface card, along with an intelligent fabric module, go hand in hand in creating this 100 gig end-to-end bandwidth story, so that we can move a lot of data. Having all this performance is only as good as what we can get in and out of it, right? So giving customers the ability to manage it anywhere, to get the bandwidth that they need, to get accelerators that are flexible and fit exactly their needs: this is huge, right? This solves a lot of the problems we can take off right away with the infrastructure. As I mentioned, X-Fabric is really critical here because it opens a lot of doors.
We're talking about GPUs today, but in the future there are other elements that we can disaggregate, like the GPUs, to solve these lifecycle mismatch issues. It solves issues around the form factor limitations. Just as it does for GPUs, we can do that with storage or memory in the future, so that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing. This is something that customers can really get value out of on day one. And then finally, I think, the future-readiness here. We avoid saying future-proof because we're embracing the future here. We know that not only are the GPUs going to evolve; the CPUs are going to evolve, the drives, the storage modules are going to evolve. All of these things are changing very rapidly. The fabric that stitches them together is critical, and we know that we're just on the edge of some of the developments that are coming with CXL, with some of the PCI Express changes that are coming in the very near future. So we're ready to go, and the X-Fabric is exactly the vehicle that's going to deliver those technologies to our customers. Our customers are out there saying that they want to buy into something like X-Series that has all the operational benefits, but at the same time they have to have the comfort of knowing that they're protected against being locked out of some technology that's coming in the future. We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. So we're really excited about the pieces today, and I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings. And again, stay tuned, because it's going to keep getting better as we go. >> A lot of headroom for scale, and the management piece is key.
We just have time for one more question. Vikas, give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely, Dave. As we talked about, and as James also hinted, this is a future-ready architecture. A lot of the focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt the very disruptive hardware technologies that are coming up. We are looking into enabling the customer journey as they transition from PCIe Gen 4 to Gen 5 to Gen 6 without rip and replace, as they embrace CXL without rip and replace, as they embrace the newer paradigms of computing through disaggregated memory, disaggregated PCIe and NVMe dense drives, and so forth. We're also looking forward to X-Fabric next generation, which will enable the dynamic assignment of GPUs anywhere within the chassis, and much more. So this is again all about focusing on the innovation that will make enterprise data center operations a lot simpler and drive down the TCO, keeping our customers covered not only for today but also for the future. So that's where some of the focus is. >> Okay, thank you guys. We'll leave it there. In a moment, I'll have some closing thoughts. >> Mhm

Published Date : Mar 11 2022


Dell APEX Data Storage Services + Equinix Colo | CUBE Conversation


 

(upbeat music) >> Welcome to this CUBE Conversation. I'm Lisa Martin, pleased to welcome back Caitlin Gordon, vice president of product management at Dell Technologies. Caitlin, it's great to see you again, though virtually. >> It's good to see you as well, Lisa. >> Tony Frank is here as well, global client executive at Equinix. Tony, welcome to the program. >> Thank you, Lisa. Good to be here. >> We're going to be talking about some news. Caitlin, let's go back. You and I, before we started filming, were trying to remember when we last saw each other; of course it was virtual. But just refresh the audience's memory with respect to the catalyst for Dell to go into this as-a-service offering. >> Yeah, I think we're all losing track of the virtual months here. (all laugh) Go back in time a little bit, yeah, exactly right. The first actual APEX offers really came to market in the spring, in May, with our APEX Data Storage Services. And at that time we had actually preannounced what we're going to talk more about here today, our partnership with Equinix. But if we take a step back: why did Dell talk about this as a project, and why is it now really investing for the future? It really connects to a lot of the conversations you guys have here on theCUBE, right? What's happening in IT, what's happening with our customers, is that they're looking for outcomes. Yes, they're predominantly still buying products today, but they're really starting to look for outcomes. They want to be buying those outcomes. They want something that is an operating expense for them, something where we, as the technology and infrastructure experts, can take on the management and the ownership of that equipment and really enable them to focus on their business. So really consumption-based, usage-based infrastructure, all elastic resources that Dell owns and manages but customers can still operate.
And of course, one of the first offers was APEX Data Storage Services. >> Talk to me a little bit, Caitlin, about outcomes. I just want to understand what Dell is actually focusing on for its customers where outcomes are concerned. >> Yeah, and it's interesting; as a company, it's a pretty big transformation for us. We have always been a product-led company, but it's not really about the product. So when I talk about APEX Data Storage Services, you're not going to hear me mention a product name or anything, because what it's about is offering our customers what they're actually looking for. In the case of storage, they're saying: I want either block or file storage, I want a certain tier, whether that's a higher performance tier or not, I want a certain capacity, and I want to commit for some period of time. That's it. Those are the questions we ask. There are no product names and no sizing, and it's really, really simple. And that's what we're talking about: it's really the beginning of delivering customers an outcome versus a product. >> Got it. APEX Data Storage Services: this is Dell's effort to supply managed file and block storage services. Talk to me about that. Talk to me about some of the details; how does it enable that fast time to value, as little as 14 days, for your customers? >> Yeah, so there are a lot of really important things we're doing here. We're not just taking the products we had and packaging them up in a new financial model. There are a lot of parts to this. It all centers around the APEX console. The APEX console is where you start, and where you manage and experience these outcomes from Dell Technologies on an ongoing basis. And it starts with selecting the service you want. So if you select that you want APEX Data Storage Services, you pick your type, you pick your tier, you pick your time period, and you pick your size, right? And then you're off to the races.
And what we're committing to do is deliver that in as little as 14 days time to value. And for us, one of the benefits of being able to do this as Dell is that we have always really thrived in our supply chain and the ability to have that predictability, and being able to deliver things as a service, including storage, is really just an extension of what we've been able to do there. And our partnership with Equinix is actually going to enable us to look at that even further and see what we can do to bring value to our customers as quickly as possible. >> That speed, that time to value, is even more important as we've lived through the last tumultuous 18 months. Let's break into the news now. You guys preannounced the partnership with Equinix, but talk to me about, with respect to APEX Data Storage Services, what's being announced. Caitlin, let's start with you, and then, Tony, we'll bring you into the conversation. >> Yeah, absolutely. So again, we first released APEX Data Storage Services in the spring, and we're already enhancing it today. A couple of exciting things. First, geographic expansion: we're expanding out into additional regions across Europe and Asia. We're also expanding our protocol support. We talked about the fact that it's block and it's file; well, on our file outcome, we now have the ability to support the S3 protocol, so you can do app development and run your operations all off the same platform. So that's an exciting expansion there. We're also enabling partner sell-through. Our partners are really, really important, whether they're resell partners or technology partners like Equinix, so partner sell-through is another important piece.
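The ordering flow Caitlin walks through, pick a type, pick a tier, pick a term, pick a size, is easy to picture as a small validation routine. The sketch below is purely illustrative: the field names, tier names, and term lengths are assumptions made for the example, not Dell's actual APEX console API.

```python
# Hypothetical sketch of the four selections behind an APEX-style storage
# order. Field names, tiers, and terms are illustrative assumptions only.

VALID_TYPES = {"block", "file"}
VALID_TIERS = {"capacity-optimized", "balanced", "performance-optimized"}

def build_storage_order(service_type, tier, base_capacity_tb, term_years):
    """Validate the four selections and return an order payload."""
    if service_type not in VALID_TYPES:
        raise ValueError(f"type must be one of {sorted(VALID_TYPES)}")
    if tier not in VALID_TIERS:
        raise ValueError(f"tier must be one of {sorted(VALID_TIERS)}")
    if base_capacity_tb <= 0 or term_years not in (1, 3):
        raise ValueError("capacity must be positive; term is 1 or 3 years")
    return {
        "service": "apex-data-storage",   # illustrative service name
        "type": service_type,
        "tier": tier,
        "base_capacity_tb": base_capacity_tb,
        "term_years": term_years,
    }

# Example: a block order, performance tier, 50 TB base, 3-year term.
order = build_storage_order("block", "performance-optimized", 50, 3)
```

The point of the sketch is simply that four answers fully describe the outcome; everything else (product selection, sizing, deployment) happens behind the console.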
And of course, most important for our conversation today is the exciting new announcement that we are going to offer APEX Data Storage Services in Equinix facilities, all integrated into the APEX console. The fifth question is now: where do you want your APEX Data Storage Services? You can select a Dell-provided facility, and you get the choice of different cities with Equinix locations. And we're going to provide that single bill and experience through Dell, but on the back end we've worked with Tony and team for months to make this a very streamlined experience for our customers. >> Tony, talk to us about this from Equinix's perspective. >> Yeah, we're very excited. Caitlin, thank you very much, and Lisa, thank you. We're very excited to be part of what Dell is doing with APEX and to enable enterprise customers to have storage as a service delivered to them at Equinix facilities, in addition to other Equinix capabilities, really enabling agile enterprises to distribute their infrastructure across the world, leveraging Dell product and Dell management, and to get access to partners, to their other footprints, to cloud service providers, et cetera, all within the footprint of Equinix. >> So Caitlin, APEX Data Storage Services in secure colo facilities in conjunction with Equinix: talk to me about what the reception has been from Dell customers. >> Yeah, it's been really fun. I mean, first of all, when we thought about it, data center providers are a critical part of us being able to deliver that outcome to customers. And when we looked at the ecosystem of partners, it was very clear who we were going to be partnering with. Equinix was really the best partner for us.
We had already been working together in many different ways, and now this is taking the partnership to the next level. And what we've already seen, all the way since earlier this year, is many, many customers coming to us, at first separately, but now actually jointly, to say: I'm having a challenge, and here's my challenge. And most of these conversations start the same way: I'm getting out of the data center business. And the nice thing for us is that between our two companies, we can solve that, right? We have the combination of the right infrastructure, and with our partnership with Equinix, you pair that with the data center services and you can actually give that full outcome to a customer. We were solving those separately, and now we're solving them together. >> These folks wanting to get out of the data center: if we think about the last year and a half, how inaccessible the data centers were. Tony, I want to get your perspective on the colo market. As we look at IT today, the acceleration of digital and cloud adoption, and the move out of the data centers that we've seen in the last 18 months, help me understand why the colo market is really key today for the future of IT. >> Absolutely, Lisa. So focusing on outcomes, as Caitlin outlined earlier, is a really important part of how IT has managed this pandemic: thinking about how do we solve for this vast, distributed set of employees that we used to have aggregated in a single building or multiple buildings, but really headquartered in a couple of locations.
And all of a sudden, everything moved out: out into rural America, out into rural Europe, out everywhere. Employees were spread out, and IT teams needed a way to bring together the network, the security, and the ability to be very agile and focus on an outcome, as opposed to: how am I going to get this next piece of equipment, this next storage device, this next compute system into my data center, and add the cooling and the power and all the things they have to think about. Really, it was about an outcome: how do I give my employees the best experience possible, and my partners the access they need to my systems and the various ways that we interact together? So the colo market as a whole has changed dramatically through the whole pandemic. And if you didn't know Zoom two years ago, it's your best friend now, or it's your least favorite way to do business, but it's the only way we have to do business in the world that we're living in today. >> A lifeline, and here we are, Zooming with each other right now. Let's talk about, Tony, I want to stick with you, let's talk about this partnership between Dell and Equinix. Why is this such a compelling partnership? Talk to me about that from Equinix's perspective. >> Yeah, we're so excited to be able to partner with the number one leader and provider of infrastructure and infrastructure services. We have really been a niche provider for the last 15 years. We're a 21-, 22-year-old company, and we focused on developing ecosystems. At first, that was the internet: we brought the telecom providers together to make the internet work. And then on top of that, we started enabling things like digital trading, and also all sorts of ad exchanges, so that you see the banner ads that apply to you when you go to a website. And so we were well known within the ecosystems that we worked in, but getting out to the enterprise has been a big challenge. And Dell brings us those relationships.
They bring that expertise, that trusted advisor kind of role. And so being able to extend our sales team and really leverage what Dell has done across small, medium, large, and very large enterprises is a real win for us. It allows us to achieve a scale that we wouldn't have been able to achieve by ourselves without breaking the bank trying to hire people and get them familiar with those customers. Dell brings us into that. We're able to complete what I call the three-legged stool: the compute, the storage, and now the networking aspects can be dealt with in a single conversation around an outcome. And APEX gives us a chance to be agilely available as Dell's customers define that for themselves, to deploy the infrastructure where they need it, and to achieve the outcomes that they're trying to get to. >> So some ostensible value that Equinix is getting from the Dell partnership; as Tony said, pulling them into the enterprise, facilitating that scale. Caitlin, talk to me about this from Dell's lens. What makes this partnership so compelling for Dell and the future of IT as a service? >> I'm laughing as Tony's talking through that, because it tees it up perfectly. From Dell's perspective, when we looked at data center providers, one of the considerations for us is that we're a global IT provider. So we had to partner with someone who understood what it meant to operate and manage data centers at a global scale, in locations all over the world. There was a very short list to choose from once you look at it from that lens. But more importantly, and Tony, you already hit on this, the networking, the interconnects that we have in our partnership with Equinix are incredibly valuable.
'Cause ultimately, although customers start going to a colo facility because they want out of the data center business, they don't want to be managing racks and power and cooling and all of that, oftentimes the value they find once they get there, and why they stay and grow, is those interconnects: the ability to connect to other tenants in these facilities and the ability to connect into the hyperscalers. And the richness of those interconnects with Equinix was truly unmatched. That's why it's been such an important partnership for us. >> Tony, what's been some of the feedback from the Equinix customer base? >> Well, it's really funny. I spend half of my time trying to figure out with my team how we're going to solve for storage as a service in the next geography, with the next product. But the other half of the time is spent on who on the team is the right person to pair up with the Dell team and get the Dell team brought into a discussion. And it's going bidirectionally right now; the volume is picking up, the velocity is picking up, and it really seems to be like a snowball going down the hill, just picking up speed. With every interaction we're gaining trust with each other, we're gaining confidence in what the message is and how to solve for it. And we're working out, in a predictive way, what are most people asking for? But the wonderful thing is there's custom availability to figure out a solution for just about any problem that the IT or infrastructure-focused teams in the enterprise are looking to solve for. >> Tony, sticking with you for a final question or two: in terms of the last few months, have you seen any industries in particular that are readily adopting this? We've seen so much change across industries in the last 18 months. I'm just curious if you're seeing any industries that are particularly taking advantage of this capability and this partnership.
>> Yeah, I would point to highly regulated industries: think about financials, think about governments. And it's not just a U.S. situation; this is a global situation, and where data sovereignty matters to a particular customer, it's really important that they keep that data in the geography it needs to stay in, as defined by the different governments around the world. You see the financial industry has been a first mover towards electronic trading, and thankfully, prior to the pandemic, it really disrupted the way trading was done, because in-person trading wasn't going to happen anymore. And so in the highly regulated world, the healthcare, the financials, those folks are definitely looking for a solution that has certifications across the board to help them say to their auditors: we've got this covered. That's something we're able to bring to the table for Dell. And then it also helps that the first movers towards digital infrastructure were insurance companies and others that saw the value of leveraging partnerships and bringing things together as quickly as they could, without deploying huge global networks to try to make it all happen. They can instead virtually meet in the same room, leveraging our software-defined network called Equinix Fabric. It's been a real win for the regulated industries, certainly. >> Got it, thanks for that, Tony. Caitlin, last question for you. This is Dell-managed, so a single bill from Dell. Where can the viewers go to learn more about this new partnership? >> Delltechnologies.com/apex. You'll learn more about all things APEX there: the APEX console, the experience. And then, of course, your friendly neighborhood Dell EMC rep and/or channel partner, now that we've got that partner enablement as well. >> Delltechnologies.com/apex.
Caitlin and Tony, thank you so much for joining us today, sharing the exciting news about what's new with Dell and Equinix, and what's in it for your customers and your partners. We appreciate your time. >> Thanks, Lisa. >> Thank you, Lisa. >> For Caitlin Gordon and Tony Frank, I'm Lisa Martin. You've been watching theCUBE conversation. (upbeat music)

Published Date : Jan 19 2022


Simon McCormack, Aruba | Aruba & Pensando Announce New Innovations


 

(fast-paced upbeat music) >> Welcome back to theCUBE's coverage of the power of N and the collaboration between HPE Aruba and Pensando, where the two companies are setting out to create a new category in network switching. Joining me now is Simon McCormack, who looks after product management at HPE Aruba. Welcome, Simon. Good to see you. >> Good morning. Thanks for having me today. >> You're very welcome. So Simon, we've been talking all day about the Aruba switching fabric that you're bringing to market, embedding the Pensando technology. Can you tell us what's the primary value prop that AFC brings to its customers? >> Sure. Aruba Fabric Composer: this is orchestration and management for the Aruba switching platform, primarily for data centers. It does a lot of things; I'll give you three key ones just to get a feel for it. So in data center networking, there are a lot of complex technologies, I'm afraid to say: leaf-spines, overlays, underlays, EVPN, OSPF, BGP. I can throw out loads of acronyms for you. Fabric Composer can really simplify, through a bunch of intent-based workflows, the deployment and management of these fabrics. We can do it either interactively through a UI or fully API-driven if you want to. So it really takes away a lot of the complexity there and makes it dead easy to deploy these, and at scale. Number two, in a data center there are a lot of compute, storage, and hypervisor technologies that have to interact with the network products. So in Fabric Composer, we built an integration layer that interacts with other orchestrators; VMware vCenter is a good example of that. An operator may make changes in vCenter that affect the network, and you don't want to have to call the network team for it. Fabric Composer can automate that network-side configuration on the Aruba switch, making your day-two operations and insertion of new services much simpler.
And then finally, number three, because we've got all these capabilities I've just told you about, we actually have a great topology model that we build from it. And we can use that to visualize this virtual-to-physical network layer, which is really powerful for troubleshooting the environment. >> Great. So three things, actually four, right? To simplify, integrate and automate, and that's kind of two in one, I'm going to call it, and then the visualization piece for troubleshooting. Awesome. What about security policy? How are you thinking about that in this release? >> Yeah, so that's where in this release, we're extending it with the Pensando PSM technologies embedded into the 10K. Now we can use Aruba Fabric Composer to actually orchestrate the policy in addition to the network. So you think about today, Fabric Composer does network primarily. You bring policy into it, and you've got one single pane of glass now that does network and policy. It actually provides really powerful capabilities for operators of different skill sets to be able to manage and orchestrate this environment. >> What about the sort of operational model as it pertains to the network and security? I'm interested in how flexible that is. For instance, if a customer wants to use their own tooling or operational frameworks, what if they want to leverage multi-vendor fabrics like a third-party spine? How do you deal with all of that? >> Yeah, and I think that's, we built that into essentially the DNA of this technology, is that we're expecting to often go into brownfield environments where they've already got best practices for security and networking. They've already got networking vendors there. The 10K is a very powerful leaf switch on its own. We want those leaf switches to go in all of these different environments, not just greenfield. It's really great for greenfield. And I'm going to explain this a little bit in a few ways.
First of all, with the technology we have with Aruba Fabric Composer and Pensando PSM, you can do a pure operational split between them, SecOps and NetOps. A lot of customers, that's how they deal with it. They've got the security operations team and the network operations team. If they're split, you can use the two tools and make a fantastic product using that. However, if they're not split, and you've got a single policy for it, you can use Aruba Fabric Composer to do both of them. So you've got the options there, and we fully embrace that in the architecture of what we built. This extends to multiple layers of the technology build as well. Again, as I said, the 10K is a leaf switch; it can connect to third-party spines. So you could use Fabric Composer to manage the leaf switch and the policy, or use Fabric Composer just to manage the leaf switch and interoperate the leaf to the spine, or you can do a full Aruba solution, the full Aruba spine, and use that operating model. There's one final thing in this area: Fabric Composer is a UI-based orchestrator that's API driven. Some customers love it. Some customers love their CLIs. We fully embrace the operational model where customers still use their own APIs and their own CLIs. So the customer may be using Ansible to automate through the API. They can still use that directly to the switch, and they can use AFC, and mix the two. If you talk directly to a switch and change it, Fabric Composer detects it and basically syncs its configuration together. So we can insert all or any part of this solution into existing or new networks. >> Yeah, that's nice. Right? Because I mean, so there's the network hard guys, right, they want that CLI access. So you're accommodating that. And then as well, being able to bring those SecOps view and the NetOps view together is important because, let's face it,
A lot of organizations, especially some of the smaller ones, they don't actually have a full-blown SecOps team. That's really the NetOps responsibility. And so that's nice flexibility, you can handle both worlds. How about segmentation? What are customers telling you that they want regarding segmentation, and how are you guys approaching that? >> Yeah, I mean, it's actually a key feature of what we're doing in this area. Now, segmentation in general is kind of a wide area with many layers to it, and we could talk about it for hours. So let me talk briefly about some of the areas we're going into when it comes to segmentation, particularly for a compute and virtualized type of environment. So when you're typically creating policies in today's world, current policies are based on addresses, IP addresses or MAC addresses. You have lots of rules and big lists of addresses. It's really annoying. Customers generally don't talk in addresses. They talk in machines and names of machines. So if you think about what I've already told you with Fabric Composer, we've already got these hooks into the compute hypervisor layer, so we already know about the virtual machines. It's obviously a natural extension now for you to be able to create these policies based on the machines. So there's a scale problem in policy distribution at two levels, at the top and the bottom. The top level is when you're creating the policy: you've got this massive distribution of addresses. So Fabric Composer can really help you by allowing you to create these groups, sensible groups, using the names, and then you can distribute them. The 10K solution, with the distributed architecture of the bottom layer, now allows us to distribute these policies and rules across your racks within your data center. So it scales really well, but that's one level I've described.
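To make Simon's top-level scale point concrete: operators define groups of machine names once, and the orchestrator expands those group-level rules into the address-level rules that actually get distributed to the switches. A rough illustrative sketch of that expansion, with hypothetical names and structures rather than the actual Fabric Composer data model:

```python
# Illustrative sketch only: policy is defined on named groups of machines,
# then expanded into the address-level rules that get pushed to switches.
# Group, inventory, and rule shapes are hypothetical, not the real AFC model.

def expand_policy(groups: dict, inventory: dict, rules: list) -> list:
    """Turn group-level rules into (src_ip, dst_ip, action) tuples."""
    expanded = []
    for rule in rules:
        for src in groups[rule["src"]]:
            for dst in groups[rule["dst"]]:
                expanded.append((inventory[src], inventory[dst], rule["action"]))
    return expanded

# Operators talk in machine names, not addresses:
groups = {"web": ["web-1", "web-2"], "db": ["db-1"]}
inventory = {"web-1": "10.0.0.11", "web-2": "10.0.0.12", "db-1": "10.0.1.21"}
rules = [{"src": "web", "dst": "db", "action": "allow"}]

flat = expand_policy(groups, inventory, rules)
```

One group-level rule becomes one expanded rule per machine pair, which is the distribution problem the 10K's bottom layer then handles across racks.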
You know, you're creating groups of machines with names, so it's easier to define, but there's an automation angle to this as well. You might not want to even create it interactively. Now a lot of customers with VMware vCenter, for example, are tagging the virtual machines. So the tag gives you group information. Again, Fabric Composer can already get the tag within its database model. So we can use the tag now either to fully automate or to use as a hint for creating these groups. So now I've got a really simple way to basically just categorize my machines into the groups so that now I can push rules down onto them. And there's one final thing that I just want to tell you before we move on. There's often a zero-trust model you want to do in the data center for segmentation. Meaning, I've got two virtual machines on the same network on the same host. Normally they can talk to each other, nothing's stopping them, but sometimes you want to isolate even those two. You can do it in products like vCenter with PVLAN technologies. It's a bit cumbersome to configure on the vSphere side, and you've got to match it with what you see on the switch side. It's one of those that's a real headache, unless you've got an orchestrator to do it. So Fabric Composer can basically orchestrate this isolation solution. You're now grouping your machines and you're saying they're isolated. We can do the smarts on both the vCenter side and the switch side, get them in sync, get it all configured. And now the masses can start to do this kind of segmentation at scale. >> Got it. Thank you, Simon. Can Fabric Composer kind of be used as the primary prism for troubleshooting? How do you handle troubleshooting in this combined architecture? Who do I call when there's a problem? How do you approach that? >> Well, definitely start by calling me, or actually call my product first, Fabric Composer.
If you're using it, use that as the front tool for trying to figure out what's going on. There is a global health dashboard. It encompasses networking and security policy across the solution, across the fabric. So that tells you what's going on immediately: down to port stats on what's happening within the physical topology of the network, down to the end-to-end view we have in terms of policy connectivity between machines. So Fabric Composer is your first port of call, but we built a solution here where we don't want to hide the pieces underneath it. Any networking guy knows when they're deep troubleshooting networking stuff, they're going to end up at the switch. So you start at the orchestrator, but sometimes in the deep troubleshooting, not day-to-day hopefully, you'll go to the switch and you'll troubleshoot that way. We've got the same technology here with the policy, with the firewall rules, with Pensando PSM. We still fully embrace, for deep troubleshooting, going to Pensando PSM. They have really advanced tools in their bag of tricks in the product to give you advanced troubleshooting down to the policy layer. They have a really powerful firewall log capability, where you can search and sort and see exactly what rule is allowing or stopping any traffic going through the environment. And the two-orchestrator model, we really like it, 'cause it scales really well. It allows Fabric Composer to remain lightweight, with PSM focused on the policy orchestration bit. But again, if you're that customer that wants to do single pane of glass, use Fabric Composer for the standard day-to-day stuff. But you've got the tools there to do the advanced troubleshooting between the different elements that we have within the Pensando and the Aruba tools. >> Yeah, really well thought out. You got the simplification angle nailed, the integration automation we talked about that, the visualization and the topology map, zero trust.
And then remediation with deepened inspection. Simon, thanks so much for taking us through the announcements. Really appreciate your insights and time today. >> Thank you very much. >> You're welcome. Okay. Keep it right there, this is Dave Vellante for theCube. More content from the HPE Aruba Pensando announcements coming right up. (soothing music)
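One detail from the conversation above is worth making concrete: Simon's point that if someone changes a switch directly over the CLI, Fabric Composer detects it and syncs its own configuration model rather than fighting the out-of-band change. A minimal simulation of that reconciliation idea follows; the class and field names are hypothetical and not AFC's actual API.

```python
# Hypothetical sketch of orchestrator config-drift reconciliation:
# the orchestrator keeps a desired-state model, compares it against the
# switch's running config, and absorbs out-of-band CLI changes into its
# model instead of overwriting the switch.

def detect_drift(desired: dict, running: dict) -> dict:
    """Return the keys where the switch's running config differs."""
    return {k: running[k] for k in running if desired.get(k) != running[k]}

class Orchestrator:
    def __init__(self, desired_config: dict):
        self.desired = dict(desired_config)

    def reconcile(self, running_config: dict) -> dict:
        """Sync the orchestrator's model with direct switch changes."""
        drift = detect_drift(self.desired, running_config)
        self.desired.update(drift)  # sync, rather than revert the switch
        return drift

# A CLI user changes the MTU directly on the switch:
afc = Orchestrator({"vlan": 10, "mtu": 1500})
changed = afc.reconcile({"vlan": 10, "mtu": 9100})
```

The design choice being illustrated is that the orchestrator treats the switch's running config as a peer source of truth, which is what lets customers keep using their own CLIs and Ansible workflows alongside AFC.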

Published Date : Oct 20 2021



Caitlin Gordon and Tony Frank | Dell Technologies & Equinix


 

>> Welcome to this "Cube" conversation. I'm Lisa Martin. Pleased to welcome back Caitlin Gordon, Vice President of Product Management at Dell Technologies. Caitlin, it's great to see you again, though virtually. >> Yes, it's good to see you as well, Lisa. >> Tony Frank is here as well, global client executive at Equinix. Tony, welcome to the program. >> Thank you, Lisa. Good to be here. >> We're going to be talking about some news. Caitlin, let's go back. You and I, before we started filming, we were trying to remember when we last saw each other. Of course it was virtual. Project APEX was announced in October 2020, just about a year ago, and released in May. But just refresh the audience's memories with respect to the catalyst for Dell to go into this as-a-service offering. >> Yeah, I think we're all losing track of the virtual months here, (all laugh) so go back in time a little bit. Yeah, exactly right. So in the fall of last year, we announced Project APEX. The first actual APEX offers really came to market in the spring, in May, with our APEX Data Storage Services. And at that time we actually had pre-announced what we're going to talk more about here today with our partnership with Equinix. But if we take a step back, you know, why did Dell talk about this as a project and is now really investing for the future? It really connects to a lot of the conversations you guys have here in "theCube", right? What's happening in IT? What's happening with our customers? Is that they're looking for outcomes. Yes, they're predominantly still buying products today, but they're really starting to look for outcomes. They want to be buying those outcomes. They want to have something that is an operating expense for them. Something that we can take, we as the technology, the infrastructure experts, can take on the management, can take on the ownership of that equipment, and really enable them to focus on their business.
So really consumption-based, usage-based infrastructure, all being elastic resources that Dell owns and manages, but customers can still operate. And of course, one of the first offers was APEX Data Storage Services, which we're extending here this fall. >> Talk to me a little bit, Caitlin, about outcomes. I just want to understand what Dell actually is focusing on for its customers where outcomes are concerned. >> Yeah. And it's interesting, as a company, it's a pretty big transformation for us. We have always been a product-led company, but it's not really about a product. So when I talk about APEX Data Storage Services, you're not going to hear me mention a product name or anything. Because what it's about, it's about offering our customers what they're actually looking for. Which in the case of storage, they're all looking for: I want either block or file storage. I want a certain tier, so it is at a higher performance. I want a certain capacity of it, and I want to commit for some period of time. That's it. Those are the questions we ask. There's no product names and sizing, and it's really, really simple. And that's what we're talking about. It's really the beginning of really trying to deliver customers an outcome versus a product. >> Got it. APEX Data Storage Services. This is Dell's effort to supply managed file and block Storage as a Service. Talk to me about that. Talk to me about some of the things, how does it enable the fast time to value, as little as 14 days, for your customers? >> Yeah, so there's a lot of really important things we're doing here. We're not just taking the products we had and kind of packaging it up in a new financial model. There's a lot of parts to this. It all centers around the APEX console. So the APEX console is where you start, and really where you manage and experience these outcomes from Dell Technologies on an ongoing basis. And it starts with selecting the service you want.
So if you select that you want APEX Data Storage Services, you pick your type, you pick your tier, you pick your time period, and you pick your size, right? And then you're off to the races. And what we're committing to do is delivering that in as little as 14 days, time to value. And for us, you know, one of the benefits of being able to do this as Dell: we have always really thrived in our supply chain and the ability to have that predictability, and being able to deliver things as a service, including storage, is really something that's just an extension of what we've been able to do there. And our partnership with Equinix actually is going to enable us to even look at that further and see what we can do to really bring value to our customers as quickly as possible. >> That speed, that time to value, is even more important as we've lived through the last tumultuous 18 months. Let's break into the news now. You guys pre-announced the partnership with Equinix, but talk to me about, with respect to APEX Data Storage Services, what's being announced? Caitlin, we'll start with you, and then Tony, we'll bring you into the conversation. >> Yeah, absolutely. So again, we first released APEX Data Storage Services in the spring, and we're already enhancing that today. A couple of exciting things. So geographic expansion, expanding out into additional regions across Europe and Asia, where we're expanding our support. So we talked about the fact that it's block and it's file. Well, actually on our file capability here, on our file outcome, we now will have the ability to support an S3 protocol. So you can do that app development and run your operations all off the same platform. So that's an exciting new expansion there. We're also enabling partner sell-through. Our partners are really, really important, whether they're resell partners or technology partners like Equinix. So partner sell-through is another important piece.
And of course the most important for our conversation today is the exciting new announcement of the fact that we are going to offer APEX Data Storage Services available in Equinix facilities, all integrated into the APEX console. The fifth question is now: where do you want your APEX Data Storage Services? You can select a Dell-provided facility, or you get the choice to select from different cities of Equinix locations. And we're going to provide that single bill and experience through Dell, but on the backend, we've worked with Tony and team for months to get this to be a very streamlined experience for our customers.
And what we've already seen actually, all the way since earlier this year, we've had many, many customers coming to us, at first it was separately, but now it's actually jointly to say, I'm having a challenge and here's my challenge. And most of these conversations start in one way. I'm getting out of the data center business. And the nice thing for us is that between our two companies, we can solve that. Right, we have the combination of the right infrastructure, and with our partnership with Equinix, you partner that with the data center services, you can actually give that full outcome to a customer. And we were solving those separately, and now we're solving those together. >> Those folks wanting to get out of the data center, if we think about in the last year and a half, how inaccessible the data centers were, Tony, I want to get your perspective on the colo market, and as we look at IT today, the acceleration of it and digital and cloud adoption and getting out of the data center that we've seen in the last 18 months. Help me understand why the colo market is really key today for the future of IT. >> Absolutely Lisa. So, you know, focusing on outcomes as Caitlin outlined earlier, is a really important part of, really how IT has managed this pandemic and thinking about how do we solve for this vast distributed set of employees that we used to have aggregated in a single building or multiple buildings, but really spearheaded in a couple locations. And all of a sudden everything became, you know, out in rural America, out in rural Europe, out everywhere, employees were spread out and they needed a way as an IT team, to bring together the network, the security and the ability to be very agile and focus on an outcome as opposed to, how am I going to get this next piece of equipment, this next storage device, this next compute system in my data center and add the cooling and the power and all the things that they have to think about. And really it was an outcome. 
How do I give my employees the best experience possible? My partners, that access they need to my systems and the various ways that we interact together. So the colo market as a whole has been really changed dramatically through the whole pandemic. And if you didn't know Zoom two years ago, it's your best friend now, or it's your, you know, least favorite way to do business, but the only way we have to do business in the world that we're living in today. >> A lifeline, and here we are Zooming with each other right now. (Caitlin and Tony laugh) Tony, I want to stick with you. Let's talk about this partnership between Dell and Equinix. Why is this such a compelling partnership? Talk to me about that from Equinix's perspective. >> Yeah. We're so excited to be able to be partnered with the number one leader and provider of infrastructure and infrastructure services. We have really been a niche provider for the last 15 years. We're a 21, 22 year old company, and we focused on developing ecosystems and those were at first the internet. We brought the telecom providers together to make the internet work. And then on top of that started enabling things like digital trading. Also enabling all sorts of ad exchanges so that you see the banner ads that apply to you when you go to a website. And so we were well known within those ecosystems that we worked within, but getting out to the enterprise has been a big challenge. And Dell brings us those relationships. They bring that expertise, that trusted advisor kind of role. And so being able to extend our sales team and really leverage what Dell has done across small, medium, large and very large enterprise is a real win for us. And it allows us to achieve a scale that we wouldn't have been able to achieve by ourselves without breaking the bank trying to hire people, and trying to get them familiar with those customers. And so Dell brings us into that. We're able to complete what I call the three legged stool. 
The compute, the storage, and now the networking aspects can be dealt with in a single conversation around an outcome. And APEX gives us a chance to really be agilely available as Dell's customers define that for themselves and to deploy the infrastructure where they need it and to achieve those outcomes that they're trying to get to. >> So it's an ostensible value that Equinix is getting by the Dell partnership. You said, pulling us into the enterprise, facilitating that scale. Caitlin, talk to me about this from Dell's lens. What makes this partnership so compelling for Dell and the future of it as a service? >> I'm laughing as Tony's talking through that because it tees it up perfectly. From Dell's perspective, when we looked at data center providers, one of the challenges for us is we're a global IT provider. So we had to partner with someone who understood what it meant to operate and manage data centers at a global scale and locations all over the world. There's a very short list to choose from once you look at it from that lens, but more importantly, and what Tony you already hit on, the networking. The interconnects that we have in our partnership with Equinix are incredibly valuable. Cause ultimately, although customers start going to a colo facility because they want out of data center business, they don't want to be managing racks and power and cooling and all of that. Oftentimes actually the value they find once they get there and why they stay and grow is those interconnects. The ability to connect to other tenants in these facilities and the ability to connect into the hyper-scalers. And the richness of those interconnects with Equinix was truly unmatched, and that's why it's been such an important partnership for us. >> Tony, what's been some feedback from the Equinix customer base? >> Well, it's really funny. I spent half of my time trying to figure out with my team, how we're going to solve for Storage as a Service. 
The next geography, the next product. But the other half of the time is spent, who on the team is the right person to go pair up with the Dell team and get the Dell team brought into a discussion. And it's going bi-directionally right now. The volume is picking up. The velocity is picking up and it really seems to be like that snowball just going down the hill. It's just picking up speed and with every interaction we're gaining trust with each other, we're gaining competence in what the message is and how to solve for it. And we're working out the various ways, you know, in a predictive way, what are most people asking for? But the wonderful thing is, there's custom availability to figure out a solution for just about any problem that the IT or infrastructure focused teams in the enterprise are looking to solve for. >> Tony, sticking with you for a final question or two, in terms of the last, you know, few months, have you seen any industries in particular that are really readily adopting this? We've seen so much change across industries in the last 18 months. I'm just curious if you're seeing any industries that are particularly taking advantage of this capability and this partnership. >> Yeah. I would point to highly regulated industries. Thinking about financial, thinking about governments, and it's not just a US situation. This is a global situation and data sovereignty where that matters to a particular customer, is really important that they keep that data in the geography that it needs to stay in. It's defined by the different governments around the world. You know, you see, the financial industry has been a first mover towards electronic trading and really disrupted, thankfully, prior to the pandemic, the way trading was done. Because in-person trading wasn't going to happen anymore. And so in the highly regulated world, that healthcares, the financials. 
Those folks are definitely looking for a solution that has certifications across the board to help them say to their auditors, we've got this covered. That's something we were able to bring to the table for Dell. And then it also helps that the first movers sort of towards a digital infrastructure were insurance companies and others that saw the value of leveraging partnerships and bringing together things as quickly and fast as they could, without deploying huge global networks to try and make it all happen. They can instead virtually meet in the same room, leveraging our software defined network called Equinix Fabric. It's been a real win for the regulated industries, certainly. >> Got it. Thanks for that, Tony. Caitlin, last question for you. This is Dell managed, so single bill from Dell. Where can the viewers go to learn more information about this new partnership? >> Delltechnologies.com/apex. You'll learn more about all things APEX. Really, the APEX consoles, the experience, so you can learn more about it there. And then of course, your friendly neighborhood, Dell EMC rep, and or channel partner, now that we've got that partner enablement as well. >> Delltechnologies.com/apex. Caitlin and Tony, thank you so much for joining us today, sharing the exciting news about what's new with Dell and Equinix and what's in it for your customers and your partners. We appreciate your time. >> Thanks, Lisa. >> Thank you, Lisa. >> For Caitlin Gordon and Tony Frank, I'm Lisa Martin, you've been watching a "Cube" conversation. (soft music playing)
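The ordering flow Caitlin describes above reduces to a handful of selections: service type, tier, commitment term, capacity, and now location. Purely to illustrate the shape of that outcome-based request, here is a minimal sketch; the field names and validation rules are hypothetical, not the real APEX console API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the outcome-based order Caitlin describes:
# five selections instead of product names and sizing exercises.
# Field names and allowed values are illustrative only.

VALID_TYPES = {"block", "file"}
VALID_LOCATIONS = {"dell_facility", "equinix_colo"}

@dataclass
class StorageServiceOrder:
    service_type: str      # "block" or "file"
    tier: str              # e.g. a performance tier
    term_months: int       # commitment period
    base_capacity_tb: int  # committed capacity
    location: str          # Dell-provided facility or an Equinix colo

    def validate(self) -> bool:
        return (self.service_type in VALID_TYPES
                and self.location in VALID_LOCATIONS
                and self.term_months > 0
                and self.base_capacity_tb > 0)

order = StorageServiceOrder("file", "performance-optimized", 12, 100, "equinix_colo")
```

The point of the sketch is how little the customer has to specify: everything below these five fields (hardware, sizing, racking, power) is the outcome Dell and Equinix deliver.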

Published Date : Sep 17 2021



Bruno Aziza, Google | CUBEconversation


 

(gentle music) >> Welcome to the new abnormal. Yes, you know, the pandemic, it did accelerate the shift to digital, but it's also created disorder in our world. I mean, every day it seems that companies are resetting their office reopening playbooks. They're rethinking policies on large gatherings and vaccination mandates. There's an acute labor shortage in many industries, and we're seeing an inventory glut in certain goods, like bleach and hand sanitizer. Airline schedules and pricing algorithms, they're all unsettled. Is inflation transitory? Is that a real threat to the economy? GDP forecasts are seesawing. In short, the world is out of whack and the need for fast access to quality, trusted and governed data has never been greater. Can coherent data strategies help solve these problems, or will we have to wait for the world to reach some type of natural equilibrium? And how are companies, like Google, helping customers solve these problems in critical industries, like financial services, retail, manufacturing, and other sectors? And with me to share his perspectives on data is a long-time CUBE alum, Bruno Aziza. He's the head of data analytics at Google. Bruno, my friend, great to see you again, welcome. >> Great to see you, thanks for having me, Dave. >> So you heard my little narrative upfront, how do you see this crazy world of data today? >> I think you're right. I think there's a lot going on in the world of data analytics today. I mean, certainly over the last 30 years, we've all tried to just make the life of people better and give them access more readily to the information that they need. But certainly over the last year and a half, two years, we've seen an amazing acceleration in digital transformation. And what I think we're seeing is that even after three decades of investment in the data analytics world, you know, the opportunity is still really wide open and still available for organizations to get value out of their data.
I was looking at some of the latest research in the market, and, you know, only 32% of companies are actually able to say that they get tangible, valuable insights out of their data. So after all these years, we still have a lot of opportunity ahead of us, of course, with the democratization of access to data, but also the advent of machine learning and AI, so that people can make better decisions faster than their competitors. >> So do you think that the pandemic has heightened that sort of awareness as they were sort of forced to pivot to digital, that they're maybe not getting enough out of their data strategies? That maybe their organization, their technology, the way they were thinking about data was not adequate and didn't allow them to be agile enough? Why do you think that only 32% are getting that type of value? >> I think it's true. I think, one, digital transformation has been accelerated over the last two years. I think, you know, if you look at research from the last two years, we've seen almost a decade of digital acceleration, you know, happening. But I also think that we're hitting a particular time where employees are expecting more from their employers in terms of the type of insights they can get. Consumers are now evolving, right? So they want more information. And I think now technology has evolved to a point where it's a lot easier to provision a data cloud environment so you can get more data out to your constituents. So I think the connection of these three things, expectation of employees, expectation of customers for better customer experiences, and, of course, the global environment, has accelerated quite a bit, you know, where the space can go. And for people like me, you know, 20 years ago, nobody really cared about databases and so forth. And now I feel like, you know, everybody, you know, understands the value that we can get out of it. 
And we're kind of getting, you know, in the sexy territory, finally, data now is sexy for everyone and there's a lot of interest in the space. >> You and I met, of course, in the early days of Hadoop. And there were many things about Hadoop that were profound and, of course, many things that, you know, just were overly complex, et cetera. And one of the things we saw was this sort of decentralization. We thought that Hadoop was going to send five megabytes of code to petabytes of data. And what happened is everything, you know, came into this centralized repository and that centralized thinking, the data pipeline organization was very centralized. Are you seeing companies rethink that? I mean, has the cloud changed their thinking? You know, especially as the cloud expands to the edge, on-prem, everywhere. How are you seeing organizations rethink their regimes for data? >> Yeah, I think, you know, we've seen over the last three decades kind of the pendulum, right, from really centralizing everything and making the IT organization kind of the center of excellence for data analytics, all the way to now, you know, providing data as a self-service, you know, application for end-users. And I think what we're seeing now is there's a few forces happening. The first one is, of course, multicloud, right? So the world today is clearly multicloud and it's going to be multicloud for many, many years. So I think not only are now people considering their on-prem information, but they're also looking at data across multiple clouds. And so I think that is a huge force for chief data officers to consider is that, you know, you're not going to have data centralized in one place, nicely organized, because sometimes it's going to be a factor of where you want to be as an organization. Maybe you're going to be partnering with other organizations that have data in other clouds. And so you want to have an architecture that is modern and that accommodates this idea of an open cloud. 
The second problem that we see is this idea around data governance, intelligent data governance, right? So the world of managing data is becoming more complex because, of course, you're now dealing with many different speeds, you're dealing with many different types of data. And so you want to be able to empower people to get access to the information, without necessarily having to move this data, so they can make quick decisions on the data. So this idea of a data fabric is becoming really important. And then the third trend that we see, of course, is this idea around data sharing, right? People are now looking to use their own data to create a data economy around their business. And so the ability to augment their existing data with external data and create data products around it is becoming more and more important to the chief data officers. So it's really interesting: we're seeing a switch from, you know, the chief data officer really only worried about governance, to one now worried about innovation, while making sure that security and governance are taken care of. You know, we call this freedom within the framework, which is a great challenge, but a great opportunity for many of these data leaders. >> You mentioned several things there. Self-service, multicloud, and governance, which is key, especially if we can federate that governance in a decentralized world. Data fabric is interesting. I was talking to Zhamak Dehghani this weekend on email. She coined the term data mesh. And there seems to be some confusion, data mesh, data fabric. I think Gartner's using the term fabric. I know NetApp, I think, coined that term, which to me is like an infrastructure layer, you know. But what do you mean by data fabric? >> Well, the first thing that I would say is that it's not up to the vendors to define what it is. It really is up to the customer. The problem that we're seeing these customers trying to fix is you have a diversity of data, right? 
So you have data stored in the data mart, in a data lake, in a data warehouse, and they all have their specific, you know, reasons for being there. And so this idea of a data fabric is that without moving the data, can you, one, govern it intelligently? And, two, can you provide landing zones for people to actually do their work without having to go through the pain of setting up new infrastructure, or moving information left and right, and creating new applications? So it's this idea of basically taking advantage of your existing environment, but also governing it centrally, and also now providing self-service capabilities so people can do their job easily. So, you know, you might call it a data mesh, you might call it a data fabric. You know, the terminology to me, you know, doesn't seem to be the barrier. The issue today is how do we enable, you know, this freedom for customers? Because, you know, I think what we've seen with vendors out there is they're trying to just take the customer down to their paradigms. So if they believe in all the answers need to be in a data warehouse, they're going to guide the customer there. If they believe that, you know, everything needs to be in a data lake, they're going to guide the customer there. What we believe in is this idea of choice. You should be able to do every single use case. And we should be able to enable you to manage it intelligently, both from an access standpoint, as well as a governance standpoint. >> So when you think about those different, and I like that, you're making it somewhat technology agnostic, so whether it's a data warehouse, or a data lake, or a data hub, a data mart, those are nodes within the mesh or the fabric, right? That are discoverable, accessible, I guess, governed. I think that there's got to be some kind of centralized governance edict, but in a federated governance model so you don't have to move the data around. Is that how you're thinking about it? 
>> Absolutely, you know, in our recent event, the Data Cloud Summit, we had Equifax. So the gentleman there was the VP of data governance and data fabric. So you can start seeing now these roles, you know, created around this problem. And really when you listen to what they're trying to do, they're trying to provide as much value as they can without changing the habits of their users. I think that's what's key here: the minute you start changing habits, forcing people into paradigms that maybe, you know, are useful for you as a vendor, but not so useful to the customer, you get into the danger zone. So the idea here is how can you provide a broad enough platform, a platform that is deep enough, so the data can be intelligently managed and also distributed and activated at the point of interaction for the end-user, so they can do their job a lot easier? And that's really what we're about: how do you make data simpler? How do you make, you know, the process of getting to insight a lot more fluid without changing habits necessarily, both on the IT side and the business side? >> I want to get to specifics on what Google is doing, but there's one last sort of uber-trend I want to ask you about, 'cause, again, we've known each other for a long time. We've seen this data world grow up. And you're right, 20, 30 years ago, nobody cared about databases. Well, maybe 30 years ago. But 20 years ago, it was a boring market; right now it's like the hottest thing going. But we saw, you know, bromides like data is the new oil. Well, we found out, well, actually data is more valuable than oil, 'cause you can use, you know, data in a lot of different places; oil you can use once. And then the term data as an asset, and you said data sharing. And it brings up the notion that, you know, you don't want to share your assets, but you do want to share your data as long as it can be governed. 
So we're starting to change the language that we use to describe data, and our thinking is changing. And so it says to me that the next 10 years aren't going to be like the last 10 years. What are your thoughts on that? >> I think you're absolutely right. I think if you look at how companies are maturing their use of data, obviously the first barrier is, "How do I, as a company, make sure that I take advantage of my data as an asset? How do I turn, you know, all this information into a sustainable, competitive advantage?" That's really top of mind for organizations. The second piece around it is, "How do I create now this innovation flywheel so that I can create value for my customers, and my employees, and my partners?" And then, finally, "How do I use data as the center of a product that I can then further monetize and create further value into my ecosystem?" I think the piece that people have not talked a lot about is that, with the cloud, what's come is the opportunity to think about data as an ecosystem. Now you and I are partnering on insights. You and I are creating assets that might be the combination of your data and my data. Maybe it's an intelligent application on top of that data that now has become an intelligent, rich experience, if you will, that we can either both monetize or that we can drive value from. And so I think, you know, we're just scratching the surface on that. But I think that's where the next 10 years, to your point, are going to be: the companies that win with data are going to create products, intelligent products, out of that data. And they're just going to take us to places that, you know, we are not even thinking about right now. >> Yeah, and I think you're right on. That is going to be one of the big differences in the coming years: data as product. And that brings up sort of the line of business, right? 
I mean, the line-of-business heads historically have been kind of removed from the data group; that's why I was asking you about the organization before. But let's get into Google. How do you describe Google's strategy, its approach, and why it's unique? >> You know, I just, you know, started about a year ago, and one of the reasons why I found, you know, the Google mission interesting is that it's really rooted in who we are and what we do. If you think about it, we make data simple. That's really what we're about. And we live that value. If you go to google.com today, what's happening? Right, as an end-user, you don't need any training. You're going to type in whatever it is that you're looking for, and then we're going to return to you highly personalized, highly actionable insights, as a consumer of insights, if you will. And I think that's where the market is going. Now, you know, making data simple doesn't mean that you have to have simple infrastructure. In fact, you need to be able to handle sophistication at scale. And so simply, our differentiation here is how do we go from the highly sophisticated world of the internet, disconnected data, changing all the time, vast volume, and a lot of different types of data, to a simple answer that's actionable to the end-user? It's intelligence. And so our differentiation is around that. Our mission is to make data simple, and we use intelligence to take the sophistication and provide to you an answer that's highly actionable, highly relevant, highly personalized for you, so you can go on and do your job, 'cause ultimately the majority of people are not in the data business. And so they need to get the information, just like you said, as a business user, that's relevant, actionable, timely, so they can go off and, you know, create value for their organization. >> So I don't think anybody would argue that Google, obviously, are data experts, arguably the best in the world. 
But it's interesting, some of the uniqueness here that I'm hearing in your language. You used the word multicloud; Amazon doesn't, you know, use that term. So that's a differentiation. And you sell a cloud, right? You sell cloud services, but you're talking about multicloud. You sell databases, but, of course, you host other databases, like Snowflake. So where do you fit in all this? Do you see your role, as the head of data analytics, as sort of being the chef that helps combine all these different capabilities? Or are you sort of trying to help people adopt Google products and services? How should we think about that? >> Yeah, you know, I spend 60 to 70% of my time with customers. And the best way I can think about our role is to be your innovation partner as an organization. And, you know, whichever scenario you're going to be using, I think you talked about open cloud, I think another uniqueness of Google is that we have a very partner-friendly, you know, approach to the business. Because we realized that when you walk into an enterprise or a digital native, and so forth, they already have a lot of assets that they have accumulated over the years. And it might be technology assets, but it might also be knowledge and know-how, right? So we want to be the innovation vendor that enables you to take these assets, put them together, and create simplicity towards the data. You know, ultimately, you can have all types of complexity in the backend. But what we can do best for you is make that really simple, really integrated, really unified, so you, as a business user, don't have to worry about, "Where is my data? Do I need to think about moving data from here to there? Are there things that I can do only if the data is formatted that way and this way?" We want to remove all that complexity, just like we do on google.com, so you can do your job. 
And so that's our job, and that's the reason why people come to us: because they see that we can be their best innovation partner, regardless of where the data is and regardless of, you know, what part of the stack they're using. >> Well, I want to take an example, because my example, I mean, I don't know Google's portfolio like you do, obviously, but one of the things I hear from customers is, "We're trying to inject as much machine intelligence into our data as possible. We see opportunities to automate." So I look at something like BigQuery, which has a strong affinity with embedded machine learning and machine intelligence, as an example, maybe, of that simplification. But maybe you could pick up on that and give us some other concrete examples. >> Yeah, specifically on products, I mean, there are a lot of products we can talk about, and certainly BigQuery has tremendous market momentum. You know, it's really anchored on this idea that, you know, with BigQuery you just add data and we'll do the rest, right? So that's kind of the idea where you can start small and you can scale at incredible, you know, volumes without really having to think about tuning it, about creating indexes, and so forth. Also, we think about BigQuery as the place that people start in order to build their ecosystem. That's why we've invested a lot in machine learning. Just a few years ago, we introduced this functionality called BigQuery Machine Learning, or BigQuery ML, if you're familiar with it. And you'll notice that out of the top 100 customers we have, 80% of these customers are using machine learning right out of, you know, BigQuery. So now, why is that? Why is it so easy to use machine learning using BigQuery? Because it's built in. It was built from the ground up. 
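The "built in" point is easiest to see in BigQuery ML's SQL surface: a model is trained with a single statement over a table already sitting in the warehouse, with no data movement. The sketch below only composes such a statement; the dataset, table, and column names are hypothetical, and only the general CREATE MODEL shape follows the documented BigQuery ML syntax. Actually running it would require a real GCP project and the official google-cloud-bigquery client.

```python
# Sketch of the "machine learning built into the warehouse" idea described
# above. All dataset/table/column names below are hypothetical placeholders.

def build_create_model_sql(model: str, source_table: str, label_col: str) -> str:
    """Compose a BigQuery ML statement that trains a logistic regression
    model directly over a warehouse table, with no data export step."""
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"OPTIONS(model_type='logistic_reg', input_label_cols=['{label_col}']) AS\n"
        f"SELECT * FROM `{source_table}`"
    )

sql = build_create_model_sql(
    "demo_dataset.churn_model",        # hypothetical model name
    "demo_dataset.customer_features",  # hypothetical training table
    "churned",                         # hypothetical label column
)

# To actually train, you would submit the statement with the official client:
#   from google.cloud import bigquery
#   bigquery.Client().query(sql).result()
print(sql)
```

The design point is that training is just another query against data already governed in the warehouse, which is what makes the "80% use ML right out of BigQuery" claim plausible.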
Instead of thinking about machine learning as an afterthought, or maybe something that only data scientists have access to, that you're going to license just for narrow scenarios, we think about it this way: you have your data in a warehouse that can scale, that is equally good at small volumes as at very large volumes, and we build on top of that. You know, similarly, we just announced our analytics exchange, which is basically the place where you can now build these data analytics assets that we discussed, and build an ecosystem that creates value for end-users. And so BigQuery is really at the center of a lot of that strategy, but it's not unlike any of the other products that we have. We want to make it simple for people to onboard, simple to scale, to really accomplish, you know, whatever success is ahead of them. >> Well, I think ecosystems is another one of those big differences in the coming decade, because you're able to build ecosystems around data, especially if you can share that data, you know, and do so in a governed and secure way. But it leads to my question on industries, and I'm wondering if you see any patterns emerging in industries? Each industry seems to have its own unique disruption scenario. You know, retail obviously has been, you know, disrupted with online commerce. And healthcare with, of course, the pandemic. Financial services, you wonder, "Okay, are traditional banks going to lose control of payment systems?" In manufacturing, you see North America's reliance on China's supply chain, of course. Are you seeing any patterns in industry as it pertains to data? And what can you share with us in terms of insights there? 
I mean, financial services and retailers are particularly interesting, because they're kind of both in the retail business and having to deal with this level of complexity where they have physical locations and they also have a relationship with people online, so they really want to be able to bring these two worlds together. You know, I think about those scenarios of Carrefour, for instance. It's a large retailer in Europe that has been able not only to, you know, onboard on our platform, and they're using, you know, everything from BigQuery all the way to Looker, but also to now create the data assets that enable them to differentiate within their own industry. And so we see a lot of that happening across pretty much all industries. It's difficult to think of an industry that is not taking a hard look at its data strategy right now, especially over the last two years, and really thinking about how it's creating innovation. We have actually created what we call design patterns, which are basically blueprints for organizations to take on. It's free, it's free guidance, it's free datasets and code that can accelerate their building of these innovative solutions. So think about, you know, the ability to determine propensity to purchase. Or, you know, recommendation systems, which are a big trend. Another one is anomaly detection, and this one is great because anomaly detection is a scenario that works in telco, but also in financial services. So we certainly are seeing now companies moving up in their level of maturity, because we're making it easier and simpler for them to assemble these technologies and create, you know, what we call data-rich experiences. 
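The cross-industry nature of the anomaly-detection pattern mentioned above comes from how little of it is domain-specific. A toy version of the core idea, not the actual design-pattern code, is a z-score rule: flag points that sit far from the rest of the series, whatever the series measures.

```python
# Generic illustration of the anomaly-detection pattern: the same rule
# applies whether the series is call drops (telco) or transaction amounts
# (financial services). The threshold of 2.5 is an arbitrary choice here.

from statistics import mean, stdev

def find_anomalies(series, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# A spike at index 7 stands out regardless of what the units are:
readings = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(find_anomalies(readings))
```

Production versions of the pattern add seasonality, model-based baselines, and streaming evaluation, but the detect-the-outlier core is industry-agnostic, which is why one blueprint serves many sectors.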
But when you think about a lot of the consumer applications, whether it's voice recognition or, you know, fingerprinting, et cetera, you're seeing some really interesting use cases that could bleed into the enterprise. And we think about AI inferencing at the edge as really driving a lot of value. How do you see that playing out, and what's Google's role there? >> So there's a lot going on in that space. I'll give you just a simple example. Maybe something that's easy for the community to understand is that there are still ways that we define certain metrics that don't take into account what actually is happening in reality. I was just talking to a company whose job is to deliver meals to people. And what they have realized is that in order for them to predict exactly the time it's going to take them from the kitchen to your desk, they have to take into account the fact that distance sometimes is not just horizontal, it's also vertical. So if you're distributing and delivering meals, you know, in Singapore, for instance, high density, you have to understand maybe the data coming from the elevators. So you can determine, "Oh, if you're on the 20th floor, now my distance to you, and my ability to forecast exactly when you're going to get that meal, is going to be different than if you are on the fifth floor. And, particularly, if you're ordering at 11:32, versus if you're ordering at 11:58." And so what's happening here is that as people are developing these intelligent systems, they're now starting to input a lot of information that historically we might not have thought about, but that actually is very relevant to the end-user. And so, you know, how do you do that? Again, you have to have a platform that enables you to have a large diversity of use cases, and that thinks ahead, if you will, of the problems you might run into. Lots and lots of innovation in this space. 
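The delivery example above reduces to a feature-engineering point: an ETA model improves when it treats distance as horizontal travel plus vertical (elevator) travel, and when it knows whether the order lands in the lunch rush. The sketch below is a toy illustration of that idea, not the company's actual model; every constant in it is made up.

```python
# Toy ETA estimate in the spirit of the meal-delivery example: horizontal
# travel, plus vertical (floor/elevator) travel, plus a lunch-rush penalty.
# All coefficients are invented for illustration.

def estimate_delivery_minutes(horizontal_km: float, floor: int,
                              order_hour: int, order_minute: int) -> float:
    travel = horizontal_km * 6.0   # assume ~6 min per km in dense traffic
    vertical = floor * 0.25        # assume ~15 s of elevator time per floor
    # Orders placed within 30 minutes of noon hit elevator/kitchen congestion.
    minutes_to_noon = abs((order_hour * 60 + order_minute) - 12 * 60)
    rush_penalty = 5.0 if minutes_to_noon <= 30 else 0.0
    return travel + vertical + rush_penalty

# Same street address, different floors and order times, different forecasts:
print(estimate_delivery_minutes(2.0, 5, 11, 32))   # 5th floor, ordered 11:32
print(estimate_delivery_minutes(2.0, 20, 11, 58))  # 20th floor, ordered 11:58
```

The point is not the constants but the inputs: floor number and order time are exactly the "information we might not have thought about historically" that the intelligent system now consumes.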
I mean, we work with, you know, companies like Ford to, you know, reinvent the connected car. We work with companies like Vodafone, 700 use cases, to think about how they're going to deal with what they call their data ocean. You know, I thought you would like this term, because we've gone from data lakes to data oceans. And so there is certainly a ton of innovation, and certainly, you know, the chief data officers that I have the opportunity to work with are really not short of ideas. I think what's been happening up until now is they haven't had this kind of single, unified, simple experience that they can use in order to onboard quickly and then enable their people to build great, rich-data applications. >> Yeah, we certainly had fun with that over the years, data lake or data ocean. And thank you for remembering that, Bruno. Always a pleasure seeing you. Thanks so much for your time and sharing your perspectives, and informing us about what Google's up to. Can't wait to have you back. >> Thanks for having me, Dave. >> All right, and thank you for watching, everybody. This is Dave Vellante. Appreciate you watching this CUBE Conversation, and we'll see you next time. (gentle music)

Published Date : Aug 9 2021



Dominique Dubois, IBM | IBM Think 2021


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Hey, welcome to theCUBE's coverage of IBM Think, the digital event experience. I'm your host, Lisa Martin, welcoming back to the program one of our CUBE alumni. Dominique Dubois joins me. She's the Global Strategy and Offerings Executive in Business Transformation Services at IBM. Dominique, it's great to talk to you again. >> Hi Lisa, great to be with you today.
And how we are working with the employees, with the actual humans that are going to be touching that AI, right, to help them with the new skills that are required to work with AI, to help them with what we call the new ways of working, right, 'cause it's that adoption that really is critical to get the use of AI in enterprises at scale. >> That adoption that you just mentioned, that's critical. That can be kind of table stakes. But what we've seen in the last year is that we've all had to pivot, multiple times, and be reactionary, or reactive, to so many things out of our control. I'm curious what you've seen in the last year in terms of the appetite for adoption on the employee front. Are they more willing to go, all right, we've got to change the way we do things, and it's probably going to be, some of these are going to be permanent? >> Yeah. Lisa, we've absolutely seen a huge rise in the adoption, right, or in the openness, the mindset. Let's just call it the mindset, right. It's more of an open mindset around the use of technology, the use of technology that might be AI-backed or AI-based, and the willingness to, and I will say, the willingness to try is really then what starts that journey of trust, right. And we're seeing that open up in spades. >> That is absolutely critical. It's just the willingness, being open-minded enough to go, all right, we've got to do this, so we've got to think about this. We don't really have any other choices here. Things are changing pretty quickly. So talk to me, in this last year of change, we've seen massive disruptions and some silver linings for sure, but I'd love to know what IBM and the state of Rhode Island have done together in this challenging time. >> Yeah, so, really interesting partnership that we started with the state of Rhode Island. Obviously, I think this year, there's been lots of things. One of them has been speed, so everything that we had to do has been with haste, right, with urgency. 
And that's no different than what we did with the state of Rhode Island. The governor there, Gina Raimondo, took very swift action, right, when the pandemic started. And one of the actions she took was to partner with private firms, such as IBM and others, to really help get her economy back open. And that required a lot of things. One of them, as you mentioned, trust, right, was a major part of what the governor there needed with her citizens in order to be able to open back up the economy, right. And so, a key pillar of her program, and of our partnership, was around the AI-backed solutions that we brought to the state of Rhode Island, inclusive of contact tracing, inclusive of work that we had provided around AI-based analytics that allowed the governor to speak to citizens with hard facts quickly, almost in real time, right, and start to build that trust, but also confidence, and confidence was one of the main things that was required during this pandemic time. And so, through the AI-based solutions that we provided, which had many pillars, we were able to help Rhode Island not only open their economy, but be one of the only states that had their schools open in the fall, and as a parent, I always see that as a litmus test, if you will, of how a state is doing, right. And so they opened in the fall, and they, as far as I know, have stayed open. And I think part of that was from the AI-based contact tracing, the AI-backed, sorry, AI analytics, the analytics suite around infections and predictions and what we were able to provide the governor in order to make swift decisions and take action. >> That's really impressive. 
That's one of the challenges I've had living in California, is saying you (mumbles) are going to be data-driven rather than actually being data-driven. But the technology, living in Silicon Valley, the technology is there to be able to facilitate that, yet there was such a disconnect, and I think that's, you bring up the word confidence, and customers need confidence, citizens need confidence, knowing that what we've seen in the last year has shown in a lot of examples that real time isn't a nice-to-have anymore, it's a requirement. I mean, these are clearly life-and-death situations. That's a great example of how a state came to IBM to partner and say, how can we actually leverage emerging technologies like AI to really and truly make real-time data-driven decisions that affect every single person in our state? >> Mm-hmm. Absolutely, absolutely! Really, really, I think, a great example of the public-private partnerships that are really popping up now, more and more so because of that sense of urgency and that need to build greater ecosystems to create better solutions. >> So that's a great example in healthcare, one that our government, public health, and I think everybody here will resonate with, but you've also done some really interesting work that I want to talk about with AI-driven insights into supply chain. We've also seen massive changes to supply chain, and so many organizations having to figure out, whether they were brick-and-mortar only, changing that, or really leveraging technology to figure out where do we need to be distributing products and services, where do we need to be investing. Talk to me about Bestseller India, and what it is that you guys have done there with intelligent workflows to really help them transform their supply chain.
>> Yeah, Bestseller India, a really great, hugely successful fashion-forward company in India, and that term fashion forward always is mind-boggling to me, because basically these are clothing retailers who go from runway to store within a matter of days, a couple of weeks, which always is just hugely impressive, right, just what goes into that. And when you think about what happens in a supply chain to be able to do that, the requirements around demand forecasting, what quantities, of what style, what design, to what stores, and you think about the India market, which is notoriously a difficult market, lots of micro-segments, and so very difficult to serve. And then you couple that with what's been happening from an environmental sustainability perspective, right. I think every industry has been looking more at how they can be more environmentally sustainable, and the clothing industry is no different. And there is a lot of impact, right; a stat that really has hit home with me: 20% of all the clothes that are made globally go unsold. That's a lot of clothing, that's a lot of material, and that's a lot of environmental impact that goes into creating it. And so, Bestseller India really took it to heart to become not only more environmentally sustainable, but to help itself and be digitally ready for things like the pandemic that ultimately hit. And they were in a really good position. And we worked with them to create something called Fabric AI. So Fabric AI is India's first and only AI-based platform that drives their supply chain, so it drives not only their decisions on what designs they should manufacture, but it also helps to improve the entire workflow of what we call design to store. And the AI-based solution is really revolutionary, right, within India, but I think it's pretty revolutionary globally as well. And it delivered really big impact, so, reductions in cost, right, a 15-plus reduction in cost.
It helped their top line, so they saw a 5%-plus top line, but it also reduced their unsold inventory by 5% and more, right. They're continuing to focus on that environmental sustainability that I think is a really important part of their DNA, right, Bestseller India's DNA. >> And it's one that so many companies in other industries can learn from. I was reading in that case study on Bestseller India on the IBM website that I think it was 40 liters of water to make a cotton shirt. And to your point about the percentage of clothing that actually goes unsold and ends up in landfills, you see there the opportunity for AI to unlock the visibility that companies in any industry need to determine what is the demand that we should be filling, where should it be distributed, where should we not be distributing things. And so I think it was an interesting impetus for Bestseller India that one of their retail lines or brands was dropping in revenue, but they had been able to apply this technology to other areas of the business and make a pretty big impact. >> Yeah, absolutely. So they had been very fortunate to have 11 years of growth, right, in all of their brands. And then one of their brands kind of hit headwinds, but the CIO and head of supply chain at that time really had the foresight to be able to say, you know what, we're hitting a problem in one of our brands, but this really is indicative of a more systemic problem. And that problem was a lack of transparency, a lack of data-driven, predictive capability and automation to be able to drive a more effective and efficient supply chain in the end, so, really had the forethought to dive into that and fix it. >> Yeah. And now talk to me about IBM Garage, and how did that help in this particular case? >> Yeah. So, in order to do this, right, they had no use of AI, no use of automation, at the time that we started this.
And so to really not only design and build and execute on Fabric AI, but to actually focus on the adoption, right, of AI within the business, we really needed to bring together the leaders across many lines of business, IT and HR, right. And when you think about pulling all of these different units together, we used our IBM Garage approach, which really is, there are many attributes and many facets of the IBM Garage, but I think one of the great results of using our IBM Garage approach is being able to pull from across all those different businesses, all of which may have some different objectives, right, they're coming from a different lens, from a different space, and pulling them together around one focused mission, which for here was Fabric AI. And we were able to actually design and build this in less than six months, which I think is pretty dramatic and pretty incredible from a speed and acceleration perspective. But I think even more so was the adoption, was the way in which we had, through all of it, already been working with the employees, 'cause it's really touched almost every part of Bestseller India, so really being able to work with them and all the employees to make sure that they were ready for these new ways of working, that they had the right skills, that they had the right perspective, and that it was going to be adopted. >> If we unpack that, if we had time, that could be a whole separate conversation, because the most important thing about adoption is that the cultures of these different business units have to come together. You said you rolled this out in a very short period of time, but you also were keeping the focus on the employees. They need to understand the value in it, why they should be adopting it. And changing that culture, that's a whole other separate conversation, but that's a very interesting and very challenging thing to do. I wish we had more time to talk about that one. >> Yeah.
It really is. That approach of bringing everyone together makes it just very dynamic, which is what's needed when you have all of those different lenses coming together, so, yeah. >> It is, 'cause you get a little bit of thought diversity as well when we're using AI. Well, Dominique, thank you for joining me today and talking to me about what you guys are doing with many different types of customers, how you're helping them to integrate emerging technologies to really transform their business and their culture. We appreciate your time. >> Well, thank you, Lisa. Thanks. >> For Dominique Dubois, I'm Lisa Martin. You're watching theCUBE's coverage of IBM Think, the digital event. (upbeat music)

Published Date : May 12 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Dominique | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Dominic | PERSON | 0.99+
California | LOCATION | 0.99+
Dominique Dubois | PERSON | 0.99+
India | LOCATION | 0.99+
40 liters | QUANTITY | 0.99+
Gina Raimondo | PERSON | 0.99+
20% | QUANTITY | 0.99+
Silicon Valley | LOCATION | 0.99+
11 years | QUANTITY | 0.99+
One | QUANTITY | 0.99+
Rhode Island | LOCATION | 0.99+
15 | QUANTITY | 0.99+
5% | QUANTITY | 0.99+
one | QUANTITY | 0.99+
first | QUANTITY | 0.99+
last year | DATE | 0.99+
second | QUANTITY | 0.99+
less than six months | QUANTITY | 0.99+
this year | DATE | 0.99+
today | DATE | 0.98+
Bestseller India | ORGANIZATION | 0.98+
pandemic | EVENT | 0.98+
plus | QUANTITY | 0.94+
theCUBE | ORGANIZATION | 0.87+
one focus | QUANTITY | 0.86+
CUBE | ORGANIZATION | 0.8+
2021 | DATE | 0.77+
IBM Think | EVENT | 0.77+
IBM Think 2021 | EVENT | 0.76+
Bestseller | ORGANIZATION | 0.75+
single person | QUANTITY | 0.62+
Garage | TITLE | 0.61+
IBM Garage | ORGANIZATION | 0.58+
o Fabric | ORGANIZATION | 0.52+
Band | COMMERCIAL_ITEM | 0.49+

Kirk Borne, Booz Allen | HPE Ezmeral Day 2021


 

>> Okay. Getting data right is one of the top priorities for organizations to affect digital strategy. So right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies. And with me to unpack this topic is Kirk Borne, Principal Data Scientist and Executive Advisor, Booz Allen Hamilton. Kirk, great to see you. Thank you, sir, for coming on the program. >> Great to be here, Dave. >> So hey, enterprise-scale data science and engineering initiatives, they're non-trivial. What do you see as some of the challenges in scaling data science and data engineering ops? >> Well, one of the first challenges is just getting it out of the sandbox, because so many organizations say, let's do cool things with data. But how do you take it out of that sort of play phase into an operational phase? And so being able to do that is one of the biggest challenges. And then being able to enable that for many different use cases then creates an enormous challenge, because do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? And so those are really big challenges for companies, organizations everywhere to think about. >> What about the idea of industrializing those data operations? I mean, what does that mean to you? Is that a security connotation, a compliance? How do you think about it? >> It's actually all of those. Industrialized to me is sort of, how do you not make it a one-off, but make it a reproducible, solid, risk-compliant, and so forth system that can be reproduced many different times, again using the same infrastructure and the same analytic tools and techniques, but for many different use cases, so we don't have to rebuild the wheel, reinvent the wheel, reinvent the car, so to speak, every time you need a different type of vehicle. You build a car, or a truck, or a race car. There are some fundamental principles that are common to all of those, and that's where that industrialization is. And it includes security, compliance with regulations, and all those things. But it also means just being able to scale it out to new opportunities beyond the ones that you dreamed of when you first invented the thing. >> You know, data by its very nature, as you well know, is distributed. And you've been at this a while; for years, we've been trying to sort of shove everything into a monolithic architecture and hardening infrastructures around that, and in many organizations, it's become a block to actually getting stuff done. But so how are you seeing things like the edge emerge? How do you think about the edge? How do you see that evolving? And how do you think customers should be dealing with edge and edge data? >> Well, it's really kind of interesting. I had many years at NASA working on data systems, and back in those days, the idea was you would just put all the data in a big data center, and then individual scientists would retrieve that data and do analytics on it, do their analysis on their local computer. And you might say that's sort of like edge analytics, so to speak, because they're doing analytics at their home computer. But that's not what edge means. It means actually doing the analytics, the insights discovery, at the point of data collection, and so that's really real-time business decision making. You don't bring the data back and then try to figure out sometime in the future what to do. And I think an autonomous vehicle is a good example of why you don't want to do that. Because if you collect data from all the cameras and radars and lidars that are on a self-driving car and you move that data back to a data cloud while the car is driving down the street, and let's say a child walks in front of the car, you send all the data back. It computes and does some object recognition and pattern detection, and 10 minutes later sends a message to the car: hey, you need to put your brakes on. Well, it's a little kind of late at that point, and so you need to make those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data at the point of data collection. And so that's data analytics at the edge. And so, yes, you can bring the data back to a central cloud or distributed cloud; it almost doesn't even matter. If your data is distributed so any use case, any data scientist, or any analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point that you need it, whether it's at the edge or in some static post-event processing. For example, typical business-quarter reporting takes a long look at your last three months of business. Well, that's fine in that use case, but you can't do that for a lot of other real-time analytic decision making. >> That's interesting. I mean, it sounds like you think of the edge not as a place, but as, you know, where it makes sense to actually, the first opportunity, if you will, to process the data at low latency, where it needs to be low latency. Is that a good way to think about it? >> Absolutely. It's the low latency that really matters. Sometimes we think we're going to solve that with things like 5G networks; we're going to be able to send data really fast across the wire. But again, that self-driving car is yet another example, because what if all of a sudden the network drops out? You still need to make the right decision with the network not even being there. >> That darn speed-of-light problem. And so you use this term data mesh, or data fabric. Double-click on that. What do you mean by that? >> Well, for me, it's sort of a unified way of thinking about all your data. And when I think of mesh, I think of weaving on a loom: you're creating a blanket or a cloth, and in that weaving you do all that cross-layering of the different threads. And so different use cases and different applications and different techniques can make use of this one fabric, no matter where it is in the business, or again if it's at the edge or back at the office. One unified fabric, which has a global namespace, so anyone can access the data they need, sort of uniformly, no matter where they're using it. And so it's a way of unifying all the data and use cases in sort of a virtual environment, so that you no longer need to worry about, what's the actual file name, or what's the actual server this thing is on? You can just do that for whatever use case you have. But I think it helps enterprises now to reach a stage which I like to call the self-driving enterprise. Okay, so it's modeled after the self-driving car. So the self-driving enterprise, the business leaders and the business itself, you would say, needs to make decisions, oftentimes in real time, all right? And so you need to do sort of predictive modeling and cognitive awareness of the context of what's going on. So all these different data sources enable you to do all those things with data. And so, for example, any kind of a decision in a business, any kind of decision in life, I would say, is a prediction, right? You say to yourself, if I do this, such and such will happen; if I do that, this other thing will happen. So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome, so both predictive and prescriptive analytics need to happen in this same stream of data and not statically afterwards. So that self-driving enterprise is enabled by having access to data wherever and whenever you need it, and that's what that fabric, that data fabric and data mesh, provides for you, at least in my opinion. >> Well, so carrying that analogy of the self-driving vehicle, you're abstracting that complexity away in this metadata layer that understands whether it's on-prem or in the public cloud or across clouds or at the edge, where the best places to process that data are, what makes sense. Does it make sense to move it or not? Ideally, I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric? >> Right. It really abstracts away all the sort of complexity that the IT aspects of the job would require. But not every person in the business is going to have that familiarity with the servers and the access protocols and all kinds of IT-related things, and so abstracting that away. And that's in some sense what containers do. Basically, the containers abstract away all the information about servers and connectivity protocols and all this kind of thing. You just want to deliver some data to an analytic module that delivers me an insight or a prediction; I don't need to think about all those other things. So that abstraction really makes it empowering for the entire organization. We like to talk a lot about data democratization and analytics democratization. This really gives power to every person in the organization to do things without becoming an IT expert. >> So the last question we have time for here: so it sounds like, Kirk, the next 10 years of data are not going to be like the last 10 years; it will be quite different. >> I think so. I think we're moving to this. Well, first of all, we're going to be focused way more on the why question. Why are we doing this stuff? The more data we collect, we need to know why we're doing it. And one of the phrases I've seen a lot in the past year, which I think is going to grow in importance in the next 10 years, is observability. So observability to me is not the same as monitoring. Some people say monitoring is what we do. But what I like to say is, yeah, that's what you do, but why you do it is observability. You have to have a strategy. Why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? And so getting focused on those why questions, you'll be able to create targeted analytic solutions for all kinds of different business problems. And so it really focuses it on small data. So I think the latest Gartner data and analytics trending report said we're going to see a lot more focus on small data in the near future. >> Kirk Borne, you're a dot connector. Thanks so much for coming on theCUBE and being part of the program. >> My pleasure.
Published Date : Mar 10 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
NASA | ORGANIZATION | 0.99+
Kirk | PERSON | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.98+
Booz Allen Hamilton | PERSON | 0.98+
Gartner | ORGANIZATION | 0.98+
first opportunity | QUANTITY | 0.98+
first challenge | QUANTITY | 0.97+
each individual | QUANTITY | 0.96+
Double | QUANTITY | 0.91+
Kirk Borne | PERSON | 0.91+
first | QUANTITY | 0.91+
Booz Allen | PERSON | 0.89+
next 10 years | DATE | 0.86+
10 minutes later | DATE | 0.86+
past year | DATE | 0.85+
five G | ORGANIZATION | 0.83+
last 10 years | DATE | 0.83+
Cuban | PERSON | 0.82+
one fabric | QUANTITY | 0.76+
next 10 years | DATE | 0.75+
2021 | DATE | 0.75+
case | QUANTITY | 0.73+
One unified | QUANTITY | 0.71+
HPE Ezmeral Day | EVENT | 0.56+
years | QUANTITY | 0.54+
three months | QUANTITY | 0.53+
last | DATE | 0.38+

Kirk Borne, Principal Data Scientist & Executive Advisor, Booz Allen


 

(soft music) >> Getting data right is one of the top priorities for organizations to affect digital strategy. So, right now we're going to dig into the challenges customers face when trying to deploy enterprise-wide data strategies, and with me to unpack this topic is Kirk Borne, Principal Data Scientist and Executive Advisor, Booz Allen Hamilton. Kirk, great to see you, thank you, sir, for coming on the program. >> Great to be here, Dave. >> So hey, enterprise-scale data science and engineering initiatives, they're non-trivial. What do you see as some of the challenges in scaling data science and data engineering ops? >> The first challenge is just getting it out of the sandbox, because so many organizations, they say let's do cool things with data, but how do you take it out of that sort of play phase into an operational phase? And so being able to do that is one of the biggest challenges, and then being able to enable that for many different use cases then creates an enormous challenge, because do you replicate the technology and the team for each individual use case, or can you unify teams and technologies to satisfy all possible use cases? And so those are really big challenges for companies, organizations everywhere to think about. >> Well, what about the idea of, you know, industrializing those data operations? I mean, what does that mean to you, is that a security connotation, a compliance? How do you think about it? >> It's actually all of those. And industrialized to me is sort of like, how do you not make it a one-off, but you make it a sort of reproducible, solid, risk-compliant and so forth system that can be reproduced many different times. And again, using the same infrastructure and the same analytic tools and techniques, but for many different use cases. So we don't have to rebuild the wheel, reinvent the wheel, reinvent the car, so to speak, every time you need a different type of vehicle.
You can either build a car, or a truck, or a race car; there are some fundamental principles that are common to all of those, and that's where that industrialization is. And it includes security, compliance with regulations and all those things, but it also means just being able to scale it out to new opportunities beyond the ones that you dreamed of when you first invented the thing. >> Yeah, data by its very nature, as you well know, is distributed, and you've been at this a while; for years we've been trying to sort of shove everything into a monolithic architecture, and hardening infrastructures around that. And in many organizations it's become, you know, a block to actually getting stuff done. But so how are you seeing things like the edge emerge, you know, how do you think about the edge, how do you see that evolving, and how do you think customers should be dealing with edge and edge data? >> Well, that's really kind of interesting. I had many years at NASA working on data systems, and back in those days the idea was you would just put all the data in a big data center, and then individual scientists would retrieve that data and do analytics on it, do their analysis on their local computer. And you might say that's sort of like edge analytics, so to speak, because they're doing analytics at their home computer, but that's not what edge means. It means actually doing the analytics, the insights discovery, at the point of data collection. And so that's really real-time business decision-making. You don't bring the data back and then try to figure out sometime in the future what to do.
And I think autonomous vehicles are a good example of why you don't want to do that, because if you collect data from all the cameras and radars and lidars that are on a self-driving car, and you move that data back to a data cloud while the car is driving down the street and let's say a child walks in front of the car, you send all the data back, it computes and does some object recognition and pattern detection. And 10 minutes later, it sends a message to the car, "Hey, you need to put your brakes on." Well, it's a little kind of late at that point (laughs), and so you need to make those discoveries, those insight discoveries, those pattern discoveries, and hence the proper decisions from the patterns in the data at the point of data collection. And so that's data analytics at the edge. And so yes, you can bring the data back to a central cloud or distributed cloud; it almost doesn't even matter. If your data is distributed so that any use case, any data scientist, or any analytic team in the business can access it, then what you really have is a data mesh or a data fabric that makes it accessible at the point that you need it, whether it's at the edge or in some static post-event processing. For example, typical business-quarter reporting takes a long look at your last three months of business. Well, that's fine in that use case, but you can't do that for a lot of other real-time analytic decision-making. >> Well, that's interesting. I mean, it sounds like you think of the edge not as a place, but as, you know, where it makes sense to actually, you know, the first opportunity, if you will, to process the data at low latency, where it needs to be low latency. Is that a good way to think about it? >> Yeah, absolutely. It's the low latency that really matters. Sometimes we think we're going to solve that with things like 5G networks.
We're going to be able to send data really fast across the wire, but again, that self-driving car is yet another example, because what if all of a sudden the network drops out? You still need to make the right decision with the network not even being there. >> Yeah, that darn speed-of-light problem. And so you use this term data mesh, or data fabric; double-click on that, what do you mean by that? >> Well, for me, it's sort of a unified way of thinking about all your data. And when I think of mesh, I think of like weaving on a loom: you're creating a blanket or a cloth, and in that weaving you do all that cross-layering of the different threads. And so different use cases and different applications and different techniques can make use of this one fabric, no matter where it is in the business, or again, if it's at the edge or back at the office. One unified fabric, which has a global namespace, so anyone can access the data they need, sort of uniformly, no matter where they're using it. And so it's a way of unifying all of the data and use cases in sort of a virtual environment that you no longer need to worry about. So what's the actual file name, or what's the actual server this thing is on? You can just do that for whatever use case you have. I think it helps the enterprises now to reach a stage which I like to call the self-driving enterprise, okay? So it's modeled after the self-driving car. So the self-driving enterprise, the business leaders and the business itself, you would say, needs to make decisions, oftentimes in real time, all right? And so you need to do sort of predictive modeling and cognitive awareness of the context of what's going on. So all of these different data sources enable you to do all those things with data. And so, for example, any kind of a decision in a business, any kind of decision in life, I would say, is a prediction, right? You say to yourself, if I do this, such and such will happen. If I do that, this other thing will happen.
So a decision is always based upon a prediction about outcomes, and you want to optimize that outcome. So both predictive and prescriptive analytics need to happen in this same stream of data, and not statically afterwards. And so that self-driving enterprise is enabled by having access to data wherever and whenever you need it, and that's what that fabric, that data fabric and data mesh, provides for you, at least in my opinion. >> So, carrying that analogy of the self-driving vehicle, you're abstracting that complexity away, and there's a metadata layer that understands whether it's on-prem or in the public cloud, or across clouds, or at the edge, where the best places to process that data are, what makes sense, does it make sense to move it or not; ideally, I don't have to. Is that how you're thinking about it? Is that why we need this notion of a data fabric? >> Right, it really abstracts away all of the sort of complexity that the IT aspects of the job would require, but not every person in the business is going to have that familiarity with the servers and the access protocols and all kinds of IT-related things. And so abstracting that away, and that's in some sense what containers do. Basically the containers abstract away all the information about servers and connectivity, you know, and protocols and all this kind of thing. You just want to deliver some data to an analytic module that delivers me an insight or a prediction; I don't need to think about all those other things. And so that abstraction really makes it empowering for the entire organization. We like to talk a lot about data democratization and analytics democratization. This really gives power to every person in the organization to do things without becoming an IT expert. >> So the last question we have time for here is, so it sounds like, Kirk, the next 10 years of data are not going to be like the last 10 years; it will be quite different. >> I think so.
I think we're moving to this, well, first of all, we're going to be focused way more on the why question, like, why are we doing this stuff? The more data we collect, we need to know why we're doing it. And one of the phrases I've seen a lot in the past year, which I think is going to grow in importance in the next 10 years, is observability. So observability to me is not the same as monitoring. Some people say monitoring is what we do, but what I like to say is, "Yeah, that's what you do, but why you do it is observability." You have to have a strategy. Why am I collecting this data? Why am I collecting it here? Why am I collecting it at this time resolution? And so getting focused on those why questions, you'll be able to create targeted analytics solutions for all kinds of different business problems. And so it really focuses it on small data. So, I think the latest Gartner data and analytics trending report said we're going to see a lot more focus on small data in the near future. >> Kirk Borne, you're a dot connector. Thanks so much for coming on The Cube and being part of the program. >> My pleasure. (soft music)
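The "one unified fabric with a global namespace" idea Borne describes can be sketched in a few lines of code: a caller asks for a dataset by logical name, and a small catalog hides which tier (edge or cloud) and which physical location actually holds the bytes. This is a toy sketch of the concept only, not any vendor's API; every dataset name, host, and bucket path below is invented for illustration.

```python
# Toy sketch of a data fabric's global namespace (all names hypothetical).
# Callers reference data by logical name; the catalog hides which tier
# (edge or cloud) and which physical location actually holds the bytes.

CATALOG = {
    # logical name      ->  (tier,    physical location)
    "telemetry/lidar":      ("edge",  "edge-node-07:/var/data/lidar"),
    "sales/q3-report":      ("cloud", "s3://acme-dw/sales/q3.parquet"),
}

def read(logical_name: str) -> str:
    """Fetch a dataset by logical name, wherever it physically lives."""
    tier, location = CATALOG[logical_name]
    # A real fabric would dispatch to the right protocol here (NFS, S3,
    # a streaming API at the edge, ...); the caller never sees any of it.
    return f"[{tier}] bytes from {location}"

# The analyst's code is location-agnostic:
print(read("sales/q3-report"))
```

Moving a dataset from an edge node to a cloud bucket is then a one-line catalog change with no change to analyst code, which is the "abstracting away the IT aspects" point made in the interview.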

Published Date : Mar 2 2021



Tech for Good | Exascale Day


 

(plane engine roars) (upbeat music) >> They call me Dr. Goh. I'm Senior Vice President and Chief Technology Officer of AI at Hewlett Packard Enterprise. And today I'm in Munich, Germany. Home to one and a half million people. Munich is famous for everything from BMW, to beer, to breathtaking architecture and festive markets. The Bavarian capital is the beating heart of Germany's automobile industry. Over 50,000 of its residents work in automotive engineering, and to date, Munich has allocated around 30 million euros to boost electric vehicles and the infrastructure for them. (upbeat music) >> Hello, everyone, my name is Dr. Jerome Baudry. I am a professor at the University of Alabama in Huntsville. Our mission is to use computational resources to accelerate the discovery of drugs that will be useful and efficient against the COVID-19 virus. On the one hand, there is this terrible crisis. And on the other hand, there is this absolutely unique and rare global effort to fight it. And that, I think, is a very positive thing. I am working with the Cray HPE machine called Sentinel. This machine is so amazing that it can actually mimic the screening of hundreds of thousands, almost millions of chemicals a day. What would take weeks, if not months, or years, we can do in a matter of a few days. And it's really the key to accelerating the discovery of new drugs, new pharmaceuticals. We are all in this together, thank you. (upbeat music) >> Hello, everyone. I'm so pleased to be here to interview Dr. Jerome Baudry, of the University of Alabama in Huntsville. >> Hello, Dr. Goh, I'm very happy to be meeting with you here, today. I have a lot of questions for you as well. And I'm looking forward to this conversation between us. >> Yes, yes, and I've got lots of COVID-19 and computational science questions lined up for you too, Jerome. Yeah, so let's interview each other, then. >> Absolutely, let's do that, let's interview each other. I've got many questions for you.
And we have a lot in common, and yet a lot of things we are addressing from a different point of view. So I'm very much looking forward to your ideas and insights. >> Yeah, especially now, with COVID-19, many of us will have to pivot a lot of our research and development work to address the most current issues. I watched your video and I've seen that you're very much focused on drug discovery using supercomputing. The central notebook you did, I'm very excited about that. Can you tell us a bit more about how that works, yeah? >> Yes, I'd be happy to. In fact, I watched your video as well on manufacturing, and it's actually quite surprisingly close, what we do with drugs, to what other people do with planes or cars or assembly lines. We are calculating forces on molecules, on drug candidates, when they hit parts of the viruses. And we essentially try to identify what small molecules will hit the viruses or their components the hardest, to mess with their function in a way. And that's not very different from what you're doing, from what you are describing people in the manufacturing industry or in the transportation industry are doing. So that's our problem, so to speak: dealing with a lot of small molecules, calculating a lot of forces. But that's not our main problem. Our main problem is to make intelligent choices about what to calculate. What kind of data should we incorporate in our calculations? And what kind of data should we give to the people who are going to do the testing? And that's really something I would like you to help us understand better. How do you see artificial intelligence helping us put our hands on the right data to start with, in order to produce the right data and accuracy? >> Yeah, that's a great question. And it is a question that we've been pondering in our strategy as a company a lot recently. Because more and more now we realize that the data is being generated at the far-out edge. By edge.
I mean, something that's outside of the cloud and data center, right? Like, for example, in more recent COVID-19 work, we're doing a lot of cryo-electron microscope work, right? To try and get high-resolution pictures of the virus at different angles, creating lots of movies under the electron microscope to try and create a 3D model of the virus. And we realize that's the edge, right, because that's where the microscope is, away from the data center. And massive amounts of data are generated, terabytes and terabytes of data per day. And we had to develop means, a workflow, to get that data off the microscope and provide pre-processing and processing, so that they can achieve results without delay. So we learned quite a few lessons there, right, especially trying to get the edge to be more intelligent, to deal with the onslaught of data coming in from these devices. >> That's fantastic that you're saying that, and that you're using this very example of cryo-EM, because that's the kind of data that feeds our computations. And indeed, we have found that it is very, very difficult to get the right cryo-EM data to us. Now, we've been working with the HPE supercomputer Sentinel, as you may know, for our COVID-19 work. So we have a lot of computational power. But we would be even faster and better, frankly, if we knew what kind of cryo-EM data to focus on. In fact, most of our discussions are based not so much on how to compute the forces of the molecules, which we do quite well on an HPE supercomputer, but again, on which cryo-EM 3D space to look at. And it's becoming almost a bottleneck. >> Have access to that. >> And we spend a lot of time on that. Do you envision a point where AI will be able to help us, to make this kind of data almost live, or at least as close to live as possible, as it comes from the edge? How to pack it and not triage it, but prioritize it for the best possible computations on supercomputers?
>> What a visionary question and desire, right? Like exactly the vision we have, right? Of course, with the ultimate vision, you aim for the best, and that would be a real-time stream of processed data coming straight off the microscope, providing what you need, right? We are not there yet; we are far from there, right? But that's the aim: the ability to push more and more intelligence forward, so that by the time the data reaches you, it is what you need, right, without any further processing. And a lot of AI is applied there, particularly in cryo-EM, where they do particle picking, right; they take a lot of pictures and movies of the virus. And then what they do is, they rotate the virus a little bit, right? And then they try to figure out, in all the different images in the movies, how to pick the particles in there. And this is very much image processing that AI is very good at. So at many different stages, AI is applied. The key thing is to deal with the data that is flowing at this speed, and to get the data to you in the right form, and in time. So yes, that's the desire, right? >> It will be a game changer, really. You'll be able to get things in a matter of weeks, instead of a matter of years, to the colleague who will be doing the testing. If the AI can help me learn from a calculation that didn't exactly turn out the way we wanted it to, that will be very, very helpful. I can see, I can envision AI, live AI, being able to really revolutionize the whole process, not only the discovery, but all the way to the clinical, to the patient, to the hospital. >> Well, that's a great point. In fact, I caught on to your term live AI. That's actually what we are trying to achieve, although I have not used that term before. Perhaps I'll borrow it for next time. >> Oh please, by all means. >> You see, yes, I've also been doing recent work on gene expression data.
So in a vaccine clinical trial, they get the blood from the volunteers after the first day. And then the idea is to run very, very fast AI analytics on the gene expression data, the transcription data, before translation into amino acids. The transcription data is enormous. We're talking 30,000 to 60,000 different items, transcripts, and the question is how to use that high-dimensional data to predict, on day one, whether this volunteer will get an adverse event or will have a good antibody outcome, right? For efficacy. So yes, how to do it so quickly, right? To get the blood, go through an assay, right, get the transcripts, and then run the analytics and AI to produce an outcome. So that's exactly what we're trying to achieve, yeah. Yes, I always emphasize that, ultimately, the doctor makes the decision. Yeah, AI only suggests, based on the data: this is the likely outcome, based on all the previous data that the machine has learned from, yeah. >> Oh, I agree, we wouldn't want the machine to decide the fate of the patient, but to assist the doctor or nurse making the decision, that would be invaluable. And are you aware of any kind of industry that already is using this kind of live AI? Is there anything in, I don't know, sport or crowd control? Or is there any kind of industry? I would be curious to see who is ahead of us in terms of making these kinds of minute-based decisions using AI. >> Yes, in fact, this is a very pertinent question. With COVID-19, there is lots of effort working on it, right? But now, industries in different countries are starting to work on returning to work, right, returning to their offices, returning to the factories, returning to the manufacturing plants. But the employers need to reassure the employees that appropriate measures are taken for safety, yet maintain privacy, right? So our Aruba organization actually developed a solution called contact location tracing inside buildings, inside factories, right?
They built this, and needed a lot of machine learning methods in there done very, very well, as you say, live AI, right, to offer a solution. Well, let me describe the problem. The problem is, certain countries, certain states, certain cities have regulations requiring that if someone is ill, right, you actually have to go in and disinfect the area the person has been to. But if you don't know precisely where the ill person has been, you have to disinfect the whole factory. And if you do that, it becomes impractical and cost-prohibitive for the company to keep operating profitably. So what they are doing today with Aruba is that employees carry this Bluetooth Low Energy tag, which is about quarter-sized, right? The reason they use a tag is to separate the tag from the person, and then the system tracks everybody, all the employees. We have one company with 10,000 employees, right? It tracks everybody with the tag. And if a person is ill, immediately a floor plan is brought up with hotspots, and then you just target the cleaning services there. The same way, contact tracing is also produced automatically: anybody that has come in contact with this person within two meters, and for more than 15 minutes, right, comes up on the list. And privacy is our focus here. There's a separation between the tag and the person, and only restricted people are allowed to see the association. And then things like washrooms and all that are not tracked here. So yes, live AI, trying to make very, very quick decisions, right, because this affects people. >> Another question I have for you, if you have a minute, is actually related to the same thing. Though, it's more a question about hardware, about computer hardware, if I may.
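The contact-tracing rule just described, being within two meters of an ill person for more than 15 minutes, rolled up into a contact list, can be sketched in a few lines. This is a minimal illustration only, not Aruba's actual implementation; the event format and tag names are hypothetical, and real BLE systems infer distance from signal strength rather than receiving it directly.

```python
from collections import defaultdict

def contact_list(events, ill_tag, max_dist=2.0, min_minutes=15):
    """Return tags that spent more than `min_minutes` total within
    `max_dist` meters of `ill_tag`, per the rule in the interview."""
    totals = defaultdict(float)
    for a, b, dist, minutes in events:
        if dist <= max_dist and ill_tag in (a, b):
            other = b if a == ill_tag else a
            totals[other] += minutes
    return sorted(tag for tag, mins in totals.items() if mins > min_minutes)

# Hypothetical proximity events: (tag_a, tag_b, distance_m, duration_min).
events = [
    ("tag-17", "tag-42", 1.5, 10),
    ("tag-17", "tag-42", 1.8, 8),   # cumulative 18 min within 2 m
    ("tag-17", "tag-99", 1.0, 5),   # only 5 min, not a contact
    ("tag-17", "tag-63", 4.0, 60),  # too far away
]
print(contact_list(events, "tag-17"))  # ['tag-42']
```

The same `totals` accumulation could feed the hotspot floor plan mentioned above by keying on locations instead of tag pairs.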
We're spending a lot of time computing on number-crunching giant machines, like Sentinel, for instance, which is a dream to use, and it's very good at that. But we also spend a lot of time moving data back and forth, from clouds, from storage, from AI processing, to the computing cycles, back and forth, back and forth. Do you envision an architecture that would, kind of, combine the hardware needed for the massively parallel calculations we are doing with very large storage and fast I/O, to be more AI-friendly, so to speak? Do you see on the horizon some kind of machine, maybe it's to be determined, but something with the AI planning ahead in terms of passing the vectors to the massively parallel side, yeah, does that make sense? >> Makes a lot of sense. And you ask it, I know, because it is a tough problem to solve. As we always say, computation, right, is growing capability enormously. But bandwidth you have to pay for, and latency you sweat for, right? >> That's a very good... >> So moving data is ultimately going to be the problem. >> It is. >> Yeah, and we move the data a lot of times, right, >> You move it back and forth, so many times. >> Back and forth, back and forth. From the edge, that's where you try to pre-process it, before you put it in storage, yeah. But then once it arrives in storage, you move it to memory to do some work, and bring it back, and move it to memory again, right, that's the HPC side, and then you put it back into storage, and then the AI comes in and you do the learning, and the other way around also. So lots of back and forth, right. A tough problem to solve. But more and more, we are looking at a new architecture, right? Currently, this architecture was built for the AI side first, but we're now looking to see how we can expand that. And that's the reason why we announced HPE Ezmeral Data Fabric.
What it does is take care of the data all the way from the edge: the minute it is ingested at the edge, it is incorporated in the global namespace. So wherever the data eventually lands geographically, or at whatever temperature, hot data, warm data, or cold data, regardless of where it lands, this Data Fabric tracks everything in a global namespace, in a unified way. So that's the first step. The data is not seen as different pieces in different places; it is a unified view of all the data, starting from the minute it is ingested at the edge. >> I think it's important that we communicate that AI is purposed for good. A lot of sci-fi movies, unfortunately, showcase psychotic computers or teams of evil scientists who want to take over the world. But how can we communicate better that it's a tool for change, a tool for good? >> So a key difference I always point out is that we still have judgment relative to the machine. And part of the reason we still have judgment is because our brain's logical center is automatically connected to our emotional center. So whatever our logic says is tempered by emotion, and whatever our emotion wants to do, right, is tempered by our logic, right? But an AI machine is, many call them, artificial specific intelligence. They are just focused on that decision making and are not connected to other, more culturally sensitive or emotionally sensitive networks. They are focused networks, although there are people trying to build the broader kind, right. That's the reason why, with judgment, I always use the phrase, right, what's correct is not always the right thing to do. There is a difference, right? We need to be there to be the last judge of what's right, right? >> Yeah. >> So that's one of the big things. The other one I bring up is that humans are different from machines, generally, in the sense that we are highly subtractive.
We filter, right? Well, machines are highly accumulative today. An AI machine accumulates: it brings in lots of data and tunes the network. But our brains, as few people realize, and we've been working with brain researchers in our work, right, between three and 30 years old, our brain actually goes through a pruning process of our connections. So for those of us like me, after 30, it's done, right. (laughs) >> Wait till you reach my age. >> Keep the brain active, because it prunes away connections you don't use, to try and conserve energy, right? I always remind our engineers about this point, that pruning is about energy efficiency, right? A slice of pizza drives our brain for three hours. (laughs) That's why, sometimes when I need to get my engineers to work longer, I just offer them pizza: three more hours. >> Pizza is the universal solution to our problems, absolutely. Indeed, indeed. There is always a need for human consciousness. It's not just logic; it's not like Mr. Spock in "Star Trek," who always speaks about logic but forgets the humanity aspect of it. >> Yes, yes, the connection between the logic centers and emotional centers. >> You said it very well. Yeah, and the thing is, sleep researchers are saying that when you don't get enough REM sleep, this connection is weakened. Therefore your decision making gets affected if you don't get enough sleep. So I was thinking, people do a breathalyzer test before they are allowed to operate sensitive equipment or make sensitive decisions. Perhaps in the future, you'll have to check whether you've had enough REM sleep first. >> It is. This COVID-19 crisis is obviously problematic, and I wish it never happened, but there is something I never experienced before, which is how people are talking to each other, people like you and me. We have a lot in common, but I hear more about the industry outside of my field.
And I talk a lot to people, like cryo-EM people or gene expression people, where before I would just have gotten the data and processed it. Now we have a dialogue across the board, in all aspects of industry, science, and society. And I think that could be something wonderful that we should keep after we finally fix this bug. >> Yes, yes, yes. >> Right? >> Yes, that's a great point. In fact, it's something I've been thinking about, right; for employees, things have changed because of COVID-19. But very likely, the change will continue, yeah? >> Right. >> Yes, yes, because there are a few positive outcomes. COVID-19 is a tough situation, but there is a positive side of things, like communicating in this way, effectively. So we were part of the consortium that developed a natural language processing system, an AI system, that allows scientists, and I can share the link to that website, to do a query. So say: tell me the latest on the binding energy between the SARS-CoV-2 spike protein and the ACE2 receptor. And it will give you a list of 10 answers, yeah, and give you a link to the papers that state those answers. If you key that in today to the NLP system, you see -13.7 kcal per mole, which is, I think, the general consensus answer, and you see a few that are highly out of range, right? And then when you go further, you realize those are the earlier papers. So I think this NLP system will be useful. (both chattering) >> I'm sorry, I didn't mean to interrupt, but I mentioned it yesterday because I have used that, and it's a game changer indeed, it is amazing, indeed. Many times, by using this kind of intelligent conceptual analysis, of the very kind that you guys are developing, I have found connections between facts, between clinical or pharmaceutical aspects of COVID-19, that I wasn't really aware of. So it's a tool for creativity as well, I find; it builds something.
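The consensus-versus-outlier reading described here, with most answers clustering near -13.7 kcal per mole and the out-of-range values traced back to earlier papers, can be sketched with a simple robust median filter. The specific numbers and years below are illustrative placeholders, not output from the consortium's actual system.

```python
import statistics

# Hypothetical NLP query results: (binding energy in kcal/mol, publication year).
answers = [
    (-13.7, 2020), (-13.5, 2020), (-13.9, 2020), (-13.6, 2020),
    (-6.2, 2019),   # early outlier
    (-25.0, 2019),  # early outlier
]

values = [v for v, _ in answers]
center = statistics.median(values)
# Median absolute deviation: a spread estimate that outliers cannot inflate.
spread = statistics.median(abs(v - center) for v in values)
consensus = [(v, y) for v, y in answers if abs(v - center) <= 3 * max(spread, 0.1)]
outliers = [(v, y) for v, y in answers if (v, y) not in consensus]

print(f"consensus ~ {statistics.mean(v for v, _ in consensus):.1f} kcal/mol")
print("flagged answers from years:", [y for _, y in outliers])
```

Using the median rather than the mean matters here: two wild early values would drag a mean far off the cluster, while the median stays at the consensus.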
It just doesn't analyze what has been done; it creates the connections, it creates a network of knowledge and intelligence. >> That's why: three to 30 years old, when it stops pruning. >> I know, I know. (laughs) But our children are amazing in that respect; they see things that we don't see anymore. They make connections that we don't necessarily think of, because we're used to seeing things a certain way. And the eyes of a child always bring something new, which I think is what AI could potentially bring here. So look, this is fascinating, really. >> Yes, yes, the difference between filtering, being subtractive, and the machine being accumulative. That's why I believe the two working together can have a stronger outcome, if used properly. >> Absolutely. And I think that's how AI will be a force for good indeed; otherwise we would miss things that end up being very important. Well, in our quest for drug discovery against COVID-19, we have been quite successful so far. We have accelerated the process by an order of magnitude. So we have molecules that are being tested against the virus that otherwise would have taken maybe three or four years to get to that point. So first thing, we have been very fast. But we are also very interested in natural products, the chemicals that come from plants, essentially. We found a way to mine, I don't want to say exploit, but leverage, the knowledge of hundreds of years of people documenting, in a very historical way, what plants do against what diseases in different parts of the world. So that really has been not only very useful in our work, but a fantastic bridge to our common human history, basically. And second, yes, plants have chemicals. And of course we love chemicals. Every living cell has chemicals. The chemicals that are in plants have been fine-tuned by evolution to actually have some biological function. They are not there just to look good.
They have a role in the cell. And if we're trying to come up with a new drug from scratch, which is also something we want to do, of course, then we have to engineer a function that evolution hasn't already found a solution to in plants, so in a way, it's also artificial intelligence. We have natural solutions to our problems, so why don't we try to find them and see how they work in ourselves, instead of certainly having to reinvent the wheel each time. >> Hundreds of millions of years of evolution, >> Hundreds of millions of years. >> Many iterations, >> Yes, yielding millions of different plants with all kinds of chemical diversity. So we have a lot of that at our disposal here. If only we find the right way to analyze them and bring them to our supercomputers, then we will really leverage this humongous amount of knowledge. Instead of having to reinvent the wheel each time we want to take a car, we'll find that there are cars whose wheels we should be borrowing, instead of building one each time. Most of the keys are out there, if we can find them. They're at our disposal. >> Yeah, nature has done the work after hundreds of millions of years. >> Yes. (chattering) The trick is to figure out which is which, yeah? Exactly, exactly, hence the importance of biodiversity. >> Yeah, I think this is related to the Knowledge Graph, right? Where, yes, you have two objects and the linking parameter, right? And then you have hundreds of millions of these, right? A chemical to an outcome, and the link to it, right? >> Yes, that's exactly what it is, absolutely the kind of things we're pursuing very much, so absolutely.
>> Not only building the graph, but building the dynamics of the graph. In the future, if you eat too much Creme Brulee, or if you don't run enough, or if you sleep well, then your cells will have different connections on this graph as it ages, and will interact with that molecule in a different way than if you had more sleep, or didn't eat that much Creme Brulee, or exercised a bit more. >> So insightful, Dr. Baudry. Your span of knowledge, right, impressed me. And it's so fascinating talking to you. (chattering) Hopefully next time, when we get together, we'll have a bit of Creme Brulee together. >> Yes, let's find out scientifically what it does; we have to do a double blind and try three times to make sure we get the right statistics. >> Three phases, three clinical trial phases, right? >> It's been a pleasure talking to you. Like we agreed, you know, for all the COVID-19 problems, the way that people talk to each other is, I think, one of the things that I want to keep in our post-COVID-19 world. I appreciate very much your insight, and it's very encouraging the way you see things. So let's make it happen. >> We will work together, Dr. Baudry, hope to see you soon, in person. >> Indeed in person, yes. Thank you. >> Thank you, good talking to you.
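The chemical-to-outcome Knowledge Graph the two discuss, two objects joined by a linking parameter, repeated hundreds of millions of times, can be sketched as a labeled adjacency list. The triples below are illustrative placeholders, not real pharmacology, and the function names are my own.

```python
from collections import defaultdict

# subject -> list of (predicate, object) edges
graph = defaultdict(list)

def link(subject, predicate, obj):
    """Add one edge: a chemical-to-outcome triple with its link label."""
    graph[subject].append((predicate, obj))

# Illustrative triples in the spirit of the discussion above.
link("quercetin", "found_in", "onion")
link("quercetin", "binds", "viral_protease")
link("artemisinin", "found_in", "sweet_wormwood")
link("artemisinin", "treats", "malaria")

def outcomes(chemical, predicate):
    """Follow edges with a given label out of one chemical node."""
    return [obj for pred, obj in graph[chemical] if pred == predicate]

print(outcomes("quercetin", "binds"))  # ['viral_protease']
```

The "dynamics of the graph" idea from the closing exchange would amount to making the edges functions of context (sleep, diet, age) rather than static labels; this static sketch is only the starting point.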

Published Date : Oct 16 2020



Dr. Tim Wagner & Shruthi Rao | Cloud Native Insights


 

(upbeat electronic music) >> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation! >> Hi, I'm Stu Miniman, your host for Cloud Native Insights. When we launched this series, one of the things we wanted to talk about was that we're not just using cloud as a destination, but really enabling new ways of thinking, being able to use the innovations underneath the cloud, so that if you use services in the cloud, you're not necessarily locked into a solution or unable to move forward. And that's why I'm really excited to welcome to the program the co-founders of Vendia. First we have Dr. Tim Wagner, he is the co-founder and CEO of the company, as well as generally known in the industry as the father of Serverless, from AWS Lambda, and his co-founder, Shruthi Rao, she is the chief business officer at Vendia, and also came from AWS, where she worked on blockchain solutions. Tim, Shruthi, thanks so much for joining us. >> Thanks for having us here, Stu. Great to join the show.
And at AWS I started the Amazon Managed Blockchain and launched Quantum Ledger Database, two services in the block chain category. What I learned there was, no surprise, there was a gold rush to blockchain from many customers. We, I personally talked to over 1,092 customers when I ran Amazon Managed Blockchain for the last two years. And I found that customers were looking at solving this dispersed data problem. Most of my customers had invested in IoT and edge devices, and these devices were gathering massive amounts of data, and on the flip side they also had invested quite a bit of effort in AI and ML and analytics to crunch this data, give them intelligence. But guess what, this data existed in multiple parties, in multiple clouds, in multiple technology stacks, and they needed a mechanism to get this data from wherever they were into one place so they could the AI, ML, analytics investment, and they wanted all of this to be done in real time, and to gravitated towards blockchain. But blockchain had quite a bit of limitations, it was not scalable, it didn't work with the existing stack that you had. It forced enterprises to adopt this new technology and entirely new type of infrastructure. It didn't work cross-cloud unless you hired expensive consultants or did it yourself, and required these specialized developers. For all of these reasons, we've seen many POCs, majority of POCs just dying on the vine and not ever reaching the production potential. So, that is when I realized that what the problem to be solved was not a trust problem, the problem was dispersed data in multiple clouds and multiple stacks problem. Sometimes multiple parties, even, problem. And that's when Tim and I started talking about, about how can we bring all of the nascent qualities of Lambda and Serverless and use all of the features of blockchain and build something together? And he has an interesting story on his own, right. >> Yeah. 
Yeah, Shruthi, if I could, I'd like to get a little bit of that. So, first of all, for our audience, if you're watching this in the moment, you probably want to hit pause, you know, go search Tim, go watch a video, read his Medium post about the past, present, and future of Serverless. But Tim, I'm excited; you and I have talked in the past, but we're finally getting you on theCUBE program. >> Yeah! >> You know, I've looked through my career, and my background is infrastructure, and the role of infrastructure, we know, is always just to support the applications and the data that run the business; that's what is important! Even when you talk about cloud, it is the applications, you know, the code, and the data that are important. So, it's not that, you know, okay, I've got near infinite compute capacity; it's the new things that I can do with it. That's a comment I heard in one of your sessions. You talked about how one of the most fascinating things about Serverless was just the new creativity that it inspired people to do, and I loved that it wasn't just unlocking developers to say, okay, I have new ways to write things, but even people that weren't traditional coders, like lots of people in marketing, who were like, "I can start with this and build something new." So, I guess the question I have for you is, you know, we had this idea of Platform as a Service, or even when things like containers launched, we were trying to get close to that atomic unit of the application, and often it was talked about, well, do I want it for portability? Is it for ease of use? So, you've been wrangling and looking at this (Tim laughing) from a lot of different ways. So, with that as a starting point, you know, what did you see the last few years with Lambda, and, you know, help connect this up to where Shruthi just left off her bit of the story. >> Absolutely.
You know, the great story, the great success of the cloud is this elimination of undifferentiated heavy lifting, you know, from getting rid of having to build out a data center, to all the complexity of managing hardware. And that first wave of cloud adoption was just phenomenally successful at that. But as you say, the real thing businesses wrestle with are applications, right? It's ultimately about the business solution, not the hardware and software on which it runs. So, the very first time I sat down with Andy Jassy to talk about what would eventually become Lambda, you know, one of the things I said was, look, if we want to get 10x the number of people to come and, you know, be in the cloud and be successful, it has to be 10 times simpler than it is today. You know, if step one is hire an amazing team of distributed engineers to turn a server into a fault-tolerant, scalable, reliable business solution, then that's going to be fundamentally limiting. We have to find a way to put that in a box, give that capability, you know, to people, without having them go hire that and build that out in the first place. And so that kind of started this journey for compute; we're trying to solve the problem of making compute as easy to use as possible. You know, take some code, as you said, even if you're not a diehard programmer or backend engineer, maybe you're just a full-stack engineer who loves working on the front-end, but the backend isn't your focus, and turn that into something that is as scalable, as robust, as secure as if somebody had spent their entire career working on that. And that was the promise of Serverless, you know, outside of the specifics of any one cloud. Now, the challenge of course when you talk to customers, you know, is that you always heard the same two considerations. One is, I love the idea of Lambda, but it's AWS; maybe I have multiple departments or business partners, or need to kind of work on multiple clouds.
The other challenge is, fantastic for compute, but what about data? You know, you've kind of left me with half the solution: you've made my compute super easy to use, can you make my data equally easy to use? And so, you know, obviously part of the genesis of Vendia is going and tackling those pieces of this, giving all that promise and ease of use of Serverless, now with a model for replicated state and data, and one that can cross accounts, machines, departments, clouds, and companies as easily as it scales on a single cloud today. >> Okay, so you covered quite a bit of ground there Tim, if you could just unpack that a little bit, because you're talking about state, cutting across environments. What is it that Vendia is bringing, and how does that tie into solutions like, you know, Lambda as you mentioned, but other clouds or even potentially on-premises solutions? So, what is, you know, the IP, the code, the solution that Vendia's offering? >> Happy to! So, let's start with the customer problem here. The thing that every enterprise, every company, frankly, wrestles with is that in the modern world they're producing more data than ever: IoT, digital journeys, you know, mobile, edge devices. More data coming in than ever before, and at the same time, more data getting consumed than ever before with deep analytics, supply chain optimization, AI, ML. So, even more consumers of ever more data. The challenge, of course, is that data isn't always inside a company's four walls. In fact, we've heard 80% or more of that data actually lives outside of a company's control. So, step one to doing something like AI, ML isn't even just picking a product or selecting a technology, it's getting all of your data back together again. So that's the problem that we set out to solve with Vendia, and, you know, that's kind of part of the genesis for the name here: Vendia comes from Venn Diagram.
So, part of that need to bring code and data together across companies, across tech stacks, means the ability to solve some of these long-standing challenges. And we looked at the two sort of big movements out there, two that, you know, we've obviously both been involved in. One of them was Serverless, which has an amazing ability to scale, but is single account, single cloud, single company. The other one is blockchain and distributed ledgers, which manages to run across parties, across clouds, across tech stacks, but doesn't have a great mechanism for scalability; it's really a single-box deployment model, and obviously there are a lot of limitations with that. So, our technology, and kind of our insight and breakthrough here, was bringing those two things together by solving the problems in each of them with the best parts of the other. So, reimagine the blockchain as a cloud data implementation built entirely out of Serverless components that have all of the scale, the cost efficiencies, the high utilization, all of the ease of deployment that something like Lambda has today, and at the same time, you know, bring state to Serverless. Give things like Lambda and the equivalents on other clouds a simple, easy, built-in model so that applications can have multicloud, multi-account state at all times, rather than turning that into a complicated DIY project. So, that was our insight here, you know, and frankly where a lot of the interesting technology for us is, is in turning those centralized services, a centralized version of Serverless Compute or Serverless Database, into a multi-account, multicloud experience. And so that's where we spent a lot of time and energy trying to build something that gives customers a great experience. >> Yeah, so I've got plenty of background in customers that, you know, have the "information silos", if you will, so we know, with unstructured data, you know, so much of it is not searchable, I can't leverage it.
Shruthi, but maybe it might make sense, you know, would you say, what are some of the top things some of your early customers are saying? You know, I have this pain point, that's pointing me in your direction; what was leading them to you? And how does the solution help them solve that problem? >> Yeah, absolutely! One of our design partners, our lead design partner, is this automotive company, a premier automotive company, and their end goal is to track car parts for warranty recall issues. So, they want to track every single part that goes into a particular car, and there are about 30,000 to 35,000 parts in each of these cars, all the way from the manufacturing floor to when the car is sold, to when that particular part is eventually replaced, towards the end of the lifecycle of that part. So for this, they have put together a small test group of their partners: a couple of the parts manufacturers, their second-tier partners, the National Highway Safety Administration is part of this group, and also a couple of dealers and service centers. Now, if you just look at this group of partners, you will see some of these parties have high technology backgrounds, just like the auto manufacturers themselves or the part manufacturers. There are also low-modality or low IT-competency partners, such as the dealers and service centers, for whom desktop PCs are literally the extent of their IT competency. Now, the majority of these are on multiple clouds. This particular auto customer is on AWS, one manufacturer is on Azure, another one is on GCP. Now, they all have to share these large files between each other, making sure that there is some transparency and that business rules are applicable. For example, two partners who make the same parts or similar parts cannot see each other's data. Most of the participants cannot see the PII data that is not applicable to them; only the service center can see that. The National Highway Safety Administration has read access, not write access.
A lot of that needed to be done, and their alternatives before they started using Vendia were either to use point-to-point APIs, which was very expensive and very cumbersome; it works for a finite, small set of parties, but it does not scale as and when you add more participants into this particular network. And the second option for them was blockchain, which they did use; they used Hyperledger Fabric and they used Ethereum Private to see how this works, but the scalability: with Ethereum Private it's about 14 to 15 transactions per second, and with Hyperledger Fabric it taps out at 100, or 150 on a good day, transactions per second of throughput, and that's just not useful. All of these are always-on systems, they're not Serverless, so just provisioning capacity, our customers said, took them two to three weeks per participant. So, it's just not a scalable solution. With Vendia, what we delivered to them was this virtual data lake, where the sources of this data are on multiple clouds, in multiple accounts owned by multiple parties, but all of that data is shared in a virtual data lake with all of the permissions, with all of the logging, with all of the security, PII, and compliance. Now, this particular auto manufacturer and the National Highway Safety Administration can run their ML algorithms to gain intelligence off of it, and start to understand patterns, such as when certain parts go bad, or what's the propensity of a certain manufacturing unit to produce faulty parts, and so on, and so forth. This really shows you this concept of unstructured data being shared between parties that are not, you know, connected with each other, when there are data silos. But I'd love to follow this up with another example about, you know, democratization; democratization is very important to Vendia.
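The sharing rules Shruthi describes, where competing manufacturers are blinded from each other, PII is restricted to the service center, and the regulator gets read-only access, amount to a deny-by-default authorization check. The sketch below is illustrative only; the party names, rule table, and `can_access` helper are assumptions for this example, not Vendia's actual API.

```python
# Hypothetical deny-by-default policy table for the car-parts network
# described above. Parties and rules are illustrative, not a real product API.

RULES = {
    "nhtsa":          {"read": True,  "write": False, "pii": False},  # regulator: read-only
    "service_center": {"read": True,  "write": True,  "pii": True},   # only party allowed to see PII
    "parts_maker_a":  {"read": True,  "write": True,  "pii": False},
    "parts_maker_b":  {"read": True,  "write": True,  "pii": False},
}

# Pairs of competitors who must never see each other's records (stored sorted).
COMPETITORS = {("parts_maker_a", "parts_maker_b")}

def can_access(party: str, owner: str, action: str, contains_pii: bool = False) -> bool:
    """Return True if `party` may perform `action` on a record owned by `owner`."""
    rule = RULES.get(party)
    if rule is None:
        return False                      # unknown parties are denied by default
    if not rule.get(action, False):
        return False                      # e.g. the regulator has read, not write
    if contains_pii and not rule["pii"]:
        return False                      # PII is visible only to the service center
    if tuple(sorted((party, owner))) in COMPETITORS:
        return False                      # competing manufacturers are mutually blinded
    return True
```

The key design choice, matching the zero-trust theme later in the conversation, is that nothing is allowed unless a rule explicitly grants it.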
When Tim launched Lambda and founded the Serverless movement as a whole at AWS, one very important thing happened: it lowered the barrier to entry for a new wave of businesses that could just experiment and try out new things; if it failed, they scrapped it, and if it worked, they could scale it out. And that was possible because of the entry point, because of the pay-per-use model, and because of the architecture itself, and our vision and mission for Vendia is that Vendia fuels the next generation of multi-party connected distributed applications. My second design partner is actually a non-profit in the animal welfare industry. Their mission is to maintain a no-kill status for dogs and cats in the United States. And the number one cause of overpopulation of dogs and cats in the shelters is dogs and cats lost during natural disasters, like the hurricane season. And when that happens, and when, let's say, your dog gets lost and you want to find it, the ID or the chip-reading is not reliable, so they want to do this search through pictures. But we also know that if you look at a picture of a dog, four people can come up with four different breed names, and this particular non-profit has 2,500 plus partners across the U.S., and they're almost all low to no IT modality, some with higher IT competency, and there's huge turnover because of volunteer employees. So, what we did for them was come up with a mechanism where they could connect with all 2,500 of these participants very easily, in a very cost-effective way, and get all of the pictures of all of the dogs in all of these repositories into one data lake, so they can run some kind of dog facial recognition algorithm on it and identify where a lost dog is in minutes, as opposed to the days it used to take before. So, you see a very large customer with very sophisticated IT competency use this, and also a non-profit being able to use this.
And they were both able to get to this outcome in days, not the months or years it would take with blockchain, just under a few days, so we're very excited about that. >> Thank you so much for the examples. All right, Tim, before we get to the end, I wonder if you could take us under the hood a little bit here. My understanding, the solution that you talk about, it's universal apps, or what you call "unis" -- >> Tim: Unis? (laughs) >> I believe, so if I saw that right, give me a little bit of compare and contrast, if you will. Obviously there's been a lot of interest in what Kubernetes has been doing. We've been watching closely, you know, there's connections between what Kubernetes is doing and Serverless with the Knative project. When I saw the first video talking about Vendia, you said, "We're serverless, and we're containerless underneath." So, help us understand, because at, you know, a super high level, some of the multicloud and making things very flexible sound very similar. So you know, how is Vendia different, and why do you feel your architecture helps solve this really challenging problem? >> Sure, sure, awesome! You know, look, one of the tenets that we had here was that things have to be as easy as possible for customers, and if you think about the way somebody walks up today to an existing database system, right? They say, "Look, I've got a schema, I know the shape of my data." And a few minutes later they can get a production database; now it's single user, single cloud, single consumer there, but it's a very fast, simple process that doesn't require having code, hiring a team, et cetera, and we wanted Vendia to work the same way. Somebody can walk up with a JSON schema, hand it to us, and five minutes later they have a database, only now it's a multiparty database that's decentralized, so it runs across multiple platforms, multiple clouds, you know, multiple technology stacks instead of being single user.
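The "hand us a JSON schema, get a database" flow Tim describes starts from an ordinary JSON Schema declaring the shape of the shared data. The schema below and the tiny `validate_record` checker are a hedged sketch of that idea; the field names and the checker are assumptions for illustration (a real deployment would use a full JSON Schema validator, and Vendia's actual ingestion is not shown here).

```python
# An illustrative JSON Schema for the car-parts tracking example earlier
# in the conversation. Field names are invented for this sketch.
CAR_PART_SCHEMA = {
    "title": "CarPart",
    "type": "object",
    "properties": {
        "partId":      {"type": "string"},
        "vin":         {"type": "string"},
        "supplier":    {"type": "string"},
        "installedAt": {"type": "string", "format": "date-time"},
    },
    "required": ["partId", "vin"],
}

def validate_record(schema: dict, record: dict) -> bool:
    """Minimal check: required fields present, string-typed fields are strings.
    A real system would use a complete JSON Schema validator instead."""
    for field in schema.get("required", []):
        if field not in record:
            return False
    for key, value in record.items():
        spec = schema["properties"].get(key)
        if spec and spec["type"] == "string" and not isinstance(value, str):
            return False
    return True
```

The point of starting from a declared schema is exactly what Tim emphasizes: the customer brings the shape of the data, and everything else (replication, multi-party access) can be generated from it.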
So, that's kind of goal one: make that as easy to use as possible. The other key tenet though is we don't want to be the least common denominator of the cloud. One of the challenges with saying everyone's going to deploy their own servers, they're going to run all their own software, they're all going to co-deploy a Kubernetes cluster, is that, as Shruthi was saying, first, for anyone for whom that's a challenge, if you don't have a whole IT department wrapped around you, that's a difficult proposition to get started on no matter how amazing that technology might be. The other challenge with it though is that it locks you out; sort of the inverse of a lock-in process, right, is the lock-out process. It locks you out of some of the best and brightest things the public cloud providers have come up with, and we wanted to empower customers, you know, to pick the best of breed. Maybe they want to go use IBM Watson, maybe they want to use a database on Google, and at the same time they want to ingest IoT on AWS, and they want it all to work together, and want all of that to be seamless, not something where they have to recreate an experience over, and over, and over again on three different clouds. So, that was our goal here in producing this. What we designed as an architecture was decentralized data storage at the core of it. So, think about all the precepts you hear with blockchain; they're all there, they all just look different. So, we use a NoSQL database to store data so that we can scale that easily. We still have a consensus algorithm, only now it's a high-speed serverless and cloud-function-based mechanism. You know, instead of smart contracts, you write things in a cloud function like Lambda instead, so no more learning Solidity; now you can use any language you want.
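The "blockchain precepts on ordinary cloud storage" idea Tim describes can be illustrated with the simplest of those precepts: a tamper-evident, hash-chained log. The sketch below shows only that chaining property; it is an assumption-laden toy, not Vendia's implementation, and it omits the multi-party consensus step Tim mentions (which in his description runs in cloud functions rather than a dedicated blockchain node).

```python
import hashlib
import json

# A minimal hash-chained ledger: each entry commits to the previous entry's
# hash, so any modification to history is detectable.

def _digest(body: dict) -> str:
    # Canonical JSON so the hash is stable regardless of dict ordering.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, payload: dict) -> list:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    chain.append({**body, "hash": _digest(body)})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; False if anything was tampered with."""
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev": block["prev"]}
        if block["prev"] != prev or block["hash"] != _digest(body):
            return False
        prev = block["hash"]
    return True
```

Because the log is just dictionaries, it can live in any scalable NoSQL store, which is the substitution Tim is describing: keep the verifiable-history property, drop the single-box blockchain deployment model.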
So, we changed how we think about that architecture, but many of those ideas that people are really excited about in blockchain, its capabilities and the vision for the future, are still alive and well; they've just been implemented in a way that's far more practical and effective for the enterprise. >> All right, so what environments can I use today for your solution? Shruthi talked about customers spanning across some of the clouds, so what's available kind of today, and what's on the roadmap in the future? Will this extend beyond, you know, maybe the top five or six hyperscalers? Does it just require Serverless underneath? And will things that are in a customer's own data center eventually be supported? >> Absolutely. So, what we're doing right now is having people sign up for our preview release, so in the next few weeks, we're going to start turning that on for early access to developers. That early access program will be multi-account, focused on AWS, and then at the end of summer, we'll be doing our GA release, which will be multicloud, so we'll actually be able to operate across multiple clouds and multiple cloud services, on different platforms. But even from day one, we'll have API support in there. So, if you've got a service, it could even be running on a mainframe, it could be on-prem; if it's API based you can still interact with the data, and still get the benefits of the system. So, developers, please start signing up, you can go find more information on vendia.net, and we're really looking forward to getting some of that early feedback and hearing more from the people that we're the most excited to have start building these projects. >> Excellent, what a great call to action to get the developers and users in there.
Shruthi, if you could just give us the last bit; you know, the thing that's been fascinating, Tim, when I look at the Serverless movement, you know, I've talked to some amazing companies that were two or three people (Tim laughing) out of their basement, and they created a business, and they're like, "Oh my gosh, I got VC funding," and it's usually sub $10,000,000. So, I look at your team; I'd heard, Tim, you're the primary coder on the team. (Tim laughing) And when it comes to the seed funding it's, you know, compared to many startups, a small number. So, Shruthi, give us a little bit, if you could, of the speeds and feeds of the company, your funding, and any places that you're hiring. >> Yeah, we are definitely hiring, let me start from there! (Tim laughing) We're hiring for developers, and we are also hiring for solution architects, so please go to vendia.net, we have all the roles listed there, and we would love to hear from you! And the second one, funding, yes. Tim is our main developer and solutions architect here, and look, the Serverless movement really helped quite a few companies, including us, to build this and bring it to market at record speed, and we're very thankful that Tim and AWS started taking this stance, you know, back in 2013, 2014, to bring this to market and democratize it. I think when we brought this new concept to our investors, they saw what this could be. It's not an easy concept to understand in the first wave, but when you understand the problem space, you see that the opportunity is pretty endless. And I'll say this for our investors, on behalf of our investors: they saw a real founder-market fit between us.
We're literally the two people who have launched and run businesses for both Serverless and blockchain at scale, so that's what they thought was very attractive about us. And then look, it's Tim and I, and we're looking to hire 8 to 10 folks, and I think we have gotten to a space where we're making a meaningful difference to the world, and we would love for more people to join us, join this movement, democratize this big dispersed data problem, and solve for it. And help us create more meaning from the data that our customers and companies worldwide are creating. We're very excited, and we're very thankful to all of our investors for being deeply committed to us and having conviction in us. >> Well, Shruthi and Tim, first of all, congratulations -- >> Thank you, thank you. >> Absolutely looking forward to, you know, watching the progress going forward. Thanks so much for joining us. >> Thank you, Stu, thank you. >> Thanks, Stu! >> All right, and definitely tune in to our regular conversations on Cloud Native Insights. I'm your host Stu Miniman, and looking forward to hearing more about your Cloud Native Insights! (upbeat electronic music)

Published Date : Jul 2 2020


Dave Husak & Dave Larson, HPE | HPE Discover 2020


 

>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, and welcome back to theCUBE's coverage of HPE Discover 2020, the virtual experience. I'm your host Stu Miniman, and I'm really happy to be joined on the program by two of our CUBE alumni; we have the Daves from Hewlett Packard Labs. Sitting in the screen next to me is Dave Husak, he is a fellow and general manager for the Cloudless Initiative. And on the other side of the screen, we have Dave Larson, vice president and CTO of the Cloudless Initiative. Dave and Dave, thank you so much for joining us again. >> Delighted to be here. >> All right, so specifically we're going to be talking a bit about security, obviously, you know, very important in the cloud era as we build cloud native architectures. You know, Dave Husak, I guess, why don't you set the stage for us a little bit, of, you know, where security fits into HPE overall and, you know, the mission; you know, last year there was a lot of buzz and discussion and interest around Cloudless. So just put that as a start and then we'll get into a lot of discussion about security. >> Right, yeah, last year we did, you know, launch the initiative, and, you know, we framed it as composed of three components, the most important aspect of which was the Cloudless Trust Fabric, which was, you know, built on the idea of intrinsic security for all workload endpoints, right. And this is a theme that you see playing out, you know, a year later, I think across the industry. You hear that language and, you know, that kind of idea being promoted in the context of zero trust, you know, new capabilities being launched by VMware and other kinds of runtime environments, right. And you know, the way I like to say it is that we have entered an era of security first in IT infrastructure.
It's no longer going to be practical to build IT infrastructure and then, you know, have products that secure it, right; you know, build perimeters, do micro-segmentation, or anything like that. Workload endpoints need to be intrinsically secure. And, you know, the upshot of that really at this point is that all IT infrastructure companies are security companies now. Acknowledge it or not, like it or not, we're all security companies now. And so, you know, a lot of the principles applying in the Cloudless Trust Fabric are those zero trust principles, based on cryptographic workload identity, leveraging unique aspects of HPE's products and infrastructure that we've already been delivering, with the hardware and silicon root of trust built into our ProLiant servers and other capabilities like that. And, you know, our mission, my mission, is to propel that forward and ensure that HPE is, you know, at the forefront of securing everything. >> Yeah, excellent, definitely, you know, love the security first discussion. Every company we've talked to, absolutely, security is not only a C-level but, you know, typically a board-level discussion. I guess my initial feedback, as you would say, if every company today is a security company, many of them might not be living up to the expectation just yet. So Dave Larson, you know, applications are, you know, at the core of what we look at in cloud native. It's new architectures, new design principles. So give us, you know, HPE's thoughts on how security fits into that, and what's different from how we might've thought about security in the past with applications? >> Well, I think Dave touched on it, right? From a trust fabric perspective, we have to think of moving to something where the endpoints themselves, whether they're workloads or services, are actually intrinsically secure, and that we can instantiate some kind of a zero trust framework that really benefits the applications.
It really isn't sufficient to do intermediate inspection. In fact, the primary reason why that's no longer possible is that the world is moving to encryption everywhere. And as soon as all packets are encrypted in flight, notwithstanding claims to the contrary, it's virtually impossible to do any kind of inference on the flows to apply any meaningful security. But the way we see it is that the transition is moving to a modality where all services, all workloads, all endpoints can be mutually attested and cryptographically identified in a way that allows a zero trust model to emerge, so that all endpoints can know what they are speaking to on the remote end and, by authorization principals, determine whether or not they're allowed to speak to those. So from an HPE perspective, the area where we build is from the bottom up: we have a silicon root of trust in our server platform. It's part of our iLO 5 Integrated Lights Out baseboard management controller. We can actually deliver a discrete and measurable identity for the hardware and project it up into the workload, into the software realm. >> Excellent. I heard you mention identity; it makes me think of the Scytale acquisition that HPE made early this year. People in the cloud native community and at KubeCon know SPIFFE, of course, a project that had gotten quite a bit of attention. Can you give us a little bit as to how that acquisition fits into this overall discussion we were just having? >> Oh yeah, so we acquired Scytale into the initiative at the beginning of this year. As you understand, Stu, cryptographic identity is fundamental to zero trust security because, like Dave pointed out, we're no longer relying on intermediary devices, firewalls, or other kinds of functions to manage and authorize those communications. So the idea of building cryptographic identity into all workload endpoints, devices, and data is sort of a cornerstone of any zero trust security strategy.
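For readers unfamiliar with SPIFFE, it names workloads with URIs of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;. The validator below is a simplified sketch of that naming convention only; a real deployment would obtain and verify an attested SVID through the official SPIFFE/SPIRE libraries rather than parse a string, so treat this as illustrative.

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str):
    """Return (trust_domain, path) for a well-formed SPIFFE ID, else None.

    Sketch only: checks the spiffe:// scheme and that a trust domain is
    present. It does not enforce the full SPIFFE ID grammar.
    """
    parts = urlparse(uri)
    if parts.scheme != "spiffe" or not parts.netloc:
        return None
    return parts.netloc, parts.path
```

The useful property is that the identity names *what* the workload is (trust domain plus path), not *where* it runs, which is exactly the decoupling from IP addresses discussed later in this conversation.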
We were delighted to bring the team on board, not only from the standpoint that they are the world's experts, original contributors, and moderators and committers in the stewardship of SPIFFE and SPIRE, the two projects in the CNCF, but also because the impact they're going to have on HPE's product development, hardware and software, is going to be outsized. And I'll have to point this out as well, you know: this is the most prominent open source project that HPE is now stewarding, right, in terms of its acceptance. SPIFFE and SPIRE are both poised, and I may have an announcement here shortly, but we expect they're going to be promoted to the incubating phase of CNCF maturity from the Sandbox; SPIFFE was actually one of the first Sandbox projects in the CNCF. And so it's going to join that pantheon of, you know, the top few dozen out of I think 1,390 projects in the CNCF. So like you pointed out, Stu, you know, SPIFFE and SPIRE are right now, you know, the world's leading candidate as, you know, sort of the certificate standard for cryptographic workload endpoint identity. And we're looking at that as a very fundamental enabling technology for this transformation that the industry is going to go through. >> Yeah, it's really interesting if we pull on that open source thread a little bit more. You know, I think back to earlier in my career, you know, 15, 20 years ago, and if you talked to a CIO, you know, security might be important to them, but what they were building and how their IT infrastructure worked was something that they kept very close, and if you were a vendor supplying to them, you had to be under NDA to understand it, because that was a differentiation. Now we're talking about shifting to cloud, we're talking about open source; you know, even when I talk to the financial institutions, they're all talking amongst themselves about how do we share best practices, because it's not, am I secure? It's, we all need to be secure.
I wonder if you can comment a little bit on that trend, you know, the role of open source. >> Yeah, this is an extension of Kerckhoffs's principle, right? The idea that a security system has to be secure even if you know the system, right; that is, it's only the secrecy of the keys in the communication that matters. And that is playing out at the highest level in our industry now, right. So it is, like I said, cryptographic identity and identity-based encryption that are the cornerstones of building a zero trust fabric. You know, one of the other things, because you mentioned it, that we also observed is the CNCF, the Apache Foundation. The other thing that's, I think, a contrast to 15 years ago, right, back 15, 20 years ago, open source was a software development phenomenon, right. Where, you know, the usual idea, you know, there's repositories of code, you pull them down, you modify them for your own particular purposes, and you upstream the changes and such, right. It's less about that now. It is much more a model for open source operations than it is a model for open source development. Most of the people that are pulling down those repositories are just using them, they're not modifying them, right. And as you also, I think, understand, right, the framework of the CNCF landscape is comprehensive, right? You can build an entire IT infrastructure operations environment by, you know, taking storage technologies, security technologies, monitoring and management; you know, it's complete, right. And it is, you know, becoming really, you know, a major operational discipline out there in the world to harness all of that development, harness the open source communities, not only in the software, not only in the security space, but I think, you know, comprehensively, and that engine of growth and development is I think probably the largest, you know, manpower and brainpower, and, you know, operational kind of active-daily-users model out there now, right.
And it's going to be critical, I think, for this decade that's coming, that the successful IT infrastructure companies be very tightly engaged with those communities and that process, because open source operations is the new thing. It's like, you know, DevOps became OpsDev, or something like that is the trend. >> Yeah, and I'm glad you brought that up. You know, I think about the DevOps movement: it really fused security in; it can't be a bolt-on, it can't be an afterthought. The mantra I've heard over the last few years is, security is everyone's responsibility. Dave Larson, you know, the question I have for you is, how do we make sure, you know, policy is enforced? You know, I think about an organization where everyone's responsible for it, you know, who's actually making sure that things happen? Because, you know, if everybody's looking after it, it should be okay. But, you know, bring us down a little bit from the application standpoint. >> Well, I would say, you know, first of all, you have to narrow the problem down, right? The more we try to centralize security with discrete appliances at some kind of a choke point, the combinatorial explosion of policy declaratives that are necessary in order to achieve the solution becomes untenable, right? There is no way to achieve the right kind of policy enforcement unless we get as close as possible to the actual workloads themselves, unless we implement a zero trust model where only known and authorized endpoints are allowed to communicate with each other, you know. We've lived with a really unfortunate situation on the internet at large for the last couple of decades, where an IP address is both a location and an identifier. This is a problem because it can be abused; it's something that can be changed.
It's something that is easily spoofed, and frankly, the nature of that element of the way we connect applications together is the way that virtually all exploits get into the environment and cause problems. If we move to a zero trust model, where the individual endpoints will only speak with, only respond to, things that are authorized, and they trust nothing else, we eliminate 95 to 99% of the problem. And we are in an automated stance that will allow us to have much better assurance of the security of the connections between the various endpoints and services. >> Excellent. So, you know, one of the questions that always comes up: some of the pieces we're talking about here are open source. You talk about security and trust across multiple environments. How does HPE differentiate from, you know, everything else out there, and, you know, how are you taking the leadership position? I'd love to hear both of your commentary on that. >> Yeah, well, like I said initially, the real differentiation for us is that HPE was the market leader for industry standard servers from a security perspective. Three years ago, in our ProLiant Gen10 servers, when we announced them, they had the silicon root of trust, and we've shipped more than a million and a half servers into the market with this capability that is unique in the market. And we've been actively extending that capability so that we can project the identity, not just of the actual hardware itself, but so that we can bind it, in a multi-factor sense, to the individual software components that are hosted on that server, whether it's the operating system, a hypervisor, a VM, a container framework, an actual container, or a piece of code from a serverless perspective. All of those things need to be able to be identified, and we can bring a multi-factor identity capability to individual workloads that can be the underpinning for this zero trust connection capability.
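The two ideas Dave Larson lays out here, endpoints that trust identities rather than addresses, and the combinatorial explosion of centralized policy, can be sketched in a few lines. This is illustrative only: the SPIFFE-style URIs and function names are hypothetical, not HPE's or SPIFFE's actual APIs.

```python
# Sketch of a zero-trust check: an endpoint responds only to callers
# presenting a known workload identity (modeled as SPIFFE-style URIs).
# The caller's IP address plays no part in the decision, because an IP
# is a location, not an identity, and can be spoofed or reassigned.

AUTHORIZED_CALLERS = {
    "spiffe://example.org/frontend/web",
    "spiffe://example.org/billing/api",
}

def accept_connection(caller_identity: str, caller_ip: str) -> bool:
    # Only known, authorized identities get a response; everything
    # else is ignored, regardless of where it connects from.
    return caller_identity in AUTHORIZED_CALLERS

# And the policy-explosion arithmetic: pairwise, address-based rules
# grow quadratically with workload count, while per-workload identity
# policy ("who may call me") grows linearly.

def pairwise_rules(n: int) -> int:
    return n * (n - 1)          # one rule per ordered pair of endpoints

def identity_policies(n: int) -> int:
    return n                    # one policy per workload

print(accept_connection("spiffe://example.org/billing/api", "10.0.0.7"))  # True
print(accept_connection("spiffe://example.org/intruder", "10.0.0.7"))     # False
print(pairwise_rules(1000), identity_policies(1000))  # 999000 vs 1000
```

At a thousand workloads, the perimeter model is already maintaining nearly a million pairwise rules, which is the "untenable" point made above.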
>> Great. And David, anything you'd like to add there? >> No, like what he said, I think HPE is uniquely positioned, you know, with the depth and the breadth of our installed base of platforms that are already zero trust ready, if you will, right. Coupled with the identity technology that we're developing in the context of the Scytale acquisition, and Dave's and my work in building the cloudless trust fabric, you know, those are, like I said, the cornerstones of these architectures, right? And HPE has a couple of unfair advantages here, you know: the breadth and depth of our customer base and the installed base of the systems already out there. While the world is transitioning, you know, inevitably, to these kinds of security architectures, these kinds of IT infrastructure architectures, HPE has, you know, a leadership position by default here that we can take advantage of, and our customers can reap the benefits of without, you know, rebuilding, forklift upgrading, or otherwise. As Dave talked about, you know, a lot will change, right. There's more to do, right? As we move from, you know, IP addresses and port numbers as identities for security, because we know that perimeter security, network security like that, is busted, right. It is, you know, at the root of every headline-making, you know, kind of advanced persistent threat vulnerability; it's at the root of all those problems, right. There are technologies like OPA, right; you know, policy has to be reframed in the context of workload identity, not network identity, you know. I like to call this the microsegmentation fallacy, right. You know, perimeters are broken; it's not a valid security strategy anymore.
So the answer can't be, let's just draw smaller perimeters, especially since we're now filling them up with ever more, you know, dynamic, evanescent kinds of workload endpoints: you know, containers coming and going at a certain pace, and serverless instances, right. All of those things springing up and being torn down, you know, on very short life cycles. It is inconceivable that traditional, you know, perimeter-based, microsegmentation-based security frameworks can keep up with the combinatorial explosion and the pace with which, you know, orchestration frameworks are going to be deploying these endpoints. There's, you know, a lot more to do, but this is the transformation story of the 2020s. You know, IT infrastructure is going to look very different two, five, 10 years from now than it does today. And, you know, we believe HPE has, like I said, a few unfair advantages to lead the world in terms of those transformations. >> Excellent. Well, I appreciate the look towards the future as well as where we are today. Dave and Dave, thanks so much for joining. >> Thank you, Stu. >> Thanks, Stu, pleasure. >> All right, we'll be back with lots more coverage of HPE Discover 2020, the Virtual Experience. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)

Published Date : Jun 24 2020


Power Panel | PegaWorld iNspire


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of PegaWorld iNspire, brought to you by Pegasystems. >> Hi everybody, this is Dave Vellante and welcome to theCUBE's coverage of PegaWorld iNspire 2020. And now that the dust has settled on the event, we wanted to have a little postmortem power panel, and I'm really excited to have three great guests here today. Adrian Swinscoe is a customer service and experience advisor and the best-selling author of a couple of books: "How to Wow" and "Punk CX." Adrian, great to see you, thanks for coming on. >> Hey Dave. >> And Shelly Kramer is a principal analyst and a founding partner at Futurum Research, CUBE alum. Shelly, good to see you. >> Hi, great to see you too. >> And finally, Don Schuerman, who is the CTO of Pegasystems and one of the people who was really highlighting the keynotes. Don, thanks for your time, appreciate you coming on. >> Great to be here. >> Guys, let's start with some of the takeaways from the event, and if you don't mind I'm going to set it up. I had many, many notes. But I'll take a cue from Alan's keynote, where he talked about three things: rethinking customer engagement, that whole experience, and "as a service," which certainly came front and center in the second half of the last decade and, we think, is going to continue in spades. And then new tech, we heard about that. Don, we're going to ask you to chime in on that. Modern software, microservices, we've got machine intelligence now. And then I thought there were some really good customer examples. We heard from Siemens, we heard from the CIO and head of digital at Aflac, the Bank of Australia. So, some really good customer examples. But Shelly, let me start with you. What were your big takeaways of PegaWorld iNspire 2020, the virtual edition?
>> You know, what I love is a focus, and we have talked a lot about that here at Futurum Research, but what I love is the thinking that what really is important now is to think about rethinking and kind of tearing things apart. Especially when we're in a time, we're in difficult economic times, and so instead of focusing on rebuilding and relaunching as quickly as possible, I think that now's the time to really focus on reexamining what is it that our customers want? How is it that we can best serve them? And really sort of start from ground zero and examine our thinking. And I think that's really at the heart of digital transformation, and I think that both in this virtual event and in some interviews I was lucky enough to do in advance with some of the Pega senior team, that was really a key focus, is really thinking about how we can re-architect things, how we can do things in ways that are more efficient, that impact people more effectively, that impact the bottom line more effectively. And to me that's really exciting. >> So Adrian, CX is obviously your wheelhouse. A lot of the conversation at PegaWorld iNspire was of course about customer experience, customer service. How do you think the content went? What were some of the highlights for you? And maybe, what would you have liked to hear more of? >> Well I think, thanks Dave, I actually really enjoyed it. I actually kind of thought was, first of all I should say that I've been to a bunch of virtual summits and I thought this was one of the best ones I've done in terms of its pace and its interactivity. I love the fact that Don was bouncing around the screen, kind of showing us around the menu and things. I thought that was great. But the things that I thought really stood out for me was this idea of the context around accelerating digital transformation. And that's very contextual, it's almost being forced upon us. But then this idea of also the center-out thinking and the Process Fabric. 
Because it really reminded me of, and Don, you can maybe correct me if I'm wrong here, taking a systems-thinking approach to delivering the right outcomes for customers. Because it's always struck me that there's a contradiction at the heart of the rhetoric around customer-centricity, where people say they want to do the right things by customers, but then they force them down this channel-centric or process-centric way of thinking. And so actually I thought it was really refreshing to hear about this center-out and Process Fabric platform that Pega's building. And I thought it's really exciting, because it felt like we're actually going to start to take a more systemic approach to delivering great service and great experience. So I thought that was really great. Those were my big headlines out of the summit.
And the Dropkick Murphys couldn't be live, but you guys still leveraged that, so well done. One of the better ones that I've seen. But I want to stay on your point there. Alan talked about some of the mistakes that are made, and one of the questions I have for you guys is, what is the state of customer experience today, and why the divergence between great, and good, and pretty crappy? And Alan talked about, well, people try to impose business process top-down, or they try to infuse logic in the database bottom-up. You really got to do that middle-out. So, Don I want to come back to you. Let's explore that a little bit. What do you really mean by middle-out? Where am I putting the actual business logic? >> Yeah, I think this is important, right. And I think that a lot of time we have experiences as customers. And I had one of these recently with a cable provider, where I spent a bunch of time on their website chatting with a chatbot of some kind, that then flipped me over to a human. When the chatbot flipped me to the human, the human didn't know what I was doing with the chatbot. And that human eventually told me I had to call somebody. So I picked up the phone, I made the phone call. And that person didn't know what I was doing on chat with the human or with the chatbot. So every time there's a customer, I'm restarting. I'm reexplaining where I am. And that to me is a direct result of that kind of channel-centric thinking, where all of my business logic ends up embedded in, "Well hey, we're going to build a cool chatbot. "And now we're going to build a cool chat system. "And by the way, "we're going to keep our contact centers running." But I'm not thinking holistically about the customer experience. And that's why we think this center-out approach is so important, because I want to go below the channel. And I want to think about that customer journey. What's the outcome I'm trying to get to? 
In the case of my interaction, I was just trying to increase my bandwidth so that I could do events like this, right? What's that outcome that I'm trying to get to and how do I get the customer to that outcome in a way that's as efficient for the business and as easy for the customer as possible regardless of what channel they're on. And I think that's a little bit of a new way of thinking. And again, it means thinking not just about the customer goal, but having an opinion, whether you are a business leader or an IT person, about where that logic belongs in your architecture. >> So, Adrian. Don just described the sort of bot and human experience, which mimics a lot of the human experience that we've all touched in the past. So, but the customer journey that Don talked about isn't necessarily one journey. There's multiple journeys. So what's your take on how organizations can do better with that kind of service. >> Well I think you're absolutely right, Dave. I mean, actually during the summer I was talking, I was listening to Paul Greenberg talk about the future of customer service. And Paul said something that I think was really straightforward but really insightful. He said, "Look, organizations think about customer journeys "but customers don't think about journeys "in the way that organizations do. "They think discontinuously." So it's like, "I'm going to go to channel one, "and then channel three, and then channel four, "and then channel five, and then back to channel two. "And then back to channel five again." And they expect those conversations to be picked up across those different channels. And so I think what we've got to do is develop, as Don said, build an architecture that is, that works around trying to support the different journeys but allows that flexibility and that adaptability for customers to jump around and to have one of those continuous but disconnected conversations. 
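Don's cable-company story boils down to each channel keeping its own state, so every handoff restarts the journey. A minimal sketch of the alternative, one shared conversation record that every channel reads from and appends to, might look like this (class and field names are hypothetical, not Pega's API):

```python
# Hypothetical sketch: a single conversation record shared across
# channels, so a handoff from chatbot to chat agent to phone agent
# keeps the customer's goal and history intact.

class Conversation:
    def __init__(self, customer_id: str, goal: str):
        self.customer_id = customer_id
        self.goal = goal          # the outcome, e.g. "increase bandwidth"
        self.history = []         # every channel appends here

    def log(self, channel: str, note: str) -> None:
        self.history.append((channel, note))

    def handoff_summary(self) -> str:
        # What the next agent (human or bot) sees on pickup,
        # instead of asking the customer to start over.
        steps = "; ".join(f"{ch}: {note}" for ch, note in self.history)
        return f"Goal: {self.goal}. So far: {steps}"

convo = Conversation("cust-42", "increase bandwidth")
convo.log("chatbot", "verified account, could not change plan")
convo.log("chat-human", "confirmed eligibility, needs billing approval")
print(convo.handoff_summary())
```

The point of the sketch is where the state lives: below the channels, attached to the journey, which is the "center-out" idea in miniature.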
But it's up to us to try and connect them all, to deliver the service and experience that the customers actually want. >> Now Shelly, a lot of the customer experience actually starts with the employees, and employees don't like when the customer is yelling at them saying, "I just answered all those questions. "Why do I have to answer them again?" So you've, at your firm, you guys have written a lot about this, you've thought a lot about it, you have some data I know you shared on theCUBE one time that 80% of employees are disengaged. And so, that affects the customer experience, doesn't it? >> Yeah it does, you know. And I think that when I'm listening to Don's explanation about his cable company, I'm having flashbacks to what feels like hundreds of my own experiences. And you're just thinking, "This does not have to be this complicated!" You know, ten years ago that same thing that Don just described happened with phone calls. You know, you called one person and they passed you off to somebody else, and they passed you off to somebody else, and you were equally as frustrated as a customer. Now what's happening a lot of times is that we're plugging technology in, like a chat bot, that's supposed to make things better but we're not developing a system and processes throughout our organization, and also change management, what do I want to say, programs within the organization and so we're kind of forgetting all of those things. So what's happening is that we're still having customers having those same experiences that are a decade old, and technology is part of the mix. And it really shouldn't be that way. And so, one thing that I really enjoyed, speaking about employees, was listening to Rich Gilbert from Aflac. And he was talking about when you're moving from legacy processes to new ones, you have to plan for and invest in change management. And we talk about this all the time here at Futurum, you know technology alone is never the answer. 
It's technology plus people. And so you have to invest in people, you have to invest in their training in order to be able to support and manage change and to drive change. And I think one really important part of that equation is also listening to your employees and getting their feedback, and making them part of the process. Because when they are truly on your front lines, dealing with customers, many times dealing with stressed, upset, frustrated customers, you know, they have a lot of insights. And sometimes we don't bring them into those conversations, certainly early enough in the process to help, to let them help guide us in terms of the solutions and the processes that we put in place. I think that's really important. >> Yeah, a lot of-- >> Shelly, I think-- >> If I may, a lot of the frustration with some employees sometimes is those processes change, and they're unknown going into it. We saw that with COVID, Don. And so, your thoughts on this? >> Yeah, I mean, I think the environment employees are working in is changing rapidly. We've got a customer, a large telecommunications company in the UK where their customer service requests are now being handled by about 4,000 employees pulled from their marketing department working distributed because that's the world that we're in. And the thing I was going to say in response to Shelly is, Alan mentioned in his keynote this idea of design thinking. And one of the reasons why I think that's so important is that it's actually about giving the people on the front lines a voice. It's a format for engaging the employees who actually know the day-to-day experiences of the customers, the day-to-day experiences of a customer service agent, and pulling them into the solution. How do we develop the systems, how do we rethink our processing, how does that need to plug into the various channels that we have? 
And that's why a lot of our focus is not just on the customer service technology, but the underlying low code platform that allows us to build those processes and those chunks of the customer journey. We often refer to them as "microjourneys" that lead to a specific outcome. And if you're using a low code based platform, something that allows anybody to come in and define that process, you can actually pull employees from the front lines and put them directly on your project teams. And all of a sudden you get better engagement but you also get this incredible insight flowing into what you're doing because you're talking to the people who live this day in and day out. >> Well and when you have-- >> So let's stay on this for a second, if we can. Shelly, go ahead please. >> Sure. When you have a chance to talk with those people, to talk with those front line employees who are having an opportunity to work with low code, no code, they get so excited about it and their jobs are completely, the way they think about their jobs and their contribution to the company, and their contribution to the customer, and the customer experience, is just so wonderful to see. And it's such an easy thing to do, so I think that that's really a critical part of the equation as it relates to success with these programs. >> Yeah, staying close to the customer-- >> Can I jump in? >> Yeah, please Adrian. >> Can I jump in on that a little, a second. I think Shelly, you're absolutely right. I think that it's a really simple thing. You talk about engagement. And one of the key parts of engagement, it seems to me, is that, is giving people a voice and making them feel important and feel heard. And so to go and ask for their opinion and to help them get involved and make a difference to the work that they do, the outcomes that their customers receive, and the overall productivity and efficiency, can only have a positive impact. 
And it's almost like, it feels self-evident that you'd do that but unfortunately it's not very common. >> Right. It does feel self-evident. But we miss on that front a lot. >> So I want to ask, I'm going to come back to, we talked about people process, we'll come back to that. But I want to talk about the tech. You guys announced, the big announcement was the Pega Process Fabric. You talked about that, Don, as a platform for digital platforms. You've got all these cool microservices and dynamic APIs and being able to compose on the fly, so some pretty cool stuff there. I wonder, with the virtual event, you know, with the physical event you've got the hallway traffic, you talk to people and you get face-to-face reactions. Were you able to get your kind of real-time reactions to the announcement? What was that like? Share with us please. >> Yeah, so, we got well over 1,000 questions in during the event and a lot of them were either about Process Fabric or comments about it. So I think people are definitely excited about this. And when you strip away all of the buzzwords around microservices and cloud, et cetera, I think what we're really getting at here is that work is going to be increasingly more distributed. We are living proof of that right now, the four of us all coming here from different studios. But work is going to be distributed for a bunch of reasons. Because people are more distributed, because organizations increasingly are building customer journeys that aren't just inside their walls, but are connected to the partners and their ecosystem. I'm a bank but I may, as part of my mortgage process, connect somebody up to a home insurer. And all of a sudden the home buying process goes beyond my four walls. And then finally, as you get all of these employees engaged with building their low code apps and being citizen developers, you want to let the 1,000 flowers to bloom but you also need a way to connect that all back together. 
And Process Fabric is about putting the technology in place to allow us to take these distributed bits of work that we need to do and weave them together into experiences that are coherent for a customer and easy for an employee to navigate. Because I think it's going to be really really important that we do that. And even as we take our systems and break them up into microservices, well customers don't interact with microservices. Customers interact with journeys, with experiences, with the processes you lay out, and making sure we can connect that up together into something that feels easy for the customer and the employee, and gets them to that result they want quickly, that's what the vision of Process Fabric is all about. >> You know, it strikes me, I'm checking my notes here. You guys talked about a couple of examples. One was, I think you talked about the car as sort of a mobility experience, maybe, you know, it makes me wonder with all this AI and autonomous vehicle stuff going on, at what point is owning and driving your own vehicle really going to be not the norm anymore? But you talked about this totally transformed, sorry to use that word, but experience around autos. And certainly financial services is maybe a little bit more near-term. But I wonder Shelly, Futurum, you know, you guys look ahead, how far can we actually go with AI in this realm? >> Well, I think we can go pretty far and I think it'll happen pretty fast. And I think that we're seeing that already in terms of what happened when we had the Coronavirus COVID-19, and of course we're still navigating through that, is that all of a sudden things that we talked about doing, or thought about doing, or planned doing, you know later on in this year or 2021, we had to do all of those things immediately. And so again, it is kind of like ripping the Bandaid off. 
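Don's description of Process Fabric, distributed chunks of work woven into one coherent journey, can be sketched as a registry of "microjourneys" that may each live in a different system or partner, composed into a single experience. This is an illustration of the idea only, not Pega's actual API; all names here are invented:

```python
# Illustrative only. Each step below could be owned by a different
# system -- or, like the insurance offer, by a partner outside the
# bank's four walls. The "fabric" is the registry that lets them be
# woven into one journey the customer experiences as a single flow.

def credit_check(ctx):
    ctx["credit"] = "approved"
    return ctx

def appraisal(ctx):
    ctx["appraisal"] = "done"
    return ctx

def insurance_offer(ctx):
    # e.g. a home-insurance partner in the mortgage journey
    ctx["insurance"] = "quoted"
    return ctx

FABRIC = {
    "mortgage.credit": credit_check,
    "mortgage.appraisal": appraisal,
    "partner.home-insurance": insurance_offer,
}

def run_journey(step_names, ctx):
    # The customer sees one journey; each step may run anywhere.
    for name in step_names:
        ctx = FABRIC[name](ctx)
    return ctx

result = run_journey(
    ["mortgage.credit", "mortgage.appraisal", "partner.home-insurance"], {})
print(result)
```

Customers don't interact with microservices; they interact with the composed journey, which is why the weaving layer matters.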
And we're finding that AI plays a tremendously important role in relieving the workload on the frontline workers, and being able to integrate empathy into decision making. And you know, I go back to, I remember when you all first rolled out the empathy part of your platform, Don, and just watching a demo on that of how you can slide this empathy meter to be warmer, and see in true dollars and cents over time the impact of treating your customers with more empathy, what that delivers to a company. And I think that AI that continues to build and learn and again, what we're having right now, is we're having this gigantic volume of needs, of conversation, of all these transactions that need to happen at once, and great volumes make for better outcomes as it relates to artificial intelligence and how learning can happen more quickly over time. So I think that it's, we're definitely going to see more use of AI more rapidly than we might've seen it before, and I don't think that's going to slow down, at all. Certainly, I mean there's no reason for it to slow down. The benefits are tremendous. The benefits are tremendous, and let me step back and say, following a conversation with Rob Walker on responsible AI, that's a whole different ball of wax. And I think that's something that Pega has really embraced and planted a flag in. So I think that we'll see great things ahead with AI, and I think that we'll see the Pega team really leading as it relates to ethical AI. And I think that's tremendously important as well. >> Well that's the other side of the coin, you know. I asked how far can we go and I guess you're alluding to how far should we go. But Adrian, we also heard about agility and empathy. I mean, I want an empathic service provider. Are agility and empathy related to customer service, and how so? >> Well, David, I think that's a great question. 
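The empathy-meter demo Shelly describes, sliding a setting to make decisioning "warmer", comes down to next-best-action logic that weighs customer circumstances against sales goals. A toy sketch of that idea (the threshold, field names, and action strings are all invented for illustration, not Pega's implementation):

```python
# Toy sketch of an "empathy meter": one tunable setting that shifts
# next-best-action logic from selling toward assisting. Because the
# logic is centralized, turning the dial changes behavior in every
# channel at once rather than in each channel's own codebase.

def next_best_action(customer: dict, empathy: float) -> str:
    # empathy in [0, 1]; higher means prioritize helping over selling.
    # A visibly stressed customer overrides the dial entirely.
    if empathy >= 0.7 or customer.get("stressed"):
        return "ask how we can help / offer a payment deferral"
    return "offer an account upgrade"

calm = {"name": "A", "stressed": False}
stressed = {"name": "B", "stressed": True}

print(next_best_action(calm, empathy=0.2))      # sales-oriented setting
print(next_best_action(calm, empathy=0.9))      # dial turned warmer
print(next_best_action(stressed, empathy=0.2))  # circumstances override
```

The "dollars and cents over time" view in the demo would come from measuring retention against where that dial sits, which is what makes empathy a business decision rather than just a tone.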
I think that, you talk about agility and talk about empathy, and I think the thing is, what we probably know from our own experience is that being empathetic is sometimes going to be really hard. And it takes time, and it takes practice to actually get better at it. It's almost like a new habit. Some people are naturally better at it than others. But you know, organizationally, I talk about that we need to almost build, almost like an empathetic musculature at an organizational level if we're going to achieve this. And it can be aided by technology, but we, when we develop new muscles it takes time. And sometimes you go through a bit of pain in doing that. So I think that's where the agility comes in, is that we have to test and learn and try new things, be willing to get things wrong and then correct, and then kind of move on. And then learn from these kind of things. And so I think the agility and empathy, it does go hand in hand and it's something that will drive growth and increasing empathetic interactions as we go forward. But I think it's also, just to build on Shelly's point, I think you're absolutely right that Pega has been leading the way in this sort of dimension, in terms of its T-switch and its empathetic advisor. But now the ethical AI testing or the ethical bias testing adds a dimension to that to make sure it's not just about all horsepower, but being able to make sure that you can steer your car. To use your analogy. >> So AI's coming whether we like it or not. Right, Shelly? Go ahead. >> It is. One real quick real world example here is, you know, okay so we have this time when a lot of consumers are furloughed. Out of work. Stressed about finances. And we have a lot of Pega's customers are in the financial services space. Some of the systems that they've established, they've developed over time, the processes they've developed over time is, "Oh, I'm talking with Shelly Kramer and she has a "blah-blah-blah account here. 
"And this would be a great time to sell her on "this additional service," or whatever. And when you can, so that was our process yesterday. But when you're working with an empathic mindset and you are also needing to be incredibly agile because of current circumstances and situations, your technology, the platform that you're using, can allow you to go, "Okay I'm dealing "with a really stressed customer. "This is not the best time "to offer any additional services." Instead what we need to ask is this series of questions: "How can we help?" Or, "Here are some options." Or whatever. And I think that it's little tweaks like that that can help you in the customer service realm be more agile, be more empathetic, and really deliver an amazing customer experience as a result. And that's the technology. >> If I could just add to that. Alan mentioned in his keynote a specific example, which is Commonwealth Bank of Australia. And they were able, multiple times this year, once during the Australian wildfires and then again in response to the COVID crisis, to completely shift and turn on a dime how they interacted with their customer, and to move from a prioritization of maybe selling things to a prioritization of responding to a customer need. And maybe offering payment deferrals or assistance to a customer. But back to what we were talking about earlier, that agility only happened because they didn't have the logic for that embedded in all their channels. They had it centralized. They had it in a common brain that allowed them to make that change in one place and instantly propagate it to all of the 18 different channels in which they touch their customer. And so, being able to have agility and that empathy, to my mind, is explicitly tied to that concept of a center-out business architecture that Alan was talking about. >> Oh, absolutely. >> And, you know, this leads to discussion about automation, and again, how far can we go, how far should we go? 
Don, you've been interviewed many, many times, like any tech executive, about the impact of AI on jobs. And, you know, the typical response of course is, "No, we want augmentation." But the reality is, machines have always replaced humans; it's just that now it's the first time in terms of cognitive function. So it's a little different for us this time around. But it's clear, as I said, AI is coming whether we like it or not. Automation is very clearly on the top of people's minds. So how do you guys see the evolution of automation, the injection of automation into applications, the ubiquity of automation coming in this next decade? Shelly, let's start with you. >> You know, I was thinking you were going to ask Don that question so I'm just listening and listening. (laughing) >> Okay, well we can go with Don, that's-- >> No, I'm happy to answer it. It's fine, it just wasn't what I expected. You know, we are really immersed in the automation space. So I very much see the concerns that people on the front line have, that automation is going to replace them. And the reality of it is, if a job that someone does can be automated, it will be automated. It makes sense. It makes good business sense to do that. And I think that what we are looking at from a business agility standpoint, from a business resilience standpoint, from a business survival standpoint, is really how can we deliver most effectively to serve the needs of our customers. Period. And how can we do that quickly and efficiently and without frustration and in a way that is cost effective. All of those things play into what makes a successful business today, as well as what keeps employees, I'm sorry, as well as what keeps customers served, loyal, staying around. I think that we live in a time where customer loyalty is fleeting. And so I think that smart businesses have to look at how do we deepen the relationships that we have with customers? How can we use automation to do that?
And the thing about it, you know, I'll go back to the example that Don gave about his cable company that all of us have lived through. It's just like, "Oh my gosh. "There's got to be a better way." So compare that to, and I'm sure all of us can think of an experience where you had to deal with a customer service situation in some way or another, and it was the most awesome thing ever. And you walked away from it and you just went, "Oh my gosh. I know I was talking to a bot here or there." Or, "I know I was doing this, but that solved my problem. "I can't believe it was so easy! "I can't believe it was so easy! "I can't wait to buy something from this company again!" You know what I'm saying? And that's really, I think, the role that automation can play. Is that it can really help deepen existing relationships with our customers, and help us serve them better. And it can also help our employees do things that are more interesting and that are more relevant to the business. And I think that that's important too. So, yes, jobs will go. Yes, automation will slide into places where we've done things manually and repetitive processes before, but I think that's a good thing. >> So, we've got to end it shortly here but I'll give you guys each a last opportunity to chime in. And Adrian, I want to start with you. I invoked the T-word before, transformation, a kind of tongue-in-cheek joking because I know it's not your favorite word. But it is the industry's favorite word. Thinking ahead for the future, we've talked about AI, we've talked about automation, people, process and tech. What do you see as the future state of customer experience, this mix of human and machine? What do we have to look forward to? >> So I think that, first of all, let me tackle the transformation thing. I mean, I remember talking about this with Duncan Macdonald who is the CIO across at UPC, which is one of Pega's customers, on my podcast there the other week. 
And he talked about, he's the cosponsor of a three-year digital transformation program. But then he appended the description of that by saying it's a transformation program that will never end. That's the thing that I think about, because actually, if you think about what we're talking about here, we're not transforming to anything in particular, you know. It's not like going from here to there. And actually, the thing that I think we need to start thinking about is, rather than transformation we actually need to think about an evolution. And adopting an evolutionary state. And we talked about being responsive. We talked about being adaptable. We talked about being agile. We talk about testing and learning and all these different sorts of things; that's evolutionary, right? It's not transformational, it's evolutionary. If you think about Charles Darwin and the theory of the species, that's an evolutionary process. And there's a quote, as you've mentioned I authored this book called "Punk CX," there's a quote that I use in the book which is taken from a Bad Religion song called "No Control," and it goes, "There is no vestige of a beginning, "and no prospect of an end." And that quote comes from a 1788 book by James Hutton, which was one of the first treatises on geology, and what he found through all these studies was that in the formation of the earth and its continuous formation, there is no vestige of a beginning, no prospect of an end. It's a continuous process. And I think that's what we've got to embrace, is that actually change is constant. And as Alan says, you have to build for change and be ready for change. And have the right sort of culture, the right sort of business architecture, the right sort of technology to enable that. Because the world is getting faster and it is getting more competitive. This is probably not the last crisis that we will face.
And so, like in most evolutionary things, it wasn't the fittest and the strongest that survived, it was the ones that were most adaptable that survived. And I think that's the kind of thing I want to land on, is actually how, it's the ones that kind of grasp that, grasp that whole concept are the ones that are going to succeed out of this. And, what they will do will be... We can't even imagine what they're going to do right now. >> And, thank you. And Shelly, it's not only responding to, as Adrian was saying, to crisis, but it's also being in a position to very rapidly take advantage of opportunities and that capability is going to be important. You guys are futurists, it's in the name. Your thoughts? >> Well I think that, you know, Adrian's comments were incredibly salient, as always. And I think that-- >> Thank you. >> The thing that this particular crisis that we are navigating through today has in many ways been bad, but in other ways, I think it's been incredibly good. Because it has forced us, in a way that we really haven't had to deal with before, to act quickly, to think quickly, to rethink and to embrace change. Oh, we've got to work from home! Oh, we've got 20 people that need to work from home, we have 20,000 people that need to work from home. What technology do we need? How do we take care of our customers? All of these things we've had to figure out in overdrive. And humans, generally speaking, aren't great at change. But what we are forced to do as a result of this pandemic is change. And rethink everything. And I think that, you know, the point about transformation not being a beginning and an end, we are never, ever, ever done. 
It is evolutionary and I think that as we look to the future and to one of your comments, we are going faster with more exciting technology solutions out there, with people who are incredibly smart, and so I think that it's exciting and I think that all we are going to see is more and more and more change, and I think it will be a time of great resilience, and we'll see some businesses survive and thrive, and we'll see other businesses not survive. But that's been our norm as well, so I think it's really, I think we have some things to thank this pandemic for. Which is kind of weird, but I also try to be fairly optimistic. But I do, I think we've learned a lot and I think we've seen some really amazing, exciting things from businesses who have done this. >> Well thanks for sharing that silver lining, Shelly. And then, Don, I'm going to ask you to bring us to the finish line. And I'm going to close my final question to you, or pose it. You guys had the wrecking ball, and I've certainly observed, when it comes to things like digital transformations, or whatever you want to call it, that there was real complacency, and you showed that cartoon with the wrecking ball saying, "Ehh not in my life, not on my watch. "We're doing fine." Well, this pandemic has clearly changed people's thinking, and automation is really top of mind now at the executive level. So you guys are in a good spot from that standpoint. But your final thoughts, please? >> Yeah, I mean, I want to concur with what Adrian and Shelly said and if I can drop another rock quote in there. This one is from Bob Dylan. And Dylan famously said, "The times they are a-changin'." But the quote that I keep on my wall is one that he tossed off during an interview where he said, "I accept chaos. "I'm not sure if it accepts me."
But I think digital transformation looks a lot less like that butterfly emerging from a cocoon to go off happy to smell the flowers, and looks much more like accepting that we are in a world of constant and unpredictable change. And I think one of the things that the COVID crisis has done is sort of snapped us awake to that world. I was talking to the CIO of a large media company who is one of our customers, and he brought up the fact, you know, like Croom said, "We're all agile now. "I've been talking about five years, "trying to get this company to operate in an agile way, "and all of a sudden we had to do it. "We had no choice, we had to respond, "we had to try new things, we had to fail fast." And my hope is, as we think about what customer engagement and automation and business efficiency looks like in the future, we keep that mindset of trying new things and continuously adapting. Evolving. At the end of the day, our company's brand promise is, "Build for change." And we chose that because we think that that's what organizations, the one thing they can design for. They can design for a future that will continue to change. And if you put the right architecture in place, if you take that center-out mindset, you can support those immediate needs, but set yourself up for a future of continuous change and continuous evolution and adaptation. >> Well guys, I'll quote somebody less famous. Jeff Frick, who said, "The answer to every question "lives somewhere in a CUBE interview." and you guys have given us a lot of answers. I really appreciate your time. I hope that next year at PegaWorld iNspire we can see each other face-to-face and do some live interviews. But really appreciate the insights and all your good work. Thank you. >> Thank you. >> Absolutely. >> And thank you for watching everybody, this is Dave Vellante and our coverage of PegaWorld iNspire 2020. Be right back, right after this short break. (lighthearted music)

Published Date : Jun 9 2020


Kerim Akgonul, Pegasystems | PegaWorld iNspire


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of PegaWorld iNspire, brought to you by Pegasystems. >> Hi everybody, welcome back. This is Dave Vellante, and you're watching theCUBE's coverage of PegaWorld iNspire 2020. Kerim Akgonul is here. He's the senior vice president of product at Pega, Pegasystems. Kerim, great to see you. Thanks for coming on. >> Hi Dave. Thanks for having me. >> Yeah, I mean I wish we were face-to-face at your big show, but this is going to have to do. A little different this year doing the virtual event. You're used to a big stage, big audience, lots of clapping and buzz. How's it been for you, this virtual pivot? >> It's been different, it's definitely been different, especially since the last few years we had it in Vegas, so it was a big Vegas show. Now we're in my living room. Not the same vibe, but nevertheless we have a lot of new products and new stories to tell, new experiences to share with the clients, so we're focusing on those aspects. >> Yeah, I'm excited to get into that, but I mean your whole raison d'être is you guys build for change, and obviously we've been thrown this curve ball, more than a curve ball, a knuckle ball. Maybe talk about what you're seeing your customers do in terms of being able to rapidly adapt to this new abnormal. >> Yeah, so we've seen, obviously, across the globe, right, not just with Pega, not with just our clients, we've seen a tremendous amount of change. We've seen change in how we work, how we communicate, how we collaborate, how we get into meetings, and a lot of our clients, of course, had to quickly adjust to these recent changes as well in these last couple of months, and in many cases they had to make technology choices, and we're pretty excited that basically Pega technology has been on that top shelf of technologies that our clients chose to leverage in this time of crisis.
They chose to use the technology to better engage across the organizational work that they do. They use the Pega technology to actually digitize a lot of the work that gets done in their organization. They use it as a COVID-19 response. They use it to engage directly with the consumers, so it's been on, as I said, the top shelf of technologies that they had to leverage to adjust and transform, so it's been very busy, Dave. >> Obviously a lot of companies have been hit, and some industries have been very hard hit in the shutdown, but I want to pick a couple of examples. Let's start with healthcare. I mean they've been hit like no other, front lines. Do you have some examples that you can share, or any example in healthcare, how they pivoted? I mean have they been able to even spend time on anything that's not emergency? Maybe you could share some of your experiences there. >> Absolutely. Actually a lot of the healthcare organizations that we're working with, the front line workers, obviously, the way that they engage has changed quite a bit, but also the people that work in the corporate, in the back office, in the technology, they have changed as well as they had to really respond to the changes in the scale of their operations, changes in how they engage with their customers, with the other organizations that they work with, and how they operated their processes. One of the customers that I talk about, HCA, one of the Pega customers, basically implemented a Pega solution and rolled it out into production in just a couple of days to keep track of their employees, the volunteers that basically work with them, to keep track of people who are impacted by COVID-19, and they have about 200,000 people whose availability and schedules they need to manage, and they decided to use Pega technology to be able to manage that across the enterprise, which has been a great experience for us working with them.
>> So Kerim, how would that work? So they're an existing Pega customer, they spun up a new module, they sort of developed it themselves. You guys helped them. Describe how that sort of became real. >> Sure, so we actually have a couple of different examples of these types of applications that went live in the last couple of months, from the healthcare organizations, we had it from some organizations in the telecommunications industry, we had state governments and different public sector companies. It works differently for each one of them, but it all starts with really having somebody, having a clear idea on exactly what they want to actually do. What do they want to keep track of? What do they want to operate? What do they want to be able to actually get done? And having somebody to have that vision and being able to articulate that in the Pega construct to automate it to define the process, to define what they're going to keep track of, to define the journeys of those things that they're going to keep track of, and a lot of the clients that have centers of excellence in their organizations with Pega experts, some of our clients work with our great set of partners who have come up with ideas and brought them into these organizations, and we also get pulled into a couple of these implementations, and like you said, Dave, we always talk about being built for change, and this is a time of crisis. This is a time of change, and Pega's technology is perfectly structured to be able to get things quickly done and up and running, but what it really needed at all times is somebody to actually have the vision and the ability to make a decision and go execute on it. And we know that the people are there. We know the technology is there, and that's how a lot of the results got done. >> Yeah, very fast decisions had to get made. 
Another example is we've been tracking the telecom space, and the whole work-from-home pivot has really put stress on distributed networks, the traditional corporate networks. Now everybody's at home. We've all experienced this, whether video calls, et cetera. The kids are at home, at school, sometimes gaming, so the internet, it didn't blow up, luckily, but still major change in the telco industry. >> Absolutely. How lucky we are to actually have access to all this technology, to all this internet capacity, and yeah, it's been a big change. Obviously the demand on their business has increased quite a bit in the telecommunications industry. One of our clients that basically had contact centers in other countries where the agents actually didn't have an opportunity to go into the contact center, and they couldn't actually enter the building. They weren't even allowed to be on the streets, out on the streets, so what they did, and while this is happening, right, while basically the agents are not able to go to work, at the same time the volumes are increasing through the roof, right? There's a tremendous amount of urgency and higher levels of volumes of requests coming in from the end customers, the end consumers coming in, right? It's basically a perfect storm of things happening, so what our clients have done is a couple of things. One, they created new sets of processes, and they created an army of volunteers from within the business to be able to respond to customer requests from home, and two, they really completely ramped up the pace of taking processes and making them self-service available on the mobile apps, on the website, on the IVR, because customers, consumers have a sense of urgency. They need an answer. They need something to get done quickly, and they want to be able to avoid waiting on line for four hours, right? 
We saw that, we saw a lot of the websites that said, "Hey, if you call our contact center," some companies put up these messages, "it's going to be so many hours." So our clients were able to take the processes that they have defined for their contact center agents and actually push them to self-service channels like the mobile channel, like the web self-service channel, as well as chat and chat bot channels, to be able to get the answers that the consumers need quickly and get their work done, respond to them quickly while in this time of amazing change. >> Yeah, so that enables scaling. Self-service is critical. Yeah, I want to ask you about digital transformation. It's a theme of PegaWorld iNspire. There's been a lot of talk the last three, four years about digital transformation. Frankly, a lot of lip service. I think it was Satya Nadella who said we've accelerated. We've pulled two years of digital transformation into two months, but again, you guys are all about digital and digitizing processes, so I kind of want to know if you can talk about that theme of the show, kind of what it means to you and your clients. >> I think it's been amazing. I think, like you said, there's been a lot of talk about it in several years, and there have been lots of initiatives, but I think it was missing the urgency that it needed to be able to get moving and get things done. We have had so many discussions. So many people have talked about what do we need to do, do we need to do it now, can we basically wait? Long meetings and long delays on making decisions to actually move forward, and this just basically changed all that, right? There's no longer the question of do we need to go through a digital transformation? Everybody knows it's a yes. We had to do it, no question about it. There's no more question of can we do it. Yep, we know we can do it. Do we have the technology, do we have the people? Yep, got it. All that is in place.
Now really the thing that we're seeing people succeed in is the ability to make a decision to move forward, to move forward aggressively, having now proven that the people and the technology are there and that things can get done, and it really basically requires decisiveness and leadership. >> Yeah, I think the word you used, 'urgency,' because there was a lot of complacency leading up to this, but the good news was there was also a lot of experimentation going on. So COVID obviously accelerated that urgency. Anna Gleiss from Siemens is an example of somebody who spoke during your keynote. Big industrial company with a huge supply chain, which for years some of that's been really opaque, and digitizing that, now you get greater transparency. What were the key learnings from her discussion? >> Right, so Anna and the team have done a spectacular job, and like I say, they didn't need a worldwide pandemic to get going, and they basically approached theirs systematically with a great plan, and what they were able to do is really deliver on another thing that people have paid a lot of lip service to in the past: IT and business collaboration. They actually executed brilliantly from that perspective, where the IT organization, the technology organization, delivered on top of the Pega platform a platform to be able to manage all the technical aspects of the business applications and all the processes that Siemens needed, and different departments and different divisions were able to leverage those assets and be able to quickly get applications up and running, being able to dramatically increase the speed of innovation while at the same time dramatically reducing the cost of getting these things done and running them.
So basically they built that environment where IT provided the technical aspects as a service to business applications so that they can quickly get things done, automate their processes, and deliver a tremendous amount of operational efficiency into the organization. >> Now Kerim, of course, is the head of products. I want to get into some of the product discussion, some of the hard news that you have at PegaWorld. This notion of the Pega Process Fabric, I mean the metaphor is very strong. You think about digital, you think about a fabric. But what do we need to know about the Pega Process Fabric? >> Dave, it's a great solution that I believe corporations, especially enterprises, need to be able to make their staff more effective, streamline their work, getting them to a world where they don't have to personally navigate through dozens of different applications just to achieve an outcome, because whenever you basically have a situation where an employee of an enterprise has to jump through six, 10, 12 different applications just to be able to get something done for the customer, there's a tremendous amount of efficiency that's lost, there's a tremendous amount of training that's required to be able to actually get people to manage all these, working across all these applications, and of course it's very easy to make mistakes. And whenever you have an environment that's built out like that, it inevitably gets exposed to the customers, and they basically experience all that jumping around. The Process Fabric is around bringing an experience to the users that is basically a single experience, even though work is coming from many different applications in the organization, right? You talk to any enterprise anywhere in the world, and you basically name any enterprise software company, and they'll tell you, "Yeah, we got that." They have it. >> Yeah.
>> They have Microsoft, they have Salesforce, they have ServiceNow, they have Pega, they have it, and users, employees, have to juggle through all of these systems to be able to actually get their work done. The job of Process Fabric is to actually bring all these tasks, bring all this work that the workers, on behalf of the customers, have to get done, and weave them together into a single experience so that they don't have to jump around. There's much more efficiency. Get work done fast, and the organization then also has control around how the work is prioritized across different systems. How the work is managed, how it gets assigned, how to handle key customers and be able to see all the work that we're doing on behalf of them across all the different systems, and be able to actually bring home all of these efforts and provide that experience to the user. >> So Kerim, what's the secret sauce there? Is it a combination of using APIs to those applications, and machine intelligence, and machine learning? >> There's a little bit of many things. The key is, one, we basically come with standard connectivity to standard enterprise solutions. We come prepackaged with connectivity to Pega environments within the organizations, as we have many customers that have deployed dozens of different Pega applications. We come with a standard open API approach to be able to provide connectivity, and then we use our decisioning capabilities and process capabilities to manage the prioritization, to be able to manage the routing and the experience for the end users. >> Okay, and the prioritization is something that's determined by business rules, is that correct? Or how does that all work? >> Absolutely.
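The pattern Kerim describes here — standard connectors into each system, an open API, and rule-driven prioritization of one unified worklist — can be sketched in a few lines of Python. Everything below (system names, fields, and the ordering rule) is invented for illustration and is not Pega's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class WorkItem:
    source_system: str   # which system the task came from, e.g. "CRM"
    customer_id: str
    task: str
    due: datetime
    vip: bool = False    # flag set by a hypothetical key-customer rule

def fetch_all(connectors):
    """Pull open tasks from every connected system into one worklist."""
    items = []
    for fetch in connectors:  # each connector wraps one system's API
        items.extend(fetch())
    return items

def prioritize(items):
    """Business-rule ordering: key customers first, then earliest due date."""
    return sorted(items, key=lambda i: (not i.vip, i.due))

# Two fake connectors standing in for real system APIs.
now = datetime(2020, 6, 1, tzinfo=timezone.utc)
crm = lambda: [WorkItem("CRM", "cust-1", "renewal follow-up", now.replace(day=5))]
desk = lambda: [WorkItem("ServiceDesk", "cust-2", "billing dispute",
                         now.replace(day=3), vip=True)]

worklist = prioritize(fetch_all([crm, desk]))
for item in worklist:
    print(item.source_system, "-", item.task)
```

The point of the sketch is only the shape of the idea: the worker sees a single ordered list, while the connector layer and the prioritization rule absorb the differences between systems.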
Absolutely, so the idea is to be able to leverage the business rules capabilities of the Pega platform to be able to handle the prioritization and the routing, and sort of collate things together that are associated with the same work streams and the same customers. >> When Alan Trefler started Pega it was right around the time I started in the industry, and AI was the hot buzzword, and it took a while to get here, but it feels pretty real right now. How do you look at machine intelligence and the role that it plays? You've used the term real realtime AI. >> Right. >> What do you mean by that, and what's so special about your AI? >> Well, our realtime AI is real, so that's one of the main specialties, but look, there's a lot of technology out there. There's a lot of great technology out there with great use cases that can look at historical sets of data and be able to actually generate predictive models from them, and those are great. Those are very, very valuable. But we believe that especially when we're directly engaging with customers, that is not enough. You need actually realtime, real realtime AI. Let me give you an example. If you are basically running some predictive models against a set of customer data, say in January and February, and using them in March, you will not get the right results for each individual customer, because things have changed dramatically between February and March. You couldn't make decisions about a customer based on their activity in January based on what's happening today. One of our telecom... One of our, I'm sorry, banking clients, for example, in the UK, NatWest, used their customer data and identified people that work for the National Health Service and provided realtime programs that are specifically tailored for them, right, so that's basically being able to actually leverage the power of AI and be able to change how you engage with customers.
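The realtime decisioning Kerim is describing — a propensity model trained on historical data, re-ranked by what is happening to the customer right now — might look roughly like this. The offer names, context flags, and rules are all invented for the example; this is a sketch of the concept, not Pega's actual decisioning API:

```python
def next_best_action(propensities, context):
    """
    propensities: offer -> score from a model trained on historical data.
    context: what is happening for this customer right now.
    Realtime rules override or re-rank the batch scores.
    """
    scores = dict(propensities)
    if context.get("financial_stress"):
        # Suppress selling and promote assistance (the NatWest-style pivot).
        for offer in list(scores):
            if offer.startswith("sell_"):
                scores[offer] = 0.0
        scores["offer_payment_deferral"] = 1.0
    if context.get("channel") == "ivr" and context.get("wait_minutes", 0) > 10:
        scores["apologize_and_callback"] = 0.9
    return max(scores, key=scores.get)

# The batch model, built on January data, says: sell an upgrade.
batch_scores = {"sell_upgrade": 0.8, "service_checkin": 0.4}

# But the realtime context in March says the customer is under financial stress.
action = next_best_action(batch_scores, {"financial_stress": True})
print(action)  # → offer_payment_deferral
```

The historical model alone would have pushed the upgrade; the realtime context flips the decision to an assistance offer, which is the whole argument for evaluating context at the moment of interaction.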
They looked at customers who might be at financial risk due to the crisis and actually changed programs and payment programs for them, because things have changed dramatically in that timeframe. Our AI leverages predictive models based on historical data, which is great, but it also adds on top of that the ability to evaluate realtime data based on the real context of the end customer at this point in time, at this point in their experience on the website, on the IVR, on the mobile app, and be able to determine the best way to engage with that customer at that moment in time, and be able to deliver that one-to-one personalized experience. And this has been basically one of the major capabilities of Pega technology. That's how we differentiate in the marketplace, in our ability to actually drive the AI capabilities in realtime interactions. >> Wonder if I could ask you about one of the trends in the marketplace, and you're seeing it in the equity markets, the private equity money pouring into robotic process automation. People, I think, sometimes misunderstand you, and I've said, I've reported a number of times that RPA's just a small part of what you guys do, but at the same time you're seeing a lot of energy in the marketplace, money, billions of dollars, billions, yeah, have poured in. How do you look at RPA? Where does it fit in the Pega platform? >> Yeah, so RPA's absolutely a part of the overall journey. We look at things from an end-to-end automation perspective; essentially we need to do something for a customer, on behalf of a customer, to get an outcome delivered to a customer, and there's a process associated with it. And this process is frequently going to touch a bunch of different systems. And some of these systems it's going to touch are old. They've been around for a very, very long time. They're a pain point for a lot of organizations.
What RPA does really well is it lets you put a robotic process, essentially a process that runs on the desktop, in place to execute that process inside that old system automatically. And that saves time and saves money, and there's a clear ROI associated with it, but it doesn't eliminate that old technology. It just puts, essentially, a veneer in front of it so that the end user doesn't have to key into some old application. It just does it on their behalf. We think that's a part of an end-to-end process automation, and as you go through different steps you might have to execute these robotic process automations, but it's not digital transformation. You're not really transforming it, right? You are eliminating that pain point for the time being, and it will become a problem maybe for the next person that has to deal with it. We believe that robotic process automation is a great way to automate stuff, but each one of those elements needs to go through that transformation as a part of the modernization, digital transformation journey. >> So it's that systems view that you would stress, and obviously you've always taken a systems view. You've got a platform that is an end-to-end platform. That's really what you mean by end-to-end, is that systems view, correct? >> Well, what we mean, really, by end-to-end is a customer comes in and they have a need, and we get them what they came in for, and whatever is in between, whatever processes, and systems, and integrations, and technologies sit in between, that's sort of the second part of the story. The most important part is the work that needs to get done; we get the work done. And we will do anything in between.
We'll do integrations, we'll do routing, we will do automation, we'll do business rules, we'll do AI, we'll do robotic process automation, anything that is necessary to drive that outcome: efficiency, faster response times, and better customer experience. >> Okay, so those are the key metrics. You just answered that other question. Last question, then, is we've got uncertain times. We've talked about the gamut of digital transformation, but what advice would you give to customers given this uncertainty? How should they be best prepared? >> I think it's most important, really, to pay attention to the end consumers, and look at it from a perspective of empathy. What is the end consumer worried about right now? What is difficult for them? What is it that they need from your organization given their current circumstances? And make sure the experience that your corporation provides to them is the right experience. This is, I think, a time for a lot of corporations to build some incredible loyalty with their end customers, with the consumers. This is an amazing opportunity to have great engagement and to have people realize that yeah, they were there for me. It was a good experience, it was an easy experience, it was a seamless experience, and I would most emphasize that empathy factor. Make sure that we understand what they're going through, what's happening in their lives, what they need, and when they engage with the corporation make sure that we provide a seamless experience to them. >> I think that's a great point. We're not going back to the customer experiences of the 2010s. We're entering a new decade, and Kerim, thanks so much for your insights and coming on theCUBE to share them. >> My pleasure, thanks for having me. >> You're welcome, and thank you for watching, everybody. You're watching theCUBE's coverage of PegaWorld iNspire 2020. Be right back right after this short break. (smooth music)

Published Date : Jun 2 2020



Prashanth Shenoy, Cisco | Cisco Live EU Barcelona 2020


 

>> Announcer: Live from Barcelona, Spain. It's theCUBE covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Hi everybody. Welcome back to theCUBE, the leader in live tech coverage. My name is Dave Vellante with cohosts Stu Miniman and John Furrier. Here we go out to the events, we extract the signal from the noise, and of course this is day one of Cisco Live Barcelona. Very excited to have Prashanth Shenoy. He's the vice president of marketing for enterprise networks, IOT, and the developer platform at Cisco. Prashanth, good to see you. >> Good to see you folks too. >> So right now we're in the middle of the DNA Center takeover in the DevNet Zone. Networks are getting more complex. You need a command center to understand what's going on. Yeah, give us the update. Why DNA? >> Yeah. So this has been a journey for Cisco and for our customers for the last three years or so. Right. So a few things happened in the last decade: mobile, IOT, cloud, and the world of security. All of those came together in one place. And if you look at it, these are very network centric technologies, right? There'd be no cloud without networking, or mobile, or IOT. So when our customers started investing heavily in the world of applications in the cloud environment, mobile, and IOT, the network was slightly left behind. The network that they had created and built was meant for the internet era, not for this multicloud, mobile, and IOT era. So we had to rethink networking fundamentally from the ground up: how do you help our customers design, build, scale, manage, and deploy networks for this new era of digital transformation driven by mobile and cloud? >> And that was the Genesis of our intent based networking strategy, right? So that was like three years back. Then we designed a networking architecture that focuses on the business intent and lets you specify the what part of it. Then the network figures out the how.
So DNA Center was the command center, as Dave, you put it, to help manage, design, and build this network from the ground up. And it's been a journey for us, and it's been a very, very exciting journey where we are getting a lot of positive feedback from customers, whether it's to deploy their access infrastructure, wired and wireless, or more into the wide area network, extending into the data center and public cloud environments. >> So when we went from the internet to the cloud, there was all this talk about the flattening of the network, and now, I know, we're going to talk about it. Are we going to need a new DNA Center for that next wave? >> No, it's the pendulum swing, right? It's always been this interesting phenomenon: mainframes, centralized, then decentralized at the edges, then again centralized in the cloud, and now cloud moving to the edge. And it's mainly because the world around both sides of the network has become highly hyper connected and highly dynamic, right? Users are mobile, devices are everywhere, applications are everywhere. A single application is split into 500 different pieces running in containers and microservices across four different public clouds and three different data centers, right? Like, how do you manage this dynamic environment? How do you set the policy? How do you guarantee an application experience? So this has been a very challenging environment. So the idea of DNA Center is to provide you that single command center, right? No matter whether you want to deploy it as a virtual service, a physical service, in the cloud, in a hardware platform, doesn't matter. So how do you get all of your data? How do you get a single place to provision the system? >> Well, I'm glad you've mentioned scale quite a few times. Talking about this, for the longest time it was how do we get the network people to get off of their CLI and go to the GUI?
Well, I don't care if you've got the best GUI in the world; with the hyper connectivity, the amount of change going on, people can't do this alone. So talk to us a little bit about, you know, tooling, the automation, the APIs that connect all these things and make sure that our people don't become the bottleneck for innovation. >> Frankly, the complexity has exceeded human scale. It's just impossible. It's funny, because I was talking to the CIO of a pretty large global bank, I can't tell the name, who was saying, hey, a few years back I had one IT person to manage around a thousand devices. Right? And then that year when I was talking to him, and this was 2016, he had one to 10,000: one IT person for 10,000 devices to manage. And he said, I'm looking in 2020 at one IT person for 250,000 devices, going up to a million devices. I'm like, dude, you're doing some funky math. That looks like that hockey stick curve, right? And he was right. Now I don't even know what's on my network, what's connected to my network. I'm flying blind. And that opens up a lot of security issues. That opens up a lot of operational challenges. In fact, for every dollar our customers spend on capex for buying the network, they spend $3 on opex managing, monitoring, and troubleshooting the network. So that's the key point: you can hire a hundred more IT staff, and you're just not going to be able to manage the complexity. So there has to be automation, right? We live in a world where repetitive tasks should be done by machines and not human beings. It's happened in the rest of our lives, and network operations is just one part of that. So the concept of controller led architectures, which was the Genesis of SDN, is now being applied to this world of intent based networking. But we also get the data to provide you insight on how things are behaving and how to take action before it happens.
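Acting "before it happens" usually starts with baselining telemetry and flagging deviations. Here is a minimal sketch of that idea, assuming a single counter per interface and a simple z-score test; real platforms like the one described here learn far richer models than this:

```python
from collections import deque
from statistics import mean, stdev

class InterfaceMonitor:
    """Flag anomalous traffic on a network interface before users complain.

    A toy stand-in for network analytics: keep a rolling window of a counter
    (e.g. errors per second) and alert when a sample strays far from the
    recent baseline.
    """
    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.samples = deque(maxlen=window)   # rolling baseline
        self.threshold = threshold            # how many std devs counts as anomalous

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.samples) >= 5:            # need a few samples before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous
```

Feeding it a steady error rate keeps it quiet; a sudden spike trips the alert, which is the hook where a controller could open a ticket or reroute traffic before users notice.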
>> Well, yeah, you brought up how many devices the enterprise can manage. That was something we measured for the longest time and used to compare to the hyperscalers, and I said, well, here's the myth there. It's not that they're managing two orders of magnitude more equipment. They architect a completely different stack. They build the applications with the expectation that everything underneath is going to change. It's going to fail, it's going to be upgraded. So you don't have somebody inside of Yahoo and Google and all these hyperscalers running around patching and updating things. They build a data center and they keep adding environments, and they throw things in the woodchipper when they're done and they break things down. So it's a completely different mindset. And part of the promise of SDN was to take some of those hyperscaler methodologies and bring them to the mainstream enterprise. So tell us how your software today is delivering that hyperscale architecture, and that's a little bit of a culture change for the enterprise. >> It's been a huge culture change, right? Like the concept of abstracting the underlay complexity of all the network physical connections and giving an overlay, what we call a fabric.
Like, and even when I created systems like access control lists, QRS and I had to system test my own code is fricking nightmare. It is tough. It is tough to manage that as a single system. >>Right? And that's why the role of controller to abstract the complexity of a, to program the infrastructure and then expose this intelligence to other systems, whether it's it systems, but it's business applications goes a long way. So that's why this journey is really exciting for us. So it sounds like we're entering the era of self-driving networks that, I mean you've got to even visualize this virtually possible unless it's at that abstraction layer. Yeah, absolutely. I mean there are new technologies that a lot of consumer markets and other places I've used like machine learning right? Like we have so much data within the network, the network sees everything, right? Because the connection point from mobile IOT to applications and cloud, right? But we haven't really leveraged the power of the data and the intelligence, right? And now that we have all of the data and now we have things like machine learning, it can identify traffic patterns and provide you more insights around your business, around your it and security, right? >>So that really takes the guesswork away. And the good part is with machine learning, the more data you feed it, the more it's learning from the data, not just your own local networks but the net folks across the world. And that makes it constantly adapting to changing conditions and constantly learning based on the traffic patterns and your environment. And that's a pretty exciting field, right? Because we've implemented that in the security field to predict threats before they happen. We've implemented that in parts of application performance and now you're bringing it to the wall of networking at cost access branch ran and campus to like help it move from a reactive world to more of a proactive world. To a predictive world, right? 
So they can spend less time looking for the needle in a haystack and focus more on solving strategic >>problems. So when you get into discussions about machine intelligence, oftentimes there's discussions about Oh, replacing jobs and you know, blah blah blah. And so it'll, it'll turn to a discussion of augmented intelligence, which very reasonable thing, what you just described as removing mundane tasks. Nobody wants to do those anymore. Here's my question. You talked about your CLI experience over the last 20 years. Is that CLI sort of tribal knowledge still vital as part, you know, part of the art of networking or does the machine essentially >>take over and humans you'll go on to other things? Yeah, I think that's a great question Dave. Like I call these next generation of network operators, the unicorns. So you do need to have the tribal knowledge of networking, not necessarily CLI, but the concept of networking. How do these protocols work? Right? Like this is not easy. It's, there are very, very few network engineers compared to application developers and software engineers in the world. So this is always going to be critical. But now if you marry this knowledge and compliment this knowledge with programmability and automation and application, you got yourself a unicorn that is going to be very, very strategic to the business because now the world of infrastructure and applications are coming together so he can truly focus on your business, which is run on applications, right? How can you, our applications run Foster's mater better with the network and how can your network understand how the applications are behaving becomes a whole new world. So you seek a new roles of network practitioners emerging. I feel like the data scientist after network, like the security defender of the network, the wall of security ops and networks are coming together. 
So that's what is exciting for us because you get bored in your life if you're doing just repetitive tasks and not learning new. And this provides a new way of ruining. So for me it's not taking jobs away. It's like upgrading your skillset to a whole new level. That's a lot more, >>well this is the secret of Cisco still. We've talked about this. All these hundreds of thousands of network engineers with growth path, income develop. What >>I've found fascinating is really unlocking that data because for the last decade we've talked about, well there's the network flows and there's analytics in the network streams, but what had been missing and what I think is starting to be there, as you said, that connectivity between the application and the actual data for the business, it isn't just some arcane dark art of networking and we're making that run better, faster, better, cheaper. But it's what that enables for the business, the data and the applications that there is a tighter, relevant they are today. That's the key thing, right? I mean everybody has been talking about data now, I dunno for 1520 years. It's the new crude aisle if you will. Right? But everybody has access to data and nobody knows what to do with it, right? Like this philosophical thing of data to knowledge to wisdom is like what we are all striving towards. >>Right? And now that we have access to this data and we have this intelligence system, which is a multi software that ingest data from not just networking but devices connected to the network, the security trends that we are seeing, the application data that you're seeing and provides this context and provide two very key insights around how does that impact your business, how does that impact your ID? How does that impact your security is a very powerful thing. Um, and you don't find that and you need to have that breadth of portfolio and system to be able to get all of the data and consume that at a hyperscale level, if you will. 
We often say in the cubit that data is plentiful insights or not, and you need insights in order to be able to take action. And that's where automation comes in for shot. Great segment. Thank you very much for coming on the cube. Really appreciate it. Thank you today. Thanks to pleasure. Awesome. All right. Thank you for watching. This is the cube live from Barcelona, Cisco live 2020 Dave Volante for stupid event and John furrier, we'll be right back.

Published Date : Jan 28 2020



Erik Kaulberg, Infinidat | AWS re:Invent 2019


 

>> Announcer: Live from Las Vegas, it's theCUBE covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back inside the Sands. We are in Las Vegas. We are live here on theCUBE along with Dave Vellante; I'm John Walls. We continue our coverage of AWS re:Invent 2019 by welcoming in Erik Kaulberg, VP of cloud solutions at Infinidat. Erik, good to see you today. Thanks for joining us. >> Thanks, it's nice to see you too. >> So share a little bit, for the folks at home who might not be too familiar with Infinidat. I know you guys are big in data storage, in terms of what's happening in the enterprise, but shed a little bit of light on that for us. >> Yeah, so Infinidat's all about reinventing the next generation of data storage at multi-petabyte scale, whether that's for on-prem appliances, where we have over 5.4 exabytes deployed now around the world at large enterprises, or whether that's through our cloud services like Neutrix Cloud, which we're talking a lot about today and through the conference. We're solving large data challenges for customers with block or file storage requirements. We're doing that through technology that gets the price point of hard drives with the performance capabilities of solid state media, DRAM and flash, and we're doing it at very large scale, even though we kind of fly under the radar a bit from a marketing standpoint. >> So there's a lot of interesting things going on. Good storage demand. There's no question that the cloud is eating away at some of the traditional on-prem business, and there are very few companies that are gaining share rapidly. You happen to be one of them. You know, Pure Storage grew 15% this quarter, much, much lower than it used to. You know, generally HPE's shrinking. I think Dell EMC grew a little bit. You know, IBM has been down; I don't think they've announced yet. So you're seeing a couple of things: cloud eating away, and then all this injection of flash. You're really the only guys who can make spinning disk run as fast as flash. Everybody else is just throwing flash at the problem. And that's created headroom. So what are you guys seeing, 'cause you're clearly growing. You're a market share gainer. You have the advantage of being new and smaller. Talk about your business and how you're growing and why you're growing.
You're really the only guys who can make spinning disk run faster, as fast as flash. Everybody else is just throwing flash at the problem. And that's created headroom. So what are you guys seeing, 'cause you're clearly growing. You're a market share gainer. You have the advantage of being new and smaller. Talk about your business and how you're growing and why you're growing. >> It's nothing but growth, and it comes from this increase in the overall data that, requirements that customers have, and it comes from the economic aspects of that data. Fundamentally, data storage is all about economics, and we're able to deliver through our technical advantage of blending disk, flash, and DRAM an order of magnitude cost basis advantage, and that translates into direct financial benefits that allow ultimately enterprises to do more with their data. That's what we're all about. >> So as workloads shift to the cloud, there's an on-prem component. We're going to talk about cloud, multicloud, hybrid cloud, et cetera. But you've got a product called Neutrix. Talk about that and where it fits into this big macro trend that we've just been talking about. >> Absolutely. So Neutrix fits into the broader landscape in a couple of ways. First of all, many of the clients that we deal with are large enterprises, and they're in their relatively early stages of cloud transformation. So Neutrix provides an easy on ramp for them to come from our best in class on-prem infrastructure and make that data accessible in one or multiple clouds. And that kind of, maybe it's for test dev. Maybe it's for a disaster recovery, a pilot light scenario, or a couple other use cases for general purpose primary data storage. That's their on ramp to then taking advantage of the more strategic value of Neutrix, which is allowing clouds to compete for the business on the compute side of things. >> You kind of hit a key word in there. I'm talking about transform. 
And we've talked about that a lot, transformation versus transition, in terms of storage capabilities, enterprise storage capabilities, whatever. Take us through that transformation, if you will, and not the transition, and what's the paradigm change? What's going on in that space that's requiring people to make this dive into the deep end, if you will, and not just tickle the water with their toes? >> Well, I think there's two elements to it. There's a business and kind of a philosophical reorientation around taking advantage of flexible resources and allowing infrastructure to change over time and pay opex-based business models, that sort of stuff, and getting comfortable with that honestly is a journey in and of itself, because many procurement organizations, especially large organizations, don't know what to do with a monthly bill or an uncommitted reserve amount or things like that. So part of it is being able to walk with the customer as they transform on the business side of things, and then the other side is accepting and going down the path of variable workloads, being able to accommodate large varieties of mixed data environments, and being agile on the technology side so that you can put the data where it needs to be, with the performance that it needs to have and with the capabilities that it needs to have. >> All right, so we're pressed for time, so I really want to get a few topics in. For now, I see three main opportunities, broadly. One is on-prem, stealing market share. We talked about that a little bit. Two is this multicloud thing, and we'll talk about that, as well. If you're an on-prem company, you've got to have a multicloud strategy, and even if you're a pure cloud company, you've got to have a multicloud strategy. And the third is the cloud. You've got to embrace the cloud. If you deny the cloud, you're denying the biggest trend. So let's start with the cloud. What's your cloud strategy?
What's your relationship with AWS, and how are you taking advantage of that? >> So we're all about delivering our data services in whatever means, whatever physical infrastructure, whatever underlying business model the customer requires. With that in mind, we deliver Neutrix Cloud as a service for use with major public cloud environments, including AWS, and as for our relationship with AWS, I think they would say that we bring access to large-scale, tier one environments all around the world, coming from our on-prem base, and they're very interested in obviously working with those customers on cloud transformation at the scale that we operate, as well, so it's a mutually beneficial partnership. We're proud to be an APN member and all of that sort of thing. >> Yeah, I mean, if you can put your stack in the AWS cloud, which is what you're doing, it's going to drive other services, right? It's going to drive ML and SageMaker and backup and all kinds of great things. >> Absolutely. >> So the storage guys at AWS may not love you, but everybody else at AWS is going to be happy, because you're driving other services. All right, let's talk about multicloud. It's obviously a controversial topic. John Furrier every year does an exclusive interview with Andy Jassy, and he's on the record, and I think he's right. He says, look, multicloud is going to be more complex, less secure, and more expensive. He's right. But he also recognizes that there are multiple clouds out there, and so organizations have to participate in multicloud strategies. I've predicted, as have Stu Miniman and John Furrier, Amazon's going to participate in that someday. They're going to do what they're doing in hybrid. So Amazon looks at multicloud as multiple public clouds, and on-prem as hybrid. Coming back to Infinidat, what's your multicloud strategy?
>> So the great thing about our strategy is that we're able to deliver the same data in whatever public cloud environments the customer wants to deploy. So we actually run our own independent infrastructure that sits just outside the walled gardens of all the major public clouds, and then we can provide network connectivity using their direct connect interfaces or similar private network interconnects, all API-driven. The customer doesn't have to think about the underlying infrastructure, and fundamentally it allows them to subscribe to our storage as a service directly in whatever public clouds they choose. >> And now let's talk about the on-prem piece of that, which is the hybrid component, using Jassy's sort of definitional framework. You've got Flex. That extends your on-prem story. Talk about that a little bit. >> Absolutely. So our customers are saying, "Hey, I want the public cloud business model in the on-prem environment," and Flex is our answer to that kind of question. So we deliver essentially hardware independence, priced per gig per month. We maintain title to the asset, all that sort of stuff. And we're in charge of refreshing the infrastructure every three years, and we back it with a more-than-public-cloud availability guarantee, a 100% availability guarantee, for the Flex business model. >> We've seen companies use flash-based products as backup targets. Infinidat uses a combination of flash and spinning disk to keep costs down, and you've got math magic to make it just as performant. One of the things I like about what you're doing is you're partnering with, I think, most if not all of the backup software vendors, opening up new market opportunities and expanding your TAM by partnering with those guys. Talk a little bit about, can you give us some specifics there? >> Absolutely.
So, for example, we were presenting at the Veeam booth earlier this week about the intersection between InfiniBox and the Veeam backup software suite, and we have similar capabilities with some of the other backup platforms, as well. So two sides to that, one using the on-prem or cloud environments as a source, and there we have integrations with our snapshot technology specifically, and then two, using our InfiniGuard product on the on-prem side as a target, and there we have deep integration at an API level with the various backup platforms. So it's a cohesive universe where customers can take primary data, they can put it on Infinidat, they can use whatever enterprise backup platform. They can also put it as a target on Infinidat technology. >> And we're talking a lot about today. What about tomorrow? I mean, you know, what's the bigger picture down the road? What's your crystal ball telling you in terms of future complexities and challenges and what you see where this is headed? >> I think from a storage standpoint, at least, obviously lots of other complexities beyond that universe, but from a storage standpoint, people want to stop thinking about infrastructure. They want to think about cloud data services. They want to think about essentially going from storage arrays to storage clouds. We're doing that on on-prem, we're doing that in public cloud environments, and we're knitting it all together with our initiative called the Elastic Data Fabric. Our ultimate goal there and what we think customers really want is to be able to get the data services that they want at any given instant through the business model they care about independent of the underlying infrastructure, and that's what we're set up to deliver over the next couple of years at Infinidat. >> Well, Erik, thank you for the time. We appreciate that. By the way, Erik has become a very important Cuber, a VIC. His sixth appearance here on theCUBE. 
I wish we had a plaque or something to give you, but how about just an attaboy? >> Thanks very much. >> We appreciate that. >> Thanks, Erik. >> Back with more coverage here from AWS re:Invent 2019. You're watching us live. We're here on theCUBE. (techno music)
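The Flex consumption model described in this segment (storage billed per gig per month, vendor-owned hardware refreshed on a roughly three-year cycle) boils down to a simple cost comparison. Here is a minimal sketch; the rates, capacity, and purchase figures are hypothetical placeholders, not Infinidat's published pricing:

```python
# Toy comparison of a Flex-style storage subscription against an
# upfront array purchase. All numbers are invented for illustration.

def monthly_subscription_cost(capacity_gib, rate_per_gib):
    """Storage-as-a-service cost, billed per GiB per month."""
    return capacity_gib * rate_per_gib

def amortized_purchase_cost(purchase_price, lifetime_months):
    """Equivalent monthly cost of buying an array outright and
    spreading the price over its refresh cycle (e.g. 36 months)."""
    return purchase_price / lifetime_months

if __name__ == "__main__":
    # 1 PiB (1024 * 1024 GiB) at a hypothetical $0.02/GiB/month.
    sub = monthly_subscription_cost(1024 * 1024, 0.02)
    # A hypothetical $900k array refreshed every three years.
    buy = amortized_purchase_cost(900_000, 36)
    print(f"subscription: ${sub:,.2f}/month vs purchase: ${buy:,.2f}/month")
```

The appeal of the consumption model is less about the raw monthly number than about who carries the refresh and capacity-planning risk; in this sketch, both sit with the vendor.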

Published Date : Dec 5 2019


Jerry Chen, Greylock | AWS re:Invent 2019


 

>> Narrator: Live from Las Vegas, it's theCUBE covering AWS reInvent 2019. Brought to you by Amazon Web Services and Intel along with its ecosystem partners. >> Well, welcome back, everyone, to theCUBE's live coverage in Las Vegas for AWS reInvent. It's theCUBE's 10th year of operations, it's our seventh AWS reInvent, and every year it gets better and better, and every year we've had theCUBE at reInvent, Jerry Chen has been on as a guest. He's a VIP. Jerry Chen, now a general partner at Greylock, a tier one firm and one of the leading global venture capital firms in Silicon Valley. Jerry, you've been on the journey with us the whole time. >> I guess I'm your good luck charm. >> (laughs) Well, keep it going. Keep on changing the game. So, thanks for coming on. >> Jerry: Thanks for having me. >> So, now that you're a seasoned partner at Greylock, you've got a lot of investments under your belt. How's it going? >> It's great, I mean look, every single year, I look around the landscape thinking, "What else could be coming? What will surprise us this year?" What are the new trends, both macro-trends and company trends? Like, who's going to buy whom, who's going to go public? Every year, it just gets busier and busier and bigger and bigger. >> All these new categories are emerging with this new architecture. I call it Cloud 2.0, maybe next gen Cloud, whatever you want to call it. There's clear visibility now into the fact that DevOps is working; Cloud operations, large scale operations with Cloud, is certainly a great value proposition. You're seeing now multiple databases, pick the tool; I think Jassy got that right in his keynote, I believe that, but now the data equation comes over the top. So, you've got DevOps infrastructure as code, and you've got data now looking like it's going to go down that same path of data as code, where developers don't have to deal with all the different nuances of how data's stored, how it's handled, where it is, whether it's warm or cold or in Glacier.
So, developers still don't have that yet today. Seems to be an area of focus for Amazon. What's your take on all this? >> I think you saw, so, what drove DevOps? Speed, right? It's basically developers fusing with operations, the merging of two groups. So, we're seeing the same trend in DataOps, right? How can data engineers and data scientists have the same speed developers have had for the past 10 years? DataOps. So, A, what does that mean? Give me the menu of what I want, like Goldilocks: too big, too small, just right; too hot, too cold, just right. Give me the storage tier, the data tier, the size I want, the temperature I want and the speed I want. So, you're seeing DataOps give data the same kind of Goldilocks treatment developers got. >> And in terms of Cloud evolution again, you've seen the movie from the beginning, at VMware and now through Amazon, seventh year. What jumps out at you? What do you see, squinting through the trend lines and the fashion of the features? It still seems to be the same old game: compute, memory, storage and software. >> Well I mean, compute, memory, storage: those are the atomic building blocks of a computer, right? So, regardless of services and these high level frameworks, deep down you still have compute, networking and storage. So, those are the building blocks, but I think we're seeing in this 10th year of reInvent, it's not one size fits all but this really big, fat, long tail: small instances, micro-instances, serverless, big instances for jumbo VMs, bare metal, right? So, you're seeing not one architecture, but folks can pick and choose: buy compute by the drip, the drop, or buy compute by the whole VM or whole serverful. >> And a lot of people are like, the builders love that. Amazon owns the builder market. I mean, anyone who's doing a startup, they pretty much start on Amazon.
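The "Goldilocks menu" Jerry sketches (pick the storage tier, size, temperature, and speed you want) can be illustrated with a toy tier selector. The tier names, latency thresholds, and prices below are invented for illustration and are not any vendor's actual catalog:

```python
# Toy DataOps tier selector: given a workload's latency requirement,
# return the cheapest tier that is still fast enough ("just right").

TIERS = [
    # (name, worst-case latency in ms, hypothetical $/GiB/month)
    ("nvme-hot", 1, 0.25),
    ("ssd-warm", 10, 0.10),
    ("hdd-cool", 100, 0.03),
    ("archive-cold", 60_000, 0.004),
]

def pick_tier(required_latency_ms):
    """Cheapest tier whose worst-case latency meets the requirement."""
    candidates = [t for t in TIERS if t[1] <= required_latency_ms]
    if not candidates:
        # Nothing is fast enough on paper; fall back to the fastest tier.
        return TIERS[0][0]
    return min(candidates, key=lambda t: t[2])[0]
```

For a workload that tolerates 15 ms, this lands on the warm SSD tier: hot NVMe would also satisfy the latency bound but costs more, which is the "too hot" half of the Goldilocks trade.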
It's the most robust; you pick your tools, you build. But Steve Mullaney, who was just on before us, says, "Enterprises don't want power tools, they're going to cut their hand off." (laughs) Right, so Microsoft's been winning with this approach of consumable Cloud, and it's a nice card to play because they're not yet at parity with Amazon's capabilities, so it's a good call; they've got an Enterprise sales force. Microsoft is playing a different game than AWS because they have to. >> Sure, I mean, it's like football: you have a running game, you need a passing game, right? So, if you can't beat them with the running game, you go with a passing game, and so Amazon has kind of the fundamental building blocks or power tools for the builders. There's a large segment of the population out there that doesn't want that level of building blocks; they want something a little bit more prescriptive. Microsoft's been around Enterprise for many, many years; they understand prescriptive tools and architectures. So, they're going to become a little bit more prefab, if you will. Here's how you can actually construct the right application, ML apps, AI apps, et cetera. Let me give you the building blocks at a higher level of abstraction. >> So, I want to get your take on value creation. >> Jerry: Sure. >> So, it's still early (mumbles), there's a lot more growth to come. You start to see Jassy even admit that in his keynotes; he said, quote, "There are two types of developers and customers. People want the building blocks or people who want solutions." Or prefab, or some sort of more consumable. >> More prescriptive, yeah. >> So, I think Amazon's going to start going that way, but that being said, there are still opportunities for startups. You're an investor, you invest in startups. Where do you see opportunities? If you're looking at the startup landscape, what is the playbook? How should you advise startups?
Because, ya know, you can have the best team or whatever, but you look at Amazon and it's like, okay, they've got large scale. >> Jerry: Yeah. >> I'm going to be a little nervous. Are they going to eat my lunch? Do I take advantage of them? Do I draft off them? There are white spaces available as vertical markets explode. What's your view on how startups should attack the wealth creation, the value creation opportunity? >> There, I mean, Amazon's creating a new market, right? So, you look at their list of many services. There are just like 175 services out there, which is basically too many for any one company to win every single service. So, you look at that menu of services, and each one of those services can itself be a startup, or a collection of services can be a startup. So, I look at that as a roadmap for opportunity: companies can go in and create value around AI, around data, around security, around observability, because Amazon's not going to naturally win all of those markets. What they do have is distribution, right? They have a lot of developer mind share. So, if you're a startup, you play one of three themes. One is, how do I pick one area and go deep for IP, right? Like, cheaper, better, faster: own some IP, execute better, and that's doable over and over again in different markets. Number two is, we talked about this before, there's not going to be one Cloud that wins all. Amazon's clearly in the lead, they have won most of the Cloud so far, but it'll be a multi-Cloud world, it'll be an On Premise world. So, how do I play in a multi-Cloud world is another angle: go deep in IP, go multi-Cloud. Number three is this end to end solution, kind of prescriptive. Amazon can get you 80% of the way there, 70% of the way there, but if you're an AI developer, a CMO, a marketing developer, you kind of want this end to end solution.
So, how can I put together a full suite of tools from beginning to end that can give me a product that's a better experience? So, either I have something that's a deeper IP play, a seam between multiple Clouds, or end to end solutions around a problem, and I solve that one problem for our customer. >> And in most cases, the underlay is Amazon or Azure. >> Or Google or Ali Cloud or On Premises. Not going away any time soon, right? And so, how do I create a single fabric, if you will, that looks similar? >> I want to riff with you in real time here on theCUBE around data. So, data scale is obviously a big discussion that's starting to happen now, the data tsunami, we've heard that for years. So, there are two scale benefits: horizontal scale with data, and then vertical specialism, vertical scale, or ya know, using AI machine learning in apps, having data. So, how do you view that? What's your reaction to the notion of creating horizontal scale value and vertical specialism value? >> Both are a great place for startups, right? They're not mutually exclusive, but I think if you go horizontal, the amount of data being created by your applications, your infrastructure, your sensors, time series data, is ridiculously large, right? And that's not going away any time soon. I recently did an investment in Chronosphere, which you guys covered over at KubeCon a few weeks ago, that's tackling metrics and observability data, time series data. So, they're going to handle that horizontal amount of data, petabytes and petabytes: how can we query this quickly, deeply, with a lot of insight? That's one play, right? Cheaper, better, faster at scale. The next play, like you said, is vertical. It's how do I own data, or slice the data with more context than anyone else has? We talked about the virtuous cycle of data, right? The system of intelligence, as well.
If I own a set of data, be it healthcare, government or self-driving car data, that no one else has, I can build a solution end to end and go deep, and so either pick a lane or pick a geography; you can go either way. It's hard to do both, though. >> It's hard for a startup. >> For a startup. >> Any big company. >> Very few companies can do two things well; startups especially succeed by doing one thing very well. >> My observation, looking at Amazon, is that they want the horizontal, and they're leaving offers on the table for startups, the vertical. >> Yeah, if you look at their strategy, the lower level Amazon gets, the more open-sourced, the more ubiquitous they try to be: for containers, serverless, networking, S3, basic substrates, so, horizontal, low price. As you get higher up, toward, like, deep AI technologies, perception, prediction, they're getting a little bit more specialized, right? As you see these solutions around retail, healthcare, voice, so, the higher up in the stack, they can build more narrow solutions, because like any startup with any product, you need the right wedge. What's the right wedge with the customers? At the base level of developers: building blocks, ubiquitous. For solutions, marketing, healthcare, financial services, retail: how do I find a fine-point wedge? >> So, the old Venture business was all enamored with consumers over the years and then, maybe four years ago, Enterprise got hot. We were lowly Enterprise guys where no one-- >> Enterprise has been hot forever in my mind, John but maybe-- >> Well, first of all, we've been hot on Enterprise, we love Enterprise, but then all of a sudden it just seemed like, oh my God, people had an awakening: there's real value to be had. The IT spend has been trillions, and the stats say roughly 20 or so percent has moved to the Cloud, with the rest yet to move to this new next gen architecture that you're investing companies in. So, a big market...
that's an investment thesis. So, a huge enterprise market; Steve Mullaney of Aviatrix called it a thousand foot wave. So, there's going to be massive enterprise money... a big bag of money on the table. (laughs) A lot of re-transformations, a lot of reborn-on-the-Cloud, a lot of action. What's your take on that? Do you see it the same way? Because look who's getting in big time: Goldman Sachs on stage here. It's a lot of cash. How do you think it's going to be deployed, and who's going to be fighting for it? >> Well, I think, we talked about this in the past. When you look to make an investment, as a startup founder or as a VC, you want to pick a wave bigger than you, bigger than your competitors. Right, so on the consumer side, the classic example is Instagram fighting Facebook in photo sharing: you picked the mobile-first wave, the iPhone wave, right, the first mobile native photo sharing. If you're fighting Enterprise infrastructure, you pick the Cloud data wave, right? You pick the big data wave, you pick the AI waves. So, first as a startup founder, I'm looking for these macro-waves that I see not going away any time soon. So, moving from batch data to streaming real time data. That's a wave that's happening, that's inevitable. Dollars are flowing from slower batch databases to streaming real time analytics. So, Rockset, one of the investments we talked about, is riding that wave from batch to real time: how to do analytics and SQL on real time data. Likewise, time series: you're going from, ya know, batch data, slow data, to massive amounts of time series data; Chronosphere is playing that wave. So, I think you have to look for these macro-waves of Cloud, which anyone knows, but then you pick these small wavelettes, if that's a word, a smaller wave within a wave, that says, "Okay, I'm going to pick this one trend."
Ride it as a startup, ride it as an investor, because that's going to be more powerful than my competitors. >> And then, get inside the wave or inside the tornado, whatever metaphor. >> We're going to torch the metaphors, but yeah, ride that wave. >> All right, Jerry, great to have you on. Seven years of CUBE action. Great to have you on, congratulations, you're a VIP, you've been with us the whole time. >> Congratulations to you and theCUBE, the entire staff here. It's amazing to watch your business grow over the past seven years, as well. >> And we soft launched our CUBE 365; search it, it's on Amazon's marketplace. >> Jerry: Amazing. >> SaaS, our first SaaS offering. >> I love it, I mean-- >> John: No Venture funding. (laughs) Ya know, we're going to be out there. Ya know, maybe let you in on the deal. >> But now, like, you broadcast the deal to the rest of the market. >> (laughs) Jerry, great to have you on. Again, great to watch your career at Greylock. Always happy to have ya on: great commentary, awesome time. Jerry Chen, venture partner, general partner of Greylock. Keep it right here; we're breaking down the commentary, extracting the signal from the noise here at reInvent 2019. I'm John Furrier, back with more after this short break. (energetic electronic music)

Published Date : Dec 4 2019


Erik Kaulberg, Infinidat | CUBEConversation, November 2019


 

(jazzy music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE studios in Palo Alto, California for another CUBE conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. It's going to be a multi-cloud world. It's going to be a multi-cloud world because enterprises are so diverse, have so many data requirements and application needs that it's going to be serviced by a panoply of players, from public cloud to private cloud and SaaS companies. That begs the question, if data is the centerpiece of a digital strategy, how do we assure that we remain in control of our data even as we exploit this marvelous array of services from a lot of different public and private cloud providers and technology companies? So the question, then, is data sovereignty. How do we stay in control of our data? To have that conversation, we're joined by Erik Kaulberg, who's a vice president at Infinidat. Erik, welcome back to theCUBE. >> Thanks, nice to be here. >> So before we get into this, what's a quick update on Infinidat? >> Well, we just crossed the 5.4 exabyte milestone deployed around the world, and for perspective, a lot of people don't appreciate the scale at which Infinidat operates. That's about five and a half Dropboxes worth of content on our systems and on our cloud services deployed around the world today. So it's an exciting time. It's great being able to deliver these kinds of transformations at large enterprises all over the place. Business has been ramping wonderfully, and the other elements of our product portfolio that we announced earlier in the year are really coming to bear for us. 
Well, let's talk about some of those announcements in the product portfolio, because you have traditionally been more of an interestingly, and importantly, architected box company, but now you're looking at becoming more of a full player, a first-class citizen in the cloud world. How has that been going? >> It's been great. So we announced our Elastic Data Fabric program, which is really our vision for how enterprises should deal with data in a multi-cloud world, in May, and that unified several different product silos within our company. You had InfiniBox from the primary storage appliance standpoint. You have Neutrix Cloud for primary storage in the public clouds. You have InfiniGuard for secondary storage environments, and now we've been able to articulate this vision that enterprises should be able to access the data services they want at scale and consume them however they prefer, whether that be in a private cloud environment with an appliance or in an environment where they're accessing the same data from multiple public clouds. >> So they should be able to get the cloud experience without compromising on the quality and the characteristics of the data service. >> Exactly. And fundamentally, since we deliver our value in the form of software, the customer shouldn't have to really care what infrastructure it's running on. So Elastic Data Fabric really broadens that message so that customers can understand, yes, they can get all the value of Infinidat wherever they'd prefer it. >> Okay, so let's dig into this. The basic problem that companies face, to lay it out up front, is that they want to be able to tap into this incredible array of services that you can get out of the cloud, but they don't necessarily want to force their data into a particular cloud vendor or a particular cloud silo.
So they want the services, but they want to retain control over their data and their data destiny. In your conversations with customers, how do you see them articulating that tension? >> I think when I deal with the typical CIO, and I was in a couple of these conversations literally yesterday, it all comes back to the fundamental question of do you want to pledge allegiance to a single public cloud provider forever? If the answer to that is no, or if there's any hesitation in that answer, then you need to be considering services that go beyond the walled gardens of individual public clouds. And so that's where services like Neutrix Cloud can allow customers to keep control, keep sovereignty over their data, in order to make the right decisions about where the compute should reside, across whichever public cloud might offer the best combination of capabilities for a given workload. >> So it has historically been a quid pro quo: give me your data, says the public cloud provider, and then I'll make available this range of services to you. And enterprises are saying, well, I want to get access to the services without giving you my data. How are companies generally going to solve this? Because it's not going to be by not working with public cloud companies, and it's not going to be by thinking too hard about which cloud companies to work with for which types of workloads. So what is the solution that folks have to start considering? Not just at the product level, but generally speaking.
Some approaches just take software and then knit together multiple data silos across clouds, but you still have the data physically reside in different cloud environments, and then there are some approaches where they abstract away the data, where the data's physically stored, so that it can be accessed by multiple public clouds. And I think some mix of those approaches, depending on the scale of the company, is probably going to be one element of the solution. Now, data and how you treat the locations of data isn't the whole solution to the problem. There's many things to consider about your application state, about the security, about all that stuff, but-- >> Intellectual property, compliance, you name it. >> Absolutely. But if you don't get the data problem figured out, then everything else becomes a whole lot more complicated and a whole lot more expensive. >> So if we think about that notion of getting the data problem right, that should, we should start thinking in terms of what services does this data with these characteristics, by workload, location, intellectual property controls, whatever else they might be, what service does that data require? Today, the range of services that are available on more traditional approaches to thinking about storage are a little bit more mature. They're a little bit more, the options are a little bit greater, and the performance is often a lot better than you get out of the public cloud. Would you agree with that and can you give us some examples? >> Of course, yeah. And I think that in general, the public cloud providers have a different design point from traditional enterprise environments. You prioritize scale over resilience, for example. And specific features that we see come up a lot in our conversations with large enterprises are snapshots, replication with on-prem environments, and the ability to compress or reduce data as necessary depending on the workload requirements. 
There's a bunch of other things that get rolled into all of that. >> But those are three big ones. >> But those are big ones, absolutely. >> So how are enterprises thinking about being able to access all that's available in the cloud while also getting access to the data services they need for their data? >> Well, in the early days of public cloud deployments, we saw a lot of people either compromising on the data services and rearchitecting their applications accordingly or choosing to bring in more expensive layers to put on top of the standard hyperscale public cloud storage services and try and amalgamate them into a better solution. And of course we think that those are kind of suboptimal approaches, but if you have the engineering resources to invest or if you're really viewing that as something you can differentiate your business on, you want to make yourself a good storage provider, then by all means have at it. We think most enterprises don't want to go down that path. >> So what's your approach? How does Infinidat and your company provide that capability for customers? >> Well, step one is recognizing that we have a robust data services platform already out there. It's software, and we happen to package it in an appliance format for large enterprises today. That's that 5.4 exabytes, that's mostly the InfiniBox product, which is that software in an appliance. And so we've proven our core capabilities on the InfiniBox platform, and then about two and a half years ago now, we launched a service called Neutrix Cloud. And Neutrix Cloud takes that robust set of capabilities, that set of expectations that enterprises have around how they're going to handle multi-petabyte datasets, and delivers all those software-driven values as a public cloud service. So you can subscribe to the value of Infinidat without having any boxes involved or anything like that. And then you can use it for two things, basically. One is general purpose public cloud storage. 
So a better, more enterprise-grade alternative to things like AWS EBS or EFS. And another use case that is surprisingly popular for us is customers coming from on-prem environments and using the Neutrix Cloud service just as a replication target to get started. Kind of a bridge-to-the-cloud approach. So we can support any combination of those types of scenarios, and then it gets most interesting when you combine them and add the multi-cloud piece, because then you're really seeing the benefits of eliminating the data silos in each individual public cloud when you can have, say, a file system that can be simultaneously mounted and used by applications in AWS, Azure, and GCP. >> Well, I would've thought that that would've been a third use case, right? >> Yeah. >> That multi-cloud, being able to mount the data wherever it's required, is also obviously a very rich and important use case that's not generally available from most suppliers of data-oriented services. So where do you think this goes? Give us some visibility into where your customers are pointing as they think about incorporating and more fully utilizing this flexibility and these new data services, the ability to extend and enhance the data services they get from traditional public cloud players. >> I think it's still early innings in general for the use of enterprise-grade public cloud services. I think NetApp actually just recently said that they're at a $74 million annual run rate for their entire cloud data services business. So we have yet to see the full potential, across the entire market, of those capabilities in public clouds. But I think that in the long term, we get to this world where cloud compute providers truly have to compete for enterprise workloads, where you essentially have a marketplace where the customer gets to say: I have a workload. I need X cores. I need X capabilities.
The data's right here in Neutrix or in something like Neutrix. And what will you offer me to run this workload for 35 minutes in Amazon? Same thing to Azure, same thing to GCP. I think that kind of competitive marketplace for public cloud compute is the natural endpoint for a disaggregated storage approach like ours, and that's what frankly gets some of our investors very excited about Infinidat, as well, because we're really the only ones who are making a strong investment in a multi-cloud piece first and foremost. >> So the ability to have greater control over your data means you can apply it in a market competitive way to whatever compute resource you want to utilize. >> Exactly. Spot instance pricing, for example, is only the beginning, because, I assume you're familiar with this, you can basically get Amazon to give you a discounted rate on a block of compute resources, similar to the other public clouds. But if your data happens to be in Amazon but Azure's giving you a lower spot instance rate, you're kind of SOL or you're going to pay egress fees and stuff like that. And I think that just disaggregating the data makes it a more competitive marketplace and better for customers. I think there's even more improvements to be had as the granularity of spot instance pricing becomes higher and higher so that customers can really pick with maximum economic efficiency where they want a workload to go for how long and ultimately drive that value back into the return that IT delivers to the business. >> So, Erik, you mentioned there's this enormous amount of data that's now running on Infinidat's platforms. Can you give us any insight into the patterns, particular industries, size of companies, workloads, that are being featured, or is it just general purpose? >> It's always a tough question for us because it is truly a horizontal platform. The one unifying characteristic of pretty much every Infinidat user is scale. If you're in the petabyte arena, then we're talking. 
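Erik's egress argument can be made concrete with a quick back-of-the-envelope sketch. This is a hedged illustration only: the spot rates, the egress fee, and the `cheapest_placement` helper are invented for the example and are not real cloud pricing or any Infinidat API.

```python
# Illustrative sketch: why disaggregated storage turns cloud compute
# into a price-competitive marketplace. All numbers are made up.

def cheapest_placement(spot_rates, data_cloud, egress_fee_per_gb, data_gb,
                       disaggregated=False):
    """Return (cloud, total_cost) for placing a workload.

    spot_rates: hourly spot price per cloud for the needed instance shape.
    data_cloud: the cloud where the data currently lives.
    disaggregated: if True, the data sits in a neutral multi-cloud store
    reachable from every cloud, so no egress penalty applies anywhere.
    """
    costs = {}
    for cloud, rate in spot_rates.items():
        cost = rate
        if not disaggregated and cloud != data_cloud:
            cost += egress_fee_per_gb * data_gb  # pay to move the data out
        costs[cloud] = cost
    best = min(costs, key=costs.get)
    return best, costs[best]

rates = {"aws": 0.50, "azure": 0.40, "gcp": 0.45}

# Data locked in AWS: Azure's cheaper spot rate is wiped out by egress,
# so you stay put even though compute is pricier.
print(cheapest_placement(rates, "aws", egress_fee_per_gb=0.09, data_gb=1000))

# Data in a neutral store mountable from all three clouds: the placement
# decision collapses to pure compute-price competition.
print(cheapest_placement(rates, "aws", egress_fee_per_gb=0.09, data_gb=1000,
                         disaggregated=True))
```

With the data held hostage in one cloud, the first call picks that cloud despite its higher spot rate; with the data disaggregated, the second call picks whichever provider bids lowest, which is exactly the competitive endpoint Erik describes.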
If you're not in the petabyte arena, then you're probably talking to one of the upstart vendors in our space. It's business-critical workloads. It's enterprise-grade, whether you talk about enterprise-grade in the sense of replacing VMAX-type solutions or whether you talk about enterprise-grade in terms of modernizing cloud environments like what I've just described. It's all about scale, enterprise-grade capabilities. >> Erik Kaulberg, Infinidat, thanks again for being on theCUBE. >> Thanks. >> And once again, I want to thank you for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (jazzy music)

Published Date : Nov 15 2019
