Sam Nicholls, Veeam | AWS re:Invent 2022


 

(bright music)
>> Hello cloud computing friends and welcome back to theCUBE, where we are live from Las Vegas, Nevada, here at AWS re:Invent all week. My name is Savannah Peterson, very excited to be joined by Paul Gillan today. How are you doing?
>> I'm doing great, Savannah. It's my first re:Invent.
>> I was just going to ask you
>> So it's quite an experience.
>> If you've ever been to re:Invent.
>> It's dazzling, much like the sequins on your top. It's dazzling.
>> Yes.
>> It's a jam-packed affair. I came to the COMDEX Conference for many years in Las Vegas, which was a huge event, and this really rivals it in terms of crowd sizes. But I think there's more intensity here. There's more excitement. People are just jazzed about being here to the extent that I never saw at other computer conferences.
>> I would agree with you. It's my first re:Invent as well. I'm glad we could share this experience together. And the vibe, the pulse, I think being back in person is really contagious as well. Ooh, maybe the wrong word to use, but in a great way. The energy is definitely radiating between people here. I'll watch my words a little bit better.
>> And in person we have with us Samuel Nicholls, the director of public cloud in Global Product Marketing at Veeam Software. Sam, is it Sam or Samuel?
>> Depends if I'm in trouble, Paul.
>> Savannah: But it depends on who's saying it out loud.
>> Yeah, yeah. It's typically, Samuel is usually reserved for my mother, so-
>> Yeah.
>> (laughs) Well, Sam, thanks for joining us.
>> We'll stick with Sam on the show.
>> Yeah.
>> So Veeam has been a red-hot company for several years. It really made its reputation in the VMware world. Now you've got this wholesale shift to the cloud, not that VMware is not important still, but how is that affecting, as you're shifting with it, how is that affecting your role as a product manager and the business overall?
>> Yeah, it's a fantastic question. Obviously Veeam pioneered being the purpose-built backup and recovery company for VMware. And as these workloads are being transitioned from the data center into the cloud, or just net new workloads are being created in the cloud, there is that equal need for backup and recovery there. So it's incredibly important that we were able to provide a purpose-built backup and recovery solution for workloads that live in AWS as well.
>> Paul: And how different is it backing up an AWS workload compared to a VMware workload?
>> I think it depends on what kind of service a user is utilizing, right? There's infrastructure as a service, platform as a service, software as a service. And given the differences in what is exposed to that customer, that can make backup and recovery quite challenging. So I would say that the primary thing that we want to look at is utilizing native snapshots as our first line of defense when it comes to backup and recovery, irrespective of what that workload might be, whether it's a virtual machine, Amazon EC2, some sort of database on Amazon RDS, a file share, and so on.
>> Savannah: I bet you're seeing a lot across verticals and across the industry given the support that you're giving customers. What are you seeing in the market and in customer environments? What are some of those trends?
>> So I think the major trends that we highlight in our Data Protection Trends Report, which has a new update coming very shortly in the new year, is-
>> Savannah: We have to check that out.
>> Yeah, absolutely.
The physical server is on the decline within the data center. Virtualized workloads, namely VMware, are relatively static, kind of flat. The real hockey stick is with the cloud workloads. And as I mentioned before, that is partially because workloads are being transitioned from physical to virtual machines to being cloud hosted, but also we're creating more applications, and the cloud has become the de facto standard for new workloads. So you hear about cloud-first initiatives, digital transformation; the cloud is central to that.
>> You mentioned snapshotting, which is a relatively new phenomenon, although it's taken hold rapidly. How does snapshotting work in the cloud versus in your on-prem environment?
>> Samuel: It's not wildly different at all. I think snapshots are, again, a great first line of defense for helping users achieve very low recovery point objectives, so the frequency with which they can protect their data, as well as very low recovery time objectives, how quickly I can recover the data. Because that's why we're backing up, right? We need the ability to recover. However, snapshots certainly have their limitations as well. They are not independent of the workload that is being protected. So if there were to be some sort of cybersecurity event like ransomware, which is prolific throughout pretty much every business, every vertical, and that snapshot is not independent, then if the production system becomes compromised, that snapshot's likely to be compromised as well. And then, going back to the recovery piece, you're not going to have something to recover from.
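As a rough illustration of the native-snapshot approach Sam describes, here is a minimal boto3 sketch that takes a tagged snapshot of an EBS volume; the region, volume ID, and tag values are placeholders, and a real environment would drive this from a scheduler or a backup product rather than an ad hoc script.

```python
# Minimal sketch: take a "first line of defense" native snapshot of an EBS volume.
# Assumes AWS credentials are configured; the volume ID below is a placeholder.
# Error handling, scheduling, and retention enforcement are omitted for brevity.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"  # placeholder volume

response = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Nightly point-in-time snapshot",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [
                {"Key": "retention-days", "Value": "7"},
                {"Key": "created-by", "Value": "backup-policy"},
            ],
        }
    ],
)
print("Started snapshot:", response["SnapshotId"], response["State"])
```

Because the snapshot sits in the same account and region as the volume it protects, it carries exactly the limitation discussed above: it is not independent of the workload, which is why an additional, isolated copy matters.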
>> And it's not a one and done with ransomware.
>> No.
>> It's, yeah. So what is the role that backup plays? I mean, a lot of people, I feel like security is such a hot topic here at the show and just in general; attacks are coming in unique form factors for everyone. I mean, I feel like backup is, no pun intended, the backbone of a system here. How does that affect what you're creating, I mean?
>> Yeah, absolutely. I think, like you say, backup is core to any comprehensive security strategy, right? I think when we talk about security, everyone tends to focus on the preventative, the proactive piece, stopping the bad guys from getting in. However, there is that remediative aspect as well, because like you say, ransomware is relentless, right? You as a good guy have to pretty much fend off each and every single attack that comes your way. And that can be an infinite number of attacks. We're all human beings, we're fallible, right? And sometimes we can't defend against everything. So having a secure backup strategy as part of that remediative, recovery component of a cybersecurity strategy is critical. And that includes things like encryption, immutability, logical separation of data and so forth.
>> Paul: We know that ransomware is a scourge on-premises; it typically begins with the end user's workstation. How does ransomware work in the cloud? And do the cloud providers have adequate protections against ransomware? Or can they?
>> Samuel: Yeah, it's a fantastic question as well. I think when we look at the cloud, one of the common misconceptions is that as we transition workloads to the cloud, we are transitioning responsibility to that cloud provider. And again, it's a misconception, right? It is a shared responsibility between the cloud provider, in this case AWS, and the user. So as we transition these workloads across varying different services, infrastructure, platform, software as a service, we're always transitioning varying degrees of responsibility. But we always own our data, and it is our responsibility to protect and secure that data. For the actual infrastructure components, the hardware, the onus is on the cloud provider, so I'd say that's the major difference.
>> Is ransomware as big a threat in the cloud as it is on-prem?
>> Absolutely. There's no difference between a ransomware attack on-premises or in the cloud. Irrespective of where you are choosing to run your workloads, you need to have that comprehensive cybersecurity strategy in order to defend against it and ultimately recover as well if there's a successful attempt.
>> Yeah, it's, ooh, okay. Let's get us out of the dark shadows real quick (laughs) and bring us back to a little bit of the business use case here. A lot of people are using AWS. What do you think are some of the considerations they should have when they're thinking about this, thinking about growing their (indistinct)?
>> Well, if we're going to stick down the dark shadows, the cybersecurity piece.
>> We can be the darkness.
>> You and me kind of dark shadows business.
>> Yeah, yeah.
>> We can go rainbows and unicorns, nice and happy if you like. I think there's a number of considerations they need to keep in mind. Security is number one. The next piece is around the recovery as well. I think folks, when they, when we talk about backup and recovery, the focus is always on the backup piece of it. But again, we need to focus on why we're doing the backup. It's the recovery, it's the recovery component. So making sure that we have a clean, verifiable backup that we're able to restore data from. Can we do that in an efficient and timely manner? And I think the other major consideration is looking at the entirety of our environments as well. Very few companies are a hundred percent sole-sourced on a single cloud provider. It is typically hybrid cloud. Around 80% of organizations are hybrid, right? So they have their on-premises data and they also have workloads running in one or multiple clouds. And when it comes to backup and recovery of all of these different infrastructures and environments, the way that we approach it is very different. And that often leads to multiple different point products from multiple different vendors. The average company utilizes three different backup products, sometimes as many as seven, and that can introduce a management nightmare that's very complex, very resource intensive, expensive. So looking at the entirety of the environment and looking to utilize a backup provider that can cover the entirety of that environment while centralizing everything under a single management console helps folks be a lot more efficient, a lot more cost effective and ultimately better when it comes to data protection.
>> Amazon and all cloud providers really are increasingly making regions transparent. Just at this conference, Amazon introduced failover controls for multi-region access points. So you can fail over from one region to another. What kind of challenges does that present to you as a backup provider?
>> I don't think it presents any challenges. When we look at the native durability of the cloud, we look at availability zones, we look at multi-region failover. That durability is ultimately founded on replication.
And I wouldn't say that with replication and backup you would use one or the other. I would say that they are complementary. So replication is going to help with the failover scenario, that durability component. But then backup, again, is that independent copy. Because if we look at replication, if, let's say, the source data were to be compromised by ransomware, or there was accidental deletion or corruption, that's simply going to be copied over to the target destination as well. Having that backup as an independent copy, again, complements that strategy as well.
>> Paul: You need it in either, in any scenario.
>> Samuel: In any scenario.
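To make the replication-versus-backup distinction concrete, the sketch below copies an existing snapshot into a second region so that an independent, point-in-time copy lives outside the primary environment; the snapshot ID and regions are placeholders, and production setups would typically add immutability and cross-account isolation on top, in line with the encryption and logical-separation points raised earlier.

```python
# Minimal sketch: keep an independent copy of a backup outside the primary region.
# Replication keeps source and target in lockstep (and faithfully copies corruption);
# this copy is a point-in-time artifact that survives a compromised source.
# Snapshot ID and regions are placeholders.
import boto3

SOURCE_REGION = "us-east-1"
DR_REGION = "us-west-2"
SOURCE_SNAPSHOT_ID = "snap-0123456789abcdef0"

ec2_dr = boto3.client("ec2", region_name=DR_REGION)

copy = ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SOURCE_SNAPSHOT_ID,
    Description="Independent point-in-time copy for recovery",
    Encrypted=True,  # re-encrypt in the destination region with the default key
)
print("Copy started:", copy["SnapshotId"])
```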
>> I think the average person would probably say that backup is not the most exciting technology aspect of this industry. But you guys have certainly built a great business on it. What excites you about what's coming in backup? What are the new technologies, new advancements that perhaps we haven't seen and productized yet that you think are going to change the game?
>> I think actually what we offer right now is the most exciting piece, which is just choice and flexibility. So Veeam, again, is synonymous with VMware backup, but we cover a multitude of environments, including AWS, containerized workloads, Kubernetes, physical systems, and the mobility piece is critical, because as organizations look to act on their digital transformation, cloud-first initiatives, they need to be able to mobilize their workloads across different infrastructures, maybe from on-premises into the cloud, one cloud to another, maybe it's cloud back to on-premises, 'cause we do also see that. That flexibility of choice is what excites me about Veeam, because it's ultimately giving the users best-in-class data protection tool sets without any prescriptive approach from us in terms of where you should be running your workloads. That choice is yours.
>> Yeah, Veeam is definitely more than VMware. We actually had a chance to chat with you all at KubeCon and CloudNativeCon in Detroit. So we've seen the multitude of things that you touch. I want to bring it back to something, and something kind of fun, because you talked a lot about the community and being able to serve them. It's very clear, actually I shouldn't say this, I shouldn't say it's very clear, but to me it appears clear that community is a big priority for Veeam. I just want to call this out 'cause this was one of the cooler pieces of swag. You all gave out a hundred massage guns. Okay, very hot topic. Hot Christmas gift for 2022. I feel like Vanna White right now. But I thought, I was actually really compelled by this, because we do a swag segment on theCUBE, but it's not just about the objects or getting stuff. It's really about who's looking out for their community and how are they saying thanks. I mean, swag is a brand activation, but it's also a thank you, and I loved that you were giving out massage guns to the AWS Heroes and Community Builders.
>> Yep.
>> What role does community play in the culture and the product development at Veeam?
>> So community has always been at the heart of Veeam. If you have a look at pretty much every single development across all of our versions, across all of our products, it's always driven by the community, right? We have a wonderful Veeam forum where we've got 400,000-plus users actively providing feedback on the product and what they would like to see. And that is ultimately what steers the direction of the product. Of course, market trends and technology change-
>> A couple other factors, I'm sure.
>> A couple of other factors, but community is huge for us. And the same goes for AWS. So, you know, talking with the AWS Heroes, the Community Builders, helps Veeam reach further into that community and the AWS user base and empower those folks with data protection tools, and massage guns, when your feet are tired from, you know, standing on them all day in Vegas.
>> (laughs) Yeah, well, I mean, everybody's working hard and it's nice to say thank you. So I love to hear that, and it's clear from the breadth of products that you're creating and the ways that you're supporting your customers that you care a lot about community. We have a new challenge on theCUBE this year at AWS re:Invent. Think of it as an Instagram reel of your thought leadership, your hot take on the show, key themes as we look into 2023. What do you think is the most important story or trend or thing going on here at the show?
>> I think it's just the continuation of cybersecurity and the importance of backup as part of a comprehensive cybersecurity strategy. You know, some folks might say that secure backup is your last line of defense. Again, ransomware is relentless. These folks are going to keep coming, and even if they're successful, it's not a one and done thing. It's going to happen again and again and again. So, you know, if we have a look around the show floor and the presentations, there is a huge cybersecurity focus, and really just what folks should be doing as best practice to secure their AWS environments.
>> That's awesome. Well, Paul, any final thoughts or questions?
>> Just quickly, you've mentioned data security, you've mentioned data protection and backup sort of interchangeably, but they're not really the same thing, are they? I mean, what business do you see Veeam as being in here?
>> I would say that we are a data protection company because, yes, there is backup, but there's also the replication component. There's the continuous data protection component, where we've got, you know, near-zero RTOs, and then we again look at the cybersecurity components of that. What can we do to really protect that data? So I would say that the two are different. Backup is a subset of data protection.
>> Sam, thank you so much for being here with us on theCUBE. It's been a super insightful conversation. Hopefully we'll get you back soon, and more of the team; there seem to be celebrities here with us on theCUBE. Paul Gillan, thank you so much for being here with me.
>> Pleasure, Savannah.
>> And I'm glad we get to celebrate our first re:Invent, and most importantly, thank you to the audience for tuning in. Without you, we don't get to hang out here in fabulous Las Vegas, Nevada, where we're live from the show floor at AWS re:Invent. My name is Savannah Peterson, with Paul Gillan. We're theCUBE, and we are the leading source for high-tech coverage.
(bright music)

Published Date : Nov 29 2022


Steve Mullaney, Aviatrix | Supercloud22


 

[Music] we're here with steve melanie the president and ceo of aviatrix steve john and i started this whole super cloud narrative as a way to describe that something different is happening specifically within the aws ecosystem but more broadly across the cloud landscape at re invent last year you and i spoke on the cube and you said one of your investors guy named nick sterile said to you at the show it's happening steve welcome to the cube what's happening what did nick mean by that yeah we were we were just getting ready to go on and i leaned over and he looked at me and he whispered in my ear and said it's happening he said it just like that and and you're right it was it was kind of funny and we talked about that and what he means is enterprises you know this is why i went to aviatrix three and a half years ago is the the the flip switch for enterprises and they said now we mean it we've been talking about cloud for 12 years or 15 years now we mean it we are digitally transforming we are the movement to cloud is going to make that happen and oh by the way of course it's multi-cloud because enterprises put workloads where they run best where they have the best security the best performance the best cost and the business is driving this transformation and they decide that i'm going to use that azure and another business unit decides i'm using google and another one says i'm using aws and so of course it's going to be multi-cloud and i think we're going to start seeing actual multi-cloud applications once that infrastructure and you know you call it the super cloud once that starts getting built developers are going to go wait a minute so i can pick this feature from google and and that service from azure and that service from aws easily without any hesitation once that happens they're going to start really developing today there aren't multi-cloud applications but but but the what's happening is the enterprise embracing public cloud they're using multiple clouds many of them call it four plus one right they're four different public clouds plus what they have on prem that to me is what's happening i am now re-architecting my enterprise infrastructure from applications all the way down to the network and i am embracing uh uh public clouds in that in that process so i mean you nailed us so many things in there i mean digitally transforming to me this is the digital transformation it's leveraging embracing the capex from the hyperscalers now you know people in the industry we're not trying to do what gartner does and create a new category per se but we do use super cloud as a metaphor so i don't expect necessarily vendors to use it or not but but i and i get that but when you talk about multi-cloud what specifically is new in other words what you touched on some of this stuff what constitutes a modern multi-cloud or what we would call a super cloud you know network architecture what are the salient attributes yeah i would say today so two years ago there was no such thing even as multiple clouds it was aws let's be clear everything was aws and for people to even back then two three years ago to even envision that there would be anything else other than aws people couldn't even envision now people kind of go yeah that was done we now see that we're going to use multiple clouds we're going to use azure we're going to use gcp and we're going to use this and we'll guess we're going to use oracle and even ollie cloud we're going to use five or four or five different public clouds what's but that 
would be i think of as multiple clouds but from an i.t perspective they need to be able to support all those clouds in these shared services and what they're going to do i actually think we're starting and you may have hit on something in the super cloud or i know you've talked about metacloud that that's got bad connotations for facebook i know everybody's like no please not another meta thing but there is that concept of this abstracted layer above you know writing we call it you know altitude you know aviatrix everything's you know riding above the clouds right that that that common abstracted layer this application infrastructure that runs the application that rides above all the different public clouds and i think once we do that you know dave what's going to happen is i think really what's going to happen is you're going to start seeing these these multi-cloud applications which to my knowledge really doesn't exist today i i think that might be the next phase and in order for that to happen you have to have all of the infrastructure be multi-cloud meaning not just networking and network security from from from aviation but you need snowflake you need hashtag you need datadog you need all the new horsemen of the new multi-cloud which isn't the old guys right this is all new people aviatrix dashie snowflake datadonk you name it that are going to be able to deliver all this multi-cloud cross-cloud wherever you want to talk about it such that application development and deployment can happen seamlessly and frictionlessly across multi-cloud once that happens the entire stack then you're going to start seeing and that to me starts enabling this what you guys call you know the super cloud the meta cloud the whatever cloud but that then rides above all the individual clouds that that's going to start getting a whole new realm of application development in my mind so we've got some work to do to basic do some basic blocking and tackling then the application developers can really build on top of that so so some of the skeptics on on this topic would ask how do you envision this changing networking versus it just being a bolt-on to existing fossilized network infrastructure in other words yeah how do we get from point a where we are today to point b you know so-called networking so we can actually build those uh super cloud applications yeah so you know what it is it's interesting because it goes back to my background at nasira and what we used to talk about then it isn't about managing complexity it's about creating simplicity it's very different and when you put the intelligence into the software right this is what computer science is all about we're turning networking into computer science when you create an abstraction layer we are not just an overlay day we dave we actually integrate in with the native services of the cloud we are not managing the complexity of these multi-clouds we are using it you know controlling the native constructs adding our own intelligence to this and then creating what is basically simplification for the people above it so we're simplifying things not just managing the complexity that's how you get the agility for cloud that's how you get to be able to do this because if all you are is a veneer on top of complexity you're just hiding complexity you're not creating simplicity and what happens is it actually probably gets more complex because if all you're doing is hiding the bad stuff you're not getting rid of it i love that i love that we're doing that at the 
networking and network security layer you're going to see snowflake and datadog and other people do it at their layers you know i reminds of a conversation i had with cause the one of the founders of pure storage who they're all about simplicity this idea of of creating simplicity versus like you said just creating you know a way to handle the complexity compare you know pure storage with the sort of old legacy emc storage devices and that's what you had you had you you had emc managing the complexity at pure storage disrupting by creating simplicity so what are the challenges of creating that simplicity and delivering that seamless experience that continuous experience across cloud is it engineering is it mindset is it culture is it technology what is it well i mean look at look you see the recession that we're we're hitting you see there is a significant problem that we have in the general it industry right now and it's called skills gap skills shortage it's two problems we don't have enough people and we don't have enough people that know cloud and the reason is everybody on the same tuesday three and a half years ago all said now i mean i'm moving the cloud we're a technology company we don't make sneakers anymore we don't make beer we're a technology company and we're going to digitally transform and we're going to move the cloud guess what three years ago there were probably seven people that understood cloud now everyone on the same tuesday morning all decides to try to hire those same seven people there's just not enough people around so you're going to need software and you're going to have to put the intelligence into the software because you're not going to be able to a hire those people and b even if you hire them you can't keep them as soon as they learn cloud guess what happens dave they're off they're on to the next job at the next highest bidder so how are you going to handle that you have to have software that intelligent software that is going to simplify things for you we have people managing massive multi-cloud network and network security people with two people on-prem they got hundreds right you it's not about taking that complex model that it had on-prem and jam it into the cloud you don't have the people to do it and you're not going to get the people to do it you know i want to ask you yeah so i want to ask you about the go to market challenges because we our industry gets a bad rap for for selling we're really good at selling and then but but actually delivering what we sell sometimes we fall down there so so i love tom sweet as cfo of of dell he talks about the the say do ratio uh how that's actually got to be low but you know but you know what i mean uh the math the fraction guy right so but do do what you say you're going to do are there specific go to market challenges related to this type of cross cloud selling where you can set you have to set the customer's expectations because what you're describing is not going to happen overnight it's a journey but how do you handle that go to market challenge in terms of setting those customer expectations and actually delivering what you say you can sell and selling enough to actually have a successful business um so i think everything's outside in so so i think the the what really is exciting to me about this cloud computing model that with the transformation that we're going through is it is business-led and it is led by the ceo and it is led by the business units they run the business it is all about agility is 
about enabling my developers and it's all about driving the business market share revenue all these kind of things you know the last transformation of mainframe to on to pc client server was led by technologists it wasn't led by the business and it was it was really hard to tie that to the business so then so this is great because we can look at the initiatives you can look at the the the initiatives of the ceo in your company and now as an i.t person you can tie to that and they're going to have two or three or four initiatives and you can actually map it to that so that's where we start is let's look at what the c your ceo cares about he cares about this he cares about that he cares about driving revenue he cares about agility of getting new applications out to the market sooner to get more revenue there's this and oh by the way transfer made transforming your infrastructure to the cloud is the number one thing so it's all about agility so guess what you need to be able to respond to that immediately because tomorrow the business is going to go to you and say great news dave we're moving to gcp wait what no one told me about that well we're telling you now and uh you need to be ready tomorrow and if you're sitting there and you're tied to the low-level constructs and all you know is aws well i don't have those people and even if i have even if i could hire them i'm not allowed to because i can't hire anybody how am i going to respond to the business and the needs of the business now all of a sudden i'm in the way as the infrastructure team of the ceo's goals because we decided we need to we need to get the ai capabilities of gcp and we're moving to gcp or i just did a big deal with gcp and uh miraculously they said i need to run on gcp right i did a big deal with google right guess what comes along with that oh you're moving to gcp great the business says we're moving to gcp and the i.t guys are sitting there going well no one told me well sorry so it's all about agility it's all about that and the and and complexity is the killer to agility this is all about business they're going to come to you and say we just acquired a company we need to integrate them oh but they got they use the same ip address range as we do there's overlapping ips and oh by the way they're in a different cloud how do i do that no one cares the business doesn't care they're like me they're very impatient get it done or we'll find someone who will yeah so you've got to get ahead of that and so when we in terms of when we talk to customers that's what we do this isn't just about defenses this is about making you get promoted making you do good for your company such that you can respond to that and maybe even enable the company to go do that like we're going to enable people to do true multi-cloud applications because the infrastructure has to come first right you you put the foundation in your big skyscraper like the crew behind me and the plumbing before you start building the floors right so infrastructure comes first then comes then comes the applications yeah so you know again some people call it super cloud like us multi-cloud 2.0 but the the real mega trend that i see steve and i'd love you to bottom line this and bring us home is you know andreessen's all companies are software companies it's like version 2.0 of that and the applications that are going to be built on that top this tie into the digital transformations it was goldman it's jpmc it's walmart it's capital one b of a oracle's acquisition of cerner 
is going to be really interesting to see these super clouds form within industries bringing their data their tooling and their specific software expertise built on top of that hyperscale infrastructure and infrastructure for companies like yours so bottom line is stephen steve what's the future of cloud how do you see it the future is n plus one so two years ago people had one plus one i had what i had on prem and then what i had in aws they today if you talk to an enterprise they'll have what they call four plus one right which is four public clouds plus what i have on prem it's going to n plus one right and what's going to happen is exactly what you said you're going to have industry clouds you're going to the the multi-cloud aspect of it is going to end it's not going to go from four to one some people think oh it's not going to be four it's going down to one or two bs it's going to end it's going to a lot as they start extending to the edge and they start integrating out to the to the branch offices it's not going to be about that branch offer so that edge iot or edge computing or data centers or campus connecting into the cloud it's going to be the other way around the cloud is going to extend to those areas and you're going to have ai clouds you know whether it's you know ultra beauty who's a customer of ours who's starting to roll out ar and vr out to their retail stores to show you know makeup and this and the other thing these are new applications transformations are always driven by new applications that don't exist this isn't about lift and shift of the existing applications the 10x tam in this market is going to becomes all the new things that's where the explosion is going to happen and you're going to see end level those those branch offices are going to look like clouds and they're going to need to be stitched together and treated like one infrastructure so it's going to go from four plus one to n plus one and that's what you're gonna want as an enterprise i'm gonna want n clouds so we're gonna see an explosion it's not going to be four it's going to be end now at the end underneath all of that will be leveraging and effectively commoditizing the existing csps yeah and but you're going to have an explosion of people commoditizing them and just like the goldmans and the industry clubs are going to do they're going to build their own eye as well right no way no way it's that's what's going to happen it's going to be a 10x on what we saw last decade with sas it's all going to happen around clouds and supercloud steve malini thanks so much for coming back in the cube and helping us sort of formulate this thinking i mean it really started with with with you and myself and john and nick and really trying to think this through and watching this unfold before our eyes so great to have you back thank you yeah it's fun thanks for having me are you welcome but keep it right there for more action from super cloud 22 be right back [Music] you
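Steve's argument about creating simplicity by putting the intelligence into software above the native cloud constructs can be sketched as a thin provider-agnostic interface with per-cloud adapters. This is purely illustrative of the abstraction-layer idea, not Aviatrix's actual architecture; every class and method name here is invented for the example.

```python
# Illustrative only: a provider-agnostic "network" interface with per-cloud
# adapters, sketching the abstraction-layer idea (all names are invented).
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict


@dataclass
class NetworkSpec:
    name: str
    cidr: str
    region: str


class CloudNetworkAdapter(ABC):
    """Each adapter drives that cloud's own native constructs underneath."""

    @abstractmethod
    def create_network(self, spec: NetworkSpec) -> str:
        ...


class AwsAdapter(CloudNetworkAdapter):
    def create_network(self, spec: NetworkSpec) -> str:
        # A real adapter would call the AWS APIs (e.g., create a VPC) here.
        return f"aws:{spec.region}:{spec.name}"


class AzureAdapter(CloudNetworkAdapter):
    def create_network(self, spec: NetworkSpec) -> str:
        # A real adapter would call the Azure APIs (e.g., create a VNet) here.
        return f"azure:{spec.region}:{spec.name}"


def provision(adapters: Dict[str, CloudNetworkAdapter], spec: NetworkSpec) -> Dict[str, str]:
    """One intent, applied uniformly across whichever clouds are in play."""
    return {cloud: adapter.create_network(spec) for cloud, adapter in adapters.items()}


if __name__ == "__main__":
    spec = NetworkSpec(name="shared-services", cidr="10.20.0.0/16", region="us-east-1")
    print(provision({"aws": AwsAdapter(), "azure": AzureAdapter()}, spec))
```

The point of the shape, as Steve frames it, is that the layer above is not just a veneer hiding complexity; the per-cloud logic lives in the adapters, so the consumer of the interface only ever expresses intent.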

Published Date : Sep 9 2022


Jon Siegal & Dave McGraw | VMware Explore 2022


 

welcome back everyone to thecube's live coverage in san francisco for vmware explorer 2022 formerly vmworld i'm john furrier david live dave 12 years we've been covering this event formerly vmware first time in west now it's explore we've been in north we've been in south we've been in vegas multi-cloud is now the exploration vmware community is coming in john siegel svp at dell cube alumni dave mccraw vp at vmware guys thanks for coming back both cube alumni it's great to see you very senior organizations senior roles in the organizations of vmware and dell one year since the split great partnership continuing i mean some of the conversations we've been having over the past few years is that control plane the management layer making everything work together it's essentially been the multi-cloud hybrid cloud story what's the update what's how's the partnership look yeah i you know i just to start off i mean i would say i don't think our partnership's been any has ever been any better um if you look at you mention our vision very much a shared vision in terms of the multi-cloud world and i don't think we've ever had more joint innovation projects at one time i think we have over 40 now dave that are going on across multi-cloud ai cyber security uh modern applications and and uh you know here just at you just vmworld vmware explorer we have over 30 uh vmware sessions that are featuring dell um and this is i think more than we've ever had so look i think um there's a lot of momentum there and we're really looking forward to what's to come so you guys obviously spent a lot of time together when vmware was part of dell and then you've been it's been a year since the spin and then you codified i think it was a five-year agreement you know so you had some time to figure that out and then put it into paper so you just kind of quantified some of the stuff that's going on but now we're entering a yet another phase so that that that that agreement's probably more important than ever now i mean list in terms of getting it documented and an understanding right yeah that agreement really defines a framework for solution development and for go to market so we've been doing it and refining it for the last five years so now you know putting and codifying it into a written signed agreement it basically is instantiating what we've been doing that we know works uh where we can drive uh solution development we can drive deep architectural co-innovation together as well and as john said across multiple you know project and solution areas so we we've been talking to years to you know a lot of these strat guys guys like matt baker about things like you know you see aws do nitro and then of course project monterey and and i know that you guys have had a you know a big sort of input into that and so now to see it come to fruition is is huge because you know from our view it's the future of computing architectures how do you handle you know data rich applications ai applications that's what are your thoughts on here i couldn't agree more uh project monterey is a great example of how we're innovating together we just talked about i mean first of all it's all so we have vxrail which let's let's start there right we have over 19 000 joint customers right now we continue to innovate more and more on the vxrail architecture great example of that as our partnership with project monterey and taking essentially vsphere 8 and running it for the first time on an hci system directly on the dp used itself right on the dpus 
ability now to offload nsxt from from the cpus to the dpus uh hope you know in the short term first of all great benefits for customers in terms of better performance but as you just mentioned it's game changing in terms of laying the foundation for the future architectures that we plan on together helping out customers there's one other dynamic for you on is um and it's not unique to dell but dell's the biggest you know supply supplier partner etc but you're able to take vmware software and drive it through your business and and that enables you to get more subscription revenue and makes it stickier and that's a really important change from you know 10 years ago yeah and it's it's a combination as you know of dell software and vmware software together absolutely and i think what's with this is a game-changing innovation that you can run on top of our joint system vxrail if you will um and now what our customers can expect is life cycle automation of now you know the dpus as well as tanzu as well as everything else we layer on top of that core foundation that we have over 19 000 customers running today so i mean like that 19 000 number i want to get back up to the vx rail and you mentioned vsphere that's big news here this year vsphere 8 big release a lot of going on what's the hci angle you mentioned that what's in it for the customer what does that mean for the folks here because let's face it the vsphere aids got everyone in that they've all the v-sections are going going crazy right another vsphere release getting training they have the labs here what's it mean for the customers what's the value there with that hci solution with the gpus well first of all vsphere 8 as we know it has a lot of goodies in it but you know what what i think to me what's been most powerful about this is the ability to run vsphere 8 uh and and specifically on the dpus now you can run it it is open up all new possibilities now and so that nsxt that i mentioned you know running that on gpus opens up a whole new uh architecture now for our customers going forward and now really sets us up for modern distributed architecture for the future so like edge okay yeah and vsphere 8 brings in you know cloud connectivity as well so you know customers can run in a cloud disconnected mode they can run in a cloud connected mode so you know that's going to bring in the ability to do specialized things on security cycle management there's a whole series of services that can now be added as well as you know leveraging you know vcenter management capabilities so what's happening at the edge we had i think it was lows on hotel tech world right okay good not the other one um but so so that's got to be exploding now with that with that because it just changes the game for for these stores there's i mean retail uh manufacturing maybe you can give us an update on there's so much happening on the edge side as you know i mean that's where most of the a lot of the innovations happening right now is at the edge and a lot of the companies we talked to 8x right 8x expectation of increase in uh edge workloads over the next and the data challenge too and the data challenge is huge so you heard about the innovations with vsphere 8. 
in addition to that we just introduced today as well the smallest vx rail for the edge ever this thing is it's like think picture a couple eight and a half by 11 notebooks not much not much you know maybe a little wider than that but not much more um you know these these are stacked on top of each other these are you can rack and stack and mount these things anywhere and it also is the first aci system that has you know a built-in hardware witness so this helps set it up for environments that are you know network bandwidth constrained or have high high latency no longer an issue next gen app is going to want to have a local data server at the edge right and with compute there right high performance right right so now you're getting it across the wire yes you get racket stack a couple of these small things i mean they can they can fit into like a you know clark kent's briefcase right these things are so small um you want to do the analytics on site and return responses back you don't want to be moving massive data payloads off the egg so you got to have the right level of compute to run machine learning algorithms and and do the analytics type work that you want to do to make local decisions yeah i mean we just had david lithimon who was one of the keynote speakers here at the event and we've been talking about super cloud and multi-cloud meta cloud all the different versions of what we see as this next-gen and this brings up a point of like his advice to young people learn how multi-cloud learn about system architecture because if you can figure out how to put it together you're going to have to make more money anyway that this whole edge piece opens up huge challenges and opportunities around how do you configure these next-gen apps what does the ai look like what's the data architecture this is not like get some training curriculum online and you get you know 101 and you're getting a job no this is more complicated but with the hardware you guys make it easier so where's the complexity shift between having a powerful edge device like the vxrail with the vsphere what's the ec button on that like how do you guys what's the vision because this is going to be a major battleground this whole edge piece yeah it's going to be huge well i think when you look at the innovation that dell is bringing to market with technologies like outlander and then designing that into vxrail and then you combine that with our tonzu capabilities to manage development and deployment of applications this is about heterogeneous deployment and management at scale of applications with technologies like tons of mission control then deploying service mesh right for security being able to use sassy to be able to secure you know with cloud security over the wire so it's bringing together multiple technologies to deliver simplicity to the customer the ability to go one to many you know in terms of being able to deploy and manage and update whether that's a security patch or an application update and do that very rapidly at a low cost so the benefit with this solution now just putting this together is i can ship a box small and or stack them and essentially it's done remotely it's that's provision the provisioning issues not a truck roll as they say or professional services enabled you can just drop that out there and this is where the customers need to be yeah that absolutely is that the vision don't get that right exactly you don't you don't need the you don't need the skills yeah you don't need the specialized skills you 
don't need a lot of space you don't need you know high network bandwidth all these things right all these innovations that we're talking about here um really combined into really enabling a whole new whole new future here for edge is are you doing apex now is that i think thickest part sure part of yours okay so um is apex fitting into the to the edge how does it fit yeah i mean well first of all you know a lot of what we talked with apex is really about a consumption a way to ensure there's a common cloud experience wherever the data is and where the applications are and so absolutely edge fits into this as well and so we have we have common ways to consume our infrastructure today our joint infrastructure whether it's in the data center at the edge um or you know uh in the cloud usain ragu when he was on i said it was great keynote loved it one of the things that i didn't think there was enough of was security and he's like yeah we only had so much time but vmware is a very strong security story we heard a really strong security story at dell tech world i mean half the innovations and the new you know storage products were security and the new os's and it was impressive what what's how are you guys working together on security is that one of those let me give you a few key things you know our teams are working together at the engineer to engineer level you know reference architectures for zero trust as an example being able to look you know hardware root of trust up into the application layer right so we're looking at really defense in depth here you know i mentioned what we're doing with sassy right with cloud security capabilities so you really have to look at this from the edge to the core with the you know from a networking perspective getting the network the insights on things that maybe anomalies that may be happening on the network so using our network insight technology you know uh nsx and then being able to ultimately uh have a secure development pipeline as well i mean you we all know about the supply chain attacks that happen right and so being able to have a you know secure pipeline for development is critical for both of our companies working together i think the tan zoo and you mentioned the developer self-service that experience combined with kind of the power of the dell you know let's face it the boxes are awesome hardware matters and software matters so bringing that expertise together michael daley always used to say on thecube better together in respect to vmware and dell a lot of fruit has been born from that labor right specifically around and now when you add the tan zoo and you get vsphere you got the operational excellence you got the you got the performance and scale with the dell boxes and hardware and software and now you've got the tan zoo what's missing or is it all there now i mean where how would you how would you guys peg the progress bar is it like it's all rocking right now or or i'd say you're never done first of all but i you know i look at some of the innovations that we've brought to market recently where we've are combining and stacking these technologies into a more defense in-depth like solution you know bringing nsx onto vxrail so that you can flip a switch easily and light up the firewall the new plug-in yeah that's a great example simple simple um carbon black workload another example where we're taking carbon black technology that was typically on endpoints you know on pcs bringing that into the data center right and leveraging all the 
analytics and insights around you know being able to identify anomalies and then remediate those anomalies so we're seeing very good traction with those and the cloud native developers containers they're all native container working with compute and container storage object store in the cloud kubernetes we've embraced it yeah i mean yeah containers running containers and vms on the same infrastructure common way to manage it all i mean that that's been a big part of it as well obviously a lot of the focus that dell's bringing here as well is is the inability to run that stack easily right you heard the announcement on uh tanzu for kubernetes operators right earlier today tko we call it uh you know that running on vxrail now is really targeted at the i.t operator in allowing them to easily stand up a self-service developer devops environment on vxrail going forward and then a piece that might be invisible to them is back to monterey isolation right encryption and data moving you know absolutely storage the security the compute right the management right that's that's a complete and it's about reducing attack services as well right the security perspective as well when you when you're moving nsxt onto a dpu you're doing that as well so there's it takes the little things right at the end of the day security is a mindset up across both companies in terms of how we approach our architectures um and it's the you know a lot of times it's the little things as well that we make sure right so shared vision working at the engineering levels together for many many years know that you guys are validating more of that coming what's next take us through okay we're here 2022 we got super cloud multi-cloud hybrid full throttle right now it's hybrid's a steady state that's cloud operations infrastructure as code has happened it's happening what's next for you guys in the relationship can you share a little bit that you can if you can what we can expect what you see uh with monterrey is the start of a re-architecting of i.t infrastructure not just in the data center but also at the edge right these technologies will move out and be pervasive you know across i think edge to colo to core data center to cloud right and so that's a starting point now we're looking at memory tiering right i think we talked last time about capitola and memory tiering and you know being able to bring that forward uh being able to do more with confidential computing as an example right secure enclaves and confidential computing so you know a lot of this is focused around simplicity and security going forward and ease of management around take the heavy lifting away from the customer abstract that in offer the power and performance that's right and it's going to come down to delivering time to value for our customers you know can we cut that time to value by 25 50 percent so they can be in production faster yeah i think project monterey is something we'll be building on for a long time right i mean this is the start of a major new future architecture of these companies so if you had to pick one we have 40 initiatives that are joined together real literally project monterey is one of my favorites for sure in terms of what it's going to do not just for that common cloud experience but for the edge and and we talked a lot about the edge today and where that's headed you think it's going to explode up new apps i really do think so well it's going to put you in a new it's going to put in curve yeah absolutely right and operationally uh 
security wise um from a modern apps perspective i mean all it checks all the boxes and it's going to allow us to to help and take our existing customers on that journey as well what's great about this conversation we've been following both you guys for a long time and your companies and and technology upgrades and and the business impact and open source and all doing all this for customers but the wave that's coming we're seeing the expo hall here i mean it's people are really excited they're enthused they're committed highly confident that this this wave is coming they kind of see it people kind of seeing the fog lift they're seeing money making value creation people kind of feeling more comfortable but still a little nervous around you know what's coming next because it's still uncertainty but pretty good ecosystem i'd have to say that's pretty pretty interesting yeah a lot of them are excited about you know what they can do at the edge and how they can differentiate their businesses i mean that's right well congratulations guys thanks for coming on thecube and sharing the update thank you it more innovation it's not stopping here at vmware explorer dell and vm we're continuing to have that kind of relationship joint engineering it's all coming together and you can mix and match this and the stack but it's ultimately going to be cloud operations edge is the action of course hybrid cloud as well it's thecube thanks for watching [Music] you

Published Date : Aug 31 2022


Kit Colbert, VMware | VMworld 2021


 

[Music] welcome to thecube's coverage of vmworld 2021 i'm lisa martin pleased to welcome back to the program the cto of vmware kit kohlberg welcome back to the program and congrats on your new role thank you yeah i'm really excited to be here so you've been at vmware for a long time you started as an intern i read yeah yeah it's been uh 18 years as a full-timer but i guess 19 if you count my internship so quite a while it's many lifetimes in silicon valley right many lifetimes in silicon valley well we've seen a lot of innovation from vmware in its 23 years you've been there the vast majority of that we've seen a lot of successful big tech waves ridden by vmware in april vmware pulled tanzu and vmware cloud foundation together vmware cloud you've got some exciting news with respect to that what are you announcing today well we got a lot of exciting announcements happening at vmworld this week but one of the ones i'm really excited about is vmware cloud with tons of services so let me talk about what these things are so we have vmware cloud which is really us taking our vmware cloud foundation technology and delivering that as a service in partnership with our public cloud providers but in particular this one with aws vmware cloud on aws we're combining that with our tanzu portfolio of technologies and these are really technologies focused at developers at folks driving devops building and operating modern applications and what we're doing is really bringing them together to simplify customers moving from their data centers into the cloud and then modernizing their applications it's a pattern that we see very very often this notion of migrate and then modernize right once you're on a modern cloud infrastructure makes it much easier to modernize your applications talk to me about some of the catalysts for this change and this offering of services was it you know catalyzed by some of the events we've seen in the world in the last 18 months and this acceleration of digital adoption yeah absolutely and we saw this across our customer base across many many different industries although as you can imagine those industries that that were really considered essential uh were the ones where we saw the biggest sorts of accelerations we saw a tremendous amount of people needing to support remote workers overnight right and cloud is a perfect use case for that but the challenge a lot of customers had was that they couldn't take the time to retool that they had to use what they already had and so something like vmware cloud was perfect for that because it allowed them to take what they were doing on-prem and seamlessly extend it into the cloud without any changes able to do that you know almost overnight right but at the same time what we also saw was the acceleration of their digital transformation people are now online they're needing to interact with an app over their phone to get something you know remotely delivered or to schedule maybe um an appointment for their pet because you know a lot of people got pets during the pandemic and so you just saw this rush toward digitization and these new applications need to be created and so as customers move their application estate into the cloud with vmware cloud and aws they then had this need to modernize those applications to be able to deliver them faster to respond fast to the very dynamic nature of what was happening during the pandemic so let's talk about uh some of the opportunities and the advantages that vmware cloud with tanzania service is going 
to deliver to those it admins who have to deliver things even faster yep so let me talk a bit about the tech and then talk about how that fits into uh what the users will experience so vmware cloud with tons of services is really two key components uh the first of which is the tanzu kubernetes grid service the tkg service as we call it so what this is is actually a deep integration of tonsil kubernetes grid with vmware cloud and and the kubernetes we've actually integrated into vmware cloud foundation folks who are familiar with vmware may remember that a couple of years ago we announced project pacific which was a deep integration of kubernetes into vsphere essentially enabling vsphere to have a kubernetes interface to be natively kubernetes and what that did was it enabled the i.t admins to have direct insight inside of kubernetes clusters to understand what was happening in terms of the containers and pods that that their developers were running it also allowed them to leverage uh their existing vsphere and vmware cloud foundation tooling on those workloads so fast forward today we we have this built in now and what we're doing is actually offering that as a service so that the customer doesn't need to deal with managing it installing it updating any of that stuff instead they can just leverage it they can start creating kubernetes clusters and upstream conformant kubernetes clusters to allow their developers to take advantage of those capabilities but also be able to use their native tooling on it so i think that's really really important is that the it admin really can enable their developers to seamlessly start to build and operate modern applications on top of vmware cloud got it and talk to me about how this is going to empower those it admins to become kubernetes operators yeah well i think that's exactly it you know we talk to a lot of these admins and and they're seeing the desire for kubernetes uh from their lines of business from you know from the app teams and the idea is that when you look start looking at the kubernetes ecosystem there's a whole bunch of new tooling and technology out there we find that people have to spend a lot of time figuring out what the right thing to use is and for a lot of these folks they say hey i've already figured out how to operate applications in production i've got the tooling i've got the standardization i got things like security figured out right super important and so the real benefit of this approach and this deep integration is it allows them to take those those tools those operational best practices that they already have and now apply them to these new workloads fairly seamlessly and so this is really about the power of leveraging all the investments they've made to take those forward with modern applications and the total adjustable market here is pretty big i heard your cto referring to that in an interview in september and i was looking at some recent vmware survey numbers where 80 of customers say they're deploying applications in highly distributed environments that include their own data center multiple clouds uh edge and also customers said hey 90 of our application initiatives are focused on modernization so vmware clearly sees the big tam here yeah it's absolutely massive um you know we see uh many customers the vast majority something like 75 percent are using multiple clouds or on-prem in the cloud we have some customers using even more than that and you see this very large application estate that's spread out across this 
and so you know i think what we're really looking at is how do we enable uh the right sorts of consistency both from an infrastructure perspective enabling things like security but also management across all these environments and by the way it's another exciting thing neglected to mention about this announcement vmware cloud with tonsil services not only includes the tonsil kubernetes grid service giving you that sort of kubernetes uh cluster as a service if you will but it also includes tons of mission control essentials and this is really the next generation of management when you start looking at modern applications and what tons of mission control focuses on is enabling managing kubernetes consistently across clouds and so this is the other really important point is that yes we want to make vmware cloud vmware cloud infrastructure the best place to build and operate applications especially modern ones but we also realize that you know customers are doing all sorts of things right they're in the native cloud whether that's aws or azure or google and they want ways of managing more consistently across all these environments in addition to their vmware environments both in the cloud and on-prem and so tons of mission control really enables that as well and that's another really powerful aspect of this is that it's built in to enable that next level of administration and management that consistency is critical right i mean that's probably one of the biggest benefits that customers are getting is that familiarity with the console the consistency of being able to manage so that they can deploy apps faster um that as businesses are still pivoting and changing direction in light of the pandemics i imagine that that is a huge uh from a business outcomes perspective the workforce productivity there is probably pretty pretty big yeah and i think it's also about managing risk as well you know one of the the biggest worries that we hear from many of the cios uh ctos executives that we talk to at our customers is this uh software supply chain risk like what is it exactly like what are the exact bits that they're running out there right in their applications because the reality is that um those apps are composed of many open source technologies and you know as we saw with solarwinds it's very possible for someone to get in and you know plant malicious code into their source repository such that as it gets built and flows out it'll you know just go out and customers will start using it and it's a huge huge security vulnerability and one thing on that note that customers are particularly worried about is the lack of consistency across their cloud environments that because things are done different ways and the different teams have different processes across different clouds it's easy for small mistakes to creep in there for little openings right that a hacker might be able to go and exploit and so i think this gets back to that notion of consistency and that you're right it's great for productivity but the one i think that's almost in some ways you might say uh for many of these folks more important for is from a security standpoint that they can validate and ensure they're in compliance with their security standards and by the way you know this is uh for most companies a board level discussion right the board is saying hey like do we have the right controls in place because it is um such an important thing and such a critical risk factor it is a critical risk factor we saw you mentioned solar winds 
but just in the last 18 months the the massive changes to the threat landscape the huge rise in ransomware and ddos attacks you know we had this scatterer everybody went home and you've got you know the edge is booming and you've got folks using uh you know not using their vpns and things when they should be so that the fact that that's a board level discussion and that this is going to help from a risk mitigation perspective that consistency that you talked about is huge i think for a customer in any industry yep yeah and it's pretty interesting as well like you mentioned ransomware so we're doing some work on that one as well actually not specifically with this announcement but it's another vmware cloud service that plugs into this uh seamlessly vmware cloud disaster recovery and one of the really cool features that we're announcing at vmworld this week is the ability to actually support and and maybe uh handle ransomware attacks and so the idea there is that if you do get compromised and what typically happens is that the hackers come in and they encrypt you know some of your data and they say hey if you want to get access to it you got to pay us and we'll decrypt it for you but if you have the right dr solution um that's backing up on a fairly continuous basis it means that whatever data might be encrypted you know would only be a small delta like the last let's say hour or two of data right and so what we're looking at is leveraging that dr solution to be able to very rapidly restore specific individual files uh that may have been compromised and so this is like one way that we're helping customers deal with that like obviously we want to put a whole bunch of other security protections in place and we do when we enable them to do that but one thing when you think about security is that it's very much defense in depth that you have multiple layers of the fail-safes there and so this one being kind of like the end result that hackers do get in they do manage to compromise it they do manage to get a hold of it and encrypt it well you still got unencrypted backups that you control and that you have um a very clean delineation and separation from just like kind of an architectural standpoint that the hackers won't be able to get at right so that you can control that and restore it so again you know this is something very top of mind for us and it's funny because we don't always lead with the security angle maybe we should as i'm saying it here but uh but it's something that's very very top of mind for a lot of our customers it's something that's also top of mind for us and that we're focused on it is because it's no longer if we get attacked it's one and they've got to be able to have the right recovery strategy so that they don't have to pay those ransoms and of course we only hear about the big ones like the solar winds and the colonial pipelines and there's many more going on when i get back to vmware cloud with tanzania services talk to me about how this fits into vmware's bigger picture yeah yeah yeah great question thanks for bringing me back i'd love to geek out on some of these things so um but when you take a step back so what we're really doing uh with vmware cloud is trying to provide this really powerful infrastructure layer uh that is available anywhere customers want to run applications and that could be in the public cloud it could be in the data center it could be at the edge it could be at all those locations and you know you mentioned edge earlier and i think we're seeing 
explosive growth there as well and so what we're really doing is driving uh broad optionality in terms of how customers want to adopt these technologies and then as i said we're sort of you know we're kind of going broad many locations we're also building up in each of those locations this notion of ponzu services being seamlessly integrated in doing that uh you know starting now with vmware cloud aws but expanding that to every every location that we have in addition you know we're also really excited another thing we're announcing this week called project arctic now the idea with arctic is really to start driving more choice and flexibility into how customers consume vmware cloud do they consume it as software or as a service and where do they do that so traditionally the only way to get it delivered as a service would be in the public cloud right vmware cloud aws you can click a few buttons and you get a software defined data center set up for you automatically now traditionally on-prem we haven't had that we we did do something pretty powerful uh a year or two back with the release of vmware cloud on dell emc we can deliver a service there but that often required new hardware you know new setup for customers and customers are coming back to us and saying hey like we've got these really large vsphere deployments how do we enable them to take advantage of all this great vmware cloud functionality from where they are today right they say hey we can't rebuild all these overnight but we want to take advantage of vmware cloud today so that's what really what project arctic is focused on it's focused on connecting into these brownfield existing vsphere environments and delivering some of the vmware cloud benefits there things like being able to easily well first of all be able to manage those environments through the vmware cloud console so now you have one place where you can see your on-prem deployments your cloud deployments everything being able to really easily move uh applications between on-prem and the cloud leveraging some of the vmware cloud disaster recovery capabilities i just mentioned like the ransomware example you can now do that even on prem as well because keep in mind it's people aren't attacking you know the hackers aren't attacking just the public cloud they're attacking data centers or anywhere else where these applications might be running and so arctic's a great example of where we're saying hey there's a bunch of cool stuff happening here but let's really meet customers where they're at and many of our customers still have a very large data center footprint still want to maintain that that's really strategic for them or as i said may even want to be extending to the edge so it's really about giving them more of that flexibility so in terms of meeting customers where they are i know vmware has been focused on that for probably its entire history we talk about that on the cube in every vmworld where can customers go like what's the right starting point is this targeted for vmware cloud on aws current customers what's kind of the next steps for customers to learn more about this yeah absolutely so there's a bunch of different ways so first of all there's a tremendous amount of activity happening here at vmworld um just all sorts of breakout sessions like you know detailed demos like all sorts of really cool stuff just a ton of content i'm actually kind of i'm in this new role i'm super excited about it but one thing i'm kind of bummed out about is i don't have as much 
time to go look at all these cool sessions so i highly recommend going and checking those out um you know we have hands-on labs as well which is another great way to test out and try vmware products so hold.vmware.com uh you can go and spin those things up and just kind of take them for a test drive see what they're all about and then if you go to vmc.vmware.com that is vmware cloud right we want to make it very easy to get started whether you're in just a vsphere on-prem customer or whether you already have vmware cloud and aws what you can see is that it's really easy to get started in that there's a ton of value-add services on top of our core infrastructure so it's all about making it accessible making it easy and simple to consume and get started with so there's a ton of options out there and i highly recommend folks go and check out all the things i just mentioned excellent kit thank you for joining me today talking about vmware cloud with tons of services what's new what's exciting the opportunities in it for customers from the i.t admin folks to be empowered to be kubernetes operators to those businesses being able to do essential services in a changing environment and again congratulations on your promotion that's very exciting awesome thank you lisa thank you for having me our pleasure for kit colbert i'm lisa martin you're watching thecube's coverage of vmworld 2021 [Music] you
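The Tanzu Kubernetes Grid service that Kit describes above works declaratively: an admin or developer asks the platform for an upstream-conformant cluster and the service provisions and maintains it. As a rough, non-authoritative sketch of that idea, the snippet below uses the Kubernetes Python client to submit a cluster request; the `run.tanzu.vmware.com/v1alpha1` group, the `TanzuKubernetesCluster` kind, and every spec field shown are assumptions based on the commonly documented CRD shape and should be verified against the actual environment.

```python
# Hypothetical sketch only: request a Tanzu Kubernetes cluster declaratively
# through the Kubernetes API. The CRD group/version/kind and the spec fields
# mirror the commonly documented TanzuKubernetesCluster shape, but they are
# assumptions here; verify the schema exposed in your own environment.
from kubernetes import client, config


def create_tkg_cluster(namespace: str, name: str) -> dict:
    """Submit a cluster request to the supervisor/management cluster."""
    config.load_kube_config()  # use the context pointing at the management plane
    api = client.CustomObjectsApi()

    cluster = {
        "apiVersion": "run.tanzu.vmware.com/v1alpha1",  # assumed group/version
        "kind": "TanzuKubernetesCluster",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "distribution": {"version": "v1.21"},  # illustrative version string
            "topology": {
                "controlPlane": {
                    "count": 3,
                    "class": "best-effort-small",    # illustrative VM class
                    "storageClass": "vsan-default",  # illustrative storage class
                },
                "workers": {
                    "count": 5,
                    "class": "best-effort-medium",
                    "storageClass": "vsan-default",
                },
            },
        },
    }
    # The service reconciles this object into an upstream-conformant cluster,
    # so the admin keeps familiar tooling while developers get clusters on demand.
    return api.create_namespaced_custom_object(
        group="run.tanzu.vmware.com",
        version="v1alpha1",
        namespace=namespace,
        plural="tanzukubernetesclusters",
        body=cluster,
    )


if __name__ == "__main__":
    create_tkg_cluster("dev-team-a", "demo-cluster")
```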
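On the ransomware recovery point discussed above, the benefit of near-continuous replication is that the restore point can be pinned just before the encryption event, so data loss is bounded by the snapshot interval. The following is a minimal, generic sketch of that selection logic only; the snapshot objects and timestamps are hypothetical and this is not the VMware Cloud Disaster Recovery API.

```python
# Minimal, generic sketch: choose the newest clean snapshot taken before a
# detected ransomware encryption event. The snapshot metadata is hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass(frozen=True)
class Snapshot:
    snapshot_id: str
    taken_at: datetime


def pick_restore_point(snapshots: List[Snapshot],
                       compromise_detected_at: datetime) -> Optional[Snapshot]:
    """Return the most recent snapshot strictly older than the compromise time."""
    clean = [s for s in snapshots if s.taken_at < compromise_detected_at]
    return max(clean, key=lambda s: s.taken_at) if clean else None


if __name__ == "__main__":
    now = datetime(2021, 10, 1, 12, 0)
    # Near-continuous protection: a snapshot every 30 minutes over the last day.
    snapshots = [Snapshot(f"snap-{i}", now - timedelta(minutes=30 * i))
                 for i in range(48)]
    restore = pick_restore_point(snapshots,
                                 compromise_detected_at=now - timedelta(hours=1))
    # Data loss is bounded by the snapshot interval (here at most ~30 minutes).
    print(restore)
```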

Published Date : Oct 1 2021


HelloFresh v2


 

>>Hello, and we're here at theCUBE Startup Showcase, made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architecture. HelloFresh was going into hyper-growth mode and realized that in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of the last decade, HelloFresh relied on a monolithic data architecture, and the internal team had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture that possessed many principles of so-called data mesh, even though they didn't use that term. Specifically, the company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture, many of whom are listed here on the slide; the two highlighted in red are joining us today. We're really excited to welcome into theCUBE Clemence Chee, the Global Senior Director for Data at HelloFresh, and Christoph Sawade, who's also a Global Senior Director of Data at HelloFresh. Folks, welcome. Thanks so much for making some time today and sharing your story. >>Thank you very much. Hey, Dave. >>All right, let's start with HelloFresh. You guys are number one in the world in your field; you deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision. >>Yeah, should I start, or Clemence, maybe you take over the first piece, because Clemence has actually had a longer trajectory at HelloFresh. >>Yeah, go ahead, Clemence. >>I mean, yes, approximately six years ago I joined HelloFresh, and I didn't think the startup I was joining would eventually IPO, and just two years later HelloFresh went public. Approximately three years and ten months after HelloFresh was listed on the German stock exchange, which was just last week, HelloFresh was included in the DAX, Germany's leading stock market index, and that, to my mind, is a great, great milestone. I'm really looking forward and am very excited about the future for HelloFresh and all our data. The vision that we have is to become the world's leading food solution group, and there are a lot of attractive opportunities. Recently we launched and expanded into Norway; this was in July, and earlier this year we launched the US brand Green Chef in the UK as well. We're committed to launching continuously in different geographies in the coming years and have a strong pipeline ahead of us, with the acquisition of ready-to-eat companies like Factor in the US and the planned acquisition of Youfoodz in Australia. We're diversifying our offer, now reaching even more untapped customer segments and increasing our total addressable market. So by offering customers a growing range of different alternatives to shop for food and consume meals, we are charging towards this vision and this goal to become the world's leading integrated food solutions group. >>Love it. You guys are on a rocket ship; you're really transforming the industry, and as you expand your TAM, it brings us to sort of the data as a core part of that strategy.
So maybe you guys could talk a little bit about your journey as a company specifically as it relates to your data journey. You began as a start up. You had a basic architecture like everyone. You made extensive use of spreadsheets. You built a Hadoop based system that started to grow and when the company I. P. O. You really started to explode. So maybe describe that journey from a data perspective. >>Yes they saw Hello fresh by 2015 approximately had evolved what amount of classical centralized management set up. So we grew very organically over the years and there were a lot of very smart people around the globe. Really building the company and building our infrastructure. Um This also means that there were a small number of internal and external sources. Data sources and a centralized the I team with a number of people producing different reports, different dashboards and products for our executives for example of our different operations teams, christian company's performance and knowledge was transferred um just via talking to each other face to face conversations and the people in the data where's team were considered as the data wizard or as the E. T. L. Wizard. Very classical challenges. And those et al. Reserves indicated the kind of like a silent knowledge of data management. Right? Um so a central data whereas team then was responsible for different type of verticals and different domains, different geographies and all this setup gave us to the beginning the flexibility to grow fast as a company in 2015 >>christoph anything that might add to that. >>Yes. Um Not expected to that one but as as clement says it right, this was kind of set up that actually work for us quite a while. And then in 2017 when L. A. Freshman public, the company also grew rapidly and just to give you an idea how that looked like. As was that the tech department self actually increased from about 40 people to almost 300 engineers And the same way as a business units as Clemens has described, also grew sustainable, sustainably. So we continue to launch hello fresh and new countries launching brands like every plate and also acquired other brands like much of a factor and with that grows also from a data perspective the number of data requests that centrally we're getting become more and more and more and also more and more complex. So that for the team meant that they had a fairly high mental load. So they had to achieve a very or basically get a very deep understanding about the business. And also suffered a lot from this context switching back and forth, essentially there to prioritize across our product request from our physical product, digital product from the physical from sorry, from the marketing perspective and also from the central reporting uh teams. And in a nutshell this was very hard for these people. And this that also to a situation that, let's say the solution that we have became not really optimal. So in a nutshell, the central function became a bottleneck and slowdown of all the innovation of the company. >>It's a classic case, isn't it? I mean Clements, you see you see the central team becomes a bottleneck and so the lines of business, the marketing team salesman's okay, we're going to take things into our own hands. And then of course I I. T. And the technical team is called in later to clean up the mess. Uh maybe, I mean was that maybe I'm overstating it, but that's a common situation, isn't it? >>Yeah. Uh This is what exactly happened. Right. 
So um we had a bottleneck, we have the central teams, there was always a little of tension um analytics teams then started in this business domains like marketing, trade chain, finance, HR and so on. Started really to build their own data solutions at some point you have to get the ball rolling right and then continue the trajectory um which means then that the data pipelines didn't meet the engineering standards. And um there was an increased need for maintenance and support from central teams. Hence over time the knowledge about those pipelines and how to maintain a particular uh infrastructure for example left the company such that most of those data assets and data sets are turned into a huge step with decreasing data quality um also decrease the lack of trust, decreasing transparency. And this was increasing challenge where majority of time was spent in meeting rooms to align on on data quality for example. >>Yeah. And and the point you were making christoph about context switching and this is this is a point that Jemaah makes quite often is we've we've we've contextualized are operational systems like our sales systems, our marketing system but not our our data system. So you're asking the data team, Okay. Be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain and it start stop, start, stop, it's a paper cut environment and it's just not as productive. But but on the flip side of that is when you think about a centralized organization you think, hey this is going to be a very efficient way, a cross functional team to support the organization but it's not necessarily the highest velocity, most effective organizational structure. >>Yeah, so so I agree with that. Is that up to a certain scale, a centralized function has a lot of advantages, right? That's clear for everyone which would go to some kind of expert team. However, if you see that you actually would like to accelerate that and specific and this hyper growth, right, you wanna actually have autonomy and certain teams and move the teams or let's say the data to the experts in these teams and this, as you have mentioned, right, that increases mental load and you can either internally start splitting your team into a different kind of sub teams focusing on different areas. However, that is then again, just adding another peace where actually collaboration needs to happen busy external sees, so why not bridging that gap immediately and actually move these teams and to end into into the function themselves. So maybe just to continue what, what was Clements was saying and this is actually where over. So Clements, my journey started to become one joint journey. So Clements was coming actually from one of these teams to build their own solutions. I was basically having the platform team called database housed in these days and in 2019 where basically the situation become more and more serious, I would say so more and more people have recognized that this model doesn't really scale In 2019, basically the leadership of the company came together and I identified data as a key strategic asset and what we mean by that, that if we leverage data in a proper way, it gives us a unique competitive advantage which could help us to, to support and actually fully automated our decision making process across the entire value chain. So what we're, what we're trying to do now or what we should be aiming for is that Hello, Fresh is able to build data products that have a purpose. 
We're moving away from the idea. Data is just a by problem products, we have a purpose why we would like to collect this data. There's a clear business need behind that. And because it's so important to for the company as a business, we also want to provide them as a trust versi asset to the rest of the organization. We say there's the best customer experience, but at least in a way that users can easily discover, understand and security access high quality data. >>Yeah, so and and and Clements, when you c J Maxx writing, you see, you know, she has the four pillars and and the principles as practitioners you look at that say, okay, hey, that's pretty good thinking and then now we have to apply it and that's and that's where the devil meets the details. So it's the four, you know, the decentralized data ownership data as a product, which we'll talk about a little bit self serve, which you guys have spent a lot of time on inclement your wheelhouse which is which is governance and a Federated governance model. And it's almost like if you if you achieve the first two then you have to solve for the second to it almost creates a new challenges but maybe you could talk about that a little bit as to how it relates to Hello fresh. >>Yes. So christophe mentioned that we identified economic challenge beforehand and for how can we actually decentralized and actually empower the different colleagues of ours. This was more a we realized that it was more an organizational or a cultural change and this is something that somebody also mentioned I think thought words mentioned one of the white papers, it's more of a organizational or cultural impact and we kicked off a um faced reorganization or different phases we're currently and um in the middle of still but we kicked off different phases of organizational reconstruct oring reorganization, try unlock this data at scale. And the idea was really moving away from um ever growing complex matrix organizations or matrix setups and split between two different things. One is the value creation. So basically when people ask the question, what can we actually do, what shall we do? This is value creation and how, which is capability building and both are equal in authority. This actually then creates a high urge and collaboration and this collaboration breaks up the different silos that were built and of course this also includes different needs of stuffing forward teams stuffing with more, let's say data scientists or data engineers, data professionals into those business domains and hence also more capability building. Um Okay, >>go ahead. Sorry. >>So back to Tzemach did johnny. So we the idea also Then crossed over when she published her papers in May 2019 and we thought well The four colors that she described um we're around decentralized data ownership, product data as a product mindset, we have a self service infrastructure and as you mentioned, Federated confidential governance. And this suited very much with our thinking at that point of time to reorganize the different teams and this then leads to a not only organisational restructure but also in completely new approach of how we need to manage data, show data. >>Got it. Okay, so your business is is exploding. Your data team will have to become domain experts in too many areas, constantly contact switching as we said, people started to take things into their own hands. So again we said classic story but but you didn't let it get out of control and that's important. 
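One lightweight way to make "data as a product with a purpose" concrete is to attach a small, machine-readable descriptor to every product: who owns it, why it exists, where consumers read it, and what service level it promises. The sketch below is a hypothetical illustration of such a descriptor, not HelloFresh's actual metadata model.

```python
# Hypothetical data product descriptor; the fields are illustrative and do not
# represent HelloFresh's real metadata model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataProduct:
    name: str
    domain: str                # owning domain team, e.g. marketing or supply chain
    owner: str                 # accountable person or team alias
    purpose: str               # the business question this product answers
    output_port: str           # where consumers read it (table, topic, API)
    freshness_slo_hours: int   # promise to consumers: maximum staleness
    consumers: List[str] = field(default_factory=list)


recipe_recommendations = DataProduct(
    name="recipe_recommendation_features",
    domain="recommendations",
    owner="data-recs-team@example.com",
    purpose="Features feeding the recipe recommendation model",
    output_port="warehouse.recs.recipe_features",
    freshness_slo_hours=24,
    consumers=["ml-platform", "weekly-menu-planning"],
)
```

A descriptor like this is what makes a dataset discoverable and understandable to consumers beyond the team that built it.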
So we actually have a picture of kind of where you're going today and it's evolved into this Pat, if you could bring up the picture with the the elephant here we go. So I would talk a little bit about the architecture, doesn't show it here, the spreadsheet era but christoph maybe you can talk about that. It does show the Hadoop monolith which exists today. I think that's in a managed managed hosting service, but but you you preserve that piece of it, but if I understand it correctly, everything is evolving to the cloud, I think you're running a lot of this or all of it in A W. S. Uh you've got everybody's got their own data sources, uh you've got a data hub which I think is enabled by a master catalog for discovery and all this underlying technical infrastructure. That is really not the focus of this conversation today. But the key here, if I understand it correctly is these domains are autonomous and not only that this required technical thinking, but really supportive organizational mindset, which we're gonna talk about today. But christoph maybe you could address, you know, at a high level some of the architectural evolution that you guys went through. >>Yeah, sure. Yeah, maybe it's also a good summary about the entire history. So as you have mentioned, right, we started in the very beginning with the model is on the operation of playing right? Actually, it wasn't just one model is both to one for the back end and one for the for the front and and or analytical plane was essentially a couple of spreadsheets and I think there's nothing wrong with spreadsheets, right, allows you to store information, it allows you to transform data allows you to share this information. It allows you to visualize this data, but all the kind of that's not actually separating concern right? Everything in one tool. And this means that obviously not scalable, right? You reach the point where this kind of management set up in or data management of isn't one tool reached elements. So what we have started is we've created our data lake as we have seen here on Youtube. And this at the very beginning actually reflected very much our operational populace on top of that. We used impala is a data warehouse, but there was not really a distinction between borders, our data warehouse and borders our data like the impala was used as a kind of those as the kind of engine to create a warehouse and data like construct itself and this organic growth actually led to a situation as I think it's it's clear now that we had to centralized model is for all the domains that will really lose kimball modeling standards. There was no uniformity used actually build in house uh ways of building materialized use abuse that we have used for the presentation layer, there was a lot of duplication of effort and in the end essentially they were missing feedbacks, food, which helped us to to improve of what we are filled. So in the end, in the natural, as we have said, the lack of trust and that's basically what the starting point for us to understand. Okay, how can we move away and there are a lot of different things that you can discuss of apart from this organizational structure that we have said, okay, we have these three or four pillars from from Denmark. However, there's also the next extra question around how do we implement our talking about actual right, what are the implications on that level? And I think that is there's something that we are that we are currently still in progress. >>Got it. 
Okay, so I wonder if we could talk about switch gears a little bit and talk about the organizational and cultural challenges that you faced. What were those conversations like? Uh let's dig into that a little bit. I want to get into governance as well. >>The conversations on the cultural change. I mean yes, we went through a hyper growth for the last year since obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company which then results that collaboration uh >>got a bit more difficult. Of course >>there are times and changes, you have different different artifacts that you were created um and documentation that were flying around. Um so we were we had to build the company from scratch right? Um Of course this then resulted always this tension which I described before, but the most important part here is that data has always been a very important factor at l a fresh and we collected >>more of this >>data and continued to improve use data to improve the different key areas of our business. >>Um even >>when organizational struggles, the central organizational struggles data somehow always helped us to go through this this kind of change. Right? Um in the end those decentralized teams in our local geography ease started with solutions that serve the business which was very very important otherwise wouldn't be at the place where we are today but they did by all late best practices and standards and I always used sport analogy Dave So like any sport, there are different rules and regulations that need to be followed. These rules are defined by calling the sports association and this is what you can think about data governance and compliance team. Now we add the players to it who need to follow those rules and bite by them. This is what we then called data management. Now we have the different players and professionals, they need to be trained and understand the strategy and it rules before they can play. And this is what I then called data literacy. So we realized that we need to focus on helping our teams to develop those capabilities and teach the standards for how work is being done to truly drive functional excellence in a different domains. And one of our mission of our data literacy program for example is to really empower >>every employee at hello >>fresh everyone to make the right data informs decisions by providing data education that scaled by royal Entry team. Then this can be different things, different things like including data capabilities, um, with the learning paths for example. Right? So help them to create and deploy data products connecting data producers and data consumers and create a common sense and more understanding of each other's dependencies, which is important, for example, S. S. L. O. State of contracts and etcetera. Um, people getting more of a sense of ownership and responsibility. Of course, we have to define what it means, what does ownership means? But the responsibility means. But we're teaching this to our colleagues via individual learning patterns and help them up skill to use. Also, there's shared infrastructure and those self self service applications and overall to summarize, we're still in this progress of of, of learning, we are still learning as well. So learning never stops the tele fish, but we are really trying this um, to make it as much fun as possible. And in the end we all know user behavior has changed through positive experience. 
Uh, so instead of having massive training programs over endless courses of workshops, um, leaving our new journalists and colleagues confused and overwhelmed. >>We're applying um, >>game ification, right? So split different levels of certification where our colleagues can access, have had access points, they can earn badges along the way, which then simplifies the process of learning and engagement of the users and this is what we see in surveys, for example, where our employees that your justification approach a lot and are even competing to collect Those learning path batteries to become the # one on the leader board. >>I love the game ification, we've seen it work so well and so many different industries, not the least of which is crypto so you've identified some of the process gaps uh that you, you saw it is gloss over them. Sometimes I say paved the cow path. You didn't try to force, in other words, a new architecture into the legacy processes. You really have to rethink your approach to data management. So what what did that entail? >>Um, to rethink the way of data management. 100%. So if I take the example of Revolution, Industrial Revolution or classical supply chain revolution, but just imagine that you have been riding a horse, for example, your whole life and suddenly you can operate a car or you suddenly receive just a complete new way of transporting assets from A to B. Um, so we needed to establish a new set of cross functional business processes to run faster, dry faster, um, more robustly and deliver data products which can be trusted and used by downstream processes and systems. Hence we had a subset of new standards and new procedures that would fall into the internal data governance and compliance sector with internal, I'm always referring to the data operations around new things like data catalog, how to identify >>ownership, >>how to change ownership, how to certify data assets, everything around classical software development, which we know apply to data. This this is similar to a new thinking, right? Um deployment, versioning, QA all the different things, ingestion policies, policing procedures, all the things that suffer. Development has been doing. We do it now with data as well. And in simple terms, it's a whole redesign of the supply chain of our data with new procedures and new processes and as a creation as management and as a consumption. >>So data has become kind of the new development kit. If you will um I want to shift gears and talk about the notion of data product and, and we have a slide uh that we pulled from your deck and I'd like to unpack it a little bit. Uh I'll just, if you can bring that up, I'll read it. A data product is a product whose primary objective is to leverage on data to solve customer problems where customers, both internal and external. So pretty straightforward. I know you've gone much deeper and you're thinking and into your organization, but how do you think about that And how do you determine for instance who owns what? How did you get everybody to agree? >>I can take that one. Um, maybe let me start with the data product. So I think um that's an ongoing debate. Right? And I think the debate itself is an important piece here, right? That visit the debate, you clarify what we actually mean by that product and what is actually the mindset. So I think just from a definition perspective, right? 
I think we find the common denominator that we say okay that our product is something which is important for the company has come to its value what you mean by that. Okay, it's it's a solution to a customer problem that delivers ideally maximum value to the business. And yes, it leverages the power of data and we have a couple of examples but it had a fresh year, the historical and classical ones around dashboards for example, to monitor or error rates but also more sophisticated ways for example to incorporate machine learning algorithms in our recipe recommendations. However, I think the important aspects of the data product is a there is an owner, right? There's someone accountable for making sure that the product that we are providing is actually served and is maintained and there are, there is someone who is making sure that this actually keeps the value of that problem thing combined with the idea of the proper documentation, like a product description, right that people understand how to use their bodies is about and related to that peace is the idea of it is a purpose. Right? You need to understand or ask ourselves, Okay, why does this thing exist does it provide the value that you think it does. That leads into a good understanding about the life cycle of the data product and life cycle what we mean? Okay from the beginning from the creation you need to have a good understanding, we need to collect feedback, we need to learn about that. We need to rework and actually finally also to think about okay benefits time to decommission piece. So overall, I think the core of the data product is product thinking 11 right that we start the point is the starting point needs to be the problem and not the solution and this is essentially what we have seen what was missing but brought us to this kind of data spaghetti that we have built there in in Russia, essentially we built at certain data assets, develop in isolation and continuously patch the solution just to fulfill these articles that we got and actually these aren't really understanding of the stakeholder needs and the interesting piece as a result in duplication of work and this is not just frustrating and probably not the most efficient way how the company should work. But also if I build the same that assets but slightly different assumption across the company and multiple teams that leads to data inconsistency and imagine the following too narrow you as a management for management perspective, you're asking basically a specific question and you get essentially from a couple of different teams, different kind of grass, different kind of data and numbers and in the end you do not know which ones to trust. So there's actually much more ambiguity and you do not know actually is a noise for times of observing or is it just actually is there actually a signal that I'm looking for? And the same is if I'm running in a B test right, I have a new future, I would like to understand what has it been the business impact of this feature. I run that specific source in an unfortunate scenario. Your production system is actually running on a different source. You see different numbers. What you've seen in a B test is actually not what you see then in production typical thing then is you're asking some analytics tend to actually do a deep dive to understand where the discrepancies are coming from. The worst case scenario. Again, there's a different kind of source. 
So in the end it's a pretty frustrating scenario and that's actually based of time of people that have to identify the root cause of this divergence. So in a nutshell, the highest degree of consistency is actually achieved that people are just reusing Dallas assets and also in the media talk that we have given right, we we start trying to establish this approach for a B testing. So we have a team but just providing or is kind of owning their target metric associated business teams and they're providing that as a product also to other services including the A B testing team, they'll be testing team can use this information defines an interface is okay I'm joining this information that the metadata of an experiment and in the end after the assignment after this data collection face, they can easily add a graph to the dashboard. Just group by the >>Beatles Hungarian. >>And we have seen that also in other companies. So it's not just a nice dream that we have right. I have actually worked in other companies where we worked on search and we established a complete KPI pipeline that was computing all this information. And this information was hosted by the team and it was used for everything A B test and deep dives and and regular reporting. So uh just one of the second the important piece now, why I'm coming back to that is that requires that we are treating this data as a product right? If you want to have multiple people using the things that I am owning and building, we have to provide this as a trust mercy asset and in a way that it's easy for people to discover and actually work with. >>Yeah. And coming back to that. So this is to me this is why I get so excited about data mesh because I really do think it's the right direction for organizations. When people hear data product they say well, what does that mean? Uh but then when you start to sort of define it as you did, it's it's using data to add value, that could be cutting costs, that could be generating revenue, it could be actually directly you're creating a product that you monetize, So it's sort of in the eyes of the beholder. But I think the other point that we've made is you made it earlier on to and again, context. So when you have a centralized data team and you have all these P NL managers a lot of times they'll question the data because they don't own it. They're like wait a minute. If they don't, if it doesn't agree with their agenda, they'll attack the data. But if they own the data then they're responsible for defending that and that is a mindset change, that's really important. Um And I'm curious uh is how you got to, you know, that ownership? Was it a was it a top down with somebody providing leadership? Was it more organic bottom up? Was it a sort of a combination? How do you decide who owned what in other words, you know, did you get, how did you get the business to take ownership of the data and what is owning? You know, the data actually mean? >>That's a very good question. Dave I think this is one of the pieces where I think we have a lot of learnings and basically if you ask me how we could start the feeling. I think that would be the first piece. Maybe we need to start to really think about how that should be approached if it stopped his ownership. Right? It means somehow that the team has a responsibility to host and self the data efforts to minimum acceptable standards. This minimum dependencies up and down string. The interesting piece has been looking backwards. 
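The A/B testing pattern described above, where one team owns the target metric and publishes it as a product that the experimentation team joins against, boils down to a join on the experiment assignments followed by a group-by per variant. Here is a hedged sketch using pandas; the table and column names are hypothetical.

```python
# Sketch of the pattern described above: join experiment assignments to a
# team-owned KPI product and aggregate per variant. Table and column names
# are hypothetical.
import pandas as pd


def experiment_readout(assignments: pd.DataFrame, kpi: pd.DataFrame) -> pd.DataFrame:
    """assignments: customer_id, experiment_id, variant
    kpi:         customer_id, order_value (owned and published by the KPI team)
    """
    joined = assignments.merge(kpi, on="customer_id", how="left")
    return (
        joined.groupby(["experiment_id", "variant"])["order_value"]
        .agg(["count", "mean"])
        .rename(columns={"count": "customers", "mean": "avg_order_value"})
    )


if __name__ == "__main__":
    assignments = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "experiment_id": ["menu_layout_v2"] * 4,
        "variant": ["control", "treatment", "control", "treatment"],
    })
    kpi = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "order_value": [54.0, 61.5, 48.0, 66.0],
    })
    print(experiment_readout(assignments, kpi))
```

Because every consumer reads the same owned KPI product, the experiment readout and the production dashboards come from one source, which is the consistency point made above.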
What what's happening is that under that definition has actually process that we have to go through is not actually transferring ownership from the central team to the distributor teams. But actually most cases to establish ownership, I make this difference because saying we have to transfer ownership actually would erroneously suggests that the data set was owned before. But this platform team, yes, they had the capability to make the changes on data pipelines, but actually the analytics team, they're always the ones who had the business understands, you use cases and but no one actually, but it's actually expensive expected. So we had to go through this very lengthy process and establishing ownership. We have done that, as in the beginning, very naively. They have started, here's a document here, all the data assets, what is probably the nearest neighbor who can actually take care of that and then we we moved it over. But the problem here is that all these things is kind of technical debt, right? It's not really properly documented, pretty unstable. It was built in a very inconsistent over years and these people who have built this thing have already left the company. So there's actually not a nice thing that is that you want to see and people build up a certain resistance, e even if they have actually bought into this idea of domain ownership. So if you ask me these learnings, but what needs to happen as first, the company needs to really understand what our core business concept that they have, they need to have this mapping from. These are the core business concept that we have. These are the domain teams who are owning this concept and then actually link that to the to the assets and integrated better with both understanding how we can evolve actually, the data assets and new data build things new in the in this piece in the domain. But also how can we address reduction of technical death and stabilizing what we have already. >>Thank you for that christoph. So I want to turn a direction here and talk about governance and I know that's an area that's passionate, you're passionate about. Uh I pulled this slide from your deck, which I kind of messed up a little bit sorry for that, but but by the way, we're going to publish a link to the full video that you guys did. So we'll share that with folks. But it's one of the most challenging aspects of data mesh, if you're going to decentralize you, you quickly realize this could be the Wild West as we talked about all over again. So how are you approaching governance? There's a lot of items on this slide that are, you know, underscore the complexity, whether it's privacy, compliance etcetera. So, so how did you approach this? >>It's yeah, it's about connecting those dots. Right. So the aim of the data governance program is about the autonomy of every team was still ensuring that everybody has the right interoperability. So when we want to move from the Wild West riding horses to a civilised way of transport, um you can take the example of modern street traffic, like when all participants can manoeuvre independently and as long as they follow the same rules and standards, everybody can remain compatible with each other and understand and learn from each other so we can avoid car crashes. So when I go from country to country, I do understand what the street infrastructure means. How do I drive my car? I can also read the traffic lights in the different signals. 
So likewise, as a business, at HelloFresh we do operate autonomously and consequently need to follow the external and internal rules and standards set forth by the jurisdictions in which we operate. So in order to prevent a car crash, we need to at least ensure compliance with regulations, to account for society's and our customers' increasing concern with data protection and privacy. So teaching, advocating and evangelizing this to everyone in the company was a key communication strategy. And of course, I mentioned data privacy and external factors; the same goes for internal regulations and processes that help our colleagues adapt to this very new environment. So when I mentioned before the new way of thinking, the new way of dealing with and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, this means that data governance provides a framework for managing our people, the processes, technology and culture around our data traffic. And those components must come together in order to have an effective program providing at least a common denominator, which is especially critical for shared datasets that we have across our different geographies, managed and shared applications on shared infrastructure, which are then consumed by centralized processes, for example master data and all the metrics and KPIs that are also used for central steering. It's a big change, Dave, right? And our ultimate goal is to have this non-invasive, federated, automated and computational governance, and for that we can't just talk about it. We actually have to go deep, use case by use case and PoC by PoC, and generate learnings with the different teams. And this would be a classical approach of identifying the target structure, the target status, and matching it with the current status, by identifying it together with the business teams in the different domains, having a risk assessment, for example, to increase transparency, because a lot of teams might not even know what kind of situation they are in. And this is where the training and this piece of data literacy comes into place, where we go in and train based on the findings and the most valuable use cases, and based on that help our teams to make this change and increase their capability, with a little bit more than hand-holding, but a lot of guidance. >>Can I chime in quickly, Dave, if you'll allow me? I mean, there's a lot to the governance piece, but I think this is important. If you're talking about documentation, for example, yes, we can go from team to team and tell these people how you have to document your data in the data catalog, or that you have to establish data contracts and so on and so forth. But if you would like to build data products at scale following actual governance, we need to think about automation, right? We need to think about a lot of things that we can learn from engineering. And that starts with simple things: if we would like to build up trust in our data products, and actually want to apply the same rigor and the best practices that we know from engineering, there are things that we can do, and we should probably think about what we can copy. One example might be service level agreements, service level objectives and service level indicators, the things that, on an engineering level,
if we're providing services, represent the promises we made to our customers or consumers. These are the internal objectives that help us to keep those promises, and they are the way we track ourselves, how we are doing. And this is just one example of where the federated governance comes into play, right? In an ideal world, we should not just talk about data as a product but also about data product as code. That is to say, as much as possible, give the engineers the tools that they are familiar with, and not ask the product managers, for example, to document their data assets in the data catalog, but make it part of the configuration. Have this as a CI/CD, a continuous delivery pipeline, as we typically see in other engineering tasks and services. We say, okay, there is configuration, we can think about PII, we can think about data quality monitoring, we can think about the ingestion, the data catalog and so on and so forth. I think ideally the data products will become a certain kind of template that can be deployed and is actually rejected or verified at build time, before we actually deploy them to production. >>Yeah. So it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phase of it, call it phase zero, where there's learning, there's literacy, there's training, education, there's kind of self-governance, and then there's some kind of oversight, a lot of manual stuff going on, and then you're trying to be process builders at this phase, and then you codify it and then you can automate it. Is that fair? >>Yeah, I would rather think about automation as early as possible along the way. And yes, there need to be certain rules, but then actually start, use case by use case: is there any small piece that we can already automate? If possible, roll that out and then actually extend it step by step. >>Is there a role, though, that adjudicates that? Is there a central chief data officer who is responsible for making sure people are complying, or how do you handle that? >>I mean, from a platform perspective, yes, we have a centralized team to implement certain pieces that we say are important and would actually like to implement. However, that team is working very closely with the governance department. So it's Clemence's piece to understand and define the policies that need to be implemented. >>So Clemence, essentially it's your responsibility to make sure that the policy is being followed. And then as you were saying, Christoph, you're trying to compress the time to automation as fast as possible, right? >>So, what needs to be really clear is that it's always a split effort, right? So you can't just do one thing or the other thing; everything really goes hand in hand, because for the right automation, for the right engineering tooling, we need to have the transparency first. I mean, code needs to be coded, so we kind of need to operate on the same level, with the right understanding. So there are actually two things that are important: one is policies and guidelines, but not only that, because more importantly, or equally important, is to align with the end users and the tech teams and engineering, and really bridge between the business value, the business teams, and the engineering teams. >>Got it.
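To make the "data product as code" idea above a bit more concrete, here is a minimal sketch of a declarative product descriptor that a continuous delivery pipeline could verify and reject at build time. The fields, names and rules are illustrative assumptions, not HelloFresh's actual template.

from dataclasses import dataclass, field
from datetime import timedelta


@dataclass
class DataProductSpec:
    """Hypothetical descriptor kept in the repository next to the pipeline code."""
    name: str
    owner: str                          # accountable domain team
    purpose: str                        # why this product exists
    pii_columns: list = field(default_factory=list)
    freshness_slo: timedelta = timedelta(hours=24)  # promise made to consumers


def validate(spec: DataProductSpec) -> list:
    """Return a list of violations; a CI step would fail the build if any are found."""
    errors = []
    if not spec.owner:
        errors.append("missing owner")
    if not spec.purpose:
        errors.append("missing purpose / documentation")
    if spec.freshness_slo <= timedelta(0):
        errors.append("freshness SLO must be positive")
    return errors


if __name__ == "__main__":
    spec = DataProductSpec(
        name="orders_daily",                       # invented example product
        owner="supply-chain-analytics",
        purpose="Daily order counts used for central steering KPIs",
        pii_columns=["customer_email"],
    )
    problems = validate(spec)
    print("rejected:" if problems else "verified at build time", problems)

In practice the same descriptor could also feed the data catalog entry and the PII and quality-monitoring checks Christoph mentions, so documentation stays part of the configuration rather than a separate manual step.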
So just a couple more questions, because we've got to wrap. I want to talk a little bit about the business outcome. I know it's hard to quantify, and I'll talk about that in a moment, but major learnings: we've got some of the challenges that you cited, I'll just put them up here. We don't have to go into detail on this, but I just wanted to share it with folks. But my question, I mean, this is the advice-for-your-peers question: if you had to do it differently, if you had a do-over or a Mulligan, as we like to say for you golfers, what would you do differently? >>I mean, can we start with the transformational challenge, understanding that it's also a high load of cultural change. I think this is important, that a particular communication strategy needs to be put into place and people really need to be supported, right? So it's not that we go in and say, well, we have to change towards data mesh; it's human nature, you know, we're kind of resistant to change, right? It's uncomfortable. So we need to take that away by training and by communicating. Chris, do you want to add something to that? >>Definitely. I think the point that I have also made before, right, we need to acknowledge that data mesh is an architecture of scale, right? It's something that is necessary for huge companies who are building data products at scale. I mean, Dave, you mentioned it, right, there are a lot of advantages to having a centralized team, but at some point it may make sense to actually decentralize. And at this point, right, if you think about data mesh, you have to recognize that you're not building something on a green field. And I think there's a big learning, which is also reflected here on the slide: don't underestimate your baggage. Typically you come to a point where the old model doesn't work anymore, and at HelloFresh, right, we lost our trust in our data, and actually we have seen certain risks of slowing down our innovation, so this was triggering the need to actually change something. So this transition implies that you typically have a lot of technical debt accumulated over years, and I think what we have learned is that potentially we have decentralized some assets too early, not actually taking into account the maturity of the teams we were distributing to, and now we are actually in the phase of correcting pieces of that. Right? But I think if you start from scratch, you have to understand, okay, are my teams actually ready for taking on these new capabilities, and you have to make sure that, before this decentralization, you build up these capabilities in the teams, and as Clemence has mentioned, right, make sure that you take the people on your journey. I think these are the pieces; it also comes with this knowledge gap, right, that we need to think about: hiring, literacy, the technical debt I just talked about. And I think the last piece that I would add now, which is not here on the slide deck, is that, from our perspective, we started on the analytical layer because that's kind of where things are exploding, right, this is where people feel the pain. But through a lot of the efforts that we have started to actually modernize the current state towards data products, towards data mesh,
we've understood that it always comes down, basically, to a proper shape of our operational plane, and I think what needs to happen is, I think we got through a lot of pains, but the learning here is that there needs to really be a commitment from the company that this needs to happen, and to act on it. >>I think that last point you made is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant, but through data product thinking, decentralized data organizations really have to operationalize in order to scale. So these decisions around data architecture and organization, they're fundamental and lasting. It's not necessarily about an individual project or ROI; there are going to be projects and sub-projects within this architecture. But the architectural decision itself is organizational, it's cultural, and it's about the best approach to support your business at scale. It really speaks to what you are, who you are as a company, how you operate, and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts and then we'll wrap. >>Maybe quickly, please. Yeah, maybe just jumping on this piece that you have mentioned, right, the target architecture. If we talk about these pieces, people often have this picture in mind, like, okay, there are different kinds of stages: we have sources, we have an ingestion layer, we have transformation and presentation layers, and then we're basically putting a lot of technology on top of that as kind of our target architecture. However, I think what we really need to make sure is that we have these different kinds of views, right? We need to understand what are actually the capabilities that we need for our new goals, how it looks and feels from the different personas' and experience views, and then finally that should lead to the target architecture from a technical perspective. Maybe just to give an outlook on what we're planning to do, how we want to move that forward: based on our strategy, we would like to increase data maturity as a whole across the entire company, and this is kind of a framework around the business strategy, and it breaks down into four pillars as well. People, meaning the data culture, data literacy, data organizational structure and so on. We're talking about governance, as Clemence has actually mentioned, right: compliance, governance, data management and so on. We talk about technology, and I think we could talk for hours about that one; it's around the data platform, the data science platform. And then finally also about enablement through data, meaning we need to think about data quality, data accessibility, data science and data monetization. >>Great, thank you, Christoph. Clemence, you bring us home, give us your final thoughts.
>>I can just agree with Christoph that what's important is to understand what kind of maturity people have, what the maturity level is, where the company, the people and the organization are, and really understand what kind of change applies to those four pillars, for example, and what needs to be tackled first. And this is not very clear from the very beginning; of course it's kind of like a greenfield, you come up with must-wins, with things that you really want to do, out of theory and out of different white papers. Only if you really start conducting the first initiatives do you understand, okay, where we have to put the pieces together and where did I miss out on one of those four different pillars: people, process, technology and governance, right? And then do that kind of integration step by step, small steps by small steps, not boiling the ocean, so that you're capable and ready to identify the gaps and see where you can either fill the gaps, or where you have to increase maturity first and train people, or improve your tech stack. >>You know, HelloFresh is an excellent example of a company that is innovating. It was not born in Silicon Valley, which I love. It's a global company. And I've got to ask you guys, it seems like this is an amazing place to work. You guys hiring? >>Yes, >>definitely. We do. >>As mentioned, that was one of the aspects of distributing. And actually we are hiring as an entire company, specifically for data. I think there are a lot of open roles, seriously. Please visit our page for data engineering and data product management, and Clemence has a lot of roles that he can speak about. But yes. >>Guys, thanks so much for sharing with theCUBE audience. You're pioneers, and we look forward to collaborating in the future to track your progress. We really want to thank you for your time. >>Thank you very much. Thank you very much, Dave. >>And thank you for watching theCUBE's startup showcase made possible by AWS. This is Dave Vellante. We'll see you next time. >>Yeah.

Published Date : Sep 20 2021



Clemence W. Chee & Christoph Sawade, HelloFresh


 

(upbeat music) >> Hello everyone. We're here at theCUBE startup showcase made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architectures. HelloFresh was going into hyper-growth mode and realized that in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of the last decade, HelloFresh relied on a monolithic data architecture and the internal team it had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture, which possessed many principles of so-called data mesh, even though they didn't use that term specifically. The company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture many of whom are listed here on this slide. Two are highlighted in red and joining us today. We're really excited to welcome you to theCUBE, Clemence Chee, who is the global senior director for data at HelloFresh, and Christoph Sawade, who's the global senior director of data also of course at HelloFresh. Folks, welcome. Thanks so much for making some time today and sharing your story. >> Thank you very much. >> Thanks, Dave. >> All right, let's start with HelloFresh. You guys are number one in the world in your field. You deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision. >> Yeah. Should I start or Clemence? Maybe take over the first piece because Clemence has actually been longer a director at HelloFresh. >> Yeah go ahead Clemence. >> I mean, yes, about approximately six years ago I joined and HelloFresh, and I didn't think about the startup I was joining would eventually IPO. And just two years later, HelloFresh went public. And approximately three years and 10 months after HelloFresh was listed on the German stock exchange which was just last week, HelloFresh was included in the DAX Germany's leading stock market index and that, to mind a great, great milestone, and I'm really looking forward and I'm very excited for the future for HelloFresh and also our data. The vision that we have is to become the world's leading food solution group. And there are a lot of attractive opportunities. So recently we did launch and expand in Norway. This was in July. And earlier this year, we launched the US brand, Green Chef, in the UK as well. We're committed to launch continuously different geographies in the next coming years and have a strong path ahead of us. With the acquisition of ready to eat companies like factor in the US and the plant acquisition of Youfoodz in Australia, we are diversifying our offer, now reaching even more and more untapped customer segments and increase our total address for the market. So by offering customers and growing range of different alternatives to shop food and to consume meals, we are charging towards this vision and this goal to become the world's leading integrated food solutions group. >> Love it. You guys are on a rocket ship. You're really transforming the industry. And as you expand your TAM, it brings us to sort of the data as a core part of that strategy. 
So maybe you guys could talk a little bit about your journey as a company, specifically as it relates to your data journey. I mean, you began as a startup, you had a basic architecture and like everyone, you've made extensive use of spreadsheets, you built a Hadoop based system that started to grow. And when the company IPO'd, you really started to explode. So maybe describe that journey from a data perspective. >> Yes, Dave. So HelloFresh by 2015, approximately had evolved what amount, a classical centralized data management set up. So we grew very organically over the years, and there were a lot of very smart people around the globe, really building the company and building our infrastructure. This also means that there were a small number of internal and external sources, data sources, and a centralized BI team with a number of people producing different reports, different dashboards and, and products for our executives, for example, or for different operations teams to see a company's performance and knowledge was transferred just by our talking to each other face-to-face conversations. And the people in the data warehouse team were considered as the data wizard or as the ETL wizard. Very classical challenges. And it was ETL, who reserved, indicated the kind of like a style of knowledge of data management, right? So our central data warehouse team then was responsible for different type of verticals in different domains, different geographies. And all this setup gave us in the beginning, the flexibility to grow fast as a company in 2015. >> Christoph, anything to add to that? >> Yes, not explicitly to that one, but as, as Clemence said, right, this was kind of the setup that actually worked for us quite a while. And then in 2017, when HelloFresh went public, the company also grew rapidly. And just to give you an idea how that looked like as well, the tech departments have actually increased from about 40 people to almost 300 engineers. And in the same way as the business units, as there Clemence has described, also grew sustainably. So we continue to launch HelloFresh in new countries, launched new brands like Every Plate, and also acquired other brands like we have Factor. And that grows also from a data perspective, the number of data requests that the central (mumbles), we're getting become more and more and more, and also more and more complex. So that for the team meant that they had a fairly high mental load. So they had to achieve a very, or basically get a very deep understanding about the business and also suffered a lot from this context, switching back and forth. Essentially, they had to prioritize across our product requests from our physical product, digital product, from a physical, from, sorry, from the marketing perspective, and also from the central reporting teams. And in a nutshell, this was very hard for these people, and that altered situations that let's say the solution that we have built. We can not really optimal. So in a, in a, in a, in a nutshell, the central function became a bottleneck and slow down of all the innovation of the company. >> It's a classic case. Isn't it? I mean, Clemence, you see, you see the central team becomes a bottleneck, and so the lines of business, the marketing team, sales teams say "Okay, we're going to take things into our own hands." And then of course IT and the technical team is called in later to clean up the mess. Maybe, maybe I'm overstating it, but, but that's a common situation. Isn't it? 
>> Yeah this is what exactly happened. Right. So we had a bottleneck, we had those central teams, there was always a bit of tension. Analytics teams then started in those business domains like marketing, supply chain, finance, HR, and so on started really to build their own data solutions. At some point you have to get the ball rolling, right? And then continue the trajectory, which means then that the data pipelines didn't meet the engineering standards. And there was an increased need for maintenance and support from central teams. Hence over time, the knowledge about those pipelines and how to maintain a particular infrastructure, for example, left the company, such that most of those data assets and data sets that turned into a huge debt with decreasing data quality, also decreasing lack of trust, decreasing transparency. And this was an increasing challenge where a majority of time was spent in meeting rooms to align on, on data quality for example. >> Yeah. And the point you were making Christoph about context switching, and this is, this is a point that Zhamak makes quite often as we've, we've, we've contextualized our operational systems like our sales systems, our marketing systems, but not our, our data systems. So you're asking the data team, okay, be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain and it's start, stop, start, stop. It's a paper cut environment, and it's just not as productive. But, but, and the flip side of that is when you think about a centralized organization, you think, hey, this is going to be a very efficient way across functional team to support the organization, but it's not necessarily the highest velocity, most effective organizational structure. >> Yeah. So, so I agree with that piece, that's up to a certain scale. A centralized function has a lot of advantages, right? So it's a tool for everyone, which would go to a destined kind of expert team. However, if you see that you actually would like to accelerate that in specific as the type of growth. But you want to actually have autonomy on certain teams and move the teams, or let's say the data to the experts in these teams. And this, as you have mentioned, right, that increases mental load. And you can either internally start splitting your team into different kinds of sub teams focusing on different areas, however, that is then again, just adding another piece where actually collaboration needs to happen because the external seized, so why not bridging that gap immediately and actually move these teams end to end into the, into the function themselves. So maybe just to continue what Clemence was saying, and this is actually where our, so, Clemence and my journey started to become one joint journey. So Clemence was coming actually from one of these teams who builds their own solutions. I was basically heading the platform team called data warehouse team these days. And in 2019, where (mumbles) become more and more serious, I would say, so more and more people have recognized that this model does not really scale, in 2019, basically the leadership of the company came together and identified data as a key strategic asset. And what we mean by that, that if he leveraged it in a, in a, an appropriate way, it gives us a unique, competitive advantage, which could help us to, to support and actually fully automate our decision making process across the entire value chain. 
So once we, what we're trying to do now, or what we would be aiming for is that HelloFresh is able to build data products that have a purpose. We're moving away from the idea that it's just a bi-product. We have a purpose why we would like to collect this data. There's a clear business need behind that. And because it's so important to, for the company as a business, we also want to provide them as a trustworthy asset to the rest of the organization. We'd say, this is the best customer experience, but at least in a way that users can easily discover, understand and securely access, high quality data. >> Yeah. So, and, and, and Clemence, when you see Zhamak's writing, you see, you know, she has the four pillars and the principles. As practitioners, you look at that say, okay, hey, that's pretty good thinking. And then now we have to apply it. And that's where the devil meets the details. So it's the for, the decentralized data ownership, data as a product, which we'll talk about a little bit, self-serve, which you guys have spent a lot of time on, and Clemence your wheelhouse, which is, which is governance and a federated governance model. And it's almost like if you, if you achieve the first two, then you have to solve for the second two, it almost creates a new challenges, but maybe you could talk about that a little bit as to how it relates to HelloFresh. >> Yes. So Chris has mentioned that we identified kind of a challenge beforehand and said, how can we actually decentralized and actually empower the different colleagues of ours? And this was more a, we realized that it was more an organizational or a cultural change. And this is something that someone also mentioned. I think ThoughtWorks mentioned one of the white papers, it's more of an organizational or a cultural impact. And we kicked off a phased reorganization, or different phases we're currently on, in the middle of still, but we kicked off different phases of organizational restructuring or reorganization trying to lock this data at scale. And the idea was really moving away from ever growing complex matrix organizations or matrix setups and split between two different things. One is the value creation. So basically when people ask the question, what can we actually do? What should we do? This is value creation and the how, which is capability building, and both are equal in authority. This actually then creates a high urge in collaboration and this collaboration breaks up the different silos that were built. And of course, this also includes different needs of staffing for teams staffing with more, let's say data scientists or data engineers, data professionals into those business domains, enhance, or some more capability building. >> Okay, go ahead. Sorry. >> So back to Zhamak Dehghani. So we, the idea also then crossed over when she published her papers in May, 2019. And we thought, well, the four pillars that she described were around decentralized data ownership, product, data as a product mindset, we have a self-service infrastructure. And as you mentioned, federated computational governance. And this suited very much with our thinking at that point of time to reorganize the different teams and this then that to not only organizational restructure, but also in completely new approach of how we need to manage data, through data. >> Got it. Okay. So your businesses is exploding. 
The data team was having to become domain experts to many areas, constantly context switching as we said, people started to take things into their own hands. So again, we said classic story, but, but you didn't let it get out of control and that's important. And so we, we actually have a picture of kind of where you're going today and it's evolved into this, Pat, if you could bring up the picture with the, the elephant, here we go. So I will talk a little bit about the architecture. It doesn't show it here, the spreadsheet era, but Christoph, maybe you could talk about that. It does show the Hadoop monolith, which exists today. I think that's in a managed hosting service, but, but you, you preserve that piece of it. But if I understand it correctly, everything is evolving to the cloud. I think you're running a lot of this or all of it in AWS. You've got, everybody's got their own data sources. You've got a data hub, which I think is enabled by a master catalog for discovery and all this underlying technical infrastructure that is, is really not the focus of this conversation today. But the key here, if I understand correctly is these domains are autonomous and that not only this required technical thinking, but really supportive organizational mindset, which we're going to talk about today. But, but Christoph, maybe you could address, you know, at a high level, some of the architectural evolution that you guys went through. >> Yeah, sure. Yeah. Maybe it's also a good summary about the entire history. So as you have mentioned, right, we started in the very beginning, it's a monolith on the operational plan, right? Actually it wasn't just one model it was two, one for the backend and one for the front end. And our analytical plan was essentially a couple of spreadsheets. And I think there's nothing wrong with spreadsheets, but it allows you to store information, it allows you to transform data, it allows you to share this information, it allows you to visualize this data, but all kind of, it's not actually separating concern, right? Every single one tool. And this means that it's obviously not scalable, right? You reach the point where this kind of management's set up in, or data management is in one tool, reached elements. So what we have started is we created our data lake, as we have seen here on our dupe. And just in the very beginning actually reflected very much our operation upon this. On top of that, we used Impala as a data warehouse, but there was not really a distinction between what is our data warehouse and what is our data lakes as the Impala was used as kind of both as a kind of engine to create a warehouse and data lake constructed itself. And this organic growth actually led to a situation. As I think it's clear now that we had the centralized model as, for all the domains that were really lose Kimball, the modeling standards and there's new uniformity we used to actually build, in-house, a base of building materialized use, of use that we have used for the presentation there. There was a lot of duplication of effort. And in the end, essentially the amendments and feedback tool, which helped us to, to improve of what we, have built during the end in a natural, as you said, the lack of trust. And this basically was a starting point for us to understand, okay, how can we move away? And there are a lot of different things that we can discuss of apart from this organizational structure that we have set up here, we have three or four pillars from Zhamak. 
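As a rough illustration of the discovery role the master catalog plays in the data hub described above, here is a minimal sketch of a catalog that domain teams register their datasets with and that consumers can search. The structure and names are invented for illustration and are not HelloFresh's actual implementation.

from dataclasses import dataclass


@dataclass
class CatalogEntry:
    """Hypothetical metadata a domain team publishes for one of its datasets."""
    name: str
    domain: str
    description: str
    tags: list


class Catalog:
    """Toy in-memory catalog; a real data hub would persist and secure this."""
    def __init__(self):
        self._entries = {}

    def register(self, entry: CatalogEntry) -> None:
        # Each autonomous domain registers its own assets.
        self._entries[entry.name] = entry

    def search(self, keyword: str) -> list:
        # Consumers discover assets without asking a central team.
        keyword = keyword.lower()
        return [
            e for e in self._entries.values()
            if keyword in e.description.lower() or keyword in [t.lower() for t in e.tags]
        ]


catalog = Catalog()
catalog.register(CatalogEntry(
    name="fct_orders",
    domain="supply_chain",
    description="One row per customer order, refreshed daily",
    tags=["orders", "core"],
))
print([e.name for e in catalog.search("orders")])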
However, there's also the next, extra question around, how do we implement product, right? What are the implications on that level and I think that is, that's something that we are, that we are currently still in progress. >> Got it. Okay. So I wonder if we could talk about, switch gears a little bit, and talk about the organizational and cultural challenges that you faced. What were those conversations like? And let's, let's dig into that a little bit. I want to get into governance as well. >> The conversations on the cultural change. I mean, yes, we went through a hyper growth through the last year, and obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company, which then results that collaborations got a bit more difficult. Of course, the time zone changes. You have different, different artifacts that you had recreated in documentation that were flying around. So we were, we had to build the company from scratch, right? Of course, this then resulted always this tension, which I described before. But the most important part here is that data has always been a very important factor at HelloFresh, and we collected more of this data and continued to improve, use data to improve the different key areas of our business. Even when organizational struggles like the central (mumbles) struggles, data somehow always helped us to grow through this kind of change, right? In the end, those decentralized teams in our local geographies started with solutions that serve the business, which was very, very important. Otherwise, we wouldn't be at the place where we are today, but they did violate best practices and standards. And I always use the sports analogy, Dave. So like any sport, there are different rules and regulations that need to be followed. These routes are defined by, I'll call it, the sports association. And this is what you can think about other data governance and then our compliance team. Now we add the players to it who need to follow those rules and abide by them. This is what we then call data management. Now we have the different players, the professionals they also need to be trained and understand the strategy and the rules before they can play. And this is what I then called data literacy. So we realized that we need to focus on helping our teams to develop those capabilities and teach the standards for how work is being done to truly drive functional excellence in the different domains. And one of our ambition of our data literacy program for example, is to really empower every employee at HelloFresh, everyone, to make the right data-informed decisions by providing data education that scales (mumbles), and that can be different things. Different things like including data capabilities with, in the learning path for example, right? So help them to create and deploy data products, connecting data, producers, and data consumers, and create a common sense and more understanding of each other's dependencies, which is important. For example, SIS, SLO, state of contracts, et cetera, people get more of a sense of ownership and responsibility. Of course, we have to define what it means. What does ownership means? What does responsibility mean? But we are teaching this to our colleagues via individual learning patterns and help them upscale to use also their shared infrastructure, and those self-service data applications. And of all to summarize, we are still in this progress of learning. We're still learning as well. 
So learning never stops at Hello Fresh, but we are really trying this to make it as much fun as possible. And in the end, we all know user behavior is changed through positive experience. So instead of having massive training programs over endless courses of workshops, leaving our new joiners and colleagues confused and overwhelmed, we're applying gamification, right? So split different levels of certification where our colleagues, can access, have had access points. They can earn badges along the way, which then simplifies the process of learning and engagement of the users. And this is what we see in surveys, for example, where our employees value this gamification approach a lot and are even competing to collect those learning pet badges, to become the number one on the leaderboard. >> I love the gamification. I mean, we've seen it work so well in so many different industries, not the least of which is crypto. So you've identified some of the process gaps that you, you saw, you just gloss over them. Sometimes I say, pave the cow path. You didn't try to force. In other words, a new architecture into the legacy processes, you really had to rethink your approach to data management. So what did that entail? >> To rethink the way of data management, 100%. So if I take the example of revolution, industrial revolution or classical supply chain revolution, but just imagine that you have been riding a horse, for example, your whole life, and suddenly you can operate a car or you suddenly receive just a complete new way of transporting assets from A to B. So we needed to establish a new set of cross-functional business processes to run faster, drive faster, more robustly, and deliver data products which can be trusted and used by downstream processes and systems. Hence we had a subset of new standards and new procedures that would fall into the internal data governance and compliance sector. With internal, I'm always referring to the data operations around new things like data catalog, how to identify ownership, how to change ownership, how to certify data assets, everything around classical is software development, which we now apply to data. This, this is some old and new thinking, right? Deployment, versioning, QA, all the different things, ingestion policies, the deletion procedures, all the things that software development has been doing, we do it now with data as well. And it's simple terms, it's a whole redesign of the supply chain of our data with new procedures and new processes in asset creation, asset management and asset consumption. >> So data's become kind of the new development kit, if you will. I want to shift gears and talk about the notion of data product, and we have a slide that, that we pulled from your deck. And I'd like to unpack it a little bit. I'll just, if you can bring that up, I'll, I'll read it. A data product is a product whose primary objective is to leverage on data to solve customer problems, where customers are both internal and external. so pretty straightforward. I know you've, you've gone much deeper in your thinking and into your organization, but how do you think about that and how do you determine for instance, who owns what, how did you get everybody to agree? >> I can take that one. Maybe let me start as a data product. So I think that's an ongoing debate, right? And I think the debate itself is the important piece here, right? You mentioned the debate, you've clarified what we actually mean by that, a product, and what is actually the mindset. 
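Since Clemence describes applying classical software development practices such as QA and versioning to data, here is a minimal sketch of what an automated quality check on a data asset could look like before it is published. The table, columns and rules are invented for illustration only.

import pandas as pd

# Hypothetical extract of a data asset about to be published.
orders = pd.DataFrame({
    "order_id": [101, 102, 103],
    "customer_id": [1, 2, 2],
    "amount": [39.99, 54.50, 12.00],
})


def quality_checks(df: pd.DataFrame) -> list:
    """Return a list of failed checks; an empty list means the asset may ship."""
    failures = []
    if df.empty:
        failures.append("asset is empty")
    if df["order_id"].duplicated().any():
        failures.append("order_id is not unique")
    if df["amount"].lt(0).any():
        failures.append("negative order amounts found")
    if df[["order_id", "customer_id"]].isna().any().any():
        failures.append("null keys found")
    return failures


print(quality_checks(orders))   # -> [] if the asset passes

Run as part of the asset creation and management process Clemence mentions, a check like this plays the same role that unit tests and QA gates play for application code.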
So I think just from a definition perspective, right? I think we find the common denominator that we say, okay, that our product is something which is important for the company that comes with value. What do you mean by that? Okay. It's a solution to a customer problem that delivers ideally maximum value to the business. And yes, leverage is the power of data. And we have a couple of examples, and I'll hit refresh here, the historical and classical ones around dashboards, for example, to monitor our error rates, but also more sophisticated based for example, to incorporate machine learning algorithms in our recipe recommendation. However, I think the important aspects of a data product is A: there is an owner, right? There's someone accountable for making sure that the product that you're providing is actually served and has maintained. And there are, there's someone who's making sure that this actually keeps the value of what we are promising. Combined with the idea of the proper documentation, like a product description, right? The people understand how to use it. What is this about? And related to that piece is the idea of, there's a purpose, right? We need to understand or ask ourselves, okay, why does a thing exist? Does it provide the value that we think it does? Then it leads in to a good understanding of what the life cycle of the data product and product life cycle. What do we mean? Okay. From the beginning, from the creation, you need to have a good understanding. You need to collect feedback. We need to learn about that, you need to rework, and actually finally, also to think about, okay, when is it time to decommission that piece So overall I think the core of this data product is product thinking 101, right? That we start, the point is, the starting point needs to be the problem and not the solution. And this is essentially what we have seen, what was missing, what brought us to this kind of data spaghetti that we have built there in Rush, essentially, we built it. Certain data assets develop in isolation and continuously patch the solution just to fulfill these ad hoc requests that we got and actually really understanding what the stakeholder needs. And the interesting piece as a results in duplication of (mumbled) And this is not just frustrating and probably not the most efficient way, how the company should work. But also if I build the same data assets, but slightly different assumption across the company and multiple teams that leads to data inconsistency. And imagine the following scenario. You, as a management, for management perspective, you're asking basically a specific question and you get essentially from a couple of different teams, different kinds of graphs, different kinds of data and numbers. And in the end, you do not know which ones to trust. So there's actually much (mumbles) but good. You do not know what actually is it noise for times of observing or is it just actually, is there actually a signal that I'm looking for? And the same as if I'm running an AB test, right? I have a new feature, I would like to understand what is the business impact of this feature? I run that with a specific source and an unfortunate scenario. Your production system is actually running on a different source. You see different numbers. What you have seen in the AB test is actually not what you see then in production, typical thing. 
Then as you asking some analytics team to actually do a deep dive, to understand where the discrepancies are coming from, worst case scenario again, there's a different kind of source. So in the end, it's a pretty frustrating scenario. And it's actually a waste of time of people that have to identify the root cause of this type of divergence. So in a nutshell, the highest degree of consistency is actually achieved if people are just reusing data assets. And also in the end, the meetup talk they've given, right? We start trying to establish this approach by AB testing. So we have a team, but just providing, or is kind of owning their target metric associated business teams, and they're providing that as a product also to other services, including the AB testing team. The AB testing team can use this information to find an interface say, okay, I'm drawing information for the metadata of an experiment. And in the end, after the assignment, after this data collection phase, they can easily add a graph to a dashboard just grouped by the AB testing barrier. And we have seen that also in other companies. So it's not just a nice dream that we have, right? I have actually looked at other companies maybe looked on search and we established a complete KPI pipeline that was computing all these information and this information both hosted by the team and those that (mumbles) AB testing, deep dives and, and regular reporting again. So just one last second, the, the important piece, Now, why I'm coming back to that is that it requires that we are treating this data as a product, right? If we want to have multiple people using the thing that I am owning and building, we have to provide this as a trust (mumbles) asset and in a way that it's easy for people to discover and to actually work with. >> Yeah. And coming back to that. So this is, to me this is why I get so excited about data mesh, because I really do think it's the right direction for organizations. When people hear data product, they think, "Well, what does that mean?" But then when you start to sort of define it as you did, it's using data to add value that could be cutting costs, that could be generating revenue, it could be actually directly creating a product that you monetize. So it's sort of in the eyes of the beholder, but I think the other point that we've made, is you made it earlier on too, and again, context. So when you have a centralized data team and you have all these P&L managers, a lot of times they'll question the data 'cause they don't own it. They're like, "Well, wait a minute." If it doesn't agree with their agenda, they'll attack the data. But if they own the data, then they're responsible for defending that. And that is a mindset change that's really important. And I'm curious is how you got to that ownership. Was it a top-down or was somebody providing leadership? Was it more organic bottom up? Was it a sort of a combination? How do you decide who owned what? In other words, you know, did you get, how did you get the business to take ownership of the data and what does owning the data actually mean? >> That's a very good question, Dave. I think that one of the pieces where I think we have a lot of learning and basically if you ask me how we could stop the filling, I think that would be the first piece that we need to start. Really think about how that should be approached. If it's staff has ownership, right? 
That means somehow that the team has the responsibility to host themselves the data assets to minimum acceptable standards. That's minimum dependencies up and down stream. The interesting piece has to be looking backwards. What was happening is that under that definition, this extra process that we have to go through is not actually transferring ownership from a central team to the other teams, but actually in most cases to establish ownership. I make this difference because saying we have to transfer ownership actually would erroneously suggest that the dataset was owned before, but this platform team, yes, they had the capability to make the change, but actually the analytics team, but always once we had the business understand the use cases and what no one actually bought, it's actually expensive, expected. So we had to go through this very lengthy process and establishing ownership, how we have done that as in the beginning, very naively started, here's a document, here are all the data assets, what is probably the nearest neighbor who can actually take care of that. And then we, we moved it over. But the problem here is that all these things is kind of technical debt, right? It's not really properly documented, pretty unstable. It was built in a very inconsistent way over years. And these people that built this thing have already left the company. So this is actually not a nice thing that you want to see and people build up a certain resistance, even if they have actually bought into this idea of domain ownership. So if you ask me these learnings, what needs to happen is first, the company needs to really understand what our core business concept that we have the need to have this mapping from this other core business concept that we have. These are the domain teams who are owning this concept, and then actually linked that to the, the assets and integrate that better, but suppose understanding how we can evolve, actually the data assets and new data builds things new and the, in this piece and the domain, but also how can we address reduction of technical depth and stabilizing what we have already. >> Thank you for that Christoph. So I want to turn a direction here and talk Clemence about governance. And I know that's an area that's passionate, you're passionate about. I pulled this slide from your deck, which I kind of messed up a little bit, sorry for that. But, but, but by the way, we're going to publish a link to the full video that you guys did. So we'll share that with folks, but it's one of the most challenging aspects of data mesh. If you're going to decentralize, you, you quickly realize this could be the wild west, as we talked about all over again. So how are you approaching governance? There's a lot of items on this slide that are, you know, underscore the complexity, whether it's privacy compliance, et cetera. So, so how did you approach this? >> It's yeah, it's about connecting those dots, right? So the aim of the data governance program is to promote the autonomy of every team while still ensuring that everybody has the right interoperability. So when we want to move from the wild west, riding horses to a civilized way of transport, I can take the example of modern street traffic. Like when all participants can maneuver independently, and as long as they follow the same rules and standards, everybody can remain compatible with each other and understand and learn from each other so we can avoid car crashes. 
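A minimal sketch of the mapping Christoph describes, from core business concepts to the owning domain teams and on to the data assets that implement them; the concepts, teams and asset names here are invented for illustration.

# Hypothetical registry linking core business concepts to the domain team
# that owns them and to the data assets that implement them.
OWNERSHIP = {
    "customer": {"domain_team": "crm", "assets": ["dim_customer", "customer_events"]},
    "order": {"domain_team": "supply_chain", "assets": ["fct_orders", "orders_daily"]},
    "recipe": {"domain_team": "product", "assets": ["dim_recipe"]},
}


def owner_of(asset: str) -> str:
    """Look up which domain team is accountable for a given data asset."""
    for entry in OWNERSHIP.values():
        if asset in entry["assets"]:
            return entry["domain_team"]
    raise KeyError(f"no ownership established for {asset!r}")


print(owner_of("fct_orders"))  # -> supply_chain

Keeping such a mapping explicit also makes the remaining gaps visible: any asset that raises the lookup error above is exactly the kind of orphaned technical debt that still needs an owner.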
So when I go from country to country, I do understand what the street infrastructure means. How do I drive my car? I can also read the traffic lights and the different signals. So likewise, as a business in HelloFresh we do operate autonomously and consequently need to follow those external and internal rules and standards set forth by the tradition in which we operate. So in order to prevent a, a car crash, we need to at least ensure compliance with regulations, to account for societies and our customers' increasing concern with data protection and privacy. So teaching and advocating this imaging, evangelizing this to everyone in the company was a key community or communication strategy. And of course, I mean, I mentioned data privacy, external factors, the same goes for internal regulations and processes to help our colleagues to adapt for this very new environment. So when I mentioned before, the new way of thinking, the new way of dealing and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, then this means that data governance provides a framework for managing our people, the processes and technology and culture around our data traffic. And that governance must come together in order to have this effective program providing at least a common denominator is especially critical for shared data sets, which we have across our different geographies managed, and shared applications on shared infrastructure and applications. And as then consumed by centralized processes, for example, master data, everything, and all the metrics and KPIs, which are also used for a central steering. It's a big change, right? And our ultimate goal is to have this non-invasive federated, automated and computational governance. And for that, we can't just talk about it. We actually have to go deep and use case by use case and QC by PUC and generate learnings and learnings with the different teams. And this would be a classical approach of identifying the target structure, the target status, match it with the current status, by identifying together with the business teams, with the different domains and have a risk assessment, for example, to increase transparency because a lot of teams, they might not even know what kind of situation they might be. And this is where this training and this piece of data literacy comes into place, where we go in and trade based on the findings, based on the most valuable use case. And based on that, help our teams to do this change, to increase their capability. I just told a little bit more, I wouldn't say hand-holding, but a lot of guidance. >> Can I kind of kind of chime in quickly and (mumbled) below me, I mean, there's a lot of governance piece, but I think that is important. And if you're talking about documentation, for example, yes, we can go from team to team and tell these people, hey, you have to document your data assets and data catalog, or you have to establish a data contract and so on and forth. But if we would like to build data products at scale, following actual governance, we need to think about automation, right? We need to think about a lot of things that we can learn from engineering before, and just starts as simple things. Like if we would like to build up trust in our data products, right? And actually want to apply the same rigor and the best practices that we know from engineering. There are things that we can do. And we should probably think about what we can copy. 
One example might be service level agreements, service level objectives and service level indicators, which on an engineering level represent the services we provide: they represent the promises we make to our customers and consumers, the internal objectives that help us keep those promises, and the audits of how we are tracking against them, how we are doing. And this is just one example of where I think federated governance comes into play, right? In an ideal world, you should not just talk about data as a product, but also about data product as code. That is to say, as much as possible, give the engineers the tools they are familiar with, and not ask the product managers, for example, to document the data assets in the data catalog, but make it part of the configuration of a CI/CD continuous delivery pipeline, as we typically see for other engineering tasks and services. In that configuration we can think about PII, we can think about data quality monitoring, we can think about ingestion into the data catalog and so on and so forth. Ideally, data products become a sort of template that can be deployed and is verified, or rejected, at build time, before we actually deploy them to production. >> Yeah, so it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phase of it, call it phase zero, where there's learning, literacy, training and education, a kind of self-governance with some oversight and a lot of manual work going on. Then you become process builders, then you codify it, and then you can automate it. Is that fair? >> Yeah. I would rather think about automation as early as possible, in a way. Yes, there need to be some rules first, but then start use case by use case: is there any small piece that we can already automate? If possible, roll that out and then extend it step by step. >> Is there a role, though, that adjudicates that? Is there a central, you know, chief data officer who's responsible for making sure people are complying, or how do you handle it? >> I mean, from a platform perspective, yes, it falls to us to implement certain pieces that we say are important and would actually like to implement. However, we work very closely with the governance department, so it's Clemence's piece to define the policies that need to be implemented. >> So good. So Clemence, essentially it's your responsibility to make sure that the policy is being followed, and then, as you were saying, Christoph, you want to compress the time to automation as much as possible. Is that-- >> Yeah. What needs to be really clear is that it's always a split effort, right? You can't just do one or the other thing; they really go hand in hand, because for the right information and the right engineering tooling, we need to have the transparency first. I mean, code needs to be coded, so we kind of need to operate on the same level, with the right understanding.
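To ground Christoph's "data product as code" point above, here is a minimal, hypothetical sketch of the kind of check a CI/CD pipeline could run against a data product descriptor before deployment. The descriptor fields, names and rules are illustrative assumptions, not HelloFresh's actual schema or tooling.

```python
# Minimal sketch: verify a hypothetical data product descriptor at build time.
# All field names and rules are illustrative assumptions, not an actual schema.

DESCRIPTOR = {
    "name": "orders_enriched",
    "owner": "logistics-domain-team",                 # owning domain team
    "columns": {
        "order_id": {"pii": False},
        "customer_email": {"pii": True, "masking": "hash"},
    },
    "quality_checks": [{"column": "order_id", "rule": "not_null"}],
    "slo": {"freshness_hours": 24, "availability": 0.99},
}


def validate(descriptor):
    """Return a list of violations; an empty list means the build may proceed."""
    errors = []
    if not descriptor.get("owner"):
        errors.append("no owning domain team declared")
    for name, col in descriptor.get("columns", {}).items():
        if col.get("pii") and not col.get("masking"):
            errors.append(f"PII column '{name}' has no masking strategy")
    if not descriptor.get("quality_checks"):
        errors.append("no data quality checks declared")
    if "slo" not in descriptor:
        errors.append("no SLO (freshness/availability) declared")
    return errors


if __name__ == "__main__":
    violations = validate(DESCRIPTOR)
    if violations:
        raise SystemExit("Rejected at build time:\n- " + "\n- ".join(violations))
    print("Descriptor accepted; the deploy step may run.")
```

A check like this is what makes the governance computational rather than manual: a product that does not declare an owner, PII handling, quality checks and SLOs is rejected at build time instead of being discovered in production.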
So there are actually two things that are important: one is policies and guidelines, but not only that, because equally important is to align with the end users, the tech teams and engineering, and really bridge between the business teams that own the business value and the engineering teams. >> Got it. So just a couple more questions, because we've got to wrap up. I want to talk a little bit about the business outcome. I know it's hard to quantify, and I'll talk about that in a moment, but major learnings, we've got some of the challenges that you cited, I'll just put them up here. We don't have to go into detail on this, but I just wanted to share it with folks. My question, and this is the advice-for-your-peers question: if you had to do it differently, if you had a do-over or a mulligan, as we like to say for you golfers, what would you do differently? >> Can we start with the transformational challenge: understanding that it carries a high load of cultural change. I think this is important. A proper communication strategy needs to be put in place, and people really need to be supported, right? So it's not enough to go in and say, well, we have to change towards data mesh; it's simply human nature that we are resistant to change, right? And (mumbles) uncomfortable. So we need to take that away by training and by communicating. Chris, you might want to add something to that. >> Definitely. I think it's the point that I've also made before, right? We need to acknowledge that data mesh is an architecture for scale, something needed by huge companies that are building data products at scale. I mean, Dave, you mentioned that, right: there are a lot of advantages to having a centralized team, but at some point it may make sense to decentralize. And at that point, if you think about data mesh, you have to recognize that you're not building on a green field. And I think a big learning, which is also reflected on the slide, is: don't underestimate your baggage. Typically you come to a point where the old model doesn't work anymore. At HelloFresh, we had lost trust in our data, and we actually saw certain risks of slowing down our innovation, and that was what triggered the need to change something. At this transition point we had a lot of technical debt accumulated over the years, and I think what we have learned is that we potentially decentralized some assets too early, without taking into account the maturity of the teams we were moving them to, and now we're actually in the phase of correcting pieces of that, right? But if you start from scratch, you have to understand: okay, are all my teams actually ready to take on this new capability? And you have to make sure that, before this decentralization, you build up these capabilities in the teams and, as Clemence has mentioned, make sure that you take the people on the journey. I think these are the pieces. And with that comes the knowledge gap, right, that we need to think about: hiring, literacy, the technical debt I just talked about.
And I think the last piece I would add, which is not here on the slide deck, is that from our perspective we started on the analytical layer, because that is where things were exploding, right? That is where people feel the pain. But through a lot of the efforts we have started in order to modernize the current stack and data products towards data mesh, we've understood that it always comes down to a proper shape of our operational plane. And I think what needs to happen is, we went through a lot of pain, but the learning here is that this really needs to be a commitment from the company; it needs to be end to end. >> I think that last point you made is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant. But true data product thinking and decentralized data organizations really have to operationalize in order to scale. So these decisions around data architecture and organization are fundamental and lasting; it's not necessarily about an individual project ROI. There are going to be projects and sub-projects within this architecture, but the architectural decision itself is organizational, it's cultural, and it's about the best approach to support your business at scale. It really speaks to who you are as a company, how you operate, and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts and then we'll wrap. Maybe-- >> Can I quickly jump on this piece you mentioned, the target architecture? If you talk about these pieces, people often have this picture of (mumbled): okay, there are different kinds of stages, we have (incomprehensible speech), we have an ingestion layer, a storage layer, a transformation layer, a presentation layer, and then we basically put a lot of technology on top of that, and that's our target architecture. However, I think what we really need to make sure of is that we have different kinds of views, right? We need to understand what the capabilities are that we actually need, how it looks and feels from the different personas' and experience views, and then finally that should lead to the target architecture from a technical perspective. Maybe just to give an outlook on what we are planning to do and how we want to move that forward: we would like to increase the data maturity as a whole across the entire company, and this is a framework around the business strategy that breaks down into four pillars as well. People, meaning the data culture, data literacy, the data organizational structure and so on; governance, as Clemence has mentioned, meaning compliance, governance, data management and so on; technology, and I think we could talk for hours about that one, it's around the data platform and the data science platform; and then finally enablement through data, meaning we need to understand data quality, data accessibility, applied science and data monetization. >> Great. Thank you, Christoph. Clemence, why don't you bring us home. Give us your final thoughts. >> Okay.
I can just agree with Christoph that what is important is to understand what kind of maturity people have, to understand where the company, where the people, where the organization is, and to really understand what kind of change applies to those four pillars: for example, what needs to be tackled first. And this is not very clear from the very beginning (mumbles). It's kind of a green field: you come up with must-wins, with things that you really want to do, out of theory and out of different white papers. Only when you really start conducting the first initiatives do you understand how those thoughts fit together, and where you are missing out on one of those four different pillars, people, process, technology and governance. Then you can do the integration step by step, small step by small step, not boiling the ocean, so that you are really able to identify the gaps and see where you can either fill the gaps or where you have to increase maturity first and train people or improve your tech stack. >> You know, HelloFresh is an excellent example of a company that is innovating. It was not born in Silicon Valley, which I love; it's a global company. And I've got to ask you guys, it seems like it's just an amazing place to work. Are you guys hiring? >> Yes, definitely, we do. As mentioned, right, we are distributing and actually hiring across the entire company, and specifically for data I think there are a lot of open roles. So yes, please visit our page for data engineering and data product management, and Clemence has a lot of roles that you can speak to him about. But yes. >> Guys, thanks so much for sharing with theCUBE audience. You're pioneers, and we look forward to collaborations in the future to track progress, and really want to thank you for your time. >> Thank you very much. >> Thank you very much, Dave. >> And thank you for watching theCUBE's startup showcase made possible by AWS. This is Dave Volante. We'll see you next time. (cheerful music)

Published Date : Sep 15 2021

Boost Your Solutions with the HPE Ezmeral Ecosystem Program | HPE Ezmeral Day 2021


 

>> Hello. My name is Ron Kafka, and I'm the senior director for Partner Scale Initiatives for HPE Ezmeral. Thanks for joining us today at Analytics Unleashed. By now, you've heard a lot about the Ezmeral portfolio and how it can help you accomplish objectives around big data analytics and containerization. I want to shift gears a bit and discuss our Ezmeral Technology Partner Program. I've got two great guest speakers here with me today, and together we're going to discuss how we are jointly solving data analytic challenges for our customers. Before I introduce them, I want to take a minute to provide a little bit more insight into our ecosystem program. We created this program based on the realization, from customer feedback, that even the most mature organizations are struggling with their data-driven transformation efforts. It turns out this is largely due to the pace of innovation with application vendors, or ISVs, supporting data science and advanced analytic workloads. Their advancements are simply outpacing organizations' ability to move workloads into production rapidly. Bottom line, organizations want a unified experience across environments where their entire application portfolio in essence provides a comprehensive application stack and not piece parts. So, let's talk about how our ecosystem program helps solve for this. For starters, we leveraged HPE's long track record of forging technology partnerships and created a best-in-class ISV partner program specific to the Ezmeral portfolio. We are doing this by developing an open-concept marketplace where customers and partners can explore, learn, engage and collaborate with our strategic technology partners. This enables our customers to adopt and deploy validated applications from industry-leading software vendors on HPE Ezmeral with a high degree of confidence. Also, it provides a very deep bench of leading ISVs for other groups inside of HPE to leverage for their solutioning efforts. Speaking of industry-leading ISVs, it's about time I introduce you to two of those industry leaders right now. Let me welcome Daniel Hladky from Dataiku, and Omri Geller from Run:AI. So I'd like to introduce Daniel Hladky. Daniel is with Dataiku. He's a great partner for HPE. Daniel, welcome. >> Thank you for having me here. >> That's great. Hey, would you mind just talking a bit about how your partnership journey has been with HPE? >> Yes, with pleasure. So the journey started about five years ago, and in 2018 we signed a worldwide reseller agreement with HPE. And in 2020, we started to work jointly on the integration of the Dataiku Data Science Studio, called DSS, with the Ezmeral Container Platform, and it was a great success, driven by some clear customer projects. >> It's been a long partnership journey with you for sure with HPE, and we welcome your partnership extremely well. Just a brief question about the Container Platform and really what that's meant for Dataiku. >> Yes, Ron, thanks. Basically, I'd like to quote Florian Douetteau here, the CEO of Dataiku, who said that the combination of Dataiku with the HPE Ezmeral Container Platform will help customers to successfully scale and put machine learning projects into production, and this is going to deliver real impact for their business. So, the combination of the two of us is a great success. >> That's great. 
Can you talk about what Dataiku is doing and how the HPE Ezmeral Container Platform fits into the solution offering a bit more? >> Great. So basically Dataiku DSS is our product, which is an end-to-end data science platform, and it brings value to customers' projects on their path to enterprise AI. In simple terms, it could be as simple as building data pipelines, but it could also be very complex, with machine and deep learning models at scale. The fast track to value is having collaboration, orchestration of the underlying technologies, and the models in production. All of that is part of the Data Science Studio, and Ezmeral fits perfectly into the part where we design and then put those projects at scale and into production. >> That's perfect. Can you be a bit more specific about how you see HPE and Dataiku really tightening up a customer outcome and value proposition? >> Yes. What we see is the challenge in the market that probably about 80% of the use cases really never make it to production. This is of course a big challenge, and we need to change that, and I think the combination of the two of us is addressing exactly this need. As part of the MLOps approach, Dataiku and the Ezmeral Container Platform provide a frictionless approach, which means that without scripting and coding, customers can put all those projects into the production environment, don't have to worry anymore, and can be more business oriented. >> That's great. So you mentioned you're seeing customers be a lot more mature with their AI workloads and deployment. What do you suggest for the other customers out there that are just starting this journey or just thinking about how to get started? >> Yeah, that's a very good question, Ron. What we see there is the challenge that people need to go down a path of maturity. This starts with simple data pipelines, et cetera, and then you move up the ladder and build large, complex projects. And here I see a very interesting offer coming now from HPE, which is called D3S, the data science startup pack. That's something I discussed together with HPE back in early 2020. It covers three stages, explore, experiment and evolve, and quickly builds MVPs for the customers. By doing so, you address the business objectives, lay out the proper architecture and also set up the proper organization around it. So, this is a great combination by HPE and Dataiku through the D3S. >> And it's a perfect example of what I mentioned earlier about leveraging the ecosystem program that we built to do deeper solutioning efforts inside of HPE, in this case with our AI business unit. So, congratulations on that and thanks for joining us today. I'm going to shift gears. I'm going to bring in Omri Geller from Run:AI. Omri, welcome. It's great to have you. You guys are killing it out there in the market today, and I just thought we could spend a few minutes talking about what is so unique and differentiated about your offerings. >> Thank you, Ron. It's a pleasure to be here. Run:AI creates a virtualization and orchestration layer for AI infrastructure. We help organizations to gain visibility and control over their GPU resources and help them deliver AI solutions to market faster. And we do that by managing granular scheduling, prioritization and allocation of compute power, together with the HPE Ezmeral Container Platform. >> That's great. 
And your partnership with HPE is a bit newer than Daniel's, right? Maybe about the last year or so we've been working together a lot more closely. Can you just talk about the HPE partnership, what it's meant for you and how you see it impacting your business? >> Sure. First of all, Run:AI is excited to partner with the HPE Ezmeral Container Platform and help customers manage GPUs for their AI workloads. We chose HPE since HPE has years of experience partnering on AI use cases and outcomes with vendors who have a strong footprint in these markets. HPE works with many partners that are complementary to our use case, such as Nvidia, and the HPE Ezmeral Container Platform together with Run:AI and Nvidia delivers world-class solutions for AI-accelerated workloads. And as you can understand, for AI, speed is critical. Companies want to get important AI initiatives into production as soon as they can, and the HPE Ezmeral Container Platform, running our GPU orchestration solution, enables that through dynamic provisioning of GPUs, so that resources can be easily shared, efficiently orchestrated and optimally used. >> That's great. And you talked a lot about the efficiency of the solution. What about from a customer perspective? What is the real benefit that our customers are going to be able to gain from an HPE and Run:AI offering? >> So first, it is important to understand how data scientists and AI researchers actually build solutions. They do it by running experiments, and if a data scientist is able to run more experiments per given time, they will get to the solution faster. With the HPE Ezmeral Container Platform, Run:AI and users such as data scientists can do exactly that: seamlessly and efficiently consume large amounts of GPU resources, run more experiments per given time, and therefore accelerate their research. Together, we actually saw a customer that is running almost 7,000 jobs in parallel over GPUs, with efficient utilization of those GPUs. And by running more experiments, those customers can be much more effective and efficient when it comes to bringing solutions to market. >> Couldn't agree more. And I think we're starting to see a lot of joint success together as we go out and tell the story. Hey, I want to thank you both one last time for being here with me today. It was very enlightening for our team to have you as part of the program, and I'm excited to extend this customer value proposition out to the rest of our communities. With that, I'd like to close today's session. I appreciate everyone's time, and keep an eye on our ISV marketplace for Ezmeral. We're continuing to expand and add new capabilities and new partners to our marketplace. We're excited to do a lot of great things and help you guys all be successful. Thanks for joining. >> Thank you, Ron. >> What a great panel discussion. And these partners really do have a good understanding of the possibilities of working on the platform, and I hope and expect we'll see this ecosystem continue to grow. That concludes the main program, which means you can now pick one of three live demos to attend and chat live with experts. Those three include a day in the life of an IT admin, a day in the life of a data scientist, and even a day in the life of the HPE Ezmeral Data Fabric, where you can see the many ways the data fabric is used in your life today. Wish you could attend all three? No worries, the recordings will be available on demand for you and your teams. 
Moreover, the show doesn't stop here. HPE has a growing and thriving tech community, and you should check it out. It's really a solid starting point for learning more, talking to smart people about great ideas, and seeing how Ezmeral can be part of your own data journey. Again, thanks very much to all of you for joining. Until next time, keep unleashing the power of your data.
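As a rough illustration of the dynamic GPU provisioning and experiment scheduling Omri describes above, here is a minimal sketch that submits one GPU training job per experiment to a Kubernetes cluster using the official Python client. The namespace, container image and the "runai-scheduler" scheduler name are assumptions for illustration only, not a documented Run:AI or HPE Ezmeral configuration.

```python
# Minimal sketch: queue GPU experiments as Kubernetes Jobs and let a
# GPU-aware scheduler decide how they share the cluster.
# Namespace, image and scheduler_name are illustrative assumptions.
from kubernetes import client, config


def submit_experiment(name, gpus=1):
    container = client.V1Container(
        name=name,
        image="registry.example.com/train:latest",   # hypothetical training image
        command=["python", "train.py", "--run", name],
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": str(gpus)}),
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        scheduler_name="runai-scheduler",             # assumed custom GPU scheduler
        containers=[container],
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=0,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="experiments", body=job)


if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() when run inside the cluster
    for i in range(20):        # queue 20 experiments; the scheduler packs them onto GPUs
        submit_experiment(f"exp-{i:03d}")
```

The point of the sketch is the division of labor: the researcher only declares how many GPUs each experiment needs, and the scheduler handles sharing, prioritization and placement across the cluster.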

Published Date : Mar 17 2021

John Shaw and Roland Coelho V1


 

from around the globe it's thecube covering space and cyber security symposium 2020 hosted by cal poly hello and welcome to thecube's coverage we're here hosting with cal poly an amazing event space in the intersection of cyber security this session is defending satellite and space infrastructure from cyber threats got two great guests we've got major general john shaw combined four space component commander u.s space command and vandenberg air force base in california and roland cuello who's the ceo of maverick space systems gentlemen thank you for spending the time to come on to this session for the cal poly space and cyber security symposium appreciate it absolutely um guys defending satellites and space infrastructure is the new domain obviously it's a war warfighting domain it's also the future of the world and this is an important topic because we rely on space now for our everyday life and it's becoming more and more critical everyone knows how their phones work and gps just small examples of all the impacts i'd like to discuss with this hour this topic with you guys so if we can have you guys do an opening statement general if you can start with your opening statement we'll take it from there thanks john and greetings from vandenberg air force base we are just down the road from cal poly here on the central coast of california and uh very proud to be part of this uh effort and part of the partnership that we have with with cal poly on a number of fronts um i should uh so in in my job here i actually uh have two hats that i wear and it's i think worth talking briefly about those to set the context for our discussion you know we had two major organizational events within our department of defense with regard to space last year in 2019 and probably the one that made the most headlines was the stand-up of the united states space force that happened uh december 20th last year and again momentous the first new branch in our military since 1947 uh and uh it is a it's just over nine months old now as we're making this recording uh and already we're seeing a lot of change uh with regard to how we're approaching uh organizing training and equipping on a service side or space capabilities and so i uh in that with regard to the space force the hat i wear there is commander of space operations command that was what was once 14th air force when we were still part of the air force here at vandenberg and in that role i'm responsible for the operational capabilities that we bring to the joint warfighter and to the world from a space perspective didn't make quite as many headlines but another major change that happened last year was the uh the reincarnation i guess i would say of united states space command and that is a combatant command it's how our department of defense organizes to actually conduct warfighting operations um most people are more familiar perhaps with uh central command centcom or northern command northcom or even strategic command stratcom well now we have a space com we actually had one from 1985 until 2002 and then stood it down in the wake of the 9 11 attacks and a reorganization of homeland security but we've now stood up a separate command again operationally to conduct joint space operations and in that organization i wear a hat as a component commander and that's the combined force-based component command uh working with other all the additional capabilities that other services bring as well as our allies that combined in that title means that uh i under certain 
circumstances i would lead an allied effort uh in space operations and so it's actually a terrific job to have here on the central coast of california uh both working the uh how we bring space capabilities to the fight on the space force side and then how we actually operate those capabilities it's a point of joint in support of joint warfighters around the world um and and national security interests so that's the context now what el i i also should mention you kind of alluded to john you're beginning that we're kind of in a change situation than we were a number of years ago and that space we now see space as a warfighting domain for most of my career going back a little ways most of my my focus in my jobs was making sure i could bring space capabilities to those that needed them bringing gps to that special operations uh soldier on the ground somewhere in the world bringing satellite communications for our nuclear command and control bringing those capabilities for other uses but i didn't have to worry in most of my career about actually defending those space capabilities themselves well now we do we've actually gone to a point where we're are being threatened in space we now are treating it more like any other domain normalizing in that regard as a warfighting domain and so we're going through some relatively emergent efforts to protect and defend our capabilities in space to to design our capabilities to be defended and perhaps most of all to train our people for this new mission set so it's a very exciting time and i know we'll get into it but you can't get very far into talking about all these space capabilities and how we want to protect and defend them and how we're going to continue their ability to deliver to warfighters around the globe without talking about cyber because they fit together very closely so anyway thanks for the chance to be here today and i look forward to the discussion general thank you so much for that opening statement and i would just say that not only is it historic with the space force it's super exciting because it opens up so much more challenges and opportunities for to do more and to do things differently so i appreciate that statement roland your opening statement your your job is to put stuff in space faster cheaper smaller better your opening statement please um yes um thank you john um and yes you know to um general shaw's point you know with with the space domain and the need to protect it now um is incredibly important and i hope that we are more of a help um than a thorn in your side um in terms of you know building satellites smaller faster cheaper um you know and um definitely looking forward to this discussion and you know figuring out ways where um the entire space domain can work together you know from industry to to us government even to the academic environment as well so first would like to say and preface this by saying i am not a cyber security expert um we you know we build satellites um and uh we launch them into orbit um but we are by no means you know cyber security experts and that's why um you know we like to partner with organizations like the california cyber security institute because they help us you know navigate these requirements um so um so i'm the ceo of um of maverick space systems we are a small aerospace business in san luis obispo california and we provide small satellite hardware and service solutions to a wide range of customers all the way from the academic environment to the us government and everything in 
between we support customers through an entire you know program life cycle from mission architecture and formulation all the way to getting these customer satellites in orbit and so what we try to do is um provide hardware and services that basically make it easier for customers to get their satellites into orbit and to operate so whether it be reducing mass or volume um creating greater launch opportunities or providing um the infrastructure and the technology um to help those innovations you know mature in orbit you know that's you know that's what we do our team has experienced over the last 20 years working with small satellites and definitely fortunate to be part of the team that invented the cubesat standard by cal poly and stanford uh back in 2000 and so you know we are in you know vandenberg's backyard um we came from cal poly san luis obispo um and you know our um our hearts are fond you know of this area and working with the local community um a lot of that success um that we have had is directly attributable um to the experiences that we learned as students um working on satellite programs from our professors and mentors um you know that's you know all you know thanks to cal poly so just wanted to tell a quick story so you know back in 2000 just imagine a small group of undergraduate students you know myself included with the daunting task of launching multiple satellites from five different countries on a russian launch vehicle um you know many of us were only 18 or 19 not even at the legal age to drink yet um but as you know essentially teenagers we're managing million dollar budgets um and we're coordinating groups um from around the world um and we knew that we knew what we needed to accomplish um yet we didn't really know um what we were doing when we first started um the university was extremely supportive um and you know that's the cal poly learn by doing philosophy um i remember you know the first time we had a meeting with our university chief legal counsel and we were discussing the need to to register with the state department for itar nobody really knew what itar was back then um and you know discussing this with the chief legal counsel um you know she was asking what is itar um and we essentially had to explain you know this is um launching satellites as part of the um the u.s munitions list and essentially we have a similar situation you know exporting munitions um you know we are in similar categories um you know as you know as weapons um and so you know after that initial shock um everybody jumped in you know both feet forward um the university um you know our head legal counsel professors mentors and the students um you know knew we needed to tackle this problem um because you know the the need was there um to launch these small satellites and um you know the the reason you know this is important to capture the entire spectrum of users of the community um is that the technology and the you know innovation of the small satellite industry occurs at all levels you know so we have academia commercial national governments we even have high schools and middle schools getting involved and you know building satellite hardware um and the thing is you know the the importance of cyber security is incredibly important because it touches all of these programs and it touches you know people um at a very young age um and so you know we hope to have a conversation today um to figure out you know how do we um create an environment where we allow these programs to thrive but we 
also, you know, protect and keep their data safe as well. >> Thank you very much, Roland, appreciate that story too, and thanks for your opening statements, gentlemen. I love this topic, because defending the assets in space is obvious when you look at it, but there's a bigger picture going on in our world right now. General, you pointed out the historic nature of Space Force and how it's already changing, operationally, training, skills, tools, all of that is evolving. In the tech world that I live in, "change the world" is a phrase that's thrown around a lot, and we've had other panels on how to motivate young people, because changing the world is what it's all about with technology, for the better. Space is just an extension of other domains, similar things are happening, but it's different, it's faster, there's a huge opportunity to change the world, there's an expanded commercial landscape out there, and certainly government space systems are moving and changing. So how do we address the importance of cyber security in space? General, we'll start with you, because this is real and it's exciting. If you're a young person, there are touch points to jump into, from building hardware to changing laws and everything in between. It's an opportunity, and it's truly a chance to change the world. How do the commercial and government space systems teams address the importance of cyber security? >> So, John, I think it starts with the realization that, as I like to say, cyber and space are BFFs. There's nothing that we do on the cutting edge of space that isn't heavily reliant on the cutting edge of cyber, and frankly there's probably nothing on the cutting edge of cyber that doesn't have a space application. When you realize that, you see how closely those are intertwined as we need to move forward at speed. Let me give a couple of examples. One of the biggest challenges I have on a daily basis is understanding what's going on in the space domain. Those on the surface of the planet talk about the tyranny of distance, across the oceans, across large land masses. I talk about the tyranny of volume. Right now we're looking out as far as the lunar sphere, there's activity extending out there, and we expect NASA to be conducting perhaps human operations in the lunar environment in the next few years, so it extends out that far. When you do the math, that's a huge volume. How do you understand what's happening in real time within that volume? It is a big data problem by the very definition of that kind of challenge, and to do it successfully in the years ahead is going to require many, many sensors and the fusion of data of all kinds to present a picture, and then analytics and predictive analytics that deliver an idea of what's going on in the space arena. And that's just if people are not up to mischief. Once you have threats introduced into that environment, it is even more challenging. So I'd say it's a big data problem that we'll be enjoying tackling in the years ahead. A second example: if we had to take a vote on the most amazing robots that have ever been designed by humans, I think spacecraft would have to be up there on the list, whether it's the NASA spacecraft that explore other planets or the GPS satellites that amazingly provide a wonderful service to the entire globe and beyond. They are amazing technological machines, and that's not going to stop. All the work that Roland talked about, even what we're doing at the microsat level, is putting cutting-edge technology into smaller packages to get some sort of capability out of that. As we expand our activities further and further into space, for national security purposes or for exploration or commercial or civil, the cutting-edge technologies of artificial intelligence, machine-to-machine engagements, and machine learning are going to be part of that design work moving forward. And then there's the threat piece. As we operate these capabilities and these constellations grow, that's going to be done via networks, and as I've already pointed out, space is a warfighting domain. That means those networks will come under attack. We expect that they will, and it may happen early on in a conflict, it may happen during peacetime, in the same way that we see cyber attacks all the time, everywhere, in many sectors of activity. So by painting that picture, we start to see how the cutting edge of cyber and the cutting edge of space are intertwined at the most basic level. With that comes the need: any cutting-edge cyber security capability that we have is naturally going to be needed as we develop space capabilities, and we're going to have to bake that in from the very beginning. We haven't done that in the past as well as we should have, but moving forward from this point on it will be an essential ingredient that we work into all of our new capability. >> Roland, we're talking now about critical infrastructure and new capabilities being addressed really fast, so it's kind of chaotic, and there are threats, so it's not as easy as just having capabilities, because you've got to deal with the threats the General just pointed out. But now you've got critical infrastructure which will enable other things down the line. How do you protect it, how do we address this, how do you see this being addressed from a security standpoint? Because malware and these techniques can be mapped in and extended into space, takeovers, wartime, peacetime, these things are all going to be under threat. That's pretty well understood, I think people get that. How do we address it? What's your take? >> Yeah, absolutely, and I couldn't agree more with General Shaw about cyber security and space being so intertwined. I think with fast and rapid innovation comes the opportunity for threats, especially if you have bad actors that want to cause harm. As a technology innovator pushing the bounds, you kind of have a common goal of doing the best you can, pushing the technology, making it smaller, faster, cheaper. But a lot of times, what entrepreneurs and small businesses and supply chains don't realize is that a lot of these components are dual use. You could have a very benign commercial application, but with a small modification it turns into a military application, and if you do have these bad actors they can exploit that. So I think the big thing is creating an organization that is non-biased, that just wants to level the playing field for everybody and create a set standard for cyber security in space. One group that would be perfect for that is CCI. They understand both the cybersecurity side of things, and they also have the small satellite group at Cal Poly. Just having a clearinghouse or an agency that can provide information that is free, that you don't need a membership for, and being able to collect that but also reach out to the entire value chain for a mission, making them aware of what potential capabilities are and how they might potentially be used as a weapon, and keeping them informed, because I think the vast majority of people in the space industry just want to do the right thing. So how do we get that information free-flowing to the US government, so that they can take that information, create assessments, and be able to, not necessarily stop threats from occurring presently, but identify them long before they would ever even happen? >> General, I want to follow up on that real quick before we go to the next talk track, critical infrastructure. You mentioned across the oceans, long distance, volume. When you look at the physical world, you had power grids here in the United States, you had geography, you had perimeters, the notion of a perimeter and the moat, and then digital came in, and we saw software open up and essentially take down that idea of a perimeter from a defense standpoint. Everything changed and we had to fortify those critical assets in the US. Space increases the same problem statement significantly, because you can't just have a perimeter, you can't have a moat, it's open, it's everywhere, like what digital has done, and that's why we've seen a surge of cyber attacks with software in the past two decades. So this isn't going to go away. You need the critical infrastructure, you're putting it up there, you're formulating it, and you've got to protect it. How do you view that? Because it's going to be an ongoing problem statement. What's the current thinking? >> Yeah, I think my sense is that a mindset that you can build a firewall or a defense or some other system that isn't dynamic in its own right is probably not heading in the right direction. I think cyber security in the future, whether it's for our space systems or for other critical infrastructure, is going to be a dynamic fight that happens at a machine-to-machine speed and dynamic. I don't think it's too far off where we will have machines writing their own code in real time to fight off attacks that are coming at them, and by the way, the offense will probably be doing the same kind of thing. So I would not want to think that the answer is something that you just build, leave alone, and call good enough. It's probably going to be a constantly evolving capability, constantly reacting to new threats and staying ahead of those threats.
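To make the machine-speed defense General Shaw describes a little more concrete, here is a minimal, hypothetical sketch of automated detection over streaming link telemetry: it keeps a running baseline and flags samples that deviate sharply from it. The field names, thresholds, and data are invented for illustration only; this is not a description of any system the panelists operate.

```python
# Minimal sketch: flag anomalous uplink telemetry at machine speed using an
# exponentially weighted moving average (EWMA) baseline. Hypothetical fields.
from dataclasses import dataclass

@dataclass
class LinkSample:
    timestamp: float      # seconds
    command_rate: float   # commands per second observed on the uplink

class EwmaDetector:
    def __init__(self, alpha: float = 0.1, threshold: float = 4.0, warmup: int = 10):
        self.alpha = alpha          # smoothing factor for the baseline
        self.threshold = threshold  # deviations beyond this many "sigmas" are flagged
        self.warmup = warmup        # samples to observe before scoring
        self.mean = None
        self.var = 0.0
        self.count = 0

    def update(self, value: float) -> bool:
        """Score the sample against the running baseline, then update it."""
        if self.mean is None:
            self.mean = value
            self.count = 1
            return False
        deviation = value - self.mean
        anomalous = (self.count >= self.warmup and
                     deviation * deviation > self.threshold ** 2 * max(self.var, 1e-9))
        # Update running mean/variance after scoring, so the flood does not
        # immediately absorb itself into the baseline.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation * deviation)
        self.count += 1
        return anomalous

if __name__ == "__main__":
    detector = EwmaDetector()
    quiet = [LinkSample(float(t), 2.0 + 0.1 * (t % 3)) for t in range(50)]
    burst = [LinkSample(50.0 + t, 40.0) for t in range(3)]  # sudden command flood
    for sample in quiet + burst:
        if detector.update(sample.command_rate):
            print(f"t={sample.timestamp:.0f}s: anomalous command rate {sample.command_rate}")
```

In a real pipeline this kind of detector would sit behind far richer sensor fusion and would trigger an automated response rather than a print statement, but the loop of score, flag, and update is the "machine-to-machine speed" idea in miniature.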
>> That's the kind of use case, just as an anecdotal example, of the exciting new software opportunities for computer science majors. I tell my young kids and everyone, man, it's more exciting now, I wish I was 18 again. It's so exciting with AI. Roland, I want to get your thoughts. We were joking on another panel with the DoD around space and the importance of it, obviously, and we're going to have that here, and we had a joke, like, oh, software's defined everything, software's everything, AI. And I said, well, here in the United States companies had data centers and they went to the cloud, and they said you can't do break-fix. It's hard to do break-fix in space, you can't just send a tech up, I get that today, but soon, maybe, robotics. The General mentioned robotics technologies, referencing some of the accomplishments. Fixing things is almost impossible in space, but maybe form factors will get better, and certainly software will play a role. What are your thoughts on that landscape? >> Yeah, absolutely. For software in orbit, there's a push for software-defined radios, to basically go from hardware to software, and that's a critical link. If you can infiltrate that, and a small satellite has propulsion on board, you could take control of that satellite and cause a lot of havoc. So creating standards, and that kind of initial threshold of security, for let's say these radios and communications, and making that available to the entire supply chain, to the satellite builders and operators, is incredibly key, and that's again one of the initiatives that CCI is tackling right now as well. >> General, I want to get your thoughts on best practices around cyber security, the state of the art today, some guiding principles, and, if you shoot the trajectory forward, what might happen around supply chain. There have been many stories where, oh, we outsourced the chips and there's a little chip sitting in a thing, and it's built by someone else in China, and the software is written by someone in Europe, and the United States assembles it, it gets shipped, and it's corrupt, and it has some cyber crime in it. I'm oversimplifying the statement, but when you have space systems that involve intellectual property from multiple partners, from software to creation to deployment, you get supply chain tiers. What are some of the best practices that you see that don't stunt the innovation, that continue to innovate, but let people operate safely? What are your thoughts? >> Yeah, so on supply chain, I think the symposium here is going to get to hear from Lieutenant General JT Thompson from the Space and Missile Systems Center down in Los Angeles, and he's just down the road from us there on the coast, and his team is the one that we look to really focus on, as he acquires and develops, to again bake in cyber security from the beginning. Knowing where the components are coming from and properly assessing those as you put together your space systems is a key piece of what his team is focused on, so I expect we'll hear him talk about that. I think you asked the question a little more deeply about best practices in terms of how we develop moving forward. Well, another way that we don't do it right is if we take a long time to build something, General JT Thompson's folks take a while to build something, and then they hand it over to me and my team to operate, and then they go hands-free, and that's what I have for years to operate, until the next thing comes along. That's a little old school. What we're going to have to do moving forward with our space capabilities, with the cyber piece baked in, is continually develop new capability sets as we go. We actually have a partnership between General Thompson's team and mine here at Vandenberg, on our ops floor, our Combined Space Operations Center, where we're working in real time together on better tools that we can use to understand what's going on in the space environment, to better command and control our capabilities, anywhere from military satellite communications to space domain awareness sensors and such. We're developing those capabilities in real time, with the security pieces built in, so DevSecOps, we're practicing that in real time. I think that is probably the standard today that we're trying to live up to as we continue to evolve, but it has to be done, again, in close partnership all the time. It's not a sequential, industrial-age process. While I'm on the subject of partnerships, General Thompson's team and mine have good partnerships, and partnerships across the board are going to be another way that we are successful. That means with academia, in some of the relationships that we have here with Cal Poly, and with the commercial sector in ways that we haven't done before. The old style of business was to work with just a few large companies that had a lot of space experience. Well, we need a lot of different kinds of experience and technologies now in order to really field good space capabilities, and I expect we'll see more and more non-traditional companies and organizations being part of that partnership going forward. I mentioned at the beginning that allies are important to us, so everything that Roland and I have been talking about, I think you have to extrapolate out to allied partnerships. It doesn't help me as a combined force component commander, which is again one of my jobs, if the United States' capabilities are cyber secure but I'm trying to integrate them with capabilities from an ally that are not cyber secure. So that partnership has to be dynamic and continually evolving together. Again, close partnering, continually developing together from the acquisition to the operational sectors, with as many different sectors of our economy as possible, are the ingredients to success. >> General, I'd love to just follow up real quick. I was reminded of a conversation I had last year with General Keith Alexander, who does a lot of cyber security work, and he was talking about the need to share faster. The new school is you've got to share faster to get the data. You mentioned observability earlier, you need to see everything that's out there. He's a real passionate person about getting the data, getting it fast, and having trusted partners. Sharing is a well-known practice, but with cyber it's potentially sensitive data, so there's a trust relationship, there's now a new ecosystem, and that's new for government. How do you view all that, and what are your thoughts on that trend of the sharing piece of it on cyber? >> So, I don't know if it's necessarily new, but it's at a scale that we've never seen before, and by the way, it's vastly more complicated and complex when you overlay, from a national security perspective, classification of data and information at various levels. And then that is again complicated by the fact that you have different sharing relationships with different actors, whether it's commercial, academic, or allies, so it gets to be a very complex web very quickly. That's part of the challenge we're working through: how can we effectively share information at multiple classification levels with multiple partners in an optimal fashion? It is certainly not optimal today. It's very difficult, even with maybe one industry partner, for me to be able to talk about data at an unclassified level and then at various other levels of classification, and to have the traditional networks in place to do that. I could see a solution in the future where our cyber security is good enough that maybe I only really need one network, and the information is allowed to flow to the players within the right security environment to make that all happen as quickly as possible. So you've actually hit on yet another big challenge that we have, John, which is evolving our networks to properly share with the right people at the right clearance levels, at the speed of war, which is what we're going to need. >> Yeah, and I wanted to call that out because this is an opportunity. Again, this discussion here at Cal Poly and around the world is for new capabilities and new people to solve the problems. It's super exciting if you're geeking out on this: if you have a tech degree or you're interested in changing the world, there are so many new things that could be applied right now.
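The multi-level sharing problem the General outlines can be pictured as a simple filter: release to a partner only what their clearance and releasability markings allow. The sketch below is a hypothetical, heavily simplified illustration; the levels, caveats, and records are invented, and real cross-domain sharing involves far more than a list comprehension.

```python
# Hypothetical sketch: filter what can be shared with a partner based on
# classification level and releasability caveats. Illustration only.
from dataclasses import dataclass, field
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2

@dataclass(frozen=True)
class Record:
    summary: str
    level: Level
    releasable_to: frozenset = field(default_factory=frozenset)  # e.g. {"ALLY-A"}

@dataclass(frozen=True)
class Partner:
    name: str
    clearance: Level
    caveats: frozenset  # releasability groups this partner belongs to

def shareable(records, partner):
    """Return only the records this partner may receive."""
    return [
        r for r in records
        if r.level <= partner.clearance
        and (r.level == Level.UNCLASSIFIED or r.releasable_to & partner.caveats)
    ]

if __name__ == "__main__":
    catalog = [
        Record("Public conjunction warning", Level.UNCLASSIFIED),
        Record("Sensor outage report", Level.SECRET, frozenset({"ALLY-A"})),
        Record("Threat assessment", Level.SECRET, frozenset({"US-ONLY"})),
    ]
    ally = Partner("ALLY-A", Level.SECRET, frozenset({"ALLY-A"}))
    for record in shareable(catalog, ally):
        print(record.summary)
```

The point of the sketch is the policy shape, not the mechanism: the "one network" the General imagines would enforce exactly this kind of rule automatically, at the data layer, instead of through separate physical networks per classification level.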
>> Roland, I'll get your thoughts on this, because one of the tech trends we're seeing is a massive shift: all the theaters of the tech industry are changing rapidly at the same time, and it affects policy and law, but also deep tech. The startup communities are super important in all this too, we can't forget them, obviously alongside the big trusted players that are partnering on these initiatives. But your story about being in the dorm room, now you've got the boardroom and everything in between. You have startups out there that want to and can contribute, and, you know, what's an ITAR? There are all these acronyms and certifications. Is there a community motion to bring startups in, in a safe way, but also give them the ability to contribute? Because look at open source, that proved everyone wrong on software, and that's happening now with this open network concept the General is alluding to. It's a changing landscape. Your thoughts? I know you're passionate about this. >> Yeah, absolutely. As General Shaw mentioned, we need to get information out there faster, more timely, and to the right people, and involving not only stakeholders in the US but internationally as well. As entrepreneurs, we have this very lofty vision or goal to change the world, and oftentimes entrepreneurs, including myself, put our heads down and just run as fast as we can, and we don't necessarily always take a breath, take a step back, and look at what we're doing and how it's touching other folks. In terms of a community, I don't know of any formal community out there, it's mostly ad hoc. These ad hoc communities are folks who, let's say, were students working on a satellite in college, and they loved that entrepreneurial spirit, so they said, well, I'm going to start my own company. A lot of these ad hoc networks are just relationships that have been built over the last two decades, from colleagues at the university. I do think formalizing this and creating kind of a clearinghouse to handle all of this is incredibly important. >> Yeah, there's going to be a lot of entrepreneurial activity, no doubt, there are too many things to work on and not enough time. This brings up a question, though, while we're on this topic. You've got remote work with COVID, everyone's working remotely, we're doing this remote interview rather than being on stage, work is changing how people work and engage, and certainly physical will come back. But if you look historically at the space industry and the talent, they're all clustered around the bases. There have always been these areas where, if you're a space person, you're kind of working there and there are jobs there, and if you were a cyber person, you were in other areas. Over the past decade there's been a cross-pollination of talent and location as you see the intersection of space. General, start with you. First of all, the Central Coast is a great place to live, I know that's where you guys live, but you can start to bring together these two cultures. Sometimes they're not the same, maybe they're getting better, we know they're being integrated. General, can you just share your thoughts? Because this is one of those topics that everyone's talking about but no one has actually addressed directly. >> Yeah, John, I think I want to answer this by talking about where I think the Space Force is going, because if there was ever an opportunity or inflection point in our Department of Defense to sort of change culture, to try to bring in non-traditional kinds of thinking, and to really change some of the ways that the Department of Defense does things that are probably archaic, Space Force is an inflection point for that. General Raymond, our Chief of Space Operations, has said publicly for a while now that he wants the US Space Force to be the first truly digital service, and what we mean by that is we want the folks that are in the Space Force to be the first adopters, or the early adopters, of technology, to be the ones most fluent in the cutting-edge technological developments in space and cyber and the other sectors of the economy that are technologically focused. I think that can generate some excitement, and it means that we probably end up recruiting people into the Space Force who are not from the traditional recruiting areas that the rest of the Department of Defense looks to, and I think it allows us to bring in a diversity of thought, a diversity of perspective, and a new kind of motivation into the service that I think is, frankly, really exciting. So if you put together everything I mentioned about how space and cyber are going to be best friends forever, and the excitement that's always been there, from the very beginning, in the American psyche about space, you start to put all these ingredients together and I think you see where I'm going with this: it really changes that cultural mindset that you were describing. >> It's an exciting time, for sure, and again, changing the world. This is what you're seeing today, people do want to change the world, they want a modern world that's changing. Roland, I'd love to get your thoughts on this. I was having an interview a few years back with a tech entrepreneur, and we were just riffing, and I said everything that's on Star Trek will be invented, and we're almost there, actually, if you think about it, except for the transporter room. You've got video, you've got communicators. So, not to lean on the Star Trek reference, but with Space Force this is digital, and when you start thinking about some of the important trends, it's going to be up and down the stack, from hardware to software to user experience, everything. Your thoughts and reaction? >> Yeah, absolutely. What we're seeing is timelines shrinking dramatically, because the barrier to entry for new entrants, and even for your existing aerospace companies, is incredibly low. Previously, if you had a technology on the ground and you wanted it in orbit, it would take years, because you would test it on the ground, you would verify that it could operate in a space environment, and then you would go ahead and launch it, and we're talking tens if not hundreds of millions of dollars to do that. Now we've cut that down from years to months. When you have a prototype on the ground and you want to get it launched, you don't necessarily care if it fails on orbit the first time, because you're getting valuable data back. So we're seeing technology being developed, for the first time, on the ground and in orbit in a matter of a few months, and the whole process that we're driving as a small business is trying to enable that, allowing these entrepreneurs and small companies to get their technology into orbit at a price that is sometimes even cheaper than testing on the ground. >> This is a great point, and I think it's really important to call out, because we mentioned partnerships earlier. The economics and the business model of space are doable: you do a mission study, you get paid for that, you have technology, you can get stuff up quickly, and there's a cost structure there. The alternative was waterfall planning, years and millions. Now the form factors are different, there may be different payloads involved, but you can standardize payloads, you've got robotic arms, all of this is available. This brings up the congestion problem, which is going to be top of mind for the generals, of course, but you've got the proliferation of these constellation systems, and you have more and more threat vectors. Essentially that's malware, that's a probe; you throw something up in space that could cause some interference, maybe a takeover. General, this is the real elephant in the room, the threat matrix from new stuff and new configurations. So, General, how does the proliferation of constellation systems change the threat matrix? >> So I think I'm going to be a little more optimistic, John, than how you pitched that. I'm actually excited about these new mega constellations in LEO. I'm excited about the growing number of actors that are going into space for various reasons, and why is that? It's because we're starting to realize a new economic engine for the nation and for human society, so I think we want that to happen. We could go to almost any other domain in history: when air travel started to become much more commonplace, with many kinds of actors, from private pilots flying their small planes all the way up to large airliners, there was a problem with congestion, there were challenges about behavior and whether we were going to be able to manage it, and yes, we did, and it was for the great benefit of society. I could probably look to the maritime domain for similar kinds of things. So this is actually what's exciting about space. We are just going to have to find the ways, as a society, and it's not just the Department of Defense, it's going to be civil, it's going to be international, to find the mechanisms to encourage this continued investment in the space domain. I do think the Space Force will play a role in providing security in the space environment as we venture further out, as economic opportunities emerge wherever they are, in the Earth-lunar system or even within the solar system. Space Force is going to play a role in that, but I'm actually really excited about those possibilities.
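The congestion question above ultimately comes down to screening many objects for close approaches. The sketch below is a deliberately simplified, hypothetical conjunction screen that assumes straight-line relative motion over a short window; operational space traffic management uses full orbit propagation and uncertainty estimates, and the state vectors here are made up, so treat this only as an illustration of the idea.

```python
# Simplified conjunction screen: time and distance of closest approach between
# two objects, assuming linearized relative motion over a short window.
import math

def closest_approach(p1, v1, p2, v2, window_s):
    """Return (time_of_closest_approach_s, miss_distance_km)."""
    r = [a - b for a, b in zip(p1, p2)]   # relative position, km
    v = [a - b for a, b in zip(v1, v2)]   # relative velocity, km/s
    vv = sum(c * c for c in v)
    t = 0.0 if vv == 0 else -sum(rc * vc for rc, vc in zip(r, v)) / vv
    t = min(max(t, 0.0), window_s)        # clamp to the screening window
    closest = [rc + vc * t for rc, vc in zip(r, v)]
    return t, math.sqrt(sum(c * c for c in closest))

if __name__ == "__main__":
    # Hypothetical states: a satellite and a debris object converging head-on.
    sat_pos, sat_vel = (7000.0, 0.0, 0.0), (0.0, 7.5, 0.0)
    deb_pos, deb_vel = (7000.0, 750.0, 1.0), (0.0, -7.5, 0.0)
    t, miss = closest_approach(sat_pos, sat_vel, deb_pos, deb_vel, window_s=600.0)
    if miss < 5.0:  # arbitrary 5 km alert threshold for the example
        print(f"Alert: predicted miss distance {miss:.2f} km at t+{t:.0f}s")
```

Scale this to tens of thousands of tracked objects, add maneuvering actors and data from many sensor owners, and you arrive at the "find the mechanisms to manage it" problem the General compares to the early days of air traffic.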
>> Hey, by the way, I've got to say, you made me think of this when you talked about Star Trek and Space Force and our technologies. I remember when I was younger, watching The Next Generation series, I thought one of the coolest things, because I'm a musician in my spare time, was when Commander Riker would walk into his quarters and say, "Computer, play soft jazz," and the computer would just play music. This was an age when we had hard media, and I thought, that is awesome, I can't wait for the 23rd century when I can do that. And where we are today is so incredible along those lines, the things that I can ask Alexa or Siri to play. >> Well, that's the thing, everything that's on Star Trek, think about it, has almost been invented. You've got the computers, and really the only things left are the holograms, which are starting to come in, and the transporter room. Now that's physics, we'll work on that, right? >> Right, so there is a balance between physics and imagination, but we have not exhausted either. >> Well, personally, everyone that knows me knows I'm a huge Star Trek fan, all the series, though of course I'm an original purist. But this is about economic incentive as well. Roland, I want to get your thoughts, because with the gloom and doom you've got to think about the bad stuff to make it good. If I put my glass half full on the table, there are economic incentives, just like the example of the plane and the air traffic; there are more actors that are incented to have a secure system. What are your thoughts on the General's comments around the optimism and the potential threat matrix that needs to be managed? >> Absolutely. One of the things that we've seen over the years, as we build these small satellites, is that a lot of the technology the General is talking about, voice recognition, miniaturized chips and sensors, started on the ground. Take your iPhone: about 15 years ago, before the first iPhone came out, we were building small satellites in the lab, and we were looking at cutting-edge, state-of-the-art magnetometers and sensors that we were putting in our satellites back then, and we didn't know if they were going to work. Then, a few years later, as these students graduated, they went off to other industries, and some of the technology that was first put in these CubeSats in the early 2000s ended up in the first-generation iPhone smartphones. Being able to take that technology and rapidly incorporate it into space, and vice versa, gives you an incredible economic advantage, because not only are your costs going down, since you're mass-producing these types of terrestrial technologies, but you can also increase revenue and profit by having smaller and cheaper systems. >> General, let's talk about that real quickly, it's a good point, and I want to shift it into the playbook. Everyone talks about playbooks, for management, for tech, for startups, for success, and one of the playbooks that's clear from history is that investment in R&D around military or innovation that has a long view spurs innovation commercially. There are many decades of history that show that: hey, we've got to start thinking about these challenges, and the next thing you know, it's in an iPhone. This is history, this is not a one-off, and now with Space Force you're driving the main engine of innovation to be all digital. We riff about Star Trek, which is fun, but the reality is you're going to be on the front lines of some really cool, mind-blowing things. Could you share your thoughts on how you sell that to the people who write the checks, or to recruit more talent? >> Well, first, I totally agree with your thesis that national security, and you could probably go back an awfully long way, hundreds to thousands of years, that security matters tend to drive an awful lot of innovation and creativity, because probably the two things that drive people the most are an opportunity to make money and, only beating that out, trying to stay alive. I don't think that's going to go away, and I do think that Space Force can play a role, as it pursues security structures within the space domain, to further encourage economic investment, and protecting our space capabilities for national security purposes is going to be at the cutting edge. This isn't the first time. I think we can point back to the origins of the internet, which really started in the Department of Defense, with a partnership, I should add, with academia. That's how the internet got started, that was the creativity in order to meet some needs there. Cryptography has its roots in security, we use it in national security, but now we use it for economic reasons and a host of other kinds of reasons. And then space itself: we still look back to the Apollo era as an inspiration for so many things, an inspiration for people to begin careers in technical areas or in space and so on. So in that same spirit, you're absolutely right, I'm totally agreeing with your thesis. The Space Force will have a positive, inspirational influence in that way, and we need to realize that. So when we're looking at how we need to meet capability needs, we need to spread that net very far, look for the most creative solutions, and partner early and often with those that can work on them. >> When you're on the new frontier, it's got to be a team sport, it's a team effort. You mentioned the internet; just anecdotally, I'm old enough to remember this, because I remember the days when that was going on. The policy decision the US made at that time was to let it go a little bit, invisible hand, they didn't try to commercialize it too fast, but there was some policy work that was done that had a direct effect on the innovation, versus taking it over and the next thing you know it's out of control. So I think this cross-disciplinary skill set becomes a big thing where you need to have more people involved, and that's one of the big themes of this symposium, so it's a great point, thank you for sharing that. Roland, your thoughts on this, because you've got policy decisions, we all want to run faster, we want to be more innovative, but you've got to have some ops view. Mostly, ops people want things very tight, very buttoned up, secure; the innovators want to go faster. It's the yin and yang, that's the world we live in. How is it all balanced in your mind? >> Yeah, one of the things that may not be immediately obvious is that the US government and Department of Defense is one of the biggest investors in technology in the aerospace sector. They're not the traditional venture capitalists, but they're the ones driving technology innovation, because there's funding, and when companies see that the US government is interested in something, businesses will re-vector to provide that capability. In more recent years we've had a huge influx of private equity and venture capital coming into the markets to help augment the government investment, and I think having a good partnership and relationship between these private equity and venture capital firms and the US government is incredibly important, because the two sides can collaborate and see a common goal. But then, on the other side, there's the human element, and as General Shaw was saying, not only do companies obviously want to thrive and do really well, some companies just want to stay alive, to see their technology grow into what they've always dreamed of, and oftentimes entrepreneurs are put in a very difficult position because they have to make payroll, they have to keep the lights on, and so sometimes they'll take investment from places where they normally would not have, potentially foreign investment that could cause issues with the US supply chain. >> Well, my final question is the best one, I wanted to save it for last, because I love the idea of human space flight. I'd love to be on Mars, though I'm not sure I'll be able to make it someday. How do you guys see the possible impacts of cyber security on expanding human space flight operations? General, this is your wheelhouse, you're in command of putting humans in space, and certainly robots will be there, because they're easier to send because they're not human, but humans in space, you're starting to see the momentum, the discussion, people are scratching that itch. What's your take on that, and how do we make this more possible? >> Well, I think we will see commercial space tourism in the future. I'm not sure how wide and large a scale it will become, but we will see that, and part of the mission of the Space Force is probably going to be, again, to do what we're doing today: have really good awareness of what's going on in the domain, to ensure that it is done safely. I think a lot of what we do today will end up in civil organizations that do space traffic management and safety in that arena. And it is only a matter of time before we see humans going even beyond. NASA has their plan, the Artemis program, to get back to the Moon, and the Gateway initiative to establish a space station there, and that's going to be an exploration initiative, but it is only a matter of time before we have private citizens or private corporations putting people in space, and not only for tourism but for economic activity. So it'll be really exciting to watch, and Space Force will be a part of it. >> General, Roland, I want to thank you for your valuable time coming on this symposium, I really appreciate it. For a final comment, I'd love each of you to spend a minute sharing your personal thoughts on the importance of cyber security to space, and we'll close it out. We'll start with you, Roland. >> Yeah, so I think the biggest thing I would like to get out of this, from my own personal perspective, is creating that environment that allows the aerospace supply chain, small businesses like ourselves, to be able to meet all the requirements to protect and safeguard our data, but also creates a way that we can still thrive and won't stifle innovation. I'm looking forward to comments and questions from the audience to really help drive to that next step. >> General, final thoughts on the importance of cyber security to space? >> I'll just go back to how I started, John, and say that space and cyber are forever intertwined, they're BFFs, and whoever has my job 50 years from now, or 100 years from now, I predict they're going to be saying the exact same thing: cyber and space are intertwined for good. We will always need the cutting-edge cyber security capabilities that we develop as a nation and as a society to protect our space capabilities, and our cyber capabilities are going to need space capabilities in the future as well. >> General John Shaw, thank you very much. Roland Coelho, thank you very much for your great insight, and thank you to Cal Poly for putting this together. I want to shout out to the team over there. We couldn't be in person, but we're doing a virtual remote event. I'm John Furrier with theCUBE and SiliconANGLE, here in Silicon Valley. Thanks for watching.

Published Date : Oct 1 2020


Rachini Moosavi & Sonya Jordan, UNC Health | CUBE Conversation, July 2020


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello, and welcome to this CUBE conversation, I'm John Furrier, host of theCUBE here, in our Palo Alto, California studios, here with our quarantine crew. We're getting all the remote interviews during this time of COVID-19. We've got two great remote guests here, Rachini Moosavi who's the Executive Director of Analytical Services and Data Governance at UNC Healthcare, and Sonya Jordan, Enterprise Analytics Manager of Data Governance at UNC Health. Welcome to theCUBE, thanks for coming on. >> Thank you. >> Thanks for having us. >> So, I'm super excited. University of North Carolina, my daughter will be a freshman this year, and she is coming, so hopefully she won't have to visit UNC Health, but looking forward to having more visits down there, it's a great place. So, thanks for coming on, really appreciate it. Okay, so the conversation today is going to be about how data and how analytics are helping solve problems, and ultimately, in your case, serve the community, and this is a super important conversation. So, before we get started, talk about UNC Health, what's going on there, how you guys organize, how big is it, what are some of the challenges that you have?
As for the applications that we build, in our office, we truly only build analytical applications or products like visualizations within Tableau as well as we support data governance platforms and services and so we provide some of the tools that enable our end users to be able to interact with the information that we're providing around analytics and insights, at the end of the day. >> Sonya, what's your job? Your title is Analytics Manager of Data Governance, obviously that sounds broad but governance is obviously required in all things. What is your job, what is your day-to-day roles like? What's your focus? >> Well, my day-to-day operations is first around building a data governance program. I try to work with identifying customers who we can start partnering with so that we can start getting documentation and utilizing a lot of the programs that we currently have, such as certification, so when we talk about initiatives, this is one of the initiatives that we use to partner with our stakeholders in order to start bringing visibilities to the various assets, such as metrics, or universes that we want to certify, or dashboards, algorithm, just various lists of different types of assets that we certify that we like to partner with the customers in order for them to start documenting within the tools, so that we can bring visibility to what's available, really focusing on data literacy, helping people to understand what assets are available, not only what assets are available, but who owns them, and who own the asset, and what can they do with it, making sure that we have great documentation in order to be able to leverage literacy as well. >> So, I can only imagine with how much volume you guys are dealing from a data standpoint, and the diversity, that the data warehouse must be massive, or it must be architected in a way that it can be agile because the needs, of the diverse needs. Can you guys share your thoughts on how you guys look on the data warehouse challenge and opportunity, and what you guys are currently doing? >> Well, so- >> Yeah you go ahead, Rachini. >> Go ahead, Sonya. >> Well, last year we implemented a tool, an enterprise warehouse, basically behind a tool that we implemented, and that was an opportunity for Data Governance to really lay some foundation and really bring visibility to the work that we could provide for the enterprise. We were able to embed into probably about six or seven of the 13 initiatives, I was actually within that project, and with that we were able to develop our stewardship committee, our data governance council, and because Rachini managed Data Solutions, our data solution manager was able to really help with the architect and integration of the tools. >> Rachini, your thoughts on running the data warehouse, because you've got to have flexibility for new types of data sources. How do you look at that? >> So, as Sonya just mentioned, we upgraded our data warehouse platform just recently because of these evolving needs, and like a lot of healthcare providers out there, a lot of them are either one or the other EMRs that are top in the market. With our EMR, they provide their own data warehouse, so you have to factor almost the impact of what they bring to the table in with an addition to all of those other sources of data that you're trying to co-mingle and bring together into the same data warehouse, and so for us, it was time for us to evolve our data warehouse. 
We ended up deciding on trying to create a virtual data warehouse, and in doing so, with virtualization, we had to upgrade our platform, which is what created that opportunity that Sonya was mentioning. And by moving to this new platform we are now able to bring all of that into one space and it's enabled us to think about how does the community of analysts interact with the data? How do we make that available to them in a secure way? In a way that they can take advantage of reusable master data files that could be our source of truth within our data warehouse, while also being able to have the flexibility to build what they need in their own functional spaces so that they can get the wealth of information that they need out of the same source and it's available to everyone. >> Okay, so I got to ask the question, and I was trying to get the good stuff out first, but let's get at the reality of COVID-19. You got pre-COVID-19 pandemic, we're kind of in the middle of it, and people are looking at strategies to come out of it, obviously the world will be changed, higher with a lot of virtualization, virtual meetings, and virtual workforce, but the data still needs to be, the business still needs to run, but data will be changing different sources, how are you guys responding to that crisis because you're going to be leaned on heavily for more and more support? >> Yeah it's been non-stop since March (laughs). So, I'm going to tell you about the reporting aspects of it, and then I'd love to turn it over to Sonya to tell you about some of the great things that we've actually been able to do to it and enhance our data governance program by not wasting this terrible event and this opportunity that's come up. So, with COVID, when it kicked off back in March, we actually formed a war room to address the needs around reporting analytics and just insights that our executives needed, and so in doing so, we created within the first week, our first weekend actually, our first dashboard, and within the next two weeks we had about eight or nine other dashboards that were available. And we continuously add to that. Information is so critical to our executives, to our clinicians, to be able to know how to address the evolving needs of COVID-19 and how we need to respond. We literally, and I'm not even exaggerating, at this very moment we have probably, let's see, I think it's seven different forecasts that we're trying to build all at the same time to try and help us prepare for this new recovery, this sort of ramp up efforts, so to your point, it started off as we're shutting down so that we can flatten the curve, but now as we try to also reopen at the same time while we're still meeting the needs of our COVID patients, there's this balancing act that we're trying to keep up with and so analytics is playing a critical factor in doing that. >> Sonya, your thoughts. First of all, congratulations, and action is what defines the players from the pretenders in my mind, you're seeing that play out, so congratulations for taking great action, I know you're working hard. Sonya, your thoughts, COVID, it's putting a lot of pressure? It highlights the weaknesses and strengths of what's kind of out there, what's your thoughts? 
>> Well, it just requires a great deal of collaboration and making sure that you're documenting metrics in a way where you're factoring true definition because at the end of the day, this information can go into a dashboard that's going to be visualized across the organization, I think what COVID has done was really enhanced the need and the understanding of why data governance is important and also it has allowed us to create a lot of standardization, where we we're standardizing a lot of processes that we currently had in correct place but just enhancing them. >> You know, not to go on a tangent, but I will, it's funny how the reality has kind of pulled back, exposed a lot of things, whether it's the remote work situation, people are VPNing, not under provision with the IT side. On the data side, everyone now understands the quality of the data. I mean, I got my kids talking progression analysis, "Oh, the curves are all wrong," I mean people are now seeing the science behind the data and they're looking at graphs all the time, you guys are in the visualization piece, this really highlights the need of data as a story, because there's an impact, and two, quality data. And if you don't have the data, the story isn't being told and then misinformation comes out of it, and this is actually playing out in real time, so it's not like it's just a use case for the most analytics but this again highlights the value of proposition of what you guys do. What's your personal thoughts on all this because this really is playing out globally. >> Yeah, it's been amazing how much information is out there. So, we have been extremely blessed at times but also burdened at times by that amount of information. So, there's the data that's going through our healthcare system that we're trying to manage and wrangle and do that data storytelling so that people can drive those insights to very effective decisions. But there's also all of this external data that we're trying to be able to leverage as well. And this is where the whole sharing of information can sometimes become really hard to try and get ahead of, we leverage the Johns Hopkins data for some time, but even that, too, can have some hiccups in terms of what's available. We try to use our State Department of Health and Human Services data and they just about updated their website and how information was being shared every other week and it was making it impossible for us to ingest that into our dashboards that we were providing, and so there's really great opportunities but also risks in some of the information that we're pulling. >> Sonya, what's your thoughts? I was just having a conversation this morning with the Chief of Analytics and Insight from NOA which is the National Oceanic Administration, about weather data and forecasting weather, and they've got this community model where they're trying to get the edges to kind of come in, this teases out a template. You guys have multiple locations. As you get more democratized in the connection points, whether it's third-party data, having a system managing that is hard, and again, this is a new trend that's emerging, this community connection points, where I think you guys might also might be a template, and your multiple locations, what's your general thoughts on that because the data's coming in, it's now connected in, whether it's first-party to the healthcare system or third-party. 
>> Yeah, well we have been leveraging our data governance tool to try to get that centralized location, making sure that we obtain the documentation. Due to COVID, everything is moving very fast, so it requires us to really sit down and capture the information, and when you don't have enough resources in order to do that, it's easy to miss some very important information, so really trying to encourage people to understand the reason why we have data governance tools in order for them to leverage, in order to capture the documentation in a way that it can tell the story about the data, but most of all, to be able to capture it in a way so that if that person happened to leave the organization, we're not spending a lot of time trying to figure out how was this information created, how was this dashboard designed, where are the requirements, where are the specifications, where are the key elements, where does that information live, and making sure we capture that up front. >> So, guys, you guys are using Informatica, how are they helping you? Obviously, they have a system they're getting some great feedback on, how are you using Informatica, how is it going, and how has that enabled you guys to be successful? >> Yeah, so we decided on Informatica after doing a really thorough vetting of all of the other vendors in the industry that could provide us these services. We've really loved the capabilities that we've been able to provide to our customers at this point. It's evolving, I think, for us. The ability to partner with a group like Prominence, to be able to really leverage the capabilities of Informatica, and then be really super hyper-focused on providing data literacy back to our end users and making that the full intent of what we're doing within data governance, has really enabled us to take the tools and make it something that's specific to UNC Health and the needs that our end users are verbalizing, and provide that to them in a very positive way. >> Sonya, they talk about this master catalog, and I've talked to the CEO of Informatica and all their leaders, governance is a big part of it, and I've always said, I've always kind of had a hard time, I'm an entrepreneur, I like to innovate, move fast, break things, which is kind of not the way you work in the data world, you don't want to be breaking anything, so how do you balance governance and compliance with innovation? This has been a key topic and I know that you guys are using their Enterprise Data Catalog. Is that helping? How does that fit in, is that part of it? >> Well, yeah, so during our COVID initiatives and building these Tableau dashboards, these visualizations and forecast models for executive leaders, we were able to document in EMPower, which is what we rebranded Axon to; we were able to document a lot of our dashboards, which is a data set, and pretty much document attributes and show lineage from EMPower to EDC, so that users would know exactly, when they start looking at the visualization, not only what does this information mean, but they're also able to see what other sources that information impacts, as well as the data lineage, where did the information come from, in EDC.
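The lineage documentation Sonya describes, tying a dashboard back to the sources it draws from, reduces to recording who owns each asset, what it is, and what sits upstream of it. The sketch below illustrates that idea in miniature with invented asset names; it is not a description of Informatica's Axon/EMPower or Enterprise Data Catalog APIs, just a minimal model of the concept.

```python
# Minimal sketch of a governance catalog entry with upstream lineage.
# Asset names, owners, and fields are hypothetical; real catalog products
# track far more metadata (definitions, stewards, certifications, and so on).
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    owner: str
    description: str
    upstream: list = field(default_factory=list)  # names of source assets

def lineage(catalog: dict, name: str) -> list:
    """Walk upstream dependencies of an asset, depth-first, without repeats."""
    seen, order = set(), []
    def walk(n):
        for parent in catalog[n].upstream:
            if parent not in seen:
                seen.add(parent)
                walk(parent)
                order.append(parent)
    walk(name)
    return order

if __name__ == "__main__":
    catalog = {
        "county_case_feed": Asset("county_case_feed", "data.eng",
                                  "External county-level case counts"),
        "covid_mart": Asset("covid_mart", "analytics",
                            "Curated COVID reporting tables",
                            upstream=["county_case_feed"]),
        "exec_dashboard": Asset("exec_dashboard", "analytics",
                                "Executive COVID dashboard",
                                upstream=["covid_mart"]),
    }
    print("exec_dashboard depends on:", lineage(catalog, "exec_dashboard"))
```

The payoff is exactly the one described in the interview: when someone opens the dashboard, or when its author leaves the organization, the catalog can answer "where did this come from and who owns it" without archaeology.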
Obviously you've implemented before, now that you're in the middle of it, have you seen any things that jumped out at you that's been helpful, and are there areas that need to be worked on so that you guys continue to fight the good fight, come out of this thing stronger than before you came in? >> Yeah, there is a lot of new information, what we consider as "aha" moments that we've been learning about, and how EMPower, yes there's definitely a learning curve because we implemented EDC and EMPower last year doing our warehouse implementation, and so there's a lot of work that still needs to be done, but based on where we were the first of the year, I can say we have evolved tremendously due to a lot of the pandemic issues that arised, and we're looking to really evolve even greater, and pilot across the entire organization so that they can start leveraging these tools for their needs. >> Rachini you got any thoughts on your end on what's worked, what you see improvements coming, anything to share? >> Yeah, so we're excited about some of the new capabilities like the marketplace for example that's available in Axon, we're looking forward to being able to take advantage of some of these great new aspects of the tool so that we can really focus more on providing those insights back to our end users. I think for us, during COVID, it's really been about how do we take advantage of the immediate needs that are surfacing. How do we build all of these dashboards in record-breaking time but also make sure that folks understand exactly what's being represented within those dashboards, and so being able to provide that through our Informatica tools and service it back to our end users, almost in a seamless way like it's built into our dashboards, has been a really critical factor for us, and feeling like we can provide that level of transparency, and so I think that's where as we evolve that we would look for more opportunities, too. How do we make it simple for people to get that immediate answers to their questions, of what does the information need without it feeling like they're going elsewhere for the information. >> Rachini, thank you so much for your insight, Sonya as well, thanks for the insight, and stay safe. Sonya, behind you, I was pointing out, that's your artwork, you painted that picture. >> Yes. >> Looks beautiful. >> Yes, I did. >> You got two jobs, you're an artist, and you're doing data governance. >> Yes, I am, and I enjoy painting, that's how I relax (laughs). >> Looks great, get that on the market soon, get that on the marketplace, let's get that going. Appreciate the time, thank you so much for the insights, and stay safe and again, congratulations on the hard work you're doing, I know there's still a lot more to do, thanks for your time, appreciate it. >> Thank you. >> Thank you. >> It's theCUBE conversation, I'm John Furrier at the Palo Alto studios, for the remote interviews with Informatica, I'm John Furrier, thanks for watching. (upbeat music)

Published Date : Jul 24 2020

SUMMARY :

leaders all around the world, Hello, and welcome to and this is a super and so we serve the and you got the business applications, and all of the other obviously that sounds broad so that we can start getting documentation and what you guys are currently doing? and that was an opportunity running the data warehouse, and it's available to everyone. but the data still needs to be, so that we can flatten the curve, and action is what defines the players and making sure that and this is actually and do that data storytelling and again, this is a new and capture the information and making that the full intent and I know that you guys are using their so that users would know and pilot across the entire organization and so being able to provide that and stay safe. and you're doing data governance. Yes, I am, and I enjoy painting, that on the market soon, for the remote interviews

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rachini Moosavi | PERSON | 0.99+
Rachini | PERSON | 0.99+
John | PERSON | 0.99+
National Oceanic Administration | ORGANIZATION | 0.99+
Sonya | PERSON | 0.99+
March | DATE | 0.99+
Sonya Jordan | PERSON | 0.99+
John Furrier | PERSON | 0.99+
July 2020 | DATE | 0.99+
Palo Alto | LOCATION | 0.99+
Informatica | ORGANIZATION | 0.99+
North Carolina | LOCATION | 0.99+
last year | DATE | 0.99+
two jobs | QUANTITY | 0.99+
EMPower | ORGANIZATION | 0.99+
State Department of Health and Human Services | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
UNC Healthcare | ORGANIZATION | 0.99+
UNC Health | ORGANIZATION | 0.99+
first dashboard | QUANTITY | 0.99+
COVID | OTHER | 0.99+
Prominence | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
today | DATE | 0.99+
theCUBE | ORGANIZATION | 0.98+
Palo Alto, California | LOCATION | 0.98+
13 initiatives | QUANTITY | 0.98+
NOA | ORGANIZATION | 0.98+
CUBE | ORGANIZATION | 0.98+
COVID-19 | OTHER | 0.98+
this year | DATE | 0.97+
COVID | TITLE | 0.97+
one space | QUANTITY | 0.97+
Boston | LOCATION | 0.97+
first weekend | QUANTITY | 0.97+
Sonya | ORGANIZATION | 0.97+
first week | QUANTITY | 0.96+
Tableau | TITLE | 0.96+
first | QUANTITY | 0.96+
University of North Carolina | ORGANIZATION | 0.96+
nine | QUANTITY | 0.95+
about six | QUANTITY | 0.94+
EDC | ORGANIZATION | 0.94+
Chief | PERSON | 0.94+
Axon | ORGANIZATION | 0.93+
seven | QUANTITY | 0.93+
Johns Hopkins | ORGANIZATION | 0.92+
seven different forecasts | QUANTITY | 0.92+
two different customers | QUANTITY | 0.91+
First | QUANTITY | 0.91+
two great remote guests | QUANTITY | 0.91+
agile | TITLE | 0.91+
pandemic | EVENT | 0.9+
Enterprise Analytics and Data Sciences Office | ORGANIZATION | 0.9+
about 12 different entities | QUANTITY | 0.88+
Analytical Services | ORGANIZATION | 0.87+
this morning | DATE | 0.87+

Guillermo Miranda, IBM | IBM Think 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston. It's theCUBE. Covering IBM Think. Brought to you by IBM. >> Hi everybody, we're back this is Dave Vellante from theCUBE and you're watching our wall-to-wall coverage of IBM's Digital Think 2020 event and we are really pleased to have Guillermo Miranda here. He's the Vice President of Corporate and Social Responsibility. Guillermo thanks for coming on theCUBE. >> Absolutely, good afternoon to you. Good evening, wherever you are. >> So, you know this notion of corporate responsibility, it really has gained steam lately and of course with COVID-19, companies like IBM really have to take the lead on this. The tech industry actually has been one of those industries that has been less hard hit and IBM as a leader along with some other companies are really being looked at to step up. So talk a little bit about social responsibility in the context of the current COVID climate. >> Absolutely. Now thank you for the question. Look, first our responsibility is with the safety of our employees and the continuity of business for our clients. In this frame what we have done is see what is the most adequate areas to respond to the emergency of the pandemic and using what we know in terms of expertise and the talent that we have is why we decided to work first with high performance computing. IBM design and produce the fastest computers in the world. So Summit and a consortium of providers of high performance computing is helping on the discovery of vaccinations and drugs for the pandemic. The second thing that we are doing is related with data and insights. We own The Weather Company which is at 80 million people connected to check the weather every morning, every afternoon. So through The Weather Company, we are providing insights and data about county level information on COVID-19. Another thing that we are doing is we are offering some of our products for free. Watson, it is a chatbot to inform about what is adequate, what is needed in the middle of a pandemic if you are a consumer. We are also helping with our volunteers. IBM volunteers are helping teachers and school districts to rapidly flip into remote learning and get used to the tools of working on a remote environment. And finally we have a micro volunteering opportunity for anybody that has a computer or an android phone. So with the world community grid, you can help with the discovery also of drugs and vaccinations for COVID-19. >> Wow that's great, those are four awesome initiatives. They can't get the vaccine fast enough. Getting good quality information in the hands of people in this era of fake news also very very important. Students missing out on some of the key parts of their learning so remote learning is key. I love this idea of kind of micro crowd sourcing solutions. Really kind of opening that up and hopefully we'll have some big wins there Guillermo. Thank you for that. I want to ask you people talk about blue collar jobs, they talk about white collar jobs, you guys talk about new collar jobs. You and others. What are new collar jobs and why are they important? >> Look, in this data, digital, artificial intelligence driven economy, it's important not to have a digital divide between the haves and the have nots on the foundational skills to be operational in a digital economy. So new collar jobs are precisely the intersection of the skills that you need to operate in this digital driven economy with the basic knowledge to be a user of technology. 
So think about a cyber security analyst. You don't need a masters degree in industrial engineering to be a cyber security analyst. You just need the basic things about operating an environment on a security control center for instance. Or talk about blockchain or talk about software engineering, full stack developer. There are many roles that you can do in this economy where you don't need to have a full four-year degree in a university to have a decent paying job for the digital economy. These are the new collar jobs and what we are attempting to do with the new collar job definition is to get rid of the paradigm that the university degree is the only passport to a successful career in the marketplace. You can start in different, having the opportunity to have a job in a high tech area. Not necessarily with a PhD in engineering as I said, it's something important for us, for our clients and for the community. >> Yeah, so that's a very interesting concept that a lot of us can relate to. To go back to our university days, many of the courses that we took, we shook our heads and said, "okay, why do I have to take this?" Okay, I get it, well rounded liberal arts experience, that's all good but it's almost like you're implying that the notion of specialization that we've known for years like for instance, in vocations, auto-mechanic, woodworking, etc. Planning that have really critical aspect of the economy. Applying that to the technology business. It's genius and very simple. >> Absolutely. Look, this is the reinvention of vocational education for the 21st century where you continue to need the plumber, you continue to need the hairdresser but also you need people that operate the digital platforms and are comfortable with this environment and they don't need to pass at the beginning through full university. And it's also the concept that we have divided the secondary education, high school from college, university etc., like a Chinese wall. Here is high school, here is college. No! There can be a clear integration because you can start to get ready without finishing high school yet. So there are several paradigms that we have evolved in the previous century that now we need to change and be ready for this 21st century digital driven economy. >> Yeah, very refreshing. Really about time that this thinking came into practice. Talk about P-Tech. How does P-Tech fit into acquiring these skills? And maybe you could give us a sense as to the sort of profile of the folks and there backgrounds and give us a sense as to and add some color to how that's all working. >> Absolutely, so look, the P-Tech model started 10 years ago in a high school in New York City, in Brooklyn. And the whole idea is to go to an under-served area and create a ramp onto success that will help you to first finish high school. Finishing high school is very important and has a lot connotations for your future. And then at the same time, they start getting an associate degree in an area of high growth. The third component is the industry partner. An industry partner that works with the school district and the community college in order to bring the knowledge of what is needed in that community in order to create real job opportunities and we will send you the people and then you will use it. No! We need to work together in order to train the talent for the future. And you just go to the middle age and the guilds were the ones that were preparing the workers. So the industry was preparing the workforce. 
Why in the 20th century we renounced to that? Having real, relevant skills starting in high school, helping the kids to graduate with a dual diploma. High school, college and practice in real life what it is to be in a workplace environment. So we have more than 220 schools. In this school year, we have more than 150,000 kids in 24 countries already working through the P-Tech model. >> Love it and really scaling that up. So let's say I'm an individual. I'm a young person, I'm from a diverse background, maybe my parents came to this country and I'm a first generation American. Of course, it's not just the United States, it's global but let's say I'm from a background that's less advantaged, how do I take advantage? How hard is it for me to tap in to something like P-Tech and get these skills? >> Well, first one of the characteristics of the model is this is free admission. So there is not a barrier fence. If your school district offers P-Tech, you can apply to P-Tech and get into the P-Tech model education without any barrier without any account. And the second thing that you need to have is curiosity. Because it's not going to be the typical high school where you have math, science, gym, whatever. This is more of an integration of how the look of a career will be in the future and how you have to start understanding that there are drivers into the economy that are fast tracks into well paid jobs. So curiosity on top of being ready to join a P-Tech school in the school district where you live in. >> That's great Guillermo, thank you for sharing that. Now of course corporate responsibility, that's a wide net. This is one of your passions. I'll give you the last word to kind of, where do you see this whole corporate responsibility movement going generally and specifically within IBM? >> I think that this whole pandemic will just accelerate some of the clear trends in the marketplace. Corporate responsibility cannot be an afterthought as before in the '80s or '90s. I will put a foundation. I have a little of profits that are left and then I will distribute grants and that's my whole corporate responsibility approach. Corporate responsibility needs to be within the fabric of how do you do business. It has to be embedded into the values of your company and your value proposition and you have to serve those projects with the same kind of skills and technology, in the case of IBM, that you do for your commercial engagements. And this is what we do in IBM. We help IBMers to be helpful to their communities with the same kind of quality and platforms that we offer to our clients. And we help to solve one of the most complicated problems in society through technology, innovation, time. >> Love it. Guillermo thanks so much, you're doing great work. Really appreciate you coming on theCUBE and sharing with our audience. Congratulations. >> Absolutely. Thank you for very much for having me. >> You're very welcome and thank you for watching everybody. This is Dave Vellante from theCUBE. You're watching our continuous coverage of IBM Think 2020, the digital version. Keep it right there, we'll be right back after this short break. (bright music)

Published Date : May 5 2020

SUMMARY :

Dave Vellante talks with Guillermo Miranda, IBM's Vice President of Corporate and Social Responsibility, about IBM's COVID-19 response (high performance computing, The Weather Company data, free tools, and volunteering), the idea of new collar jobs, and the P-Tech education model, as part of theCUBE's coverage of IBM Think 2020.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Guillermo Miranda | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Guillermo | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
The Weather Company | ORGANIZATION | 0.99+
20th century | DATE | 0.99+
New York City | LOCATION | 0.99+
Boston | LOCATION | 0.99+
Brooklyn | LOCATION | 0.99+
more than 150,000 kids | QUANTITY | 0.99+
21st century | DATE | 0.99+
more than 220 schools | QUANTITY | 0.99+
COVID-19 | OTHER | 0.99+
24 countries | QUANTITY | 0.99+
third component | QUANTITY | 0.99+
United States | LOCATION | 0.99+
80 million people | QUANTITY | 0.99+
second thing | QUANTITY | 0.98+
one | QUANTITY | 0.98+
10 years ago | DATE | 0.98+
first | QUANTITY | 0.98+
pandemic | EVENT | 0.97+
Digital Think 2020 | EVENT | 0.97+
android | TITLE | 0.96+
P-Tech | ORGANIZATION | 0.96+
21st century | DATE | 0.96+
Think 2020 | COMMERCIAL_ITEM | 0.95+
theCUBE | ORGANIZATION | 0.93+
first generation | QUANTITY | 0.92+
Summit | ORGANIZATION | 0.92+
first one | QUANTITY | 0.89+
four-year degree | QUANTITY | 0.87+
Chinese | OTHER | 0.83+
Vice President of Corporate and Social Responsibility | PERSON | 0.76+
American | OTHER | 0.75+
afternoon | QUANTITY | 0.74+
'80s | DATE | 0.73+
four awesome initiatives | QUANTITY | 0.72+
P | ORGANIZATION | 0.7+
'90s | DATE | 0.68+
every morning | QUANTITY | 0.68+
Watson | PERSON | 0.64+
Tech | ORGANIZATION | 0.52+
previous | DATE | 0.51+
COVID | EVENT | 0.4+
Think | COMMERCIAL_ITEM | 0.39+

Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives


 

>> Sue: Hello everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica engineering team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer them offline. Alternatively, you can visit the Vertica forums to post your questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So let's get started. Tom, over to you. >> Tom: Hello everyone and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools and third party integrations that enable the software ecosystem that surrounds Vertica to thrive. So today, we'll be talking about some of our new open source initiatives and how those can be really effective for you and make things easier for you to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects and examples, all open source, to share, all being built out in the open on our GitHub page. Whether you use these open source projects or not, this is a very exciting new effort that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with a problem like this. You have some business requirements, to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's managed by Vertica. In various circumstances, lots of choices might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool, or web framework, because the business has already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming languages and systems, there's a lot of them out there, and we don't have integrations for all of them. So how do you make ends meet when you don't have all the tools you need? Well, you have to get creative, using tools like PyODBC, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is our C library, and most programming languages know how to call C code, somehow.
So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. So that's enough to get the job done but native integrations are usually a lot smoother and easier. So rather than, for example, in Python trying to fight with PyODBC, to configure things and get Unicode working, and to compile all the different pieces, the right way is to make it all work smoothly. It would be much better if you could just PIP install library and get to work. And with Vertica-Python, a new Python client library, you can actually do that. So that story, I assume, probably sounds pretty familiar to you. Sounds probably familiar to a lot of the audience here because we're all using Vertica. And our challenge, as Big Data practitioners is to make sense of all this stuff, despite those technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's a founding principle of Vertica and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extended in this fashion. Databases as a whole had had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain. So implementing that business logic, solving that problem, without having to worry about all of these intense, sometimes details about what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what the answer is that you want. You don't tell Vertica how to get it. Vertica will figure out the right way to do it for you so that you don't have to worry about it. So this SQL abstraction is very nice because it's a well defined boundary where lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. This goes beyond though, what's accessible through SQL to Vertica. We've got well defined extension and integration points across the product that allow you to customize this experience even further. So if you want to do things write your own SQL functions, or extend database softwares with UDXs, you can do so. If you have a custom data format that might be a proprietary format, or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. To make it very easy to do passive, parallel, massive data movement, loading into Vertica but also to export Vertica to send data to other systems. And with these new features in time, we also could do the same kinds of things with Machine Learning models, importing and exporting to tools like TensorFlow. 
And it's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of different varieties that solve all different problems that are common in this big data processing world. Whether it's open source, streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also, BI tools and visualizers and things like that to view and use the data that you keep in your database on the right side. And then of course, Vertica needs to be flexible enough to be able to run anywhere. So you can really take Vertica and use it the way you want it to solve the problems that you need to solve. So Vertica has always employed open standards, and integrated it with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves. And we're using these new libraries to power some new integrations with some third party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts and exciting ways to solve new problems. And the code for all these things is available now on our GitHub page. And so you can use it however you like, and even help us make it better too. So the first such project that we have is called Vertica-Python. Vertica-Python began at our customer, Uber. And then in late 2018, we collaborated with them and we took it over and made Vertica-Python the first official open source client for Vertica You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years and it's very common language to solve lots of different problems and use cases in the Big Data space from things like DevOps admission and Data Science or Machine Learning, or just homegrown applications. We use Python a lot internally for our own QA testing and automation needs. And with the Python 2 End Of Life, that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2. And also to provide a nice migration path for all of you our users that might be worried about the same problems with their own Python code. So Vertica-Python is used already for lots of different tools, including Vertica's admintools now starting with 9.3.1. It was also used by DataDog to build a Vertica-DataDog integration that allows you to monitor your Vertica infrastructure within DataDog. So here's a little example of how you might use the Python Client to do some some work. So here we open in connection, we run a query to find out what node we've connected to, and then we do a little DataLoad by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python Database Client before. So we implement the DB API 2.0 standard and it feels like a Python package. So that includes things like, it's part of the centralized package manager, so you can just PIP install this right now and go start using it. We also have our client for Go length. 
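Before turning to the Go client, here is a minimal sketch, in the spirit of the Python example just described, of what that connect / query / COPY flow can look like with vertica-python. The host, credentials, table name, and CSV file below are made up for illustration; check the project's README for the exact options your client version supports.

```python
import vertica_python

# Made-up connection details for illustration.
conn_info = {
    'host': 'vertica.example.com',
    'port': 5433,
    'user': 'dbadmin',
    'password': 'secret',
    'database': 'VMart',
}

connection = vertica_python.connect(**conn_info)
cursor = connection.cursor()

# Find out which node in the cluster this session landed on.
cursor.execute("SELECT node_name FROM current_session")
print(cursor.fetchone())

# A small data load driven by a COPY statement fed from a local file object.
with open('readings.csv', 'rb') as source:
    cursor.copy("COPY sensor_readings FROM STDIN DELIMITER ','", source)

connection.commit()
connection.close()
```

Because the client implements the DB API 2.0 standard, the connect / cursor / execute / fetch pattern above carries over directly from other Python database drivers.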
So that Go client is called vertica-sql-go. And this is a very similar story, just in a different context, for a different programming language. So vertica-sql-go began as a collaboration with the Micro Focus SecOps Group, who builds Micro Focus' security products, some of which use Vertica internally to provide some of those analytics. So you can use this to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. So most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data. And Go is another programming language rising in popularity, 'cause it offers an interesting balance of different programming design trade-offs. So it's got good performance, good concurrency and memory safety. And we liked all those things and we're using it to power some internal monitoring stuff of our own. And here's an example of the code you can write with this client. So this is Go code that does a similar thing. It opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually package things with Go, by running that command there to acquire this package. And it's important to note here, for these projects, we're really doing open source development. We're not just putting code out on our GitHub page. So if you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. So in 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases, with 40 customer reported issues filed on GitHub. That was done over 78 different pull requests, and with lots of community engagement as we did so. So lots of people are using this already; as our GitHub badges show, there are about 5000 downloads of this a day by people using it in their software. And again, we want to make this easy, not just to use but also to contribute to, understand, and collaborate with us on. So all these projects are built using the Apache 2.0 license. The master branch is always available and stable with the latest, greatest functionality. And you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. It uses automated testing, both locally and with pull requests. And for vertica-python, it's fully automated with Travis CI. So we're really excited about doing this and we're really excited about where it can go in the future. 'Cause this offers some exciting opportunities for us to collaborate with you more directly than we have ever before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge and implementation details and various best practices. And so maybe you think, "Well, I don't use Python, I don't use Go, so maybe it doesn't matter to me." But I would argue it really does matter.
Because even if you don't use these tools and languages, there's lots of amazing vertica developers out there who do. And these clients do act as low level building blocks for all kinds of different interesting tools, both in these Python and Go worlds, but also well beyond that. Because these implementations and examples really generalize to lots of different use cases. And we're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. So these database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. So once you're connected and in, you can run commands with SQL. But these database client interfaces address the rest of those needs. So what does the programmer need to do to actually process those SQL queries? So these interfaces are specific to a particular language or a technology stack. But the use cases and the architectures and design patterns are largely the same between different languages. They all have a need to do some networking and connect and authenticate and create a session. They all need to be able to run queries and load some data and deal with problems and errors. And then they also have a lot of metadata and Type Mapping because you want to use these clients the way you use those programming languages. Which might be different than the way that vertica's data types and vertica's semantics work. So some of this client interfaces are truly standards. And they are robust enough in terms of what they design and call for to support a truly pluggable driver model. Where you might write an application that codes directly against the standard interface, and you can then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. So most of these interfaces aren't as robust as a JDBC or ODBC but that's okay. 'Cause it's good as a standard is, every database is unique for a reason. And so you can't really expose all of those unique properties of a database through these standard interfaces. So vertica's unique in that it can scale to the petabytes and beyond. And you can run it anywhere in any environment, whether it's on-prem or on clouds. So surely there's something about vertica that's unique, and we want to be able to take advantage of that fact in our solutions. So even though these standards might not cover everything, there's often a need and common patterns that arise to solve these problems in similar ways. When there isn't enough of a standard to define those comments, semantics that different databases might have in common, what you often see is tools will invent plug in layers or glue code to compensate by defining application wide standard to cover some of these same semantics. Later on, we'll get into some of those details and show off what exactly that means. So if you connect to a vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls and some client library or tool. 
This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over the network to ask vertica to do the heavy lifting required for that particular API call. And so these API's usually do the same kinds of things although some of the details might differ between these different interfaces. But you do things like establish a connection, run a query, iterate over your rows, manage your transactions, that sort of thing. Here's an example from vertica-python, which just goes into some of the details of what actually happens during the Connect API call. And you can see all these details in our GitHub implementation of this. There's actually a lot of moving parts in what happens during a connection. So let's walk through some of that and see what actually goes on. I might have my API call like this where I say Connect and I give it a DNS name, which is my entire cluster. And I give you my connection details, my username and password. And I tell the Python Client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. So we need to understand what the request is, where you're going to connect to and why, by pressing the connection string. and vertica being a distributed system, we want to provide high availability, so we might need to do some DNS look-ups to resolve that DNS name which might be an entire cluster and not just a single machine. So that you don't have to change your connection string every time you add or remove nodes to the database. So we do some high availability and DNS lookup stuff. And then once we connect, we might do Load Balancing too, to balance the connections across the different initiator nodes in the cluster, or in a sub cluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. And vertica supports the industry standard TLS protocols, so this looks pretty familiar for everyone who've used TLS anywhere before. So you're going to do a certificate exchange and the client might send the server certificate too, and then you going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection, and secured it, then you can start actually beginning to request a session within vertica. So you going to send over your user information like, "Here's my username, "here's the database I want to connect to." You might send some information about your application like a session label, so that you can differentiate on the database with monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings to do things like auto commit, to change the state of your session for the duration of this connection. So that you don't have to remember to do that with every query that you have. Once you've asked vertica for a session, before vertica will give you one, it has to authenticate you. and vertica has lots of different authentication mechanisms. So there's a negotiation that happens there to decide how to authenticate you. Vertica decides based on who you are, where you're coming from on the network. And then you'll do an auth-specific exchange depending on what the auth mechanism calls for until you are authenticated. 
Finally, vertica trusts you and lets you in, so you going to establish a session in vertica, and you might do some note keeping on the client side just to know what happened. So you might log some information, you might record what the version of the database is, you might do some protocol feature negotiation. So if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off and that sort of thing. But finally, after all that, you can return from this API call and then your connection is good to go. So that connection is just one example of many different APIs. And we're excited here because with vertica-python we're really opening up the vertica client wire protocol for the first time. And so if you're a low level vertica developer and you might have used Postgres before, you might know that some of vertica's client protocol is derived from Postgres. But they do differ in many significant ways. And this is the first time we've ever revealed those details about how it works and why. So not all Postgres protocol features work with vertica because vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over. Whereas vertica doesn't really have very wide data values, you have 30, you have long bar charts, but that's about as wide as you can get. Similarly, the vertica protocol supports lots of features not present in Postgres. So Load Balancing, for example, which we just went through an example of, Postgres is a single node system, it doesn't really make sense for Postgres to have Load Balancing. But Load Balancing is really important for vertica because it is a distributed system. Vertica-python serves as an open reference implementation of this protocol. With all kinds of new details and extension points that we haven't revealed before. So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the vertica-sql-go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there but we'll get there soon. So this is really cool 'cause not only do you have now a Python Client implementation, and you have a Go client implementation of this, but you can use this protocol reference to do lots of other things, too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that are vertica doesn't support yet, now you have everything available to solve that problem and to go about doing so if you need to. But beyond clients, it's also used for other things. So you might use it for mocking and testing things. So rather than connecting to a real vertica database, you can simulate some of that. You can also use it to do things like query routing and proxies. So Uber, for example, this log here in this link tells a great story of how they route different queries to different vertical clusters by intercepting these protocol messages, parsing the queries in them and deciding which clusters to send them to. So a lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing. 
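To make those handshake steps a little more concrete, here is a sketch of how several of them surface as connect() options in vertica-python. The cluster names are invented, and option names and defaults can vary by client version, so treat this as illustrative rather than definitive.

```python
import vertica_python

conn_info = {
    'host': 'node01.example.com',            # DNS name resolved at connect time
    'port': 5433,
    'user': 'dbadmin',
    'password': 'secret',
    'database': 'VMart',
    'session_label': 'nightly-etl',          # shows up in monitoring queries
    'autocommit': True,                      # session setting applied up front
    'connection_load_balance': True,         # let the cluster pick the initiator node
    'backup_server_node': ['node02.example.com', 'node03.example.com'],
    'ssl': True,                             # request TLS for the wire protocol
}

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT session_id, node_name FROM current_session")
    print(cursor.fetchone())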
And so we're very interested in hearing your ideas and requests and we're happy to offer advice and collaborate on building some of these things together. So let's take a look now at some of the things we've already built that do these things. So here's a picture of vertica's Grafana connector with some data powered from an example that we have in this blog link here. So this has an internet of things use case to it, where we have lots of different sensors recording flight data, feeding into Kafka which then gets loaded into vertica. And then finally, it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something something happens. So in these highlighted sections here, you notice a drop in some of the activity, that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Well, Grafana specializes in visualizing time series data. And time can be really tricky for computers to do correctly. You got time zones, daylight savings, leap seconds, negative infinity timestamps, please don't ever use those. In every system, if it wasn't hard enough, just with those problems, what makes it harder is that every system does it slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries from Vertica to Grafana's back end architecture, which is implemented in Go on it's front end, which is implemented with JavaScript? Well, you read this from bottom up in terms of the processing. First, you select the timestamp and Vertica is timestamp has to be converted to a Go time object. And we have to reconcile the differences that there might be as we translate it. So Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. So that's not too big of a deal when you're querying data because you just see some extra zeros, not fractional seconds. But on the way in, if we're loading data, we have to find a way to resolve those things. Once it's into the Go process, it has to be converted further to render in the JavaScript UI. So that there, the Go time object has to be converted to a JavaScript Angular JS Date object. And there too, we have to reconcile those differences. So a lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date into a more human readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows you can actually write your own queries with Grafana to provide answers. So if you look closely here you can see there's actually some functions that might not look too familiar with you if you know vertica's functions. Vertica doesn't have a dollar underscore underscore time function or a time filter function. So what's actually happening there? How does this actually provide an answer if it's not really real vertica syntax? Well, it's not sufficient to just know how to manipulate data, it's also really important that you know how to operate with metadata. So information about how the data works in the data source, Vertica in this case. 
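As a toy illustration of the precision reconciliation mentioned above (the reading below is made up), truncating a nanosecond-precision timestamp down to the microseconds Vertica stores can look like this:

```python
from datetime import datetime, timedelta, timezone

# A made-up nanosecond-precision sensor timestamp.
nanos_since_epoch = 1_585_526_400_123_456_789

# Vertica keeps microseconds, so the extra three digits are dropped on the way in.
micros = nanos_since_epoch // 1_000

ts = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=micros)
print(ts.isoformat())  # 2020-03-30T00:00:00.123456+00:00
```

With that aside on precision out of the way, the bigger requirement is the metadata the visualization layer needs, which is what comes next.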
So Grafana needs to know how time works in detail for each data source beyond doing that basic I/O that we just saw in the previous example. So it needs to know, how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references a time literal? And finally, once you've figured out how to do all that, how do you find the time in the database? How do you do know which tables have time columns and then they might be worth rendering in this kind of UI. So Go's database standard doesn't actually really offer many metadata interfaces. Nevertheless, Grafana needs to know those answers. And so it has its own plugin layer that provides a standardizing layer whereby every data source can implement hints and metadata customization needed to have an extensible data source back end. So we have another open source project, the Vertica-Grafana data source, which is a plugin that uses Grafana's extension points with JavaScript and the front end plugins and also with Go in the back end plugins to provide vertica connectivity inside Grafana. So the way this works, is that the plugin frameworks defines those standardizing functions like time and time filter, and it's our plugin that's going to rewrite them in terms of vertica syntax. So in this example, time gets rewritten to a vertica cast. And time filter becomes a BETWEEN predicate. So that's one example of how you can use Grafana, but also how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out in our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. So SQL and these surrounding standards, like JDBC, and ODBC, were really critical in the early days of Vertica, because they really enabled a lot of generic database tools. And those will always continue to play a really important role, but the Big Data technology space moves a lot faster than these old database data can keep up with. So there's all kinds of new advanced analytics and query pushdown logic that were never possible 10 or 20 years ago, that Vertica can do natively. There's also all kinds of data-oriented application workflows doing things like streaming data, or Parallel Loading or Machine Learning. And all of these things, we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations, and applications all over the place. So even if you're not using Grafana for example, other tools have similar challenges that you need to overcome. And it helps to have an example there to show you how to do it. Take Machine Learning, for example. There's been many excellent Machine Learning tools that have arisen over the years to make data science and the task of Machine Learning lot easier. And a lot of those have basic database connectivity, but they generally only treat the database as a source of data. So they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. We all know that's not the most efficient way to do it. It's much better if you can leverage Vertica scale and bring the processing to the data. So a lot of these tools don't take full advantage of Vertica because there's not really a uniform way to go do so with these standards. 
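To make that macro rewriting concrete, here is a deliberately simplified sketch in Python. The real plugin does this inside Grafana's Go and JavaScript extension points, and the table and column names below are invented, so this is only meant to show the shape of the rewrite described above.

```python
# Toy macro expansion, not the plugin's actual implementation.
def expand_grafana_macros(sql: str, time_col: str, t_from: str, t_to: str) -> str:
    replacements = {
        f"$__time({time_col})": f"{time_col}::TIMESTAMP AS time",
        f"$__timeFilter({time_col})": f"{time_col} BETWEEN '{t_from}' AND '{t_to}'",
    }
    for macro, vertica_sql in replacements.items():
        sql = sql.replace(macro, vertica_sql)
    return sql


raw = ("SELECT $__time(ts), AVG(altitude) "
       "FROM flight_metrics WHERE $__timeFilter(ts) GROUP BY 1")
print(expand_grafana_macros(raw, "ts", "2020-03-01 00:00:00", "2020-03-02 00:00:00"))
```

Running this prints plain Vertica SQL, with the time macro expanded to a cast and the time filter expanded to a BETWEEN predicate over the dashboard's selected range.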
So instead, we have a project called vertica-ml-python. And this serves as a reference architecture of how you can do scalable machine learning with Vertica. So this project establishes a familiar machine learning workflow that scales with vertica. So it feels similar to like a scickit-learn project except all the processing and aggregation and heavy lifting and data processing happens in vertica. So this makes for a much more lightweight, scalable approach than you might otherwise be used to. So with vertica-ml-python, you can probably use this yourself. But you could also see how it works. So if it doesn't meet all your needs, you could still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. And so this is an older GitHub project. We've actually had this for a couple of years, but it is really useful and important so I wanted to plug it here. With our User Defined eXtensions framework or UDXs, this allows you to extend the operators that vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in a C++, Java or Python or R. And you can call them within the context of a SQL query. And vertica brings your logic to that data, and makes it fast and scalable and fault tolerant and correct for you. So you don't have to worry about all those hard problems. So our UDX examples, demonstrate how you can use our SDK to solve interesting problems. And some of these examples might be complete, total usable packages or libraries. So for example, we have a curl source that allows you to extract data from any curlable endpoint and load into vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a vertica query, all kinds of parsers and string processors and things like that. We also have more exciting and interesting things where you might not really think of vertica being able to do that, like a heat map generator, which takes some XY coordinates and renders it on top of an image to show you the hotspots in it. So the image on the right was actually generated from one of our intern gaming sessions a few years back. So all these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve neat things that maybe no one else has to solve, or maybe that are unique to your business and your needs. Another exciting benefit is with testing. So the test automation strategy that we have in vertica-python these clients, really generalizes well beyond the needs of a database client. Anyone that's ever built a vertica integration or an application, probably has a need to write some integration tests. And that could be hard to do with all the moving parts, in the big data solution. But with our code being open source, you can see in vertica-python, in particular, how we've structured our tests to facilitate smooth testing that's fast, deterministic and easy to use. So we've automated the download process, the installation deployment process, of a Vertica Community Edition. And with a single click, you can run through the tests locally and part of the PR workflow via Travis CI. We also do this for multiple different python environments. So for all python versions from 2.7 up to 3.8 for different Python interpreters, and for different Linux distros, we're running through all of them very quickly with ease, thanks to all this automation. 
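Here is a minimal sketch of what an integration test in that style might look like, assuming a local Vertica Community Edition reachable with the made-up credentials below. The real vertica-python test suite is more elaborate, so treat this as a starting shape rather than its actual code.

```python
# test_smoke.py -- run with: pytest test_smoke.py
import pytest
import vertica_python

CONN_INFO = {
    'host': 'localhost',
    'port': 5433,
    'user': 'dbadmin',
    'password': '',
    'database': 'VMart',
}


@pytest.fixture
def cursor():
    # One connection per test; closed automatically when the test finishes.
    with vertica_python.connect(**CONN_INFO) as connection:
        yield connection.cursor()


def test_round_trips_a_row(cursor):
    cursor.execute(
        "CREATE LOCAL TEMPORARY TABLE smoke (i INT, s VARCHAR(32)) "
        "ON COMMIT PRESERVE ROWS"
    )
    cursor.execute("INSERT INTO smoke VALUES (1, 'hello')")
    cursor.execute("SELECT i, s FROM smoke")
    row = cursor.fetchone()
    assert row[0] == 1 and row[1] == 'hello'
```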
So today, you can see how we do it in vertica-python, in the future, we might want to spin that out into its own stand-alone testbed starter projects so that if you're starting any new vertica integration, this might be a good starting point for you to get going quickly. So that brings us to some of the future work we want to do here in the open source space . Well, there's a lot of it. So in terms of the the client stuff, for Python, we are marching towards our 1.0 release, which is when we aim to be protocol complete to support all of vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is our new feature in vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that we want to take it where you want to bring it. So send us your requests in the Go client fronts, just about a year behind Python in terms of its protocol implementation, but the basic operations are there. But we still have more work to do to implement things like load balancing, some of the advanced auths and other things. But they're two, we want to work with you and we want to focus on what's important to you so that we can continue to grow and be more useful and more powerful over time. Finally, this question of, "Well, what about beyond database clients? "What else might we want to do with open source?" If you're building a very deep or a robust vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM or you're a vendor that resells vertica packaged as a black box piece of a larger solution, you might to have managed the whole operational lifecycle of vertica. There's even fewer standards for doing all these different things compared to the SQL clients. So we started with the SQL clients 'cause that's a well established pattern, there's lots of downstream work that that can enable. But there's also clearly a need for lots of other open source protocols, architectures and examples to show you how to do these things and do have real standards. So we talked a little bit about how you could do UDXs or testing or Machine Learning, but there's all sorts of other use cases too. That's why we're excited to announce here our awesome vertica, which is a new collection of open source resources available on our GitHub page. So if you haven't heard of this awesome manifesto before, I highly recommend you check out this GitHub page on the right. We're not unique here but there's lots of awesome projects for all kinds of different tools and systems out there. And it's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. And this tool is an open source project. So it's an open source wiki. And you can contribute to it by submitting yourself to PR. So we've seeded it with some of our favorite tools and projects out there but there's plenty more out there and we hope to see more grow over time. So definitely check this out and help us make it better. So with that, I'm going to wrap up. I wanted to thank you all. Special thanks to Siting Ren and Roger Huebner, who are the project leads for the Python and Go clients respectively. And also, thanks to all the customers out there who've already been contributing stuff. 
This has already been going on for a long time and we hope to keep it going and keep it growing with your help. So if you want to talk to us, you can find us at this email address here. But of course, you can also find us on the Vertica forums, or you could talk to us on GitHub too. And there you can find links to all the different projects I talked about today. And so with that, I think we're going to wrap up and now we're going to hand it off for some Q&A.

Published Date : Mar 30 2020

SUMMARY :

Tom Wall of Vertica's ecosystem engineering team presents Vertica's open source initiatives at the Virtual Vertica BDC 2020, including the vertica-python and vertica-sql-go client libraries, the client wire protocol they expose, the Grafana data source plugin, UDX and machine learning examples, test automation, and the new awesome-vertica resource collection.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tom Wall | PERSON | 0.99+
Sue LeClaire | PERSON | 0.99+
Uber | ORGANIZATION | 0.99+
Roger Huebner | PERSON | 0.99+
Vertica | ORGANIZATION | 0.99+
Tom | PERSON | 0.99+
Python 2 | TITLE | 0.99+
August 2019 | DATE | 0.99+
2019 | DATE | 0.99+
Python 3 | TITLE | 0.99+
two | QUANTITY | 0.99+
Sue | PERSON | 0.99+
Python | TITLE | 0.99+
python | TITLE | 0.99+
SQL | TITLE | 0.99+
late 2018 | DATE | 0.99+
First | QUANTITY | 0.99+
end of 2019 | DATE | 0.99+
Vertica | TITLE | 0.99+
today | DATE | 0.99+
Java | TITLE | 0.99+
Spark | TITLE | 0.99+
C++ | TITLE | 0.99+
JavaScript | TITLE | 0.99+
vertica-python | TITLE | 0.99+
Today | DATE | 0.99+
first time | QUANTITY | 0.99+
11 different releases | QUANTITY | 0.99+
UDXs | TITLE | 0.99+
Kafka | TITLE | 0.99+
Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives | TITLE | 0.98+
Grafana | ORGANIZATION | 0.98+
PyODBC | TITLE | 0.98+
first | QUANTITY | 0.98+
UDX | TITLE | 0.98+
vertica 10 | TITLE | 0.98+
ODBC | TITLE | 0.98+
10 | DATE | 0.98+
Postgres | TITLE | 0.98+
DataDog | ORGANIZATION | 0.98+
40 customer reported issues | QUANTITY | 0.97+
both | QUANTITY | 0.97+

Amit Walia, Informatica | CUBEConversations, Feb 2020


 

>> Hello and welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, your host of theCUBE. We're here with a very special guest, Amit Walia, CEO of Informatica, newly appointed a little over a month ago. He ran product before that and has been with Informatica since 2013. Informatica went private in 2015 and has since been at the center of digital transformation around data - data transformation, data privacy, everything around data and value in AI. Amit, great to see you, and congratulations on the new CEO role at Informatica.
>> Thank you. It's good to be back here, John. It's been great to follow you.
>> For the folks who don't know you, you've been a very product-centric executive - a products CEO, as they call it - but now you have a company in the middle of a transformation. Cloud scale is really mainstream, and enterprises are looking at multi-cloud and hybrid cloud. This is something you've been on for many, many years - we've talked about it. So now that you're in charge and you have the wheel in your hands, where are you taking it? What's the update on Informatica?
>> Well, thank you. Look, business couldn't be better. To give you a little color on where we're coming from: over the last couple of years Informatica went through a huge amount of transformation - transforming our business model, pivoting to subscription, betting heavily on cloud and the new workloads we've talked about, and on all things new like AI. We exited last year at a billion dollars of ARR - not just revenue, we're a billion-dollar ARR company - and as we pivoted to subscription, the subscription business has been growing north of 55 percent for the last couple of years. That's the scale at which we're running, multiple billions of dollars. And if you look at the other two metrics we keep near and dear to heart, one is innovation: we participate in five Gartner Magic Quadrants and we are the leader in all five - five for five, as we like to call it. That's very critical to us, because innovation in tech is very important. The other is customer loyalty: we were again number one in customer satisfaction, and per the last IDC survey our market share continues to grow and we're number one in all our markets. So the business couldn't be in a better place.
>> I want to get into some of the business discussion, but first, on the Magic Quadrant front: it's very difficult for the folks who aren't in the club to understand that participating in multiple Magic Quadrants is hard, because cloud is horizontal and Magic Quadrants used to be old IT categories. Being in multiple quadrants is the nature of the beast, but being a leader is very difficult, because the Magic Quadrant doesn't truly capture it if you're a pure-play trying to be cloud. You guys are truly that horizontal brand and technology - we've covered this on theCUBE, so it's no secret - but I want to get your comments on being a leader. Today, in these quadrants, you have to be on all the right waves: data warehouses are growing and changing with the rise of Snowflake, you partner with Databricks, machine learning and AI are changing very rapidly with a huge growth wave behind them, and then there are the existing enterprises transforming analytics and operational workloads. This is really challenging. Can you share your thoughts on why it's so hard, and what are some of the key things behind these trends? If it's just analytics, fine, you can do that - but this horizontal data play is not easy. Can you share why?
>> Yes. First, we are actually a very well-hidden secret: we're the only software company - and I'll say that again, the only software company - that was the leader in the traditional world, the legacy on-premise workloads, and is also the leader in the cloud workloads. Not a single other software company can say it was the leader when it started 27 years ago and is still the leader in the Magic Quadrants today. Our cloud, by the way, runs at 10 trillion transactions a month of scale, and obviously we partner with all the hyperscalers across the board. Our goal is to be the Switzerland of data for our customers. The question you ask is a critical one. When you think of the business drivers - what a customer is trying to do - all things cloud and all things AI are obviously there, but first, all data warehouses are going to the cloud, as we just talked about, moving workloads whether analytical or operational, and we are front and center helping customers do that. Second, a big trend in digital transformation is helping our customers with customer experience, and fueling that is our master data management business and the products behind it - driving customer experiences is a big driver of our growth. And third, no large enterprise can live without data governance and privacy management - that's a big thing today. You have to deliver good governance, whether it's compliance-oriented or brand-oriented, privacy and risk management. All three of these span the business initiatives that feed into those five Magic Quadrants, and our goal is to play across all of them.
>> Pat Gelsinger had a quote on theCUBE many years ago: if you're not on the right wave, you could be driftwood - meaning you're going to get crushed. We've seen a lot of companies get to good scale and then get washed away, if you will, by a wave - like the ones we're seeing now in AI and machine learning, which you're in. I want to get your thoughts on this, because whenever there's an executive change there are always questions about what's happening with the company. So talk about the state of Informatica now that you're the CEO. Has there been a pivot, a sharpening of focus? What's going on?
>> I think the goal right now is to scale and hyperscale - that's the word. We're in a very strong position. In fact, we use this phrase internally: the next phase of great. We're at a great place, and we are charting the next phase of great for the company. The goal there is helping our customers with the three big initiatives they're investing in - data warehousing and analytics going to the cloud, transforming customer experiences, and data governance and privacy - and the fourth one that underpins all of them is all things AI. As we've talked about before, all of these things are complex and hard to do - look at the volume and complexity of data - so what we're investing in is what we call native AI. AI needs data and data needs AI, as I've always said, and we're investing in AI to make these things easy for our customers so they can scale and grow into the future. We've also been very diligent about partnering. We partner very well with the hyperscalers - AWS, Microsoft, GCP - and Snowflake is a great partner of ours, Databricks is a great partner of ours, Tableau is a great partner of ours. Our goal is always customer first: customers are investing in these technologies, and our goal is to help them adopt them, not for the sake of the technology but for the sake of transforming those three business initiatives.
>> You brought up what I was going to ask you next: Snowflake and Databricks. Ali Ghodsi of Databricks has been on theCUBE - a good friend of ours, and he's got chops; Berkeley, not Stanford, he'll kill me if I get that wrong. Databricks is doing well; they made some good bets and it's paying off for them. Snowflake is a rising star - Frank Slootman's over there now - and they are clearly a choice for modern data warehouses, as is AWS Redshift. How are you working with Snowflake? How do you take advantage of that? Can you unpack that relationship?
>> It's a very deep partnership. Our goal is to help our customers as they pick these technology choices - for data warehousing, as an example, which is where Snowflake comes into play - and make sure the underlying data infrastructure works seamlessly for them. Customers have built complex logic sitting in the old technologies, and as they move to anything new they want that transition, that migration, to be as seamless as it can be. Typically they'll start something new before they retire something old. With us they can carry all of that business logic - 27 years of business logic - seamlessly, and run it natively, in this case in the cloud. So we give them that path, and also the ability to have best-of-breed technology in the context of data management to power these new infrastructures wherever they're going.
>> Let me ask you about industry trends. What are the top trends driving your business, your product direction, and customer value?
>> Look, digital transformation has been a big trend, and it has fueled things like customer experiences being transformed, so that remains a big vector of growth. I would say cloud adoption is still in relatively early innings - you love these baseball analogies - call it the second or third inning, as much as we'd like to believe cloud has fully arrived. Customers moved their analytical workloads first, and that's still happening; the operational workloads are still in their infancy. So that's still a big vector of growth and a big trend we see for the next five-plus years.
>> And you guys are in the middle of that.
>> Absolutely, because if you're running a large operational workload, it's all about the data at the end of the day. You can change the app, but it's the data you want to carry, the logic you've written that you want to carry, and we participate in that.
>> I've asked you this before, but I want to get the modern update. Pure cloud, born in the cloud - for startups it's easy to say that and do that; everyone knows that. Hybrid is clear now; everyone sees it as an architectural thing. Multi-cloud is kind of a state of "I have multiple clouds," but being truly multi-cloud is a little bit different - maybe a downstream conversation, but certainly relevant. So as cloud evolves from public cloud to hybrid and maybe multi - certainly multi - how do you see those things evolving for Informatica?
>> Well, we believe in the word hybrid, and I define hybrid as exactly these two things. One, hybrid is multi-cloud - you can have multiple clouds. Second, hybrid means ground and cloud are going to interoperate for a period of time. So we sit in the center of this hybrid cloud trend, and our goal is to help customers go cloud native while making sure they can run whatever old business they were running as seamlessly as possible, until at some point they can cut over. Which is why, as I said, our cloud-native platform - Informatica Intelligent Cloud Services - runs at scale globally, on all the hyperscalers, at ten-plus trillion transactions a month, yet we still let customers run their on-prem technologies as long as they need to, because they can't just rip the band-aid off. So multi-cloud, ground and cloud - our goal is to help large enterprise customers manage that complexity. AI plays a big role, because these are very complex environments, and our investment in AI - our AI engine is called CLAIRE - is to help them manage that in as automated and as seamless a way as possible, and most importantly in the most governed way, because that's where the biggest risks come into play. That's where our investments are.
>> Let's talk about customers for a second. I want to get your thoughts on this, because at Amazon re:Invent last year in December there was a meme going around that we started on theCUBE: if you take the "T" out of cloud native, it's cloud naive. The point was that doing cloud native makes sense in certain cases, but if you're not really thinking about the overall hybrid picture and the architecture, you can end up in a naive situation. I asked Andy Jassy this, and I'll ask you the same question: what would be naive for a customer thinking about cloud, whether they want to be cloud native or just operate in a cloud? What should they avoid so they don't fall into that naive category - the "hey, I'm doing cloud for cloud's sake" trap? There's this perception that you've got to do cloud right. What's your view on cloud native, and how do people avoid the cloud-naive label?
>> It's a good question. When I talk to customers - and I meet hundreds of them across the globe in a year - my advice is to really think of their cloud as a reference architecture for at least the next five years, if not more, because technology changes. Think of a reference architecture for the next five years, and within it think of multiple best-of-breed technologies that can help you. Think best-of-breed as much as possible - now, you're not going to have hundreds of different technologies running around, because you have to scale them - so be best-of-breed, yet settle on what I call a few platforms, and make sure you have the right connection points across them. Different workloads will be optimal for different cloud environments: an analytical workload, an operational workload, a financial workload will each work best somewhere different. So to me, put the focus on the right business outcome, work your way back to which cloud environments are best suited for it, and build that reference architecture thoughtfully with a five-year goal in mind, rather than jumping to the next exciting hot thing and trying to experiment your way through it - that will not scale. That would be the right way to go.
>> So it's not naive to focus on the business problems and operate them in a cloud architecture - that's what you're saying. Okay, let's talk about the customer journey around AI, because this has become a big one. You've been on the AI wave for many years, but now that it's fully mainstream in the enterprise, how are the application software folks looking at this? If I'm an enterprise and I want to go cloud native to make my apps work - apps are driving everything these days, and you play a big role because data is more important than ever for applications - what's your view on the app developer and DevOps market?
>> To me the big change we see - and we're going to talk a lot about this in a couple of months at Informatica World, our user conference in May - is how data is moving to the next phase. What developers today are doing is building apps with data in mind first: data-first apps. If you're building, let's say, a great customer service app, you first have to figure out what data you need to serve a customer before you go build the app. That is a fundamental shift. And in that context, in a cloud-native environment you obviously have a lot of flexibility to bring data over, and DevOps is getting complemented by what we see as DataOps - having all kinds of data available to you to make those decisions as you build an application. And, as in the discussion we were having before, there is so much data that you won't be able to understand it all, so you have to invest in metadata so you can understand the data about the data. I call metadata the intelligent data: if you're an intelligent enterprise, you've got to invest in metadata. Those are the places we see developers going first, and from there they build, from the ground up, what we call the more intelligent apps of the future, not just business process apps.
>> The cloud native versus cloud naive conversation we were just having is interesting. You talk about best-of-breed - I want to get your thoughts on some trends we're seeing, even in cybersecurity with RSA coming up. There's been consolidation - you saw Dell just sold RSA to a private equity company - so a lot of these shiny-new-toy companies are being consolidated in, because there's too much for companies to deal with. You're also seeing skills gaps and skills shortages - there aren't enough people. Now you have multiple clouds - Amazon, Azure, Google GCP, Oracle, IBM, VMware - and a shortage problem on top of it. This is putting pressure on customers. With that in mind, how are customers reacting, and what does best-of-breed really mean?
>> That's actually a very good point. Look, we all live in Silicon Valley, so we get excited about the latest technology, and we have the best skills here - even though we have a skills problem here too. But as you move away from Silicon Valley - and I fly all over the world - you see that there isn't a whole lot of developer talent everywhere that understands the latest cutting-edge technology that happens here. Our goal has been to solve that problem for our customers. We want to help developers, but as much as possible we give customers the ability to have a handful of skilled developers, take our offerings, and let us abstract away the complexity so they're working at a higher level. The underlying technology comes and goes - it will come and go a hundred times - and they don't have to worry about that. Abstract away the underlying changes in technology, focus at the business logic layer, and you can run your business over the course of 20 years. That's what we've done: customers who invested with us have run their businesses seamlessly for two or three decades while so much technology changed around them.
>> And the cloud is right here scaling up. I want to get your thoughts on the different clouds. I see Amazon Web Services as the number one cloud hyperscaler - talking pure cloud, they have more announcements, more capabilities. Then you've got Azure, again hyperscale, trying to catch up to Amazon, more enterprise-focused and doing very well there - I said on Twitter they're mopping up the enterprise because they have an install base and they've been leveraging it very well; Satya Nadella and team have done a great job. Then you've got Google trying to specialize and figure out where they fit, and Oracle, IBM, and everyone else. You have to deal with all of this - you're kind of an arms dealer with data, and that's a bad analogy, but you get my point. You have to play and operate with value in all the clouds - it's not an aspiration, it's a requirement. How is that going, and what are the different clouds like?
>> Well, I always begin with the philosophy that it's customer first. You go where the customers go, and customers choose different technologies for different use cases as they see fit. Our job is to make sure our customers are successful, so we begin with the customer in mind and solve from there. Number two, it's a big market - there's plenty of room for everybody to play. Of course there's competition across the board, but there's plenty of room, and our job is to assist all of them to help, at the end of the day, our joint customers. We have great success stories with all of them. That has always been Informatica's philosophy: customer first. We partner with the critical strategic partners in that context, and we've invested in deep partnerships with all of them - they've all been at Informatica World, you've seen them. So again, we obviously believe in being the Switzerland of data, but keep the customer in mind all the time and everything follows from there.
>> What does multi-cloud mean to your customers, if you're customer-centric? We hear people say "I use this for that," and I get that, but when I talk to CIOs and CISOs with real dollars interacting with the business, there tends to be a gravitational pull toward one cloud. A lot of people are building their own stacks; in-house development has shifted to be very DevOps and cloud native, and then they'll have a secondary cloud. They recognize they have multiple clouds, but they're not spreading their staff around, because of the skills shortage. Are you seeing that same trend, and what do you see as multi-cloud?
>> Well, it is multi-cloud. People sometimes don't realize they're already in a multi-cloud world - you have so many SaaS applications running around. Look around: you have Workday, you have Salesforce.com, and I can keep going on and on. Similarly, multiple platform clouds are there: people use Azure for some use cases and may want to go to AWS for certain other use cases. Quite naturally, customers begin with something and scale from there, but as I talk to customers, they realize, "Hey, I have use cases that are optimally served by different clouds," and they end up multi-cloud - but they all have to begin somewhere before they go somewhere.
>> So there are multiple clouds - which I agree with, by the way, and we've talked about this on theCUBE a lot - and then there's interoperability among clouds. Remember multi-vendor back in the old days? Multi-cloud kind of feels like a multi-vendor value proposition. But if I have Salesforce or Workday in these different clouds, and Amazon or Azure where I'm developing, what is multi-cloud interoperability? Is it the data control plane? What problems are customers facing that they want to turn into opportunities?
>> A good example of multi-cloud: one of the biggest areas of growth for us is helping customers transform the customer experience. Think about an enterprise that wants a great understanding of its customer, and then think about the number of places that customer data sits. One of the big areas of investment is the CRM product, Salesforce.com - good customer data sits there - but there's also ticketing data, marketing data, and legacy applications where customer data sits. More often than not, when we talk to a customer, it sits in at least 20 places within the enterprise, and then there's so much customer data sitting outside the firewall - clickstream data, third-party data, shared partner data. Bringing that data together becomes extremely important for having a full view of your customer and delivering a better customer experience.
>> So the customers have the problem.
>> It's a huge problem right now, across the board. Customers say, "I want to serve my customer better, but I need to know my customer better before I can serve them better." We are squarely in the middle of that - being the Switzerland of data, fully understanding the application layer and the platform layer, we can bring all of that together, and through the lens of our Customer 360, which is fueled by our master data management product, we let customers see that full view. From there you can service them better, give them a next best offer, or understand their full lifetime value. That's how we see the world and how we help our customers in this really fragmented cloud world.
>> That's your primary value proposition.
>> It's a huge value proposition, and again, as I said, always think customer first.
>> And you've got your big event coming up this spring, so I'm looking forward to seeing you there. I want to get your take, now that you're looking at the next great chapter of Informatica: what is your vision? How do you see that 20-mile stare out in the marketplace as you execute? You're a product-oriented CEO - you have the product chops and now you're leading the team. What's your vision, what's the 20-mile stare?
>> As simply as possible: we're going to double the company. Our goal is to double the company across the board. We have a great foundation of innovation that we've put together, we remain paranoid all the time and always look at where the world is going in order to serve our customers, and as long as we have great customer loyalty - which we have today - the foundations of great innovation, and a great team and culture at the company, which we fundamentally believe in, our vision right now is doubling the company.
>> That's awesome. Well, I really appreciate you taking the time. One final question - I want to get your thoughts: in Silicon Valley and in the industry we're starting to see Indian-American executives become CEOs. You now at Informatica - congratulations - Arvind over at IBM, Satya Nadella. This has been part of the culture of technology for generations; I remember when I broke into the business in the late '80s and '90s, it was the pure love of tech, and the meritocracy of technology is at play here. This is a historic moment, and it's been written about, but I want to get your thoughts on how you see it evolving, and your advice for young entrepreneurs out there, future CEOs. What does it take to get there? What's it like? What are your personal thoughts?
>> Well, first of all, it's been a humbling moment for me to lead Informatica - it's a great company and a great opportunity. I can say it's the true American dream. I came here in 1998, and like a lot of immigrants, I didn't have much in my pocket. I went to business school, I was deep in loans, and I believed in the opportunity. I think there is something very special about America, and something really special about Silicon Valley, where at the end of the day it's all about value, all about meritocracy - the color of your skin, your accent, those things don't really matter. It's such an embracing culture here. My advice to anybody is: believe - and I genuinely use that word. I've gone through stages in my life where you sometimes doubt it, but you have to believe and stay honest about what you want. And there is no substitute for hard work. Sometimes luck does play a role, but there is no substitute for hard work, and at the end of the day good things happen.
>> As we say, for the love of the game, love the tech. You're a tech athlete - always love to interview you, and congratulations. It's been great to follow your career, get to know you and Informatica, and it's great to see you at the helm.
>> Thank you, John. Pleasure being here.
>> I'm John Furrier here with a CUBE Conversation in Palo Alto, getting the update from the new CEO of Informatica, Amit Walia - friend of theCUBE, a great tech athlete, and now running a great company. I'm John Furrier, thanks for watching.
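
For readers who want to make Walia's "metadata is the intelligent data" point concrete, here is a minimal sketch - assuming nothing about Informatica's own catalog, CLAIRE, or any of its APIs - of collecting data-about-the-data (schema, volume, freshness) so a data-first application can reason about its sources before it uses them. The file names are hypothetical.

```python
import csv
import datetime
import json
import os
from typing import Dict, List


def profile_csv(path: str) -> Dict:
    """Collect basic technical metadata for one CSV dataset."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, [])          # column names from the first row
        row_count = sum(1 for _ in reader)  # remaining rows = data volume
    stat = os.stat(path)
    return {
        "dataset": os.path.basename(path),
        "columns": header,
        "row_count": row_count,
        "last_modified": datetime.datetime.fromtimestamp(stat.st_mtime).isoformat(),
    }


def build_catalog(paths: List[str]) -> List[Dict]:
    """Aggregate per-dataset metadata into a small catalog an application
    can query before deciding which customer data sources to pull from."""
    return [profile_csv(p) for p in paths]


if __name__ == "__main__":
    # Hypothetical file names - substitute whatever sources actually exist.
    catalog = build_catalog(["crm_contacts.csv", "support_tickets.csv"])
    print(json.dumps(catalog, indent=2))
```

The idea is simply that an intelligent app consults the catalog, not the raw data, first; a production system would layer lineage, ownership, and quality metrics on top of this.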

Published Date : Feb 18 2020


Jason Thomas, Cole, Scott & Kissane | Pure Accelerate 2019


 

>> From Austin, Texas, it's theCUBE, covering Pure Storage Accelerate 2019. Brought to you by Pure Storage.
>> Howdy y'all, and how are you doing, Dave? I'm Lisa Martin with Dave Vellante, and as you can guess, we're in Texas at Pure Accelerate 2019 - day one of our coverage here in the buzzy expo hall. Pleased to welcome one of Pure's customers to theCUBE: Jason Thomas, the CIO of Cole, Scott & Kissane, or CSK Legal. Jason, welcome to the program. So talk to us a little bit about CSK Legal. You're based out of Florida, you're the CIO - give us a picture of the law firm, your IT environment, and your role as leader of information.
>> So Cole, Scott & Kissane has been around 20-plus years. I joined about three and a half years ago. We now have 13 offices - we just opened the 13th - and we're currently the largest law firm in Florida, and only in Florida. Interestingly enough, I actually live and work out of Boston, but these days there's no reason you can't work remote. I go down there often enough as needed.
>> You can avoid the hurricanes by living in...
>> A snowstorm over a hurricane any day.
>> Better pro sports in Boston, better college sports in Florida.
>> Yeah, no one cares about college sports.
>> Best of both worlds. All right, so we're here at Pure, and you guys have been a Pure customer for a while. Give us a picture of the legal landscape from a data volume perspective - I can imagine tons of documentation, and I think you have hundreds of attorneys. What were some of the challenges three years ago when you were looking for the ideal storage services, the things you were looking for companies like Pure to help eliminate so you could deliver on the business needs?
>> So we're a heavy, heavy volume business - tons and tons of documents. When I came on board three and a half years ago, the environment was basically a lot of physical servers and a lot of local storage, which quite frankly scared me. I came from a NetApp shop at my previous company, and one of my first initiatives was bringing a SAN into the firm, centralizing all the storage, and setting up DR along with that. So I started an evaluation process within a few months of coming on board.
>> So you knew NetApp - sorry, Dave. From your perspective, what were some of the things you were looking for, where when you found Pure it was like, checks all the boxes?
>> I can tell you what I wasn't looking for: I wasn't looking to hire a storage admin. I wanted to find something super simple to manage - something that I could manage, or any of the guys, any of the sysadmins, could manage. That was the starting point of the evaluation.
>> So it sounds like you had a bunch of discrete direct-attached storage, and you said that concerned you, presumably because it was hard to manage and get a handle on. So you wanted to consolidate.
>> If we had our SQL box go down, it was down for a day and we'd be restoring from the previous night's backups. Not really a good setup at the time.
>> Are most of your attorneys centrally located, or are they distributed?
>> They're spread out - we have 13 offices, so they're all over the place. But a lot of them work remote now too, so that's becoming a big thing as well.
>> The reason I ask is that the pendulum is swinging, right? You had all DAS, then you went to a SAN, and now there's cloud ahead. I don't know if you're taking advantage of cloud - are you?
>> We are, actually. We've slowly started to move a lot of our main line products to the cloud, or to a cloud edition of the product. I'd say we're probably 50 to 60 percent cloud now.
>> So you were tied up during the keynotes this morning, but one of the things we heard is that you can have the Pure management experience no matter where your data lives - bring the Pure cloud experience to your data on-prem and in the public cloud, hybrid. Is that something that's appealing to you? Does that resonate?
>> Absolutely. Look, I can actually log in to Pure on my phone if I want to and check on things. Not that I ever do - I'll say I never really need to look at it.
>> Well, you're the CIO, right? You've got other things to worry about.
>> I do like to be involved, though - fingers in it.
>> It's interesting - a lot of times CIOs delegate, but you're technical, and I love that; we see a lot of technical CIOs as well. But you also don't want to hire a storage admin - you want generalists to be able to deal with this stuff. Okay, so the question: why Pure? What did you look at?
>> We looked at HP 3PAR - big name - we looked at Pure, and we looked at Tintri. With 3PAR I knew it would be management-heavy, so I tossed that one out pretty quickly - not that it's not a great product, it just wasn't for me.
>> Not the right fit.
>> Not right for us. So it came down to Pure and Tintri. I had a buddy who worked at another law firm, and he said, "Look, don't even waste time, just go Pure." There's a phrase I use sometimes that I stole from him - he said, "Dude, this is like storage crack. You'll love it."
>> Storage crack - wow, they need a T-shirt.
>> The first hit's free.
>> Okay, so that was the right fit for you, and it was a peer that enticed you. I presume you take a lot of peer advice.
>> A lot of it was peer advice - we didn't even do a POC.
>> Wow, so this is a peer that you obviously trust.
>> He showed me the interface on a phone call one time and said, "This is it." I'm like, "That's it?"
>> What did you actually bring in - what are you using?
>> I'm sorry?
>> What products are you actually running on Pure?
>> Oh, sorry - Exchange, SQL, our main line bookkeeping, time and billing - that's the main of it.
>> All the legal apps, all the legal data, the data stores. Which product from Pure is that, do you know - the all-flash array?
>> Yes, it's the FlashArray.
>> Okay. So thinking about the before and after - kind of the as-is and the to-be - how would you compare and contrast the pre and post environment for your business?
>> That's a good question. I felt more comfortable sleeping at night. Why? Just the reliability and the ease of management. If we need to bring up a volume or expand a volume, we can do it very quickly - it doesn't take a rocket scientist. And I don't believe I've ever talked to anybody that's had an outage or had an array go down. In fact, they tend to tell me before we even know there's an issue, and they jump on it right away. We've never had an issue, never had an issue with an upgrade. It's been fantastic, and the support is awesome.
>> No need for a rocket scientist or a storage admin, and you're sleeping better - very good things so far in this interview. So in terms of the traditional storage model you're well familiar with from NetApp in a previous role - the whole "every three years we've got to switch things out and disrupt operations" - along comes Pure with the Evergreen model. How much of that felt like marketing, and how much of it actually means something?
>> You're reading my mind. At first I was like, "Oh, so I'm pre-paying for support." But once I understood what it really was and the advantages of it, it made sense. We didn't think we'd upgrade as much as we already have - we've already gone through two storage upgrades, two controller upgrades. That's really where it makes sense, the storage controller upgrades. We wanted to start a little small in the beginning, and then our business grew like crazy and our storage needs expanded, so we've been through at least two upgrades over the years.
>> So you bring in an array, you pay basically a perpetual license up front, boom, and then you're on the Evergreen model - a subscription in perpetuity, is that correct? You essentially go from CapEx to OpEx over the life cycle, and when you add capacity you pay for that capacity.
>> Right - you return the equipment, you get the credit, and you get new equipment.
>> And it's truly non-disruptive?
>> We've been through two upgrades - two controller upgrades, which are major upgrades - and both of them we did at 5 p.m. Not that the firm closes at five; it was just to feel comfortable. Normally you'd do it at five because if anything goes down, no one's working - but here, attorneys are always on. They were really smooth, no problems. They've got a great strategy and method to the upgrades, and we stayed up the entire time.
>> It is a big issue for practitioners. We've done some quantification over the years, and the minimum to migrate an array was something like $50,000 when you add it all in - people's time, the cost of the array, the complexity. First of all, does that sound like a reasonable number, probably even conservative? And has that essentially been eliminated? It still takes some planning, I guess.
>> Pretty much. And as far as the planning goes, these guys take care of all that. When we're ready to make the switch, they just log in and do their thing, and then it's done.
>> And in terms of training for yourself or your team, when you've done these two upgrades, what has that process been like?
>> Log in and figure it out.
>> Sounds pretty simple.
>> There's not much to it.
>> So what's on the CIO's mind these days? Obviously you don't stay awake at night thinking about storage anymore.
>> I stay awake over security.
>> Talk about that.
>> Data breaches - it seems like every week now I'm on my Twitter feed and there's a new breach. It's almost gotten to the point where it's just another thing that happens.
>> So what's your challenge there? Is it managing all these tools, knowing what to respond to, the skill sets - all of the above?
>> My biggest thing is, I believe in lots of redundancy. Starting with Pure, we have a second array in another data center outside the state, and we replicate between the two arrays - that's where we started on that side. We also run regular backups - we run Rubrik for that - and we've now established a cloud strategy for backups: immutable, long retention. We send our backups to the cloud as well. So now I feel like I can sleep - I just have to wait for something to happen, I guess, and hopefully the strategy is solid.
>> Okay, so DR and backup are part of that overall data protection and security strategy, which obviously extends into the perimeter, devices, et cetera. Do you have a SecOps team?
>> We don't have a dedicated CISO.
>> Well, you're the CISO.
>> Exactly, exactly - we share it. A small group of us are also the security team, and I think at this point we've got a pretty solid security stack. Always room for improvement, always looking at the new stuff out there - there's all kinds of cool tech. Sometimes I go a little overboard and the team gets a little upset with me, because I want to do another POC and they're like, "We have three running."
>> Sounds like you have a pretty solid foundation running on Pure - you seem to me like kind of a Pure customer for life. They should at least give you a T-shirt.
>> Give me at least a T-shirt.
>> I'll tell you what really sold me within the first year. We had a VM that wouldn't boot up, and we couldn't figure out what was going on. We initially thought it was a VMware issue, so we called support and they couldn't figure it out - they said it was a Pure issue. So we decided to call Pure. It was eight or nine o'clock at night, the guy got on the phone, and come to find out there was an issue with the VMware datastores - they were crossed, and one had been deleted. Apparently I had deleted a small datastore that had nothing on it, but it was linked to the datastore this VM was on for some unknown reason - lo and behold, a VMware issue. But the guy on the line knew a resource within Pure who was a big VMware guy, and he came in, actually logged in, and helped us unlink the datastores. Totally not a Pure issue, but he went the extra mile to help us recover that VM and get it backed up the same night.
>> We've got to go, but let me ask you a question. You work with a lot of vendors - what do vendors do that really ticks you off, that they should stop doing? Here's your chance.
>> I don't like the term roadmap.
>> Really?
>> Any time I hear roadmap, it means "we don't have it yet, but we're going to look into it."
>> So don't do business with people that have no roadmap.
>> Jason, thank you so much for sharing your candor with Dave and me on theCUBE. We appreciate it, and congratulations on all your success.
>> Thank you.
>> For Dave Vellante, I'm Lisa Martin. You're watching theCUBE at Pure Accelerate 2019. Thanks for watching.

Published Date : Sep 17 2019
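
A note on the "immutable, long-retention" cloud copies Jason Thomas describes: one common way to get that property on AWS is S3 Object Lock. The sketch below is a generic illustration under assumed names (bucket, key, retention period) - it is not CSK's actual setup and not anything Pure- or Rubrik-specific - and it assumes the bucket was created with Object Lock enabled and that AWS credentials are configured in the environment.

```python
import datetime

import boto3  # assumes AWS credentials are available to the default session

# Hypothetical bucket name; Object Lock must be enabled when the bucket is
# created - it cannot be turned on afterward.
BUCKET = "example-legal-backups"
s3 = boto3.client("s3")


def upload_immutable_backup(local_path: str, key: str, retain_days: int = 365 * 7) -> None:
    """Upload a backup file with a compliance-mode retention lock, so the
    object cannot be deleted or overwritten until the retention date passes."""
    retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=retain_days)
    with open(local_path, "rb") as body:
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=body,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=retain_until,
        )


if __name__ == "__main__":
    # Hypothetical file and key names, purely for illustration.
    upload_immutable_backup("nightly-export.bak", "backups/2019-09-17/nightly-export.bak")
```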


Yasmeen Al Sharaf & Abdulla Almoayed | AWSPS Summit Bahrain 2019


 

>> from Bahrain. It's the Q covering AWS Public sector Bahrain brought to you by Amazon Web service is >> Okay Welcome back, everyone to the cube coverage We are hearing by rain for a W s summit where cloud computing is changing the games. The Fintech panel discussion Yasmine el Sharif, head of Fintech Innovation Unit, Central Bank of Rain Thank you for joining >> us. Thank you for having me >> Elmo Yacht. Whose founder and CEO of Ammonia Technologies Thank you for coming on. Thank you for having so We're very robust Conversation before they turn on the cameras Fit in tech is hot. I'll see in global fintech Everyone knows what that is, but it's interesting because entrepreneurship and innovation is not just for start ups. It's for countries and hearing by rain, this ecosystem and the mandate to go cloud first has had a ripple effect. We were talking about open banking, mandate, open banking versus regulation, chasing innovation, holding it back. You guys here taking a different approach. Take a minute to explain the philosophy. >> Yeah, I think there's there's benefits to being late adopters to the game. I think in the case of behind it's been a very interesting journey. I think the we started with the whole AWS. But if you look at the prerequisites of technical adoption and creating Data Pool's for analytics to run on, I think the what's interesting about Bahrain is it's really led by regulation. If you look at the prerequisites of creating a digital economy, what's happening in financial service is, or the digitization or openness of financial service. Is it really one context off the bigger picture of Bahrain's digitization plan or the economic strategy? And really, what happens here is if you look at first built the data fools and or the data centers bring a W. A s in and create the data centers. Number two is creator data or cloud First policy. Move the entire government onto the cloud and then give the ownership of the data to the people by implementing the Bahrain personal data protection laws. Once you've done that, then you've given the ownership to the people and you've created what we have is we started with a unique identifies. So the citizens of the country or the residents of the country have a unique identify our number where they're known by once you've done that and then you start mandating certain sectors to open up with a P I integrations. You're creating a very, very interesting value proposition. It creates a much faster you leap frog, a generation of technology. You're going from the classic screen scraping technologies or whatever to a very a completely open infrastructure and open a P I. Where things air cryptographic Lee signed. People are in control of their data, people can control the mobility of their date, and you're really creating a very robust data pool for a lot of algorithms to sit on. >> You know what I love about this has me were talking before he came on cameras that you guys are thinking holistically as a knocking operating system is being in a geek that I am. I love that. But it's not just one thing you're doing, it's a it's a system and it's it's a modernization view. Now we all know that financial systems, power economies and fin tech innovation unit, but you're in. This is important. You gotta have that. That leg of the stool, that pillar that's working absolutely sandbox. You have technology mechanisms to roll in tech, move things quickly moving fast. What's the strategy? What if some of the key things What's the sandbox? 
>> Let me start by saying The Kingdom of Bahrain has always been considered as a centre of excellence as a financial centre of excellence. And we do realize at the Central Bank in order for us to maintain that position, we have to innovate. We have to remain dynamic and agile enough to make the necessary reforms within our regulations to meet the dynamics off the digital economy. Technology is changing the paradigm off the financial system on the changes happening extremely fast. Regulators have had to come up with a mechanism whereby they can harness and test the feasibility of these innovations whilst putting the risks in a controlled environments as regulators were not typically assigned to host incubators to host startups. However, because of all this change in technology, it has become extremely essential that we come up with a regulatory approach to enable startups as well as existing financial institutions to test out their innovative financial solutions in a controlled environment. So a sandbox is really a controlled live bounds time bounds environment, enabling startups as well as existing financial institutions to test out their innovative solutions under the strict supervision off the regulator, without being required to abide by full regulatory requirements directly with volunteer customers. >> You have to put this trick standards now but means sandboxes. What developers? No, it's a collaborative approach, absolutely not being an incubator. But you're setting up a rules of engagement, Senator startups to take what they know how to do >> exactly >> end up sandboxes in the cloud. That's what everyone does >> absolutely, and our journey with the sandbox has been very successful. We've launched our sandbox back in 2000 and 17. Up to date, we have 35 companies that have been admitted into the sun box. We have been able to graduate to companies successfully. One of them has been licensed as a crypto acid provider, the other as an open biking service provider. We have four other companies in the pipeline ready to graduates. I think all in all, our experience with Sun Box has enabled us to grow and develop his regulators. It has enabled us to maintain open communication with animators, to come tea, to learn the needs of innovators and to enable innovators to live, get familiar realized. With the regulatory environment of the Kingdom of Bahrain, >> you know, you guys are doing some really pioneering work. I wouldn't want to say it's really commendable. I know it's fast and new, but if you look at the United States with Facebook there now asking to be regulated regulation if it comes too late is bad because you know things got out of control and if you're too early, you can put a clamp down and stifle innovation. So the balance between regulation and innovation has always been an art, if you will. >> Exactly. >> What do you guys, How do you view that? What's the philosophy? >> So from a regular perspective, we think that regulation and innovation goes hand in hand, and we have to embrace innovation open heartedly. However, having said that, regulators have to run all common sense checks, meaning that we don't accept an innovation that will potentially pulls more harm to the financial stability of the economy as opposed to the advantages that puzzles. 
We've passed the number of different regulations to support innovation in the financial services sector dating back to 2014 when we first issued our payment service provider licenses allowing more competition and innovation within the payments sector. We've issued CROWDFUNDING regulations. We've issued robo advisory regulations. We've issued insurance aggregator regulations, crypto asset service provider regulations, open banking regulations, Justin in a few. And I think that each of the regulations that we have issued solves a specific pain point, whether it's to enhance financial inclusion, whether it's to empower customers by retaining ownership back, uh, of their financial information and data, Whether it's too also empower startups and to enable them to get it gain access to funding through digital platforms. >> Have dual. I want to get you in here because as an entrepreneur, like I love all that great, I just wanna get funded. I want my product to market. I need a capital market that's going to be robust. And I need to have that's capital providers state venture capital for private equity supporting their limited partners. So I want to see that I don't wanna be standing there when I need gas for my car. I need fuel. I got to get to the next level. This is what I want And he bought >> on. I think, the one thing John that is very important that people look at in the context of fintech today. Raising money investing into fintech Regulatory uncertainty is one that defines scalability today. Once your technology is proven, where you go next really is dependent on the regulator that you'll be dealing with in the context of that specific activity that you'll be performing. In the case of Bahrain, I must say we were blown away by the receptiveness. We in what way? Yes, yes, mean mentioned open banking, for example. We got into the regulatory sandbox, which you hear a lot about sandboxes all around the world. We got into the sandbox. We got into the sandbox with contact with with with an idea of building and accounts aggregator direct FBI integration to these banks. And we got into the sandbox. We There were no regulations at the time. They like the idea. We started bouncing ideas back and forth on how to develop it. We developed the technology. We started piloting the technology. We integrated to 15 banks in the country on a sandbox environment. The consul, the white paper on open banking, was listed. They sent it out for consultation. We integrated on a production environment to more than 70% of the banks in it in the country. The central Bank of Bahrain mandated open banking across the entire nation. With every retail bank all in a period of less than 18 months. That's insane. That's the kind of context. So as a no Vester exactly so as an investor or as an entrepreneur that looks at the sector. The question is here. If anything, I think the regulator in Bahrain is the one that's leading the innovation and these air the benefits of being late adopters. We get to test out and see what's going on in the rest of the world and really develop great regulations that will embrace and and foster innovation. >> You know, I love the liquidity conversation because this neck goes to the next level. Liquidity is a wonderful thing started. Wanna go public? If that's what happens in the U. S. Mergers and acquisitions, we have an incubator that we're gonna interview here flat Six labs just had to come. One of their companies got sold to match dot com. So you're seeing a lot of cross border liquidity. 
Yeah, this is a new dynamic. It's only gonna get stronger, more come. He's gonna come out of my reign in the region. Liquid is important. Absent. So how do you guys want to foster that? What's the strategy? Continue to do the same. >> So from a regular perspective again, we don't really holds. Thank you. Beaters are actually two accelerators, but what we do as we refined our regulations to support startups to gain access to liquidity, for example, are crowdfunding regulations that have been passed in 2017 and they support both. Equity is one of financing crowdfunding, including conventional as well as Sharia compliant. Crowdfunding transactions were also currently working on refining our regulations for enabling venture capitalists to take roots and marine and to support these startups. >> Yeah, I think John, you mentioned two things you mentioned regulation leading. When you mandate something like open banking, you are ultimately pushing the entire sector forward, saying you better innovators fastest possible. And there's a gap that you need to you need to basically bridge, and that really loosens up a lot of liquidity when it comes to partnerships. When it comes to acquisitions, when it comes to these banks ultimately looking for better solutions, so they that's the role of the regulator. Here we are seeing a lot of VC activity come to the region right now, the region is only starting to open up. AWS just went live a few months ago. We're seeing the cloud adoption start to really take effect, and this is where you'll start seeing real scalability. But I think the most compelling thing here is Previously people would look at the Middle East with a boot with a bit of skepticism. How much innovation can really take place and the reality is here. There are a few prerequisites that have been put in place. Foreign ownership is at 100% cloud. First policy. There's a lot of things that can really foster innovation. And we're, I mean, where as an entrepreneur, where living proof off this whole Team Bahrain initiative of the fact that you can get in you can build in accounts aggregator in a country that never even had the regulations to adopted to mandate it and to be Ultimately, I think Bahrain will become the global reference point for open banking very soon because it has mandated a regulation of open AP eyes with cryptographic signatures ultimate security frameworks with a robust infrastructure across an entire nation. And don't forget, we still have a population of below the age of 30 70% of our population below. So it gives a very compelling story t test your technology. And then what we end up saying is, once you're on AWS or any cloud for that matter than the scalability of the technology just depends on where you want to go in there. >> No doubt the demographics are solid here, and I love the announcement here. The bachelor's degree. Yeah, cloud computing. We've seen some data science degrees, so new skills are coming on. My vision is interesting. I think that would interest me about the region of Amazon. Being here is these regions create revitalisation? >> Yeah, you >> guys are in perfect position with this Modernization trend is beautiful, not only to be a template for the world but a center for global banking. So I think to me, is that, you know is I'm trying to put together and connect the dots of where this goes in the next two decades. I mean, if crypto currency market continues to get matured and stabilized, that's still flowing with a lot of money. 
A lot of money in play. >> Absolutely. >> It's not just the regional business to be done here, or for companies to come here; it's you guys playing a role in the global financial system. That's of interest to me. What's your vision? >> Absolutely. I think that regulators around the world are starting to realize the importance of collaborating together to work on policy challenges in line with innovation within the financial services sector, and to share experiences and lessons learned. At the Central Bank of Bahrain we're a member of the Global Financial Innovation Network, which is an initiative that was launched by the FCA in the UK. We're also a member of the fintech working group of the GCC, and through these two different initiatives we work alongside other regulators to collaborate on solving policy issues, to share experiences and knowledge, and to try and harmonize our regulations. Because at the end of the day, startups and innovators will ultimately want to scale up and serve customers across different jurisdictions, so it's important to have that kind of harmonization in terms of regulations, to foster innovation as well as to safeguard the overall security of the international financial system. >> What key partnerships do you guys need to go global on this 20-year vision? Are there other things that have to fall into place? What needs to happen? >> I think 20 years is a long time; let's take five years, for example. If you ask where I see this going in the next five years, the question is, what do entrepreneurs and startups need in order to look at a jurisdiction and say, that's where I want to test my technology? You need a robust infrastructure. You need a regulator that embraces you. You need technical and financial subsidies that are available, and then you need an independent arm that can really hand-hold you and take you there. >> Trust. It's critical: trust, and money-making ability, absolutely. >> Just to add to that, in Bahrain we take great pride in our human capital, which we believe is one of our biggest assets. And today, with having Amazon Web Services in Bahrain, this has enabled the training of young Bahrainis for the data and knowledge economies, which is expected to create around 5,000 jobs within the coming five years through different schemes such as Amazon's education programs, for example. >> This is super exciting; I wish we had more time. Congratulations, love the vision. Entrepreneurs like to make money; they want environments that are trustworthy with some scalability behind them. So good luck, we're behind you, we'll keep following up. Thanks for having theCUBE coverage here in Bahrain for AWS. I'm John Furrier. Stay tuned for more after this short break.
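The open banking story above turns on a concrete technical point: an accounts aggregator calls bank APIs over signed requests, so each message can be authenticated and integrity-checked. As a rough, hedged illustration only — the header names, path, and key handling below are hypothetical placeholders, not Bahrain's actual open banking specification — a minimal HMAC request-signing sketch in Python might look like this:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret issued by the bank to the aggregator.
API_SECRET = b"replace-with-secret-from-the-bank"

def sign_request(method: str, path: str, body: dict) -> dict:
    """Build headers carrying a timestamp and an HMAC-SHA256 signature over the
    canonical request, so the receiving bank can verify who sent the request
    and that the payload was not altered in transit."""
    timestamp = str(int(time.time()))
    payload = json.dumps(body, separators=(",", ":"), sort_keys=True)
    message = "\n".join([method.upper(), path, timestamp, payload]).encode()
    signature = hmac.new(API_SECRET, message, hashlib.sha256).hexdigest()
    return {
        "X-Timestamp": timestamp,   # hypothetical header names
        "X-Signature": signature,
        "Content-Type": "application/json",
    }

if __name__ == "__main__":
    headers = sign_request("POST", "/accounts/balance", {"account_id": "1234"})
    print(headers)
```

The receiving side would recompute the same HMAC with its copy of the secret and compare it using hmac.compare_digest, rejecting stale timestamps to blunt replay attacks; production open banking frameworks typically layer certificate-based signatures on top of this basic idea.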

Published Date : Sep 15 2019

SUMMARY :

Coverage of the AWS Public Sector Summit in Bahrain, brought to you by Amazon Web Services. John Furrier talks with Yasmeen Al Sharaf of the Central Bank of Bahrain and entrepreneur Abdulla Almoayed about balancing regulation and innovation: personal data protection laws, the regulatory sandbox, crowdfunding and open banking regulations, access to liquidity for startups, and how the new AWS region, cloud-first policy, and young population are expected to attract investment and create around 5,000 jobs.


David Raymond, Virginia Tech | AWS Imagine 2019


 

>> From Seattle, Washington, it's theCUBE, covering AWS Imagine, brought to you by Amazon Web Services. >> Hey, welcome back everybody, Jeff here with theCUBE. We're in downtown Seattle at the AWS Imagine EDU event. It's a small conference, it's in its second year, but it'll grow like a weed like everything else AWS does. And it's all about Amazon and AWS for education, and that's everything from K through 12, community college, higher education, retraining vets coming out of the service. It's a really big area, and we're really excited to have, fresh off his keynote presentation, where he changed his title on me from what it was this morning- >> It's the same duties. >> David Raymond, the director of what was the Virginia Cyber Range and now is the U.S. Cyber Range, at Virginia Tech. David, great to see you. >> Yeah, thank you. So the Virginia Cyber Range actually will continue to exist in its current form. It'll still serve faculty and students in the Commonwealth of Virginia, funded by the state of Virginia. Now the U.S. Cyber Range will provide service to folks outside of Virginia. >> So we jumped ahead; let's back up a step. What is the Virginia Cyber Range? >> So the Virginia Cyber Range provides courseware and infrastructure so students can do hands-on cybersecurity educational activities in Virginia high schools and colleges. It's funded by the state of Virginia and provides this service at no charge to the schools. >> And even in high school? >> Even in high school, yes. There are now cybersecurity courses in the Virginia Department of Education course catalogue as of two years ago, and I mean they've grown like wildfire. >> There's a ton of talk here about the skills gap, and there's a tremendous skills gap; even with the machines supposedly going to take everybody's jobs, there are a whole lot of jobs unfilled. But what's interesting is the high school angle. Most high school kids haven't even clued in to privacy and security, opting in and opting out. It's got to be a really interesting conversation when you bring security in as a potential career, and it directly reflects on all those things that they do on their phones. >> Well, I would argue that that's exactly the problem. Students are not exposed to cybersecurity. They don't know the career potential, or they really don't understand what it is. We talk about teenagers being digital natives; really, they know how to use smartphones, they know how to use computers, but they don't understand how they work, and they don't understand the security aspects that go along with using all this technology. And I would argue that by the time a student gets into college they have a plan, right? So I have a student in college; he's gonna be a doctor. He knows what a doctor is, he's heard of that his whole life, and in high school he was able to get certified as a nursing assistant. We need cybersecurity in that same realm, right? If we start students in high school and we expose them to cybersecurity courses, which are all elective courses, some of the students will latch onto it and say, hey, this is what I want to be when I grow up. And in Virginia we have this dearth of cybersecurity expertise, and this is true across the country. In Virginia right now we have over 30,000 cybersecurity jobs that are unfilled. That's about a third of the cybersecurity jobs in this state. 
And I mean, that's a serious problem, not only in Virginia but nationwide. And one of the ways to fix that is to get high school students exposed to cybersecurity classes and give them some real hands-on opportunities, so they're really doing it, not just learning the words and passing the test. And really, again, in Virginia this has grown like wildfire and has really, I think, revolutionized cybersecurity education in the state. >> And what are some of the topics at, say, the high school level, where you're kind of getting versed on the vocabulary and the terminology, versus when they go into college and start to take those types of courses? >> Yeah, so in Virginia there are actually cybersecurity courses across the CTE career pathways, and CTE is the career and technical education curriculum. So there are courses like cybersecurity in healthcare, where students learn about personal health data and how to secure those specific kinds of data, and they learn about the regulations behind that data. There's cybersecurity in manufacturing, where students learn about industrial control systems, how those things need to be secured, and how they're different from a laptop or a phone and the way those are secured. And what feeds into all of those courses is an introductory course, cybersecurity fundamentals, where students learn some of the very basics. They learn the terminology. They learn things like the CIA triad, right: confidentiality, integrity, and availability, the three basic components of security that you try to maintain for any system. So they start out learning the basics, but they're still doing it hands-on. They're in a network environment where, later on in the course during capstone exercises, they might see someone trying to attack a computer that they're tasked to defend. And as a defender, what does that look like? What are the things I'm going to do to that computer? You know, I might install antivirus, I might have a firewall on the computer, and how do I set that up, et cetera, et cetera. So high schools start with the basics. As students progress through their high school years, there are opportunities to take further, more advanced classes in the high schools. And then when they get to college, some of those students will have latched onto cybersecurity as a potential career field. Now we've got them, right? We get them into the right majors and into the right courses, and our hope is that that's gonna sort of kick-start this pipeline of students in Virginia colleges. >> Right. And then I wonder if you could talk a little bit about the support at the state level. It's pretty interesting that you have it from the state level; we heard earlier today about state-level support in Louisiana for another big initiative. The fact that the governor and the legislature are basically branding this at the state level, not the individual school district level, is a pretty strong statement of the prioritization that they're putting on this. >> That has been critical to our success. If we didn't have state-level support, significant state-level support, there's no way we could be where we are. So the previous governor of Virginia, Terry McAuliffe, latched onto cybersecurity education as one of his signature initiatives. 
In fact, he was the chair of the National Governors Association, and in that role cybersecurity was one of his key initiatives. So he felt strongly about educating K-12 and college students and feeding that cybersecurity pipeline, and the Cyber Range was one of a handful of different initiatives. There were veterans scholarships, there were some community college scholarships, and other initiatives. Some of those are still ongoing, some are not, but the Cyber Range has been very successful. Funded by the state, it provides a service at no cost to high schools and colleges, and that's been- >> Critical. I can't help it; we were at RSA earlier this year, and I'm just thinking of all the CEOs that I was sitting with over the course of a couple of days that are probably looking for your phone number right now to make an introduction. But I'm curious, are the security companies behind it? I mean, RSA is a huge show, Amazon just had their first-ever security conference, there's a lot of money being invested in this space. Have you looked for that kind of private company participation to help? Because they desperately need these employees. >> Definitely. So we've just started down that road, really. Our state funding has kept us strong to this point, and our state funding is gonna continue into the foreseeable future. But you're right, there are definitely opportunities to work with industry. Certainly AWS has been a very strong partner of ours since the very beginning. Really, without the help of some of their cloud architects and other technical folks, we could not have built what we built in the AWS cloud. We've also been talking to Palo Alto about using some of their virtual appliances in our network environments. So yeah, we're definitely going down the road of industry partners, and that will continue to grow, I'm sure. >> So then fast-forward to today, to the keynote and your announcement that now you're taking it beyond just Virginia, so now it's the U.S. Cyber Range. How did that come about? What does that mean? >> Yes, so we've been sharing the story of the Virginia Cyber Range for the last couple of years, and I go to national conferences and talk about it, just to sort of inform other states and other school systems about what Virginia's doing and how they could potentially match what we're doing. And the question that I keep getting is, I don't want to reinvent the wheel, how can I buy what you have? That's been sort of a constant drumbeat over the last couple of years. So we decided fairly early on that we might want to try to expand beyond Virginia, and the conditions were right about six months ago. So we set a mark on the wall and said, in summer of 2019 we're gonna make this available to folks outside of Virginia. And so, again, the Virginia Cyber Range still exists, funded by the Commonwealth of Virginia, and the U.S. Cyber Range is still part of Virginia Tech. But we will have to essentially recoup our costs, so we'll have to spend money on cloud infrastructure, and we'll have to spend salary money on folks who support this effort, and so we'll recoup costs from folks outside of Virginia using our service. But we think the costs are gonna be very competitive compared to similar efforts, and we're looking forward to some successes here. 
>> And do you think your kind of breakthrough will be at the high school level, at that undergrad level? Where do you kind of see the opportunities? You've got the whole thing covered with state support in Virginia; how does that get started in California, how does that get started here in Washington state? >> That's a great question. So really, when we started this, I thought we were building a thing for higher ed. That's my experience; I've been teaching cybersecurity in higher ed for several years, and I knew what I would want if I was using it, and I do use it. I teach classes in the Virginia Tech graduate program, so I use the Virginia Cyber Range in my class. And what has happened is that the high schools have latched onto this, as I mentioned, and most of our users are high schools. In Virginia, we have 180 Virginia high schools using the Virginia Cyber Range. >> That's almost- >> That's almost half the high schools in the state using the Virginia Cyber Range. And if you think about it, in higher ed, the faculty members who have been teaching cybersecurity classes, a lot of them have set up their own network infrastructure. They have it set up the way they want it, it ties into their existing courseware, and they're going to use that, at least for now. What we provide is something that makes it so a high school or a community college doesn't have to figure out how to fund it or how to actually put this network architecture together. They just come to us. They have the flexibility to use our very basic plug-and-play network environments, or they have the flexibility to make modifications depending on how sophisticated they themselves are with manipulating systems and manipulating the network. So our expectation is that the biggest growth is going to be in the high school market. >> Right, that's great, because when you say cyber range, it finally dawned on me, you use it like a target range. It's like a place to go practice. >> That's where the name comes from, right? >> Absolutely, I finally went, okay, I get it. Because it's not only the curriculum and the courseware and everything else, it's actually an environment, a place to stage things and do things, exactly. >> So students can do offensive and defensive cybersecurity activities. And early on, when we were teaching students how to hack, essentially, in colleges, there were people who were concerned about that, and the case we make for that is you can't teach somebody how to defend unless they understand how they're gonna be attacked. The same is true in this case. So all of our courseware has lots of ethics and legal and other discussions embedded throughout, so students understand the implications of what their actions would be if they did this somewhere else. And these are all isolated network environments, places where students can get hands-on, where they can essentially do whatever they want without causing trouble on the school network or on the internet. And it's very much akin to a rifle range. 
Well, very, very exciting. David, Congratulations. And it sounds like you're well on your way. Thanks. Great. Alright, >> He's David. I'm Jeff. You're watching The Cube were at Washington State Convention Centre just across the street at a W s. Imagine. Thanks for watching. We'll see you next time. >> Thanks.

Published Date : Jul 10 2019

SUMMARY :

Jeff talks with David Raymond of Virginia Tech at the AWS Imagine EDU event in Seattle about the Virginia Cyber Range: state-funded courseware and isolated, cloud-hosted lab environments that let high school and college students practice offensive and defensive cybersecurity at no cost to their schools. With roughly 180 Virginia high schools already using it and more than 30,000 cybersecurity jobs unfilled in the state, the program is expanding as the U.S. Cyber Range to serve schools outside Virginia on a cost-recovery basis, built on AWS with emerging industry partnerships.


Eric Herzog, IBM | IBM Think 2019


 

>> Live from San Francisco, it's theCUBE, covering IBM Think 2019, brought to you by IBM. >> Hello everyone, welcome back to theCUBE's live coverage here at IBM Think 2019 in San Francisco, our exclusive coverage, day four, four days of coverage, events winding down. I'm John Furrier with Stu Miniman. Our next guest, Eric Herzog, CUBE alumni, CMO of IBM Storage and VP of storage channels. Eric, great to see you wearing the Hawaiian shirt as usual. >> Great, I can't come to theCUBE and not wear the Hawaiian shirt. You guys give me too much of a heart attack. >> Love getting you on to get down and dirty on storage and the impact of cloud and infrastructure. First, you gave a great talk yesterday to a packed house, I saw that on social media, great response. What's going on for you at the show? Tell us. >> So the big focuses for us are around four key initiatives. One is multi-cloud, particularly from a hybrid perspective, and in fact I had three presenters with me, panelists and users; all of them were using multiple public cloud providers and all of them had a private cloud. One of them was also a software-as-a-service vendor, so clearly they're really monetizing it. So that's one. The second one is around AI, both AI that we use inside of our storage to make it more efficient and more cost-effective for the end user, and also storage as the platform for AI workloads and applications. Cyber resiliency is our other big theme. We've got all kinds of security; yes, everyone is used to the Great Wall of China protecting you and then chasing the bad guy down when they breach you, but when they breach you it'd sure be nice if everything had data-at-rest encryption, or when you tiered out to the cloud you knew that it was being backed up or tiered out fully encrypted, or how about something that can help you with ransomware and malware? So we have that, and that's a storage product, not what you'd normally think of from a security vendor. So those are the big things that we've been harping on at the show. >> One of the things that I've observed, you've been very active out in the field, we've seen you at a lot of different events, Cisco Live and others. You guys have had an interesting storage product portfolio, very broad, with specific leadership categories, but you also have the ability to work with other partners. This has been a big part of your strategy; you've got the channels. How would you summarize the current story around IBM storage and systems? Because it's now an ingredient part of other people's infrastructure, and with cloud, storage then becomes a key equation. How would you describe the IBM storage posture and product portfolio, and what are the key things? >> So I think the key thing from a portfolio perspective, while it looks broad, is that it's really four things: software-defined storage, which we also happen to put on our arrays, so theoretically that's one product line, the same exact software. Other vendors don't do that; they have an array package, and you buy the array, but if you buy their software-defined storage it's actually different software. For us it's the same software. Then we have modern data protection, and then we have the management plane. That's kind of it. I do think one of the big differentiators for us is that, even though we're part of IBM, we have already been working with everyone anyway. 
So as we talked about at Cisco Live, for Spectrum Protect alone, our modern data protection platform, we have 400 small and medium cloud service providers all over the world whose backup services are based on it. So even though IBM has their own cloud division, theoretically we're enabling the competition, but we've had that story at IBM storage now for four years. >> So storage anywhere, basically, is the theme here: AI anywhere, storage anywhere. I mean, it's not the official tagline, but that's the philosophy with software. >> That's right, so even look at AI. We have an AI reference architecture with the Power product line, we also have an AI reference architecture with the Nvidia product line, and we're working on a third one right now with another major server vendor, because we want our storage to be anywhere there's AI and anywhere there's a cloud, big, medium, or small. >> Alright, Eric, let's tease that out a little bit, because I had a great conversation with an IBM fellow yesterday, and if we think back ten years ago, when you talked about hybrid and multi-cloud, when you talked about an application it was, am I spanning between environments, am I bursting between environments? And architectures just didn't work that way. Today, with microservices architectures, there are pieces of the solution that can live in lots of environments. Compute I can spin up almost anywhere at any time; data doesn't move, and I need to worry about my data, I need to worry about security. So there are certain things in multi-cloud, like data protection and cyber resiliency, that need to live everywhere, but when I talk about storage, I'm not moving my storage and my persistent database all over the place. So help us tease out what is the multi-everywhere, and what is the data that the compute is going to actually move to? Help us squint through that a little bit. >> So let's do the storage part first. Most applications, workloads, and use cases that are either business-critical or mission-critical are going to stay on prem. That doesn't mean you can't use a public cloud provider for overflow, whether that be IBM or Amazon or Microsoft, or, like I said, the 400 cloud providers that we sell to that are not IBM. So you're still going to have this hybridness where the data is partially on prem and off prem, and in that case you're going to be using the public cloud provider. And by the way, we did a survey, IBM did, and when you're looking at enterprise, so let's say companies of three or four billion US dollars and up, anywhere in the world, you're seeing that most of them are using five or six different public clouds, whether that be salesforce.com, which really is sales enablement software as a service, or others. We have a startup that we work with who uses IBM's FlashSystem and does cybersecurity as a service; that's their whole business. So there are all these software vendors that now deliver not on prem but over the cloud. Then you've got regular public cloud providers for file, block, and object. For example, we not only support the IBM Cloud Object Storage protocol, but S3. So we have customers that put data out in S3, and we have customers that put it out on other clouds, because as you know, S3 has become the de facto standard, so all the mid-to-small cloud providers use it. 
So I think what you've got is that hybrid cloud is sort of a subset of multi-cloud, and then with multi-cloud, because of software as a service, there can even be geographic issues. We have a lot of data centers at IBM Cloud, and so do the three major cloud providers, but we are not in all 212 countries. So if you have a law like in Canada, where the data has to physically stay within the premises of Canada — now, we all happen to have data centers there that are big enough, but that doesn't mean we have data centers in every country — you have legal issues, you have applications, which applications make sense, what about pricing, and as you know, some big companies still buy regionally. >> Eric, one of the things I'd love to get your perspective on is the SaaS providers, because if we look at the storage market, in many ways there was the threat of public cloud, but really you've got to follow the application and follow the data, and as SaaS proliferation happens, your data is going to go with that. You have them as customers in a lot of environments; what are you seeing from the SaaS providers, how do they choose what offerings they have, and how do they look at their data center versus public cloud mix? >> So when you look at a SaaS provider, they've got a couple of different parameters that they look at, which is why we've been very successful. One is performance; they already know they're subject to the vicissitudes of the cloud, so you can't have any bottleneck in your core data center, because you're serving that app up, and if it's too slow or it doesn't work right, then of course the end user will go buy a different piece of software from another SaaS provider. The second one is availability, because you have no idea when Wikibon or theCUBE is going to turn on that service; it could be the middle of the night, right? If you guys expand to Asia, you'll be asleep but your guy in Australia will be using that software, so it can't ever go down. So, availability. Then resiliency: can it handle pounding? If theCUBE and Wikibon become ginormous, and you buy all these other analyst firms, and the next thing you know you're the biggest analyst firm in the world with thousands of people, guess what, now you're hammering on that software, so it's got to be able to take that workload abuse, right? And that's the kind of thing they look for. >> That's scale, basically; scale is critical. >> Right, they cannot have any issues with resiliency or availability and performance, so A, they're usually going all-flash. Some of them will buy tape or the older all-hard-drive arrays as a backup store, ideal for IBM Cloud Object Storage, but again, the main thing they focus on is flash, because they're serving up that software. >> Let me ask you a question. I know you've been in this business for a long time; you know everything about the speeds and feeds, but you've been a historian too, and you're on the front edge. IBM has got a killer strategy with Cloud Private, it's doing very well with OpenShift and the Red Hat acquisition, and you're now poised to essentially bring cloud scale across multiple clouds, and with AI it really puts storage at the center of the action. How is storage now positioned, and how should customers think about storage? Because scale is table stakes, enabling developers to program infrastructure as code. How has it changed, and how are you guys positioned to take advantage of that? How would you explain that to a customer? 
>> Yeah, so I think there are a couple of changes. First of all, you're looking for a storage vendor, which should be us, but you're looking for a storage vendor that is always making sure you're covered. For example, when microservices and containers first came out — and it's still a problem — you didn't have storage consistency, whereas in a VMware or a Hyper-V or, you know, KVM environment, you do. So when you move things around, you don't lose the dataset. Well, we have persistent storage for that. So the key thing that you want to look for is a storage vendor that will stay on that leading edge as you move. Our copy data manager has an API so the developers can spin up their own environments but use real data, because as you guys know well from your pasts, the last thing you want to do is have the DevOps guy developing things on faux datasets, try to put it in production, and then the real dataset doesn't work. At the same time, if they put it out to a public cloud provider you could have a legal or security breach, right? So it's about being able to take modern data protection, as an example, and not just do grandfather-father-son backup — we all remember that, I remember it better than you guys since I'm older — but that's backup, right? It's not just backup anymore, it's modern data protection. You need to be able to take the snapshot, the replica, or the backup dataset and use it for development, so you want a storage vendor that's going to be on the leading edge of that. We've done that at IBM on the container side and the modern data protection side, and we'll continue to do that. And the whole multi-cloud thing — IBM, as you know, is now all about multi-cloud, with Red Hat coming in — the storage division of IBM has been working with Red Hat for 15 years, going to the Red Hat Summit every year. I know you guys do theCUBE from there sometimes. >> You're on. But this is software-defined, so at the end of the day a software-defined bet with arrays has paid off. >> Yes. >> You'd say that would be kind of a key linchpin? >> I would argue that, while there are some hardware aspects to it — for example, our FlashCore modules give us a big differentiator from a flash perspective — in general the number one differentiator for a strong, powerful array vendor is actually the underlying software code: the RAID stack, what you can wrap around it, file, block, and object support, what you can enhance. Our Spectrum Discover, allowing you to use metadata about unstructured data, whether that be in the file space or the object store, lets the data scientist dramatically reduce the time it takes to prep the data when they're doing either AI or an analytic workload. So we just saved them money, but we're really a storage company that came up with something a data scientist could use, because we understand how storage is the central foundation and how you could literally use the metadata for something actually valuable — not to a storage person, because a data scientist is not the storage guy, of course. >> Yeah, and Eric, I would love to get your feedback on some of those key discussions you're having with customers here at the show. We've been talking a lot this week about digital transformation and AI into everything; are those some of the themes? What are the struggles that the enterprises of today are really facing, and how is your group helping them? 
>> So one of the big things is understanding that it's going to be multi-cloud, and because we've already been the Switzerland of the storage industry, working with every cloud provider — all the big ones, including ones that compete with our own sister division, but all the little small ones too, and all the software-as-a-service vendors we work with — we're the safe bet. You don't have to worry about it, whoever you pick. For a big enterprise, in fact, I had Aetna on stage with me, and she said she's using seven different clouds, one of which is their private cloud, plus six different cloud providers, and that's not counting salesforce.com and another one whose name I forget. So really, if you count the software-as-a-service there, she's really got like nine clouds. She said, I use IBM because I know it's going to work with whoever, and you're not going to say, oh, I don't work with this one or that one. So that's been, obviously, making sure everyone realizes that. The whole company is embracing it, as you saw, and obviously what we're going to do with Red Hat is continue to let them participate with all of their existing customer base as they've been doing for years. >> So you see multi-cloud as the sweet spot that highlights your value proposition, would you say that to be true? >> I would say that, and then the second one is around AI. All the storage vendors, including us, have had AI sort of inside, what I'll call inside the box, inside the array, and used it to make the array better. But now, with AI being ubiquitous from a workload perspective, you have to have the right foundation underneath it — again, performance, resiliency, availability. If you're going to use AI in a giant car factory and it's going to run all of those machines, you'd better make sure the thing never fails, because then the assembly line goes down, and that's hundreds of millions of dollars of build every day. So that's the kind of thing they look for; AI's got to have the right platform underneath it as well. >> Eric, you have some reporting from the field, as you're out doing a lot of talks with a lot of customers. Give a couple of anecdotal examples of where the leading edge is in storage, and use cases that would be a good tell-sign of where this kind of multi-cloud is going. Can you give some examples of the use cases and situations, and why that's relevant for where everyone will be going? Where is the puck going to be, so I can skate to where the puck is, as they say. >> So from a multi-cloud perspective, A, you've got to deal with how your company is structured. If you have a divisionalized company, or one that really lets the regions make their own buy decisions, then you may have NTT Cloud in Japan, you may have Alibaba in China, you may have IBM Cloud in Australia, and then you might have Amazon in Latin America. And as IT guys, you've got to make sure you're dealing with that, and embrace it. One of the things, I think, from an IT perspective — and it's why I'm wearing the Hawaiian shirt — is you don't fight the wave, you ride the wave. And that's what everyone's got to realize: they're going to use multi-cloud. And remember, the cloud was the web, was the internet; it's actually all the same stuff from a long time ago, the mid-90s, which also means now procurement's involved, and when procurement's involved, what are they going to say to you? Did you get a bid from IBM Cloud, did you see that bid from Amazon and Microsoft? 
So it's changed the whole notion of "I can just go to any cloud I want to." Now procurement's involved, and even at mid-size companies procurement says, you did get another bid, right, did you not? Which, for server, storage, and network vendors, is the way it's been for 35, 40 years. >> The bids are changing too, so what are the requirements now? Amazon has a cloud, they have storage, you have storage, but people have on-premises gear, they have multiple environments. If the world is one big data center with multiple regions and locations — this is the resilience you spoke of — what are the new requirements as procurement gets involved? Because procurement isn't dictating the requirements; they're getting the requirements from the application workloads and the infrastructure. So what are the new requirements that you see? >> So I think the thing you're seeing is, if you take cloud just a couple of years ago: I'm going to put my storage out there, okay, great. I need this kind of availability — ooh, that's extra money, sorry Mr. Wikibon, Mr. CUBE, we've got to charge you a little extra for that. Oh, we need a certain amount of performance — oh, that's a little extra. And then for heavy transactional workloads, where the data's constantly moving back and forth — oh, we forgot to tell you that we're charging you every time you move the data in and every time you move the data out. So as you're putting together these RFPs you need to be aware of that. >> Those are hidden costs. >> Those are hidden costs, and I think that's the reason you're seeing such a rise of hybrid: people went to public cloud, and then someone in finance, or maybe even in the IT group, sat down with a spreadsheet and said, oh my god, we could've just bought an IBM array, or someone else's array, and actually spent less money, even counting support, because of all the times we're moving the data. But for archive, for backup, where we don't move the data around a lot, it's a great solution. Then you have the whole factor of software as a service, and part of that is the software itself: if you're going to go up against salesforce.com, whoever does had better make sure the software's good. Then on top of that, again, you negotiate with the software vendor: I need it globally, okay, what's the fee for that? So I think the IT guys need to understand that with the ubiquity of the cloud, you've got to ask way more questions. In the storage array business, everyone's got five nines and almost everybody's got six nines — well, way back it was four nines, then it was five, and now it's six — so you don't ask anymore, because you know it just changes, right? But the cloud is still new enough, and the whole software-as-a-service thing is a different angle, and a lot of people don't even realize software as a service is cloud. When you say that, they go, what are you talking about, I'm just getting it as a service. Where do you think it comes from? A cloud data center. >> Well, the trend is software-defined, and you guys were on that early. Congratulations, and don't forget the hardware, the high-performance hardware as well, arrays and whatnot. So great job. Eric, thanks for coming on, appreciate it. >> Great, thank you very much. >> CUBE coverage here, I'm John Furrier with Stu Miniman, day four of our live coverage here in Moscone North in San Francisco for IBM Think 2019. Great packed house here at IBM Think; back for more coverage after this short break. (electronic outro music)
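Herzog's point that S3 has become the de facto object protocol, with IBM Cloud Object Storage and many smaller providers exposing S3-compatible endpoints, is easy to see in code. The sketch below is a generic illustration rather than IBM documentation: the endpoint URL, bucket name, and credentials are placeholders, and boto3 is used only because it speaks the S3 API that such services accept.

```python
import boto3

# Placeholder values -- substitute the provider's S3-compatible endpoint
# and the access keys issued by that provider.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-store.com",
    aws_access_key_id="ACCESS_KEY_ID",
    aws_secret_access_key="SECRET_ACCESS_KEY",
)

BUCKET = "backup-archive"  # hypothetical bucket name

# Write a backup object, then list what is in the bucket.
s3.put_object(
    Bucket=BUCKET,
    Key="backups/db-2019-02-14.dump",
    Body=b"example backup payload",
    ServerSideEncryption="AES256",  # ask for encryption at rest where the provider supports it
)

response = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the call shapes are identical across S3-compatible providers, the same code can target AWS S3 itself by dropping endpoint_url, which is exactly the portability argument being made in the interview.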

Published Date : Feb 14 2019

SUMMARY :

John Furrier and Stu Miniman talk with Eric Herzog, CMO of IBM Storage, at IBM Think 2019 in San Francisco. Herzog outlines IBM's focus on hybrid and multi-cloud data protection, AI both inside the arrays and as a workload platform, and cyber resiliency features such as data-at-rest encryption and ransomware protection, arguing that software-defined storage and broad S3 and cloud compatibility make IBM the "Switzerland" of storage as enterprises spread data across five or six clouds and SaaS providers while procurement scrutinizes hidden cloud costs.


Dr. Faisal Hammad, University of Bahrain | AWS Summit Bahrain


 

>> Live from Bahrain, it's theCUBE, covering AWS Summit Bahrain. (upbeat music) Brought to you by Amazon Web Services. >> Okay, welcome back everyone. We're here live in Bahrain for theCUBE's exclusive coverage here in the Middle East for AWS, Amazon Web Services' new region being announced and being deployed early 2019. I'm John Furrier, your host. Our next guest is Faisal Hammad, Assistant Professor of Information Systems at the University of Bahrain. Welcome to theCUBE. >> Thank you very much. Thank you for having me, and welcome to Bahrain. >> It's been a great pleasure. Our team has been blown away; it's been a very surreal experience. We're really excited, we've learned a lot, and we're super impressed with the people and the culture. >> Yeah, thank you very much. >> It's got a Silicon Valley vibe. It's got community, it's got money, and it's got, now, an ecosystem that's going to be flourishing. It really looks really good. >> Yes, yes. As I told you, we'll have a little Silicon Valley of the desert soon, inshallah. >> Now, Silicon Valley — I wanted to bring this up, because one of the big success stories of Silicon Valley is they let the innovation flow. They have soil and they feed it with money and things grow, and the entrepreneurs are out there making things happen, but they have two universities. They've got Stanford and the University of California, Berkeley, and of course you've got UCLA in Southern California, so research is really important, and the role of academia is really important. Not in the sense of just being too hard-core, but creating a ground for free thinking and entrepreneurship, and then as the kids come out of school, sometimes dropping out, they just want to start companies. >> Alright. >> This is big. How are you guys looking at this massive wave of innovation coming? Because it's got to be taking you by surprise. You've got, ya know, the old way: get the computer science, here's some IT, and then, oh my god, here comes cloud, all these new languages, data science. >> So it didn't take us by surprise, if you will. We have been expecting this change for quite some time. The thing is, with the leadership of the government of Bahrain, as well as the leadership of the University, they want to make sure that we are able to produce talent for the economy. And the University of Bahrain was involved from the early steps in the cloud-first initiative, or cloud-first policy. So we were aware that we had to change the way we operate in order for us to produce — not produce, but shape — these talents, for the students to compete not just locally but internationally. >> So you saw this coming, okay, that's fair. But the way this works, there are multiple waves coming in; it's going to be a 20, 30 year generation of waves. So you've got to get the surfboards, to use the metaphor from California. Sorry, I'm from California. >> (laughing) There are no waves in the desert; the water's 91 degrees. >> But as a metaphor, this is what's happening. So how has that shaped some of the curriculum, some of the interactions? Certainly the Economic Development Board, the EDB, has been gung-ho supporting entrepreneurial resources. But when you come in, you're going to be feeding the young kids the nutrients — what are you giving them? New languages, new IT, what's the plan? >> Let me just try to focus the discussion on the University and what the University is doing. So, what we are doing here at the University now is that we have that partnership with AWS. 
And now the University of Bahrain is an AWS-accredited academy. So we now provide curriculum that is aligned with AWS, so that when our students take these courses, they will be able to take the certification and be certified upon graduating. So in that sense, we're providing trained talent able to start working immediately with limited, or lower, training needed. As well as in terms of research: it used to take us a long time if you wanted to research something. If you want, for example, data sets or, let's say, some expertise in artificial intelligence, it would take us a long time and a lot of effort to do so. >> Yeah. >> But with AWS, all you need to do is just log into the console. >> Amazon is doing all the research for you; they've got all the tools. >> Yes. So if a student, or even a researcher, is interested in, let's say, artificial intelligence, instead of waiting for the instructor to be knowledgeable about that part, they could just start plugging in and playing with it. And then with that experimentation, they could do a lot of great stuff. >> What about software? Let's get back to software, and I want to get to the IT in just a second, because I know information technology is in your wheelhouse. But software is driving a lot of the DevOps and cloud-native IT disruption. >> Yes. >> Amazon is now winning a lot of that business; that's the core of Amazon Web Services. But they started with developers. That's where the software developers are. How is that developing in the University? Are people taking to software programming, what's the curriculum like? >> So, in terms of- >> What's the story? >> Yeah, so we're not going to just focus on creating a curriculum for cloud computing. Cloud computing is now embedded throughout all the curricula that we have in the University. So in any program, whether it's in IT or even Arts, as well as Business, there's a small component of cloud computing telling them what cloud computing is and what it can provide for them. >> So you're focusing on cloud first? >> Yes, cloud first. And then we have these courses designed especially for IT students. As I told you before, we are partners with AWS, the AWS Academy, so now we're able to provide a curriculum that's actually updated by AWS, and all we have to do is just deliver this material. >> How long have the courses been out there? Have they been released yet? Have they been out there for a while? >> They've just been released, and we have almost 50 students now taking these courses. >> Well, you know, at the University of California, Berkeley, where my daughter goes, the number one class is Intro to Computer Science and Intro to Data Science. It seems that the younger kids are wanting that intro to programming >> Yes >> and intro to data science. Is there any data thing going on with Amazon? They do a lot of big data; you've got Redshift, Aurora, you've got IoT- >> So in our- >> SageMaker is one of the most popular features of Amazon — I think it's going to be the most popular, but... >> So in our department, for example, the Department of Information Systems, instead of just having a bachelor's in Information Systems, we now have smaller tracks within the program itself. So if the student is, let's say, interested in cloud computing, then he can take the cloud computing track and take all these cloud computing components as part of the curriculum. 
If he or she is interested in, let's say, big data, we have a big data track within our program. >> And the government is really behind you on this, right? >> Yes, yes. The government is behind us in that they want students not just to rely on securing a white-collar job; they want them to create jobs for others. They are trying to create this culture of entrepreneurship: you start your own business, you don't have to wait for opportunities, you make your own opportunities. With the help of, I think, Tamkeen, the EDB, all of them are giving them the platform to just flourish, to go into the world and then create opportunities not just for themselves, as I told you, but for others. >> So, final question I want to ask you. Okay, personal opinion, what do you think is going to happen after the Amazon region gets deployed? You're going to get these training classes, people are going to be coming into the marketplace, graduating. What's the impact? What's your vision? >> What's my vision? I don't know! >> Any guesses, if you had to kind of project and connect the dots? >> I think there's going to be a huge move towards small business, because it used to cost a lot; owning a business or starting a startup used to cost a lot. Now it doesn't cost that much if they choose, let's say, cloud computing, or if they choose AWS in particular. It's just going to cost them the operational expenditures; there's no huge capital expense that they have to pay. So my projection is that we're going to see a lot of small businesses, small newer apps, and newer ways of doing business because of these opportunities. >> Yeah, it lowers the bar to get a new innovation going, and it certainly costs less than provisioning servers. >> Exactly, so if a company wants to start up a business, if it's a small business, they don't have that much time to spend on servers and many other things. >> Faisal, thanks for coming on theCUBE, we really appreciate it. >> Thank you very much, thank you for having me. >> We're looking forward to following what's going on in the University when we come back. We'll certainly be back here >> Thank you very much. >> in the future covering you guys. There's certainly a lot of action, with Dubai right around the corner. This is a new hot area for innovation. For theCUBE, covering our first time here, we're excited. I'm John Furrier. You can reach me on Twitter @furrier, or find me anywhere online; all my channels are open. Stay with us for exclusive coverage of AWS's new region here in Bahrain, we'll be right back. (upbeat music)
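The point above about students being able to just log into the console and start playing with AI extends to the SDKs: a managed service can be exercised in a few lines without training any model. As a small illustrative sketch — not part of the University of Bahrain curriculum, and assuming AWS credentials and a region where the service is available are already configured — Amazon Comprehend can be called like this:

```python
import boto3

# Assumes credentials are already set up, e.g. via `aws configure` on a lab
# machine or an education account; the region is an assumption, pick one
# where Amazon Comprehend is offered.
comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "The new AWS region in Bahrain makes it easy for students to experiment."

# Managed NLP calls: no servers to provision, no model to train.
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print("Sentiment:", sentiment["Sentiment"])          # e.g. POSITIVE / NEUTRAL
for entity in entities["Entities"]:
    print(entity["Type"], "->", entity["Text"])
```

For a first-year student, the point is less the specific service than the workflow: pick a managed API, call it with the SDK, inspect the response, and iterate, with only operational cost and no capital expense, which is the same economics argument made in the interview.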

Published Date : Sep 30 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity                      Category        Confidence
Amazon                      ORGANIZATION    0.99+
AWS                         ORGANIZATION    0.99+
Amazon Web Services         ORGANIZATION    0.99+
Faisal Hammad               PERSON          0.99+
California                  LOCATION        0.99+
Berkeley                    LOCATION        0.99+
John Furrier                PERSON          0.99+
University of Bahrain       ORGANIZATION    0.99+
Bahrain                     LOCATION        0.99+
University of California    ORGANIZATION    0.99+
Stanford                    ORGANIZATION    0.99+
91 degrees                  QUANTITY        0.99+
Southern California         LOCATION        0.99+
UCLA                        ORGANIZATION    0.99+
Amazon Web Services'        ORGANIZATION    0.99+
Faisal                      PERSON          0.99+
Middle East                 LOCATION        0.99+
John                        PERSON          0.99+
two universities            QUANTITY        0.99+
first time                  QUANTITY        0.99+
EDB                         ORGANIZATION    0.99+
University of Bahrain       ORGANIZATION    0.99+
Silicon Valley              LOCATION        0.99+
early 2019                  DATE            0.99+
Dubai                       LOCATION        0.98+
one                         QUANTITY        0.94+
theCUBE                     ORGANIZATION    0.92+
Dr.                         PERSON          0.91+
almost 50 students          QUANTITY        0.91+
EDP                         ORGANIZATION    0.9+
first policy                QUANTITY        0.9+
AWS Summit                  EVENT           0.89+
Silicone Valley             LOCATION        0.89+
20, 30 year                 QUANTITY        0.88+
Twitter                     ORGANIZATION    0.88+
Bahrain                     ORGANIZATION    0.86+
first                       QUANTITY        0.8+
first initiatives           QUANTITY        0.77+
AWS academy                 ORGANIZATION    0.75+
a second                    QUANTITY        0.74+
I.O.T.                      ORGANIZATION    0.7+
@furrier                    PERSON          0.68+
wave                        EVENT           0.66+
Red Shift                   TITLE           0.65+
Temp Keen                   PERSON          0.58+
of                          ORGANIZATION    0.51+
Aurora                      ORGANIZATION    0.51+
SageMaker                   ORGANIZATION    0.49+

Lenovo Transform 2.0 Keynote | Lenovo Transform 2018


 

(electronic dance music) (Intel Jingle) (ethereal electronic dance music) ♪ Okay ♪ (upbeat techno dance music) ♪ Oh oh oh oh ♪ ♪ Oh oh oh oh ♪ ♪ Oh oh oh oh oh ♪ ♪ Oh oh oh oh ♪ ♪ Oh oh oh oh oh ♪ ♪ Take it back take it back ♪ ♪ Take it back ♪ ♪ Take it back take it back ♪ ♪ Take it back ♪ ♪ Take it back take it back ♪ ♪ Yeah everybody get loose yeah ♪ ♪ Yeah ♪ ♪ Ye-yeah yeah ♪ ♪ Yeah yeah ♪ ♪ Everybody everybody yeah ♪ ♪ Whoo whoo ♪ ♪ Whoo whoo ♪ ♪ Whoo yeah ♪ ♪ Everybody get loose whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ >> As a courtesy to the presenters and those around you, please silence all mobile devices, thank you. (electronic dance music) ♪ Everybody get loose ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ ♪ Whoo ♪ (upbeat salsa music) ♪ Ha ha ha ♪ ♪ Ah ♪ ♪ Ha ha ha ♪ ♪ So happy ♪ ♪ Whoo whoo ♪ (female singer scatting) >> Ladies and gentlemen, please take your seats. Our program will begin momentarily. ♪ Hey ♪ (female singer scatting) (male singer scatting) ♪ Hey ♪ ♪ Whoo ♪ (female singer scatting) (electronic dance music) ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ Red red red red ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ Red red red red ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ Red red red red ♪ ♪ Red don't go ♪ ♪ All hands are in don't go ♪ ♪ In don't go ♪ ♪ Oh red go ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ Red red red red ♪ ♪ All hands are red don't go ♪ ♪ All hands are in red red red red ♪ ♪ All hands are in don't go ♪ ♪ All hands are in red go ♪ >> Ladies and gentlemen, there are available seats. Towards house left, house left there are available seats. If you are please standing, we ask that you please take an available seat. We will begin momentarily, thank you. ♪ Let go ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ ♪ All hands are in don't go ♪ ♪ Red all hands are in don't go ♪ (upbeat electronic dance music) ♪ Just make me ♪ ♪ Just make me ♪ ♪ Just make me ♪ ♪ Just make me ♪ ♪ Just make me ♪ ♪ I live ♪ ♪ Just make me ♪ ♪ Just make me ♪ ♪ Hey ♪ ♪ Yeah ♪ ♪ Oh ♪ ♪ Ah ♪ ♪ Ah ah ah ah ah ah ♪ ♪ Just make me ♪ ♪ Just make me ♪ (bouncy techno music) >> Ladies and gentlemen, once again we ask that you please take the available seats to your left, house left, there are many available seats. If you are standing, please make your way there. The program will begin momentarily, thank you. Good morning! This is Lenovo Transform 2.0! (keyboard clicks) >> Progress. Why do we always talk about it in the future? When will it finally get here? We don't progress when it's ready for us. We need it when we're ready, and we're ready now. Our hospitals and their patients need it now, our businesses and their customers need it now, our cities and their citizens need it now. To deliver intelligent transformation, we need to build it into the products and solutions we make every day. At Lenovo, we're designing the systems to fight disease, power businesses, and help you reach more customers, end-to-end security solutions to protect your data and your companies reputation. We're making IT departments more agile and cost efficient. We're revolutionizing how kids learn with VR. We're designing smart devices and software that transform the way you collaborate, because technology shouldn't just power industries, it should power people. 
While everybody else is talking about tomorrow, we'll keep building today, because the progress we need can't wait for the future. >> Please welcome to the stage Lenovo's Rod Lappen! (electronic dance music) (audience applauding) >> Alright. Good morning everyone! >> Good morning. >> Ooh, that was pretty good actually, I'll give it one more shot. Good morning everyone! >> Good morning! >> Oh, that's much better! Hope everyone's had a great morning. Welcome very much to the second Lenovo Transform event here in New York. I think when I got up just now on the steps I realized there's probably one thing in common all of us have in this room including myself which is, absolutely no one has a clue what I'm going to say today. So, I'm hoping very much that we get through this thing very quickly and crisply. I love this town, love New York, and you're going to hear us talk a little bit about New York as we get through here, but just before we get started I'm going to ask anyone who's standing up the back, there are plenty of seats down here, and down here on the right hand side, I think he called it house left is the professional way of calling it, but these steps to my right, your left, get up here, let's get you all seated down so that you can actually sit down during the keynote session for us. Last year we had our very first Lenovo Transform. We had about 400 people. It was here in New York, fantastic event, today, over 1,000 people. We have over 62 different technology demonstrations and about 15 breakout sessions, which I'll talk you through a little bit later on as well, so it's a much bigger event. Next year we're definitely going to be shooting for over 2,000 people as Lenovo really transforms and starts to address a lot of the technology that our commercial customers are really looking for. We were however hampered last year by a storm, I don't know if those of you who were with us last year will remember, we had a storm on the evening before Transform last year in New York, and obviously the day that it actually occurred, and we had lots of logistics. Our media people from AMIA were coming in. They took the, the plane was circling around New York for a long time, and Kamran Amini, our General Manager of our Data Center Infrastructure Group, probably one of our largest groups in the Lenovo DCG business, took 17 hours to get from Raleigh, North Carolina to New York, 17 hours, I think it takes seven or eight hours to drive. Took him 17 hours by plane to get here. And then of course this year, we have Florence. And so, obviously the hurricane Florence down there in the Carolinas right now, we tried to help, but still Kamran has made it today. Unfortunately, very tragically, we were hoping he wouldn't, but he's here today to do a big presentation a little bit later on as well. However, I do want to say, obviously, Florence is a very serious tragedy and we have to take it very serious. We got, our headquarters is in Raleigh, North Carolina. While it looks like the hurricane is just missing it's heading a little bit southeast, all of our thoughts and prayers and well wishes are obviously with everyone in the Carolinas on behalf of Lenovo, everyone at our headquarters, everyone throughout the Carolinas, we want to make sure everyone stays safe and out of harm's way. We have a great mixture today in the crowd of all customers, partners, industry analysts, media, as well as our financial analysts from all around the world. 
There's over 30 countries represented here and people who are here to listen to both YY, Kirk, and Christian Teismann speak today. And so, it's going to be a really really exciting day, and I really appreciate everyone coming in from all around the world. So, a big round of applause for everyone whose come in. (audience applauding) We have a great agenda for you today, and it starts obviously a very consistent format which worked very successful for us last year, and that's obviously our keynote. You'll hear from YY, our CEO, talk a little bit about the vision he has in the industry and how he sees Lenovo's turned the corner and really driving some great strategy to address our customer's needs. Kirk Skaugen, our Executive Vice President of DCG, will be up talking about how we've transformed the DCG business and once again are hitting record growth ratios for our DCG business. And then you'll hear from Christian Teismann, our SVP and General Manager for our commercial business, get up and talk about everything that's going on in our IDG business. There's really exciting stuff going on there and obviously ThinkPad being the cornerstone of that I'm sure he's going to talk to us about a couple surprises in that space as well. Then we've got some great breakout sessions, I mentioned before, 15 breakout sessions, so while this keynote section goes until about 11:30, once we get through that, please go over and explore, and have a look at all of the breakout sessions. We have all of our subject matter experts from both our PC, NBG, and our DCG businesses out to showcase what we're doing as an organization to better address your needs. And then obviously we have the technology pieces that I've also spoken about, 62 different technology displays there arranged from everything IoT, 5G, NFV, everything that's really cool and hot in the industry right now is going to be on display up there, and I really encourage all of you to get up there. So, I'm going to have a quick video to show you from some of the setup yesterday on a couple of the 62 technology displays we've got on up on stage. Okay let's go, so we've got a demonstrations to show you today, one of the greats one here is the one we've done with NC State, a high-performance computing artificial intelligence demonstration of fresh produce. It's about modeling the population growth of the planet, and how we're going to supply water and food as we go forward. Whoo. Oh, that is not an apple. Okay. (woman laughs) Second one over here is really, hey Jonas, how are you? Is really around virtual reality, and how we look at one of the most amazing sites we've got, as an install on our high-performance computing practice here globally. And you can see, obviously, that this is the Barcelona supercomputer, and, where else in New York can you get access to being able to see something like that so easily? Only here at Lenovo Transform. Whoo, okay. (audience applauding) So there's two examples of some of the technology. We're really encouraging everyone in the room after the keynote to flow into that space and really get engaged, and interact with a lot of the technology we've got up there. It seems I need to also do something about my fashion, I've just realized I've worn a vest two days in a row, so I've got to work on that as well. 
Alright so listen, the last thing on the agenda, we've gone through the breakout sessions and the demo, tonight at four o'clock, there's about 400 of you registered to be on the cruise boat with us, the doors will open behind me. the boat is literally at the pier right behind us. You need to make sure you're on the boat for 4:00 p.m. this evening. Outside of that, I want everyone to have a great time today, really enjoy the experience, make it as experiential as you possibly can, get out there and really get in and touch the technology. There's some really cool AI displays up there for us all to get involved in as well. So ladies and gentlemen, without further adieu, it gives me great pleasure to introduce to you a lover of tennis, as some of you would've heard last year at Lenovo Transform, as well as a lover of technology, Lenovo, and of course, New York City. I am obviously very pleasured to introduce to you Yang Yuanqing, our CEO, as we like to call him, YY. (audience applauding) (upbeat funky music) >> Good morning, everyone. >> Good morning. >> Thank you Rod for that introduction. Welcome to New York City. So, this is the second year in a row we host our Transform event here, because New York is indeed one of the most transformative cities in the world. Last year on this stage, I spoke about the Fourth Industrial Revolution, and our vision around the intelligent transformation, how it would fundamentally change the nature of business and the customer relationships. And why preparing for this transformation is the key for the future of our company. And in the last year I can assure you, we were being very busy doing just that, from searching and bringing global talents around the world to the way we think about every product and every investment we make. I was here in New York just a month ago to announce our fiscal year Q1 earnings, which was a good day for us. I think now the world believes it when we say Lenovo has truly turned the corner to a new phase of growth and a new phase of acceleration in executing the transformation strategy. That's clear to me is that the last few years of a purposeful disruption at Lenovo have led us to a point where we can now claim leadership of the coming intelligent transformation. People often asked me, what is the intelligent transformation? I was saying this way. This is the unlimited potential of the Fourth Industrial Revolution driven by artificial intelligence being realized, ordering a pizza through our speaker, and locking the door with a look, letting your car drive itself back to your home. This indeed reflect the power of AI, but it just the surface of it. The true impact of AI will not only make our homes smarter and offices more efficient, but we are also completely transformed every value chip in every industry. However, to realize these amazing possibilities, we will need a structure built around the key components, and one that touches every part of all our lives. First of all, explosions in new technology always lead to new structures. This has happened many times before. In the early 20th century, thousands of companies provided a telephone service. City streets across the US looked like this, and now bundles of a microscopic fiber running from city to city bring the world closer together. Here's what a driving was like in the US, up until 1950s. Good luck finding your way. (audience laughs) And today, millions of vehicles are organized and routed daily, making the world more efficient. 
Structure is vital, from fiber cables and the interstate highways, to our cells bounded together to create humans. Thankfully the structure for intelligent transformation has emerged, and it is just as revolutionary. What does this new structure look like? We believe there are three key building blocks, data, computing power, and algorithms. Ever wondered what is it behind intelligent transformation? What is fueling this miracle of human possibility? Data. As the Internet becomes ubiquitous, not only PCs, mobile phones, have come online and been generating data. Today it is the cameras in this room, the climate controls in our offices, or the smart displays in our kitchens at home. The number of smart devices worldwide will reach over 20 billion in 2020, more than double the number in 2017. These devices and the sensors are connected and generating massive amount of data. By 2020, the amount of data generated will be 57 times more than all the grains of sand on Earth. This data will not only make devices smarter, but will also fuel the intelligence of our homes, offices, and entire industries. Then we need engines to turn the fuel into power, and the engine is actually the computing power. Last but not least the advanced algorithms combined with Big Data technology and industry know how will form vertical industrial intelligence and produce valuable insights for every value chain in every industry. When these three building blocks all come together, it will change the world. At Lenovo, we have each of these elements of intelligent transformations in a single place. We have built our business around the new structure of intelligent transformation, especially with mobile and the data center now firmly part of our business. I'm often asked why did you acquire these businesses? Why has a Lenovo gone into so many fields? People ask the same questions of the companies that become the leaders of the information technology revolution, or the third industrial transformation. They were the companies that saw the future and what the future required, and I believe Lenovo is the company today. From largest portfolio of devices in the world, leadership in the data center field, to the algorithm-powered intelligent vertical solutions, and not to mention the strong partnership Lenovo has built over decades. We are the only company that can unify all these essential assets and deliver end to end solutions. Let's look at each part. We now understand the important importance data plays as fuel in intelligent transformation. Hundreds of billions of devices and smart IoTs in the world are generating better and powering the intelligence. Who makes these devices in large volume and variety? Who puts these devices into people's home, offices, manufacturing lines, and in their hands? Lenovo definitely has the front row seats here. We are number one in PCs and tablets. We also produces smart phones, smart speakers, smart displays. AR/VR headsets, as well as commercial IoTs. All of these smart devices, or smart IoTs are linked to each other and to the cloud. In fact, we have more than 20 manufacturing facilities in China, US, Brazil, Japan, India, Mexico, Germany, and more, producing various devices around the clock. We actually make four devices every second, and 37 motherboards every minute. So, this factory located in my hometown, Hu-fi, China, is actually the largest laptop factory in the world, with more than three million square feet. So, this is as big as 42 soccer fields. 
Our scale and the larger portfolio of devices gives us access to massive amount of data, which very few companies can say. So, why is the ability to scale so critical? Let's look again at our example from before. The early days of telephone, dozens of service providers but only a few companies could survive consolidation and become the leader. The same was true for the third Industrial Revolution. Only a few companies could scale, only a few could survive to lead. Now the building blocks of the next revolution are locking into place. The (mumbles) will go to those who can operate at the scale. So, who could foresee the total integration of cloud, network, and the device, need to deliver intelligent transformation. Lenovo is that company. We are ready to scale. Next, our computing power. Computing power is provided in two ways. On one hand, the modern supercomputers are providing the brute force to quickly analyze the massive data like never before. On the other hand the cloud computing data centers with the server storage networking capabilities, and any computing IoT's, gateways, and miniservers are making computing available everywhere. Did you know, Lenovo is number one provider of super computers worldwide? 170 of the top 500 supercomputers, run on Lenovo. We hold 89 World Records in key workloads. We are number one in x86 server reliability for five years running, according to ITIC. a respected provider of industry research. We are also the fastest growing provider of hyperscale public cloud, hyper-converged and aggressively growing in edge computing. cur-ges target, we are expand on this point soon. And finally to run these individual nodes into our symphony, we must transform the data and utilize the computing power with advanced algorithms. Manufactured, industry maintenance, healthcare, education, retail, and more, so many industries are on the edge of intelligent transformation to improve efficiency and provide the better products and services. We are creating advanced algorithms and the big data tools combined with industry know-how to provide intelligent vertical solutions for several industries. In fact, we studied at Lenovo first. Our IT and research teams partnered with our global supply chain to develop an AI that improved our demand forecasting accuracy. Beyond managing our own supply chain we have offered our deep learning supply focused solution to other manufacturing companies to improve their efficiency. In the best case, we have improved the demand, focused the accuracy by 30 points to nearly 90 percent, for Baosteel, the largest of steel manufacturer in China, covering the world as well. Led by Lenovo research, we launched the industry-leading commercial ready AR headset, DaystAR, partnering with companies like the ones in this room. This technology is being used to revolutionize the way companies service utility, and even our jet engines. Using our workstations, servers, and award-winning imaging processing algorithms, we have partnered with hospitals to process complex CT scan data in minutes. So, this enable the doctors to more successfully detect the tumors, and it increases the success rate of cancer diagnosis all around the world. We are also piloting our smart IoT driven warehouse solution with one of the world's largest retail companies to greatly improve the efficiency. So, the opportunities are endless. This is where Lenovo will truly shine. 
When we combine the industry know-how of our customers with our end-to-end technology offerings, our intelligent vertical solutions like this are growing, which Kirk and Christian will share more about. Now, what will drive this transformation even faster? The speed at which our networks operate, specifically 5G. You may know that Lenovo just launched the first-ever 5G smartphone, our Moto Z3, with the new 5G Moto Mod. We are partnering with multiple major network providers like Verizon, China Mobile. With the 5G Moto Mod scheduled to ship early next year, we will be the first company to provide a 5G mobile experience to any users, customers. This is amazing innovation. You don't have to buy a new phone, just the 5G clip-on. What can I say, except wow. (audience laughs) 5G is 10 times faster than 4G. Its download speed will transform how people engage with the world: driverless cars, new types of smart wearables, gaming, home security, industrial intelligence, all will be transformed. Finally, accelerating with partners: as ready as we are at Lenovo, we need partners to unlock our full potential, partners here to create with us the edge of the intelligent transformation. The opportunities of intelligent transformation are too profound, the scale is too vast. No company can drive it alone fully. We are eager to collaborate with all partners that can help bring our vision to life. We are dedicated to open partnerships, dedicated to cross-border collaboration, unifying the standards, sharing the advantage, and marketing the synergies. We partner with the biggest names in the industry: Intel, Microsoft, AMD, Qualcomm, Google, Amazon, and Disney. We also find and partner with the smaller innovators as well. We're building the ultimate partner experience: open, shared, collaborative, diverse. So, everything is in place for intelligent transformation on a global scale. Smart devices are everywhere, the infrastructure is in place, networks are accelerating, and the industries demand to be more intelligent, and Lenovo is at the center of it all. We are helping to drive change with hundreds of companies, companies just like yours, every day. We are your partner for intelligent transformation. Transformation never stops. This is what you will hear from Kirk, including details about the Lenovo NetApp global partnership we just announced this morning. We've made the investments in every single aspect of the technology. We have the end-to-end resources to meet your end-to-end needs. As you attend the breakout sessions this afternoon, I hope you see for yourself how much Lenovo has transformed as a company this past year, and how we truly are delivering a future of intelligent transformation. Now, let me invite to the stage Kirk Skaugen, our president of the Data Center Group, to tell you about the exciting transformation happening in the global data center market. Thank you. (audience applauding) (upbeat music) >> Well, good morning. >> Good morning. >> Good morning! >> Good morning! >> Excellent, well, I'm pleased to be here this morning to talk about how we're transforming the Data Center and taking you as our customers through your own intelligent transformation journey. 
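
YY's "10 times faster than 4G" figure is easier to picture with a quick back-of-the-envelope calculation. The sketch below is purely illustrative: the assumed link speeds (50 Mbps for 4G, 500 Mbps for 5G) and the 4 GB download are hypothetical round numbers, not figures from the keynote.

    # Back-of-the-envelope illustration of the "5G is ~10x faster than 4G" claim.
    # The link speeds and file size are hypothetical round numbers, not keynote figures.
    def download_seconds(file_gb: float, link_mbps: float) -> float:
        """Seconds to move file_gb gigabytes over a link_mbps megabit-per-second link."""
        bits = file_gb * 8 * 1000**3          # decimal gigabytes -> bits
        return bits / (link_mbps * 1000**2)   # megabits/s -> bits/s

    FILE_GB = 4.0          # e.g. a large game update
    LTE_MBPS = 50.0        # assumed typical 4G throughput
    FIVE_G_MBPS = 500.0    # assumed 10x improvement

    for label, mbps in [("4G", LTE_MBPS), ("5G", FIVE_G_MBPS)]:
        print(f"{label}: {download_seconds(FILE_GB, mbps) / 60:.1f} minutes")
    # With these assumptions the download drops from ~10.7 minutes to ~1.1 minutes.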
Last year I stood up here at Transform 1.0, and we were proud to announce the largest Data Center portfolio in Lenovo's history, so I thought I'd start today and talk about the portfolio and the progress that we've made over the last year, and the strategies that we have going forward in phase 2.0 of Lenovo's transformation to be one of the largest data center companies in the world. We had an audacious vision that we talked about last year, and that is to be the most trusted data center provider in the world, empowering customers through the new IT, intelligent transformation. And now as the world's largest supercomputer provider, giving something back to humanity, is very important this week with the hurricanes now hitting North Carolina's coast, but we take this most trusted aspect very seriously, whether it's delivering the highest quality products on time to you as customers with the highest levels of security, or whether it's how we partner with our channel partners and our suppliers each and every day. You know we're in a unique world where we're going from hundreds of millions of PCs, and then over the next 25 years to hundred billions of connected devices, so each and every one of you is going through this intelligent transformation journey, and in many aspects were very early in that cycle. And we're going to talk today about our role as the largest supercomputer provider, and how we're solving humanity's greatest challenges. Last year we talked about two special milestones, the 25th anniversary of ThinkPad, but also the 25th anniversary of Lenovo with our IBM heritage in x86 computing. I joined the workforce in 1992 out of college, and the IBM first personal server was launching at the same time with an OS2 operating system and a free mouse when you bought the server as a marketing campaign. (audience laughing) But what I want to be very clear today, is that the innovation engine is alive and well at Lenovo, and it's really built on the culture that we're building as a company. All of these awards at the bottom are things that we earned over the last year at Lenovo. As a Fortune now 240 company, larger than companies like Nike, or AMEX, or Coca-Cola. The one I'm probably most proud of is Forbes first list of the top 2,000 globally regarded companies. This was something where 15,000 respondents in 60 countries voted based on ethics, trustworthiness, social conduct, company as an employer, and the overall company performance, and Lenovo was ranked number 27 of 2000 companies by our peer group, but we also now one of-- (audience applauding) But we also got a perfect score in the LGBTQ Equality Index, exemplifying the diversity internally. We're number 82 in the top working companies for mothers, top working companies for fathers, top 100 companies for sustainability. If you saw that factory, it's filled with solar panels on the top of that. And now again, one of the top global brands in the world. So, innovation is built on a customer foundation of trust. We also said last year that we'd be crossing an amazing milestone. So we did, over the last 12 months ship our 20 millionth x86 server. So, thank you very much to our customers for this milestone. (audience applauding) So, let me recap some of the transformation elements that have happened over the last year. 
Last year I talked about a lot of brand confusion, because we had the ThinkServer brand from the legacy Lenovo, the System x, from IBM, we had acquired a number of networking companies, like BLADE Network Technologies, et cetera, et cetera. Over the last year we've been ramping based on two brand structures, ThinkAgile for next generation IT, and all of our software-defined infrastructure products and ThinkSystem as the world's highest performance, highest reliable x86 server brand, but for servers, for storage, and for networking. We have transformed every single aspect of the customer experience. A year and a half ago, we had four different global channel programs around the world. Typically we're about twice the mix to our channel partners of any of our competitors, so this was really important to fix. We now have a single global Channel program, and have technically certified over 11,000 partners to be technical experts on our product line to deliver better solutions to our customer base. Gardner recently recognized Lenovo as the 26th ranked supply chain in the world. And, that's a pretty big honor, when you're up there with Amazon and Walmart and others, but in tech, we now are in the top five supply chains. You saw the factory network from YY, and today we'll be talking about product shipping in more than 160 countries, and I know there's people here that I've met already this morning, from India, from South Africa, from Brazil and China. We announced new Premier Support services, enabling you to go directly to local language support in nine languages in 49 countries in the world, going directly to a native speaker level three support engineer. And today we have more than 10,000 support specialists supporting our products in over 160 countries. We've delivered three times the number of engineered solutions to deliver a solutions orientation, whether it's on HANA, or SQL Server, or Oracle, et cetera, and we've completely reengaged our system integrator channel. Last year we had the CIO of DXE on stage, and here we're talking about more than 175 percent growth through our system integrator channel in the last year alone as we've brought that back and really built strong relationships there. So, thank you very much for amazing work here on the customer experience. (audience applauding) We also transformed our leadership. We thought it was extremely important with a focus on diversity, to have diverse talent from the legacy IBM, the legacy Lenovo, but also outside the industry. We made about 19 executive changes in the DCG group. This is the most senior leadership team within DCG, all which are newly on board, either from our outside competitors mainly over the last year. About 50 percent of our executives were now hired internally, 50 percent externally, and 31 percent of those new executives are diverse, representing the diversity of our global customer base and gender. So welcome, and most of them you're going to be able to meet over here in the breakout sessions later today. (audience applauding) But some things haven't changed, they're just keeping getting better within Lenovo. So, last year I got up and said we were committed with the new ThinkSystem brand to be a world performance leader. You're going to see that we're sponsoring Ducati for MotoGP. You saw the Ferrari out there with Formula One. That's not a surprise. We want the Lenovo ThinkSystem and ThinkAgile brands to be synonymous with world record performance. 
So in the last year we've gone from 39 to 89 world records, and partners like Intel would tell you, we now have four times the number of world record workloads on Lenovo hardware than any other server company on the planet today, with more than 89 world records across HPC, Java, database, transaction processing, et cetera. And we're proud to have just brought on Doug Fisher from Intel Corporation who had about 10-17,000 people on any given year working for him in workload optimizations across all of our software. It's just another testament to the leadership team we're bringing in to keep focusing on world-class performance software and solutions. We also per ITIC, are the number one now in x86 server reliability five years running. So, this is a survey where CIOs are in a blind survey asked to submit their reliability of their uptime on their x86 server equipment over the last 365 days. And you can see from 2016 to 2017 the downtime, there was over four hours as noted by the 750 CXOs in more than 20 countries is about one percent for the Lenovo products, and is getting worse generation from generation as we went from Broadwell to Pearlie. So we're taking our reliability, which was really paramount in the IBM System X heritage, and ensuring that we don't just recognize high performance but we recognize the highest level of reliability for mission-critical workloads. And what that translates into is that we at once again have been ranked number one in customer satisfaction from you our customers in 19 of 22 attributes, in North America in 18 of 22. This is a survey by TVR across hundreds of customers of us and our top competitors. This is the ninth consecutive study that we've been ranked number one in customer satisfaction, so we're taking this extremely seriously, and in fact YY now has increased the compensation of every single Lenovo employee. Up to 40 percent of their compensation bonus this year is going to be based on customer metrics like quality, order to ship, and things of this nature. So, we're really putting every employee focused on customer centricity this year. So, the summary on Transform 1.0 is that every aspect of what you knew about Lenovo's data center group has transformed, from the culture to the branding to dedicated sales and marketing, supply chain and quality groups, to a worldwide channel program and certifications, to new system integrator relationships, and to the new leadership team. So, rather than me just talk about it, I thought I'd share a quick video about what we've done over the last year, if you could run the video please. Turn around for a second. (epic music) (audience applauds) Okay. So, thank you to all our customers that allowed us to publicly display their logos in that video. So, what that means for you as investors, and for the investor community out there is, that our customers have responded, that this year Gardner just published that we are the fastest growing server company in the top 10, with 39 percent growth quarter-on-quarter, and 49 percent growth year-on-year. If you look at the progress we've made since the transformation the last three quarters publicly, we've grown 17 percent, then 44 percent, then 68 percent year on year in revenue, and I can tell you this quarter I'm as confident as ever in the financials around the DCG group, and it hasn't been in one area. 
You're going to see breakout sessions from hyperscale, software-defined, and flash, which are all growing more than a 100 percent year-on-year, supercomputing which we'll talk about shortly, now number one, and then ultimately from profitability, delivering five consecutive quarters of pre-tax profit increase, so I think, thank you very much to the customer base who's been working with us through this transformation journey. So, you're here to really hear what's next on 2.0, and that's what I'm excited to talk about today. Last year I came up with an audacious goal that we would become the largest supercomputer company on the planet by 2020, and this graph represents since the acquisition of the IBM System x business how far we were behind being the number one supercomputer. When we started we were 182 positions behind, even with the acquisition for example of SGI from HP, we've now accomplished our goal actually two years ahead of time. We're now the largest supercomputer company in the world. About one in every four supercomputers, 117 on the list, are now Lenovo computers, and you saw in the video where the universities are said, but I think what I'm most proud of is when your customers rank you as the best. So the awards at the bottom here, are actually Readers Choice from the last International Supercomputing Show where the scientific researchers on these computers ranked their vendors, and we were actually rated the number one server technology in supercomputing with our ThinkSystem SD530, and the number one storage technology with our ThinkSystem DSS-G, but more importantly what we're doing with the technology. You're going to see we won best in life sciences, best in data analytics, and best in collaboration as well, so you're going to see all of that in our breakout sessions. As you saw in the video now, 17 of the top 25 research institutions in the world are now running Lenovo supercomputers. And again coming from Raleigh and watching that hurricane come across the Atlantic, there are eight supercomputers crunching all of those models you see from Germany to Malaysia to Canada, and we're happy to have a SciNet from University of Toronto here with us in our breakout session to talk about what they're doing on climate modeling as well. But we're not stopping there. We just announced our new Neptune warm water cooling technology, which won the International Supercomputing Vendor Showdown, the first time we've won that best of show in 25 years, and we've now installed this. We're building out LRZ in Germany, the first ever warm water cooling in Peking University, at the India Space Propulsion Laboratory, at the Malaysian Weather and Meteorological Society, at Uninett, at the largest supercomputer in Norway, T-Systems, University of Birmingham. This is truly amazing technology where we're actually using water to cool the machine to deliver a significantly more energy-efficient computer. Super important, when we're looking at global warming and some of the electric bills can be millions of dollars just for one computer, and could actually power a small city just with the technology from the computer. We've built AI centers now in Morrisville, Stuttgart, Taipei, and Beijing, where customers can bring their AI workloads in with experts from Intel, from Nvidia, from our FPGA partners, to work on their workloads, and how they can best implement artificial intelligence. 
And we also this year launched LICO which is Lenovo Intelligent Compute Orchestrator software, and it's a software solution that simplifies the management and use of distributed clusters in both HPC and AI model development. So, what it enables you to do is take a single cluster, and run both HPC and AI workloads on it simultaneously, delivering better TCO for your environment, so check out LICO as well. A lot of the customers here and Wall Street are very excited and using it already. And we talked about solving humanity's greatest challenges. In the breakout session, you're going to have a virtual reality experience where you're going to be able to walk through what as was just ranked the world's most beautiful data center, the Barcelona Supercomputer. So, you can actually walk through one of the largest supercomputers in the world from Barcelona. You can see the work we're doing with NC State where we're going to have to grow the food supply of the world by 50 percent, and there's not enough fresh water in the world in the right places to actually make all those crops grow between now and 2055, so you're going to see the progression of how they're mapping the entire globe and the water around the world, how to build out the crop population over time using AI. You're going to see our work with Vestas is this largest supercomputer provider in the wind turbine areas, how they're working on wind energy, and then with University College London, how they're working on some of the toughest particle physics calculations in the world. So again, lots of opportunity here. Take advantage of it in the breakout sessions. Okay, let me transition to hyperscale. So in hyperscale now, we have completely transformed our business model. We are now powering six of the top 10 hyperscalers in the world, which is a significant difference from where we were two years ago. And the reason we're doing that, is we've coined a term called ODM+. We believe that hyperscalers want more procurement power than an ODM, and Lenovo is doing about $18 billion of procurement a year. They want a broader global supply chain that they can get from a local system integrator. We're more than 160 countries around the world, but they want the same world-class quality and reliability like they get from an MNC. So, what we're doing now is instead of just taking off the shelf motherboards from somewhere, we're starting with a blank sheet of paper, we're working with the customer base on customized SKUs and you can see we already are developing 33 custom solutions for the largest hyperscalers in the world. And then we're not just running notebooks through this factory where YY said, we're running 37 notebook boards a minute, we're now putting in tens and tens and tens of thousands of server board capacity per month into this same factory, so absolutely we can compete with the most aggressive ODM's in the world, but it's not just putting these things in in the motherboard side, we're also building out these systems all around the world, India, Brazil, Hungary, Mexico, China. This is an example of a new hyperscale customer we've had this last year, 34,000 servers we delivered in the first six months. The next 34,000 servers we delivered in 68 days. The next 34,000 servers we delivered in 35 days, with more than 99 percent on-time delivery to 35 data centers in 14 countries as diverse as South Africa, India, China, Brazil, et cetera. 
And I'm really ashamed to say it was 99.3, because we did have a forklift driver who rammed their forklift right through the middle of the one of the server racks. (audience laughing) At JFK Airport that we had to respond to, but I think this gives you a perspective of what it is to be a top five global supply chain and technology. So last year, I said we would invest significantly in IP, in joint ventures, and M and A to compete in software defined, in networking, and in storage, so I wanted to give you an update on that as well. Our newest software-defined partnership is with Cloudistics, enabling a fully composable cloud infrastructure. It's an exclusive agreement, you can see them here. I think Nag, our founder, is going to be here today, with a significant Lenovo investment in the company. So, this new ThinkAgile CP series delivers the simplicity of the public cloud, on-premise with exceptional support and a marketplace of essential enterprise applications all with a single click deployment. So simply put, we're delivering a private cloud with a premium experience. It's simple in that you need no specialists to deploy it. An IT generalist can set it up and manage it. It's agile in that you can provision dozens of workloads in minutes, and it's transformative in that you get all of the goodness of public cloud on-prem in a private cloud to unlock opportunity for use. So, we're extremely excited about the ThinkAgile CP series that's now shipping into the marketplace. Beyond that we're aggressively ramping, and we're either doubling, tripling, or quadrupling our market share as customers move from traditional server technology to software-defined technology. With Nutanix we've been public, growing about more than 150 percent year-on-year, with Nutanix as their fastest growing Nutanix partner, but today I want to set another audacious goal. I believe we cannot just be Nutanix's fastest growing partner but we can become their largest partner within two years. On Microsoft, we are already four times our market share on Azure stack of our traditional business. We were the first to launch our ThinkAgile on Broadwell and on Skylake with the Azure Stack Infrastructure. And on VMware we're about twice our market segment share. We were the first to deliver an Intel-optimized Optane-certified VSAN node. And with Optane technology, we're delivering 50 percent more VM density than any competitive SSD system in the marketplace, about 10 times lower latency, four times the performance of any SSD system out there, and Lenovo's first to market on that. And at VMworld you saw CEO Pat Gelsinger of VMware talked about project dimension, which is Edge as a service, and we're the only OEM beyond the Dell family that is participating today in project dimension. Beyond that you're going to see a number of other partnerships we have. I'm excited that we have the city of Bogota Columbia here, an eight million person city, where we announced a 3,000 camera video surveillance solution last month. With pivot three you're going to see city of Bogota in our breakout sessions. You're going to see a new partnership with Veeam around backup that's launching today. You're going to see partnerships with scale computing in IoT and hyper-converged infrastructure working on some of the largest retailers in the world. So again, everything out in the breakout session. Transitioning to storage and data management, it's been a great year for Lenovo, more than a 100 percent growth year-on-year, 2X market growth in flash arrays. 
IDC just reported 30 percent growth in storage, number one in price performance in the world and the best HPC storage product in the top 500 with our ThinkSystem DSS G, so strong coverage, but I'm excited today to announce for Transform 2.0 that Lenovo is launching the largest data management and storage portfolio in our 25-year data center history. (audience applauding) So a year ago, the largest server portfolio, becoming the largest fastest growing server OEM, today the largest storage portfolio, but as you saw this morning we're not doing it alone. Today Lenovo and NetApp, two global powerhouses are joining forces to deliver a multi-billion dollar global alliance in data management and storage to help customers through their intelligent transformation. As the fastest growing worldwide server leader and one of the fastest growing flash array and data management companies in the world, we're going to deliver more choice to customers than ever before, global scale that's never been seen, supply chain efficiencies, and rapidly accelerating innovation and solutions. So, let me unwrap this a little bit for you and talk about what we're announcing today. First, it's the largest portfolio in our history. You're going to see not just storage solutions launching today but a set of solution recipes from NetApp that are going to make Lenovo server and NetApp or Lenovo storage work better together. The announcement enables Lenovo to go from covering 15 percent of the global storage market to more than 90 percent of the global storage market and distribute these products in more than 160 countries around the world. So we're launching today, 10 new storage platforms, the ThinkSystem DE and ThinkSystem DM platforms. They're going to be centrally managed, so the same XClarity management that you've been using for server, you can now use across all of your storage platforms as well, and it'll be supported by the same 10,000 plus service personnel that are giving outstanding customer support to you today on the server side. And we didn't come up with this in the last month or the last quarter. We're announcing availability in ordering today and shipments tomorrow of the first products in this portfolio, so we're excited today that it's not just a future announcement but something you as customers can take advantage of immediately. (audience applauding) The second part of the announcement is we are announcing a joint venture in China. Not only will this be a multi-billion dollar global partnership, but Lenovo will be a 51 percent owner, NetApp a 49 percent owner of a new joint venture in China with the goal of becoming in the top three storage companies in the largest data and storage market in the world. We will deliver our R and D in China for China, pooling our IP and resources together, and delivering a single route to market through a complementary channel, not just in China but worldwide. And in the future I just want to tell everyone this is phase one. There is so much exciting stuff. We're going to be on the stage over the next year talking to you about around integrated solutions, next-generation technologies, and further synergies and collaborations. So, rather than just have me talk about it, I'd like to welcome to the stage our new partner NetApp and Brad Anderson who's the senior vice president and general manager of NetApp Cloud Infrastructure. (upbeat music) (audience applauding) >> Thank You Kirk. >> So Brad, we've known each other a long time. It's an exciting day. 
I'm going to give you the stage and allow you to say NetApp's perspective on this announcement. >> Very good, thank you very much, Kirk. Kirk and I go back to I think 1994, so hey good morning and welcome. My name is Brad Anderson. I manage the Cloud Infrastructure Group at NetApp, and I am honored and privileged to be here at Lenovo Transform, particularly today on today's announcement. Now, you've heard a lot about digital transformation about how companies have to transform their IT to compete in today's global environment. And today's announcement with the partnership between NetApp and Lenovo is what that's all about. This is the joining of two global leaders bringing innovative technology in a simplified solution to help customers modernize their IT and accelerate their global digital transformations. Drawing on the strengths of both companies, Lenovo's high performance compute world-class supply chain, and NetApp's hybrid cloud data management, hybrid flash and all flash storage solutions and products. And both companies providing our customers with the global scale for them to be able to meet their transformation goals. At NetApp, we're very excited. This is a quote from George Kurian our CEO. George spent all day yesterday with YY and Kirk, and would have been here today if it hadn't been also our shareholders meeting in California, but I want to just convey how excited we are for all across NetApp with this partnership. This is a partnership between two companies with tremendous market momentum. Kirk took you through all the amazing results that Lenovo has accomplished, number one in supercomputing, number one in performance, number one in x86 reliability, number one in x86 customers sat, number five in supply chain, really impressive and congratulations. Like Lenovo, NetApp is also on a transformation journey, from a storage company to the data authority in hybrid cloud, and we've seen some pretty impressive momentum as well. Just last week we became number one in all flash arrays worldwide, catching EMC and Dell, and we plan to keep on going by them, as we help customers modernize their their data centers with cloud connected flash. We have strategic partnerships with the largest hyperscalers to provide cloud native data services around the globe and we are having success helping our customers build their own private clouds with just, with a new disruptive hyper-converged technology that allows them to operate just like hyperscalers. These three initiatives has fueled NetApp's transformation, and has enabled our customers to change the world with data. And oh by the way, it has also fueled us to have meet or have beaten Wall Street's expectations for nine quarters in a row. These are two companies with tremendous market momentum. We are also building this partnership for long term success. We think about this as phase one and there are two important components to phase one. Kirk took you through them but let me just review them. Part one, the establishment of a multi-year commitment and a collaboration agreement to offer Lenovo branded flash products globally, and as Kurt said in 160 countries. Part two, the formation of a joint venture in PRC, People's Republic of China, that will provide long term commitment, joint product development, and increase go-to-market investment to meet the unique needs to China. Both companies will put in storage technologies and storage expertise to form an independent JV that establishes a data management company in China for China. 
And while we can dream about what phase two looks like, our entire focus is on making phase one incredibly successful and I'm pleased to repeat what Kirk, is that the first products are orderable and shippable this week in 160 different countries, and you will see our two companies focusing on the here and now. On our joint go to market strategy, you'll see us working together to drive strategic alignment, focused execution, strong governance, and realistic expectations and milestones. And it starts with the success of our customers and our channel partners is job one. Enabling customers to modernize their legacy IT with complete data center solutions, ensuring that our customers get the best from both companies, new offerings the fuel business success, efficiencies to reinvest in game-changing initiatives, and new solutions for new mission-critical applications like data analytics, IoT, artificial intelligence, and machine learning. Channel partners are also top of mind for both our two companies. We are committed to the success of our existing and our future channel partners. For NetApp channel partners, it is new pathways to new segments and to new customers. For Lenovo's channel partners, it is the competitive weapons that now allows you to compete and more importantly win against Dell, EMC, and HP. And the good news for both companies is that our channel partner ecosystem is highly complementary with minimal overlap. Today is the first day of a very exciting partnership, of a partnership that will better serve our customers today and will provide new opportunities to both our companies and to our partners, new products to our customers globally and in China. I am personally very excited. I will be on the board of the JV. And so, I look forward to working with you, partnering with you and serving you as we go forward, and with that, I'd like to invite Kirk back up. (audience applauding) >> Thank you. >> Thank you. >> Well, thank you, Brad. I think it's an exciting overview, and these products will be manufactured in China, in Mexico, in Hungary, and around the world, enabling this amazing supply chain we talked about to deliver in over 160 countries. So thank you Brad, thank you George, for the amazing partnership. So again, that's not all. In Transform 2.0, last year, we talked about the joint ventures that were coming. I want to give you a sneak peek at what you should expect at future Lenovo events around the world. We have this Transform in Beijing in a couple weeks. We'll then be repeating this in 20 different locations roughly around the world over the next year, and I'm excited probably more than ever about what else is coming. Let's talk about Telco 5G and network function virtualization. Today, Motorola phones are certified on 46 global networks. We launched the world's first 5G upgradable phone here in the United States with Verizon. Lenovo DCG sells to 58 telecommunication providers around the world. At Mobile World Congress in Barcelona and Shanghai, you saw China Telecom and China Mobile in the Lenovo booth, China Telecom showing a video broadband remote access server, a VBRAS, with video streaming demonstrations with 2x less jitter than they had seen before. You saw China Mobile with a virtual remote access network, a VRAN, with greater than 10 times the throughput and 10x lower latency running on Lenovo. 
And this year, we'll be launching a new NFV company, a software company in China for China to drive the entire NFV stack, delivering not just hardware solutions, but software solutions, and we've recently hired a new CEO. You're going to hear more about that over the next several quarters. Very exciting as we try to drive new economics into the networks to deliver these 20 billion devices. We're going to need new economics that I think Lenovo can uniquely deliver. The second on IoT and edge, we've integrated on the device side into our intelligent devices group. With everything that's going to consume electricity computes and communicates, Lenovo is in a unique position on the device side to take advantage of the communications from Motorola and being one of the largest device companies in the world. But this year, we're also going to roll out a comprehensive set of edge gateways and ruggedized industrial servers and edge servers and ISP appliances for the edge and for IoT. So look for that as well. And then lastly, as a service, you're going to see Lenovo delivering hardware as a service, device as a service, infrastructure as a service, software as a service, and hardware as a service, not just as a glorified leasing contract, but with IP, we've developed true flexible metering capability that enables you to scale up and scale down freely and paying strictly based on usage, and we'll be having those announcements within this fiscal year. So Transform 2.0, lots to talk about, NetApp the big news of the day, but a lot more to come over the next year from the Data Center group. So in summary, I'm excited that we have a lot of customers that are going to be on stage with us that you saw in the video. Lots of testimonials so that you can talk to colleagues of yourself. Alamos Gold from Canada, a Canadian gold producer, Caligo for data optimization and privacy, SciNet, the largest supercomputer we've ever put into North America, and the largest in Canada at the University of Toronto will be here talking about climate change. City of Bogota again with our hyper-converged solutions around smart city putting in 3,000 cameras for criminal detection, license plate detection, et cetera, and then more from a channel mid market perspective, Jerry's Foods, which is from my home state of Wisconsin, and Minnesota which has about 57 stores in the specialty foods market, and how they're leveraging our IoT solutions as well. So again, about five times the number of demos that we had last year. So in summary, first and foremost to the customers, thank you for your business. It's been a great journey and I think we're on a tremendous role. You saw from last year, we're trying to build credibility with you. After the largest server portfolio, we're now the fastest-growing server OEM per Gardner, number one in performance, number one in reliability, number one in customer satisfaction, number one in supercomputing. Today, the largest storage portfolio in our history, with the goal of becoming the fastest growing storage company in the world, top three in China, multibillion-dollar collaboration with NetApp. And the transformation is going to continue with new edge gateways, edge servers, NFV solutions, telecommunications infrastructure, and hardware as a service with dynamic metering. So thank you for your time. I've looked forward to meeting many of you over the next day. We appreciate your business, and with that, I'd like to bring up Rod Lappen to introduce our next speaker. Rod? 
(audience applauding) >> Thanks, boss, well done. Alright ladies and gentlemen. No real secret there. I think we've heard why I might talk about the fourth Industrial Revolution in data and exactly what's going on with that. You've heard Kirk with some amazing announcements, obviously now with our NetApp partnership, talk about 5G, NFV, cloud, artificial intelligence, I think we've hit just about all the key hot topics. It's with great pleasure that I now bring up on stage Mr. Christian Teismann, our senior vice president and general manager of commercial business for both our PCs and our IoT business, so Christian Teismann. (techno music) Here, take that. >> Thank you. I think I'll need that. >> Okay, Christian, so obviously just before we get down, you and I last year, we had a bit of a chat about being in New York. >> Exports. >> You were an expat in New York for a long time. >> That's true. >> And now, you've moved from New York. You're in Munich? >> Yep. >> How does that feel? >> Well Munich is a wonderful city, and it's a great place to live and raise kids, but you know there's no place in the world like New York. >> Right. >> And I miss it a lot, quite frankly. >> So what exactly do you miss in New York? >> Well there's a lot of things in New York that are unique, but I know you spent some time in Japan, but I still believe the best sushi in the world is still in New York City. (all laughing) >> I will beg to differ. I will beg to differ. I think Mr. Guchi-san from Softbank is here somewhere. He will get up an argue very quickly that Japan definitely has better sushi than New York. But obviously you know, it's a very very special place, and I have had sushi here, it's been fantastic. What about Munich? Anything else that you like in Munich? >> Well I mean in Munich, we have pork knuckles. >> Pork knuckles. (Christian laughing) Very similar sushi. >> What is also very fantastic, but we have the real, the real Oktoberfest in Munich, and it starts next week, mid-September, and I think it's unique in the world. So it's very special as well. >> Oktoberfest. >> Yes. >> Unfortunately, I'm not going this year, 'cause you didn't invite me, but-- (audience chuckling) How about, I think you've got a bit of a secret in relation to Oktoberfest, probably not in Munich, however. >> It's a secret, yes, but-- >> Are you going to share? >> Well I mean-- >> See how I'm putting you on the spot? >> In the 10 years, while living here in New York, I was a regular visitor of the Oktoberfest at the Lower East Side in Avenue C at Zum Schneider, where I actually met my wife, and she's German. >> Very good. So, how about a big round of applause? (audience applauding) Not so much for Christian, but more I think, obviously for his wife, who obviously had been drinking and consequently ended up with you. (all laughing) See you later, mate. >> That's the beauty about Oktoberfest, but yes. So first of all, good morning to everybody, and great to be back here in New York for a second Transform event. New York clearly is the melting pot of the world in terms of culture, nations, but also business professionals from all kind of different industries, and having this event here in New York City I believe is manifesting what we are trying to do here at Lenovo, is transform every aspect of our business and helping our customers on the journey of intelligent transformation. 
Last year, in our transformation on the device business, I talked about how the PC is transforming to personalized computing, and we've made a lot of progress in that journey over the last 12 months. One major change that we have made is we combined all our device business under one roof. So basically PCs, smart devices, and smart phones are now under the roof and under the intelligent device group. But from my perspective makes a lot of sense, because at the end of the day, all devices connect in the modern world into the cloud and are operating in a seamless way. But we are also moving from a device business what is mainly a hardware focus historically, more and more also into a solutions business, and I will give you during my speech a little bit of a sense of what we are trying to do, as we are trying to bring all these components closer together, and specifically also with our strengths on the data center side really build end-to-end customer solution. Ultimately, what we want to do is make our business, our customer's businesses faster, safer, and ultimately smarter as well. So I want to look a little bit back, because I really believe it's important to understand what's going on today on the device side. Many of us have still grown up with phones with terminals, ultimately getting their first desktop, their first laptop, their first mobile phone, and ultimately smartphone. Emails and internet improved our speed, how we could operate together, but still we were defined by linear technology advances. Today, the world has changed completely. Technology itself is not a limiting factor anymore. It is how we use technology going forward. The Internet is pervasive, and we are not yet there that we are always connected, but we are nearly always connected, and we are moving to the stage, that everything is getting connected all the time. Sharing experiences is the most driving force in our behavior. In our private life, sharing pictures, videos constantly, real-time around the world, with our friends and with our family, and you see the same behavior actually happening in the business life as well. Collaboration is the number-one topic if it comes down to workplace, and video and instant messaging, things that are coming from the consumer side are dominating the way we are operating in the commercial business as well. Most important beside technology, that a new generation of workforce has completely changed the way we are working. As the famous workforce the first generation of Millennials that have now fully entered in the global workforce, and the next generation, it's called Generation Z, is already starting to enter the global workforce. By 2025, 75 percent of the world's workforce will be composed out of two of these generations. Why is this so important? These two generations have been growing up using state-of-the-art IT technology during their private life, during their education, school and study, and are taking these learnings and taking these behaviors in the commercial workspace. And this is the number one force of change that we are seeing in the moment. Diverse workforces are driving this change in the IT spectrum, and for years in many of our customers' focus was their customer focus. Customer experience also in Lenovo is the most important thing, but we've realized that our own human capital is equally valuable in our customer relationships, and employee experience is becoming a very important thing for many of our customers, and equally for Lenovo as well. 
As you have heard YY, as we heard from YY, Lenovo is focused on intelligent transformation. What that means for us in the intelligent device business is ultimately starting with putting intelligence in all of our devices, smartify every single one of our devices, adding value to our customers, traditionally IT departments, but also focusing on their end users and building products that make their end users more productive. And as a world leader in commercial devices with more than 33 percent market share, we can solve problems been even better than any other company in the world. So, let's talk about transformation of productivity first. We are in a device-led world. Everything we do is connected. There's more interaction with devices than ever, but also with spaces who are increasingly becoming smart and intelligent. YY said it, by 2020 we have more than 20 billion connected devices in the world, and it will grow exponentially from there on. And users have unique personal choices for technology, and that's very important to recognize, and we call this concept a digital wardrobe. And it means that every single end-user in the commercial business is composing his personal wardrobe on an ongoing basis and is reconfiguring it based on the work he's doing and based where he's going and based what task he is doing. I would ask all of you to put out all the devices you're carrying in your pockets and in your bags. You will see a lot of you are using phones, tablets, laptops, but also cameras and even smartwatches. They're all different, but they have one underlying technology that is bringing it all together. Recognizing digital wardrobe dynamics is a core factor for us to put all the devices under one roof in IDG, one business group that is dedicated to end-user solutions across mobile, PC, but also software services and imaging, to emerging technologies like AR, VR, IoT, and ultimately a AI as well. A couple of years back there was a big debate around bring-your-own-device, what was called consumerization. Today consumerization does not exist anymore, because consumerization has happened into every single device we build in our commercial business. End users and commercial customers today do expect superior display performance, superior audio, microphone, voice, and touch quality, and have it all connected and working seamlessly together in an ease of use space. We are already deep in the journey of personalized computing today. But the center point of it has been for the last 25 years, the mobile PC, that we have perfected over the last 25 years, and has been the undisputed leader in mobility computing. We believe in the commercial business, the ThinkPad is still the core device of a digital wardrobe, and we continue to drive the success of the ThinkPad in the marketplace. We've sold more than 140 million over the last 26 years, and even last year we exceeded nearly 11 million units. That is about 21 ThinkPads per minute, or one Thinkpad every three seconds that we are shipping out in the market. It's the number one commercial PC in the world. It has gotten countless awards but we felt last year after Transform we need to build a step further, in really tailoring the ThinkPad towards the need of the future. So, we announced a new line of X1 Carbon and Yoga at CES the Consumer Electronics Show. 
And the reason is not we want to sell to consumer, but that we do recognize that a lot of CIOs and IT decision makers need to understand what consumers are really doing in terms of technology to make them successful. So, let's take a look at the video. (suspenseful music) >> When you're the number one business laptop of all time, your only competition is yourself. (wall shattering) And, that's different. Different, like resisting heat, ice, dust, and spills. Different, like sharper, brighter OLA display. The trackpoint that reinvented controls, and a carbon fiber roll cage to protect what's inside, built by an engineering and design team, doing the impossible for the last 25 years. This is the number one business laptop of all time, but it's not a laptop. It's a ThinkPad. (audience applauding) >> Thank you very much. And we are very proud that Lenovo ThinkPad has been selected as the best laptop in the world in the second year in a row. I think it's a wonderful tribute to what our engineers have been done on this one. And users do want awesome displays. They want the best possible audio, voice, and touch control, but some users they want more. What they want is super power, and I'm really proud to announce our newest member of the X1 family, and that's the X1 extreme. It's exceptionally featured. It has six core I9 intel chipset, the highest performance you get in the commercial space. It has Nvidia XTX graphic, it is a 4K UHD display with HDR with Dolby vision and Dolby Atmos Audio, two terabyte in SSD, so it is really the absolute Ferrari in terms of building high performance commercial computer. Of course it has touch and voice, but it is one thing. It has so much performance that it serves also a purpose that is not typical for commercial, and I know there's a lot of secret gamers also here in this room. So you see, by really bringing technology together in the commercial space, you're creating productivity solutions of one of a kind. But there's another category of products from a productivity perspective that is incredibly important in our commercial business, and that is the workstation business . Clearly workstations are very specifically designed computers for very advanced high-performance workloads, serving designers, architects, researchers, developers, or data analysts. And power and performance is not just about the performance itself. It has to be tailored towards the specific use case, and traditionally these products have a similar size, like a server. They are running on Intel Xeon technology, and they are equally complex to manufacture. We have now created a new category as the ultra mobile workstation, and I'm very proud that we can announce here the lightest mobile workstation in the industry. It is so powerful that it really can run AI and big data analysis. And with this performance you can go really close where you need this power, to the sensors, into the cars, or into the manufacturing places where you not only wannna read the sensors but get real-time analytics out of these sensors. To build a machine like this one you need customers who are really challenging you to the limit. and we're very happy that we had a customer who went on this journey with us, and ultimately jointly with us created this product. So, let's take a look at the video. (suspenseful music) >> My world involves pathfinding both the hardware needs to the various work sites throughout the company, and then finding an appropriate model of desktop, laptop, or workstation to match those needs. 
My first impression when I first saw the ThinkPad P1 was I didn't actually believe that we could get everything that I was asked for inside something as small and light in comparison to other mobile workstations. That was one of those "I can't believe this is real" sort of moments for me. (engine roars) >> Well, it's better in general when you're going around in the wind tunnel, which isn't always easy, and going on a track is not necessarily the best bet, so having a lightweight, very powerful laptop is extremely useful. It can take a Xeon processor, which can support ECC, for when we try to load a full car, and when we're analyzing live simulation results through a CFD post processor, for example. It needs a pretty powerful machine. >> It's come a long way to be able to deliver this. I hate to use the word game changer, but it is that for us. >> Aston Martin has got a lot of different projects going. There's some pretty exciting projects and a pretty versatile range coming out. Having Lenovo as a partner is certainly going to ensure that future. (engine roars) (audience applauds) >> So, don't you think the Aston Martin design and the ThinkPad design fit very well together? (audience laughs) So if Q would get a new laptop, I think he would get a ThinkPad P1. So, I want to switch gears a little bit, and go into something in terms of productivity that is not necessarily top of mind for every end user, but I believe it's top of mind for every C-level executive and every CEO. Security is the number one threat in terms of potential risk in your business, and the cost of cybersecurity is estimated to be around six trillion dollars by 2020. That's more than the GDP of Japan, and we've seen a significant number of data breach incidents already this year. They're threatening to take companies out of business, and they're threatening companies with the loss of huge amounts of sensitive customer data or internal data. At Lenovo, we are taking security very, very seriously, and we run a very deep analysis around our own security capabilities in the products that we are building. And we are announcing today a new brand under the Think umbrella that is called ThinkShield. Our goal is to build the world's most secure PC, and ultimately the most secure devices in the industry. And when we looked at this end-to-end, there is no silver bullet around security. You have to go through every aspect where security breaches can potentially happen. That is why we have changed the whole organization, how we look at security in our device business, and really have it grouped under one complete ecosystem of solutions. Security is always something where you are constantly getting challenged with the next potential breach, the next potential technology flaw, as we keep innovating and as we keep integrating a lot of our partners' software and hardware components into our products. So for us, it's really very important that we partner with companies like Intel, Microsoft, Coronet, Absolute, and many others to, as an example, drive full encryption on all the data seamlessly, to have multi-factor authentication to protect your users' identity, to protect you in unsecured Wi-Fi locations, or even simple things like innovation on the device itself, to, as an example, protect the camera against misuse with a little thing like a ThinkShutter that lets you shut off the camera. So what I want to show you here is the full portfolio of ThinkShield that we are announcing today. 
This is clearly not something I can even read to you today, but I believe it shows you the breadth of security management that we are announcing today. There are four key pillars in managing security end-to-end. The first one is your data, and this has a lot of aspects around the hardware and the software itself. The second is identity. The third is the security around online, and ultimately the device itself. So, there is a breakout on security and ThinkShield today, available in the afternoon, and encourage you to really take a deeper look at this one. The first pillar around productivity was the device, and around the device. The second major pillar that we are seeing in terms of intelligent transformation is the workspace itself. Employees of a new generation have a very different habit how they work. They split their time between travel, working remotely but if they do come in the office, they expect a very different office environment than what they've seen in the past in cubicles or small offices. They come into the office to collaborate, and they want to create ideas, and they really work in cross-functional teams, and they want to do it instantly. And what we've seen is there is a huge amount of investment that companies are doing today in reconfiguring real estate reconfiguring offices. And most of these kind of things are moving to a digital platform. And what we are doing, is we want to build an entire set of solutions that are just focused on making the workspace more productive for remote workforce, and to create technology that allow people to work anywhere and connect instantly. And the core of this is that we need to be, the productivity of the employee as high as possible, and make it for him as easy as possible to use these kind of technologies. Last year in Transform, I announced that we will enter the smart office space. By the end of last year, we brought the first product into the market. It's called the Hub 500. It's already deployed in thousands of our customers, and it's uniquely focused on Microsoft Skype for Business, and making meeting instantly happen. And the product is very successful in the market. What we are announcing today is the next generation of this product, what is the Hub 700, what has a fantastic audio quality. It has far few microphones, and it is usable in small office environment, as well as in major conference rooms, but the most important part of this new announcement is that we are also announcing a software platform, and this software platform allows you to run multiple video conferencing software solutions on the same platform. Many of you may have standardized for one software solution or for another one, but as you are moving in a world of collaborating instantly with partners, customers, suppliers, you always will face multiple software standards in your company, and Lenovo is uniquely positioned but providing a middleware platform for the device to really enable multiple of these UX interfaces. And there's more to come and we will add additional UX interfaces on an ongoing base, based on our customer requirements. But this software does not only help to create a better experience and a higher productivity in the conference room or the huddle room itself. It really will allow you ultimately to manage all your conference rooms in the company in one instance. And you can run AI technologies around how to increase productivity utilization of your entire conference room ecosystem in your company. 
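To make the idea of that middleware layer a little more concrete, here is a minimal sketch of how a single room device might hide several conferencing back ends behind one common interface while feeding a fleet-wide utilization report. This is only an illustration under assumed names; the classes, methods, and metrics below are invented for the example and are not Lenovo's actual Hub software or APIs.

```python
# Hypothetical sketch -- invented names, not Lenovo's actual Hub software or APIs.
# It illustrates the general shape of a middleware layer that hides several
# conferencing back ends behind one interface and rolls room usage up into a
# fleet-wide utilization report.
from abc import ABC, abstractmethod
from dataclasses import dataclass


class ConferencingProvider(ABC):
    """Common interface each conferencing back end would implement."""

    @abstractmethod
    def start_meeting(self, room_id: str, meeting_id: str) -> None: ...

    @abstractmethod
    def end_meeting(self, room_id: str, meeting_id: str) -> None: ...


class VendorAProvider(ConferencingProvider):
    """Placeholder adapter for one conferencing vendor (purely illustrative)."""

    def start_meeting(self, room_id: str, meeting_id: str) -> None:
        print(f"[vendor A] starting {meeting_id} in {room_id}")

    def end_meeting(self, room_id: str, meeting_id: str) -> None:
        print(f"[vendor A] ending {meeting_id} in {room_id}")


@dataclass
class RoomHub:
    """One meeting-room device; delegates to whichever provider a meeting needs."""
    room_id: str
    providers: dict[str, ConferencingProvider]
    minutes_used: int = 0

    def run_meeting(self, provider_name: str, meeting_id: str, minutes: int) -> None:
        provider = self.providers[provider_name]
        provider.start_meeting(self.room_id, meeting_id)
        provider.end_meeting(self.room_id, meeting_id)
        self.minutes_used += minutes  # crude utilization metric


def utilization_report(rooms: list[RoomHub], hours_available: float) -> dict[str, float]:
    """Fraction of available time each room was in use, as input for capacity planning."""
    return {r.room_id: r.minutes_used / (hours_available * 60) for r in rooms}


if __name__ == "__main__":
    hub = RoomHub("NYC-7F-huddle-3", {"vendor_a": VendorAProvider()})
    hub.run_meeting("vendor_a", "weekly-sync", minutes=30)
    print(utilization_report([hub], hours_available=10))  # {'NYC-7F-huddle-3': 0.05}
```

The design point is that supporting another conferencing service only means writing one more adapter, while the room management and utilization analytics layer stays unchanged.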
You will see a lot more devices coming from the node in this space, around intelligent screens, cameras, and so on, and so on. The idea is really that Lenovo will become a core provider in the whole movement into the smart office space. But it's great if you have hardware and software that is really supporting the approach of modern IT, but one component that Kirk also mentioned is absolutely critical, that we are providing this to you in an as a service approach. Get it what you want, when you need it, and pay it in the amount that you're really using it. And within UIT there is also I think a new philosophy around IT management, where you're much more focused on the value that you are consuming instead of investing into technology. We are launched as a service two years back and we already have a significant number of customers running PC as a service, but we believe as a service will stretch far more than just the PC device. It will go into categories like smart office. It might go even into categories like phone, and it will definitely go also in categories like storage and server in terms of capacity management. I want to highlight three offerings that we are also displaying today that are sort of building blocks in terms of how we really run as a service. The first one is that we collaborated intensively over the last year with Microsoft to be the launch pilot for their Autopilot offering, basically deploying images easily in the same approach like you would deploy a new phone on the network. The purpose really is to make new imaging and enabling new PC as seamless as it's used to be in the phone industry, and we have a complete set of offerings, and already a significant number customers have deployed Autopilot with Lenovo. The second major offering is Premier Support, like in the in the server business, where Premier Support is absolutely critical to run critical infrastructure, we see a lot of our customers do want to have Premier Support for their end users, so they can be back into work basically instantly, and that you have the highest possible instant repair on every single device. And then finally we have a significant amount of time invested into understanding how the software as a service really can get into one philosophy. And many of you already are consuming software as a service in many different contracts from many different vendors, but what we've created is one platform that really can manage this all together. All these things are the foundation for a device as a service offering that really can manage this end-to-end. So, implementing an intelligent workplace can be really a daunting prospect depending on where you're starting from, and how big your company ultimately is. But how do you manage the transformation of technology workspace if you're present in 50 or more countries and you run an infrastructure for more than 100,000 people? Michelin, famous for their tires, infamous for their Michelin star restaurant rating, especially in New York, and instantly recognizable by the Michelin Man, has just doing that. Please welcome with me Damon McIntyre from Michelin to talk to us about the challenges and transforming collaboration and productivity. (audience applauding) (electronic dance music) Thank you, David. >> Thank you, thank you very much. >> We on? >> So, how do you feel here? >> Well good, I want to thank you first of all for your partnership and the devices you create that helped us design, manufacture, and distribute the best tire in the world, okay? 
I just had to say it and put out there, alright. And I was wondering, were those Michelin tires on that Aston Martin? >> I'm pretty sure there is no other tire that would fit to that. >> Yeah, no, thank you, thank you again, and thank you for the introduction. >> So, when we talk about the transformation happening really in the workplace, the most tangible transformation that you actually see is the drastic change that companies are doing physically. They're breaking down walls. They're removing cubes, and they're moving to flexible layouts, new desks, new huddle rooms, open spaces, but the underlying technology for that is clearly not so visible very often. So, tell us about Michelin's strategy, and the technology you are deploying to really enable this corporation. >> So we, so let me give a little bit a history about the company to understand the daunting tasks that we had before us. So we have over 114,000 people in the company under 170 nationalities, okay? If you go to the corporate office in France, it's Clermont. It's about 3,000 executives and directors, and what have you in the marketing, sales, all the way up to the chain of the global CIO, right? Inside of the Americas, we merged in Americas about three years ago. Now we have the Americas zone. There's about 28,000 employees across the Americas, so it's really, it's really hard in a lot of cases. You start looking at the different areas that you lose time, and you lose you know, your productivity and what have you, so there, it's when we looked at different aspects of how we were going to manage the meeting rooms, right? because we have opened up our areas of workspace, our CIO, CEOs in our zones will no longer have an office. They'll sit out in front of everybody else and mingle with the crowd. So, how do you take those spaces that were originally used by an individual but now turn them into like meeting rooms? So, we went through a large process, and looked at the Hub 500, and that really met our needs, because at the end of the day what we noticed was, it was it was just it just worked, okay? We've just added it to the catalog, so we're going to be deploying it very soon, and I just want to again point that I know everybody struggles with this, and if you look at all the minutes that you lose in starting up a meeting, and we know you know what I'm talking about when I say this, it equates to many many many dollars, okay? And so at the end the day, this product helps us to be more efficient in starting up the meeting, and more productive during the meeting. >> Okay, it's very good to hear. Another major trend we are seeing in IT departments is taking a more hands-off approach to hardware. We're seeing new technologies enable IT to create a more efficient model, how IT gets hardware in the hands of end-users, and how they are ultimately supporting themselves. So what's your strategy around the lifecycle management of the devices? >> So yeah you mentioned, again, we'll go back to the 114,000 employees in the company, right? You imagine looking at all the devices we use. I'm not going to get into the number of devices we have, but we have a set number that we use, and we have to go through a process of deploying these devices, which we right now service our own image. We build our images, we service them through our help desk and all that process, and we go through it. If you imagine deploying 25,000 PCs in a year, okay? 
The time and the daunting task that's behind all that, you can probably add up to 20 or 30 people just full-time doing that, okay? So, with partnering with Lenovo and their excellent technology, their technical teams, and putting together the whole process of how we do imaging, it now lifts that burden off of our folks, and it shifts it into a more automated process through the cloud, okay? And, it's with the Autopilot on the end of the project, we'll have Autopilot fully engaged, but what I really appreciate is how Lenovo really, really kind of got with us, and partnered with us for the whole process. I mean it wasn't just a partner between Michelin and Lenovo. Microsoft was also partnered during that whole process, and it really was a good project that we put together, and we hope to have something in a full production mode next year for sure. >> So, David thank you very, very much to be here with us on stage. What I really want to say, customers like you, who are always challenging us on every single aspect of our capabilities really do make the big difference for us to get better every single day and we really appreciate the partnership. >> Yeah, and I would like to say this is that I am, I'm doing what he's exactly said he just said. I am challenging Lenovo to show us how we can innovate in our work space with your devices, right? That's a challenge, and it's going to be starting up next year for sure. We've done some in the past, but I'm really going to challenge you, and my whole aspect about how to do that is bring you into our workspace. Show you how we make how we go through the process of making tires and all that process, and how we distribute those tires, so you can brainstorm, come back to the table and say, here's a device that can do exactly what you're doing right now, better, more efficient, and save money, so thank you. >> Thank you very much, David. (audience applauding) Well it's sometimes really refreshing to get a very challenging customers feedback. And you know, we will continue to grow this business together, and I'm very confident that your challenge will ultimately help to make our products even more seamless together. So, as we now covered productivity and how we are really improving our devices itself, and the transformation around the workplace, there is one pillar left I want to talk about, and that's really, how do we make businesses smarter than ever? What that really means is, that we are on a journey on trying to understand our customer's business, deeper than ever, understanding our customer's processes even better than ever, and trying to understand how we can help our customers to become more competitive by injecting state-of-the-art technology in this intelligent transformation process, into core processes. But this cannot be done without talking about a fundamental and that is the journey towards 5G. I really believe that 5G is changing everything the way we are operating devices today, because they will be connected in a way like it has never done before. YY talked about you know, 20 times 10 times the amount of performance. There are other studies that talk about even 200 times the performance, how you can use these devices. What it will lead to ultimately is that we will build devices that will be always connected to the cloud. 
And we are preparing for this, and Kirk already talked about how many operators in the world we are already present with, with our Moto phones, and how many telcos we are already working with on the backend, and we are working on the device side on integrating 5G basically into every single one of our products in the future. One of the areas that will benefit hugely from being always connected is the world of virtual reality and augmented reality. And I'm going to pick one example here, and that is that we have created a commercial VR solution for classrooms and education, basically using a consumer type of product like our Mirage Solo with Daydream, and putting a solution around it that enables teachers and schools to use these products in the classroom experience. So, students now can have immersive learning. They can study sciences. They can look at environmental issues. They can explore their careers, or they can even take a tour of the next college they're going to go to after this one. And no matter what grade level, this is how people will continue to learn in the future. It's quite a departure from the old world of textbooks. Another area that we are looking at is IoT. And as YY already elaborated, we are clearly learning from our own processes around how we improve our supply chain and manufacturing, and how we also improve the retail experience and warehousing, and we are working with some of the largest companies in the world on pilots, on deploying IoT solutions to make their businesses and their processes, you know, more competitive, and some of them you can see in the demo environment. Lenovo itself is already managing 55 million devices in an IoT fashion, connecting to our own cloud, and constantly improving the experience by learning from the behavior of these devices in an IoT way, and we are collecting a significant amount of data to really improve the performance of these systems and our future generations of products on an ongoing basis. We have a very strong partnership with a company called ADLINK from Taiwan, which is one of the leading manufacturers of manufacturing PCs and hardened devices, to create solutions on the IoT platform. The next area that we are very actively investing in is commercial augmented reality. I believe augmented reality has by far more opportunity in commercial than virtual reality, because it has the potential to ultimately improve every single business process of commercial customers. Imagine in the future how complex surgeries can be simplified by basically having real-time augmented reality information about the surgery, by having people connecting into a virtual surgery, and supporting the surgery from around the world. Visit a furniture store in the future and see how this furniture looks in your home instantly. Do some maintenance on some devices yourself by just calling the company and getting an online manual in an augmented reality device. Lenovo is exploring all kinds of possibilities, and you will see a solution very soon from Lenovo. Earlier, when we talked about smart office, I talked about the importance of creating a software platform that really runs all these use cases for a smart office. We are creating a similar platform for augmented reality, where companies can develop and run all their augmented reality use cases. So you will see that early in 2019 we will announce an augmented reality device, as well as an augmented reality platform. 
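As a rough illustration of the fleet telemetry loop described above, where connected devices report behavior back to a cloud service so future products can be improved, the sketch below shows small device health records being aggregated and screened for outliers. The record fields, thresholds, and names are invented for the example; this is not Lenovo's actual IoT pipeline.

```python
# Toy illustration only -- invented field names and thresholds, not a real pipeline.
# Devices report small health records; the back end aggregates them and flags
# outliers that might point at a firmware or hardware issue.
from dataclasses import dataclass
from statistics import mean


@dataclass
class HealthReport:
    device_id: str
    battery_cycles: int
    avg_temp_c: float
    crashes_last_7d: int


def flag_outliers(reports: list[HealthReport], max_crashes: int = 3) -> list[str]:
    """Return device ids whose recent crash count is above the chosen threshold."""
    return [r.device_id for r in reports if r.crashes_last_7d > max_crashes]


def fleet_summary(reports: list[HealthReport]) -> dict[str, float]:
    """Aggregate stats a product team might track across the installed fleet."""
    return {
        "avg_temp_c": mean(r.avg_temp_c for r in reports),
        "avg_battery_cycles": mean(r.battery_cycles for r in reports),
    }


if __name__ == "__main__":
    fleet = [
        HealthReport("tp-0001", 212, 41.5, 0),
        HealthReport("tp-0002", 530, 47.2, 5),  # runs hot and crashes often, so it gets flagged
    ]
    print(flag_outliers(fleet))   # ['tp-0002']
    print(fleet_summary(fleet))   # {'avg_temp_c': 44.35, 'avg_battery_cycles': 371}
```

In practice the value is in what is done with the aggregates, for example feeding crash-prone configurations back into firmware updates or the next hardware revision.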
So, I know you're very interested in what exactly we are rolling out, so we will have a first prototype view available there. It's still a codename project on the horizon, and we will announce it ultimately in 2019, but I think it's good for you to take a look at what we are doing here. So, I just wanted to give you a peek at what we are working on beyond smart office and device productivity in terms of really how we make businesses smarter. It's really about increasing productivity, providing you the most secure solutions, increasing workplace collaboration, increasing IT efficiency, and using new computing devices and software and services to make business smarter in the future. There's no other company that will be able to offer what we do in commercial. No company has the breadth of commercial devices, software solutions, and the same data center capabilities, and no other company can do more for your intelligent transformation than Lenovo. Thank you very much. (audience applauding) >> Thanks mate, give me that. I need that. Alright, ladies and gentlemen, we are done. So firstly, I've got a couple of little housekeeping pieces at the end of this, and then we can go straight into going and experiencing some of the technology we've got on the left-hand side of the room here. So, I want to thank Christian obviously. Christian, awesome as always, some great announcements there. I love the P1. I actually like the Aston Martin a little bit better, but I'll take either if you want to give me one for free. I'll take it. We heard from YY obviously about the industry and how the fourth Industrial Revolution is impacting us all from a digital transformation perspective, and obviously Kirk on DCG, the great NetApp announcement, which is going to be really exciting. Actually, Twitter and some of the social media panels are absolutely going crazy, so it's good to see that it's really making an impact across the industry. Some of the publications are really great, so thank you to the media who are obviously in the room publishing right now. But now, I really want to say it's all of your turn. So, all of you up the back there who are having coffee, it's your turn now. I want everyone who's sitting down here after this event to move in there, and really take advantage of the 15 breakouts that we've got set up there. There are four breakout sessions from a time perspective. I want to try and get you all out there to use up at least three of them, and use your fourth one to get out and actually experience some of the technology. So, you've got four breakout sessions. A lot of the breakout sessions are actually done twice. If you have not downloaded the app, please download the app so you can actually see what time things are going on and make sure you're registering correctly. There's a lot of great stuff out there for you to go experience. I've got one quick video to show you on some of the technology we've got, and then we're about to close. Alright, here we are acting crazy. Now, you can see obviously, artificial intelligence, machine learning in the browser. God, I hate that dance, I'm not a Millennial at all. It's effectively going to be implemented in healthcare. I want you to come around and test that out. Look at these two guys. This looks like a Lenovo management meeting, to be honest with you. These two guys are actually concentrating, using their brain power to race each other in cars. You got to come past and give that a try. Give that a try obviously. 
Fantastic event here, lots of technology for you to experience, and great partners that have been involved as well. And so, from a Lenovo perspective, we've had some great alliance partners contribute, including obviously our number one partner, Intel, who's been a really big, loyal contributor to us, and been a real part of our success here at Transform. Excellent, so please, you've just seen a little bit of tech out there that you can go and play with. I really want you to, I mean, go put on those black things, like Scott Hawkins, our chief marketing officer from Lenovo's DCG business, was doing, racing around this little car with his concentration, not using his hands. He said it's really good actually, but as soon as someone comes up to speak to him, his car stops, so you got to try and do better. You got to try and prove if you can multitask or not. Get up there and concentrate and talk at the same time. 62 different breakouts up there. I'm not going to go into too much detail, but you can see we've got a very, very unusual numbering system, 18 to 18.8. I think over here we've got a 4849. There's a 4114. And then up here we've got a 46.1 and a 46.2. So, you need the decoder ring to be able to understand it. Get over there and have a lot of fun. Remember the boat leaves today at 4 o'clock, right at the pier behind us here. There's 400 of us registered. Go onto the app and let us know if there's more people coming. It's going to be a great event out there on the Hudson River. Ladies and gentlemen, that is the end of your keynote. I want to thank you all for being patient and thank all of our speakers today. Have a great day, thank you very much. (audience applauding) (upbeat music) ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ♪ ♪ Ba da bop bop bop ba do ♪

Published Date : Sep 13 2018


Art Langer, Columbia University - Nutanix .NEXTconf 2017 - #NEXTconf - #theCUBE


 

>> Announcer: Live, from Washington, DC, it's the cube. Covering dot next conference. Brought to you by Nutanix. >> Welcome back to DC everybody, this is the Nutanix dot next conference #NEXTConf, and this is the cube, the leader in live tech coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante, and I'm here with my co-host Stu Miniman. Dr. Arthur Langer is here, he's a professor at Columbia University, and a cube alum. Good to see you, thanks very much for coming on. >> Great to be back. >> Dave: Appreciate your time. So, interesting conversations going on at dot next. People talking about cloud and you hear a lot about virtualization and infrastructure. We're going to up level it a bit. You're giving a talk-- you're hosting a panel today, and you're also giving a talk on strategic IT. Using IT as a competitive weapon. It wasn't that long ago where people were saying does IT matter. We obviously know it matters. What's your research showing, what is your activity demonstrating about IT and how is it a strategic initiative? >> Well, if you were to first look at what goes on on board meetings today, I would say, and I think I mentioned this last time, the three prominent discussions at a board is how can I use technology for strategic advantage, how can I use predictive analytics, and how are you securing and protecting us? And when you look at that, all three of those ultimately fall in the lap of the information technology people. Now you might say digital or other parts of it, but the reality is all of this sits at the heart of information technology. And if you look at many of us in that world, we've learned very efficiently and very good how to support things. But now to move into this other area of driving business, of taking risks, of becoming better marketers. Wow, what an opportunity that is for information technology leadership. >> Dave: So, obviously you believe that IT is a strategic advantage. Is it sustainable though? You know, I was sort of tongue-in-cheek joking about the Nick Car book, but the real premise of his book was it's not a sustainable competitive advantage. Is that true in your view? >> I don't believe that at all. I live and die by that old economics curve called the S curve. In which you evaluate where your product life is going to be. I think if you go back and you look at the industrial revolution, we are very early. I think that the changes, the acceleration of changes brought on by technological innovations, will continue to haunt businesses and provide these opportunities well past our life. How's that? So, if anybody thinks that this is a passing fad, my feeling is they're delusional. We're just warming up. >> So it can be a sustainable competitive advantage, but you have to jump S curves and be willing to jump S curves at the right time. Is that a fair difference? >> Yeah, the way I would say it to you, the S curve is shrinking, so you have less time to enjoy your victories. You know, the prediction is that-- how long will people last on a dow 500 these days? Maybe two, three years, as opposed to 20, 30, 40 years. Can we change fast enough, and is there anything wrong with the S curve ending and starting a new one? Businesses reinventing themselves constantly. Change a norm. >> Professor Langer, one of the challenges we hear from customers is keeping up with that change is really tough. How do you know what technologies, do you have the right skill set? What advice are you giving? 
How do people try to keep up with the change, understand what they should be doing internally versus turning to partners to be able to handle. >> I think it's energy and culture and excitement. That's the first thing that I think a lot of people are missing. You need to sell this to your organizations. You need to establish why this is such a wonderful time. Alright, and then you need to get the people in, between the millenials and the baby boomers and the gen x's, and you got to get them to work together. Because we know, from research right now, that without question, the millenials will need to move into management positions faster than any of their predecessors. Because of retirements and all of the other things that are going on. But the most important thing, which is where I see IT needing to move in, is you can't just launch one thing. You have to launch lots of things. And this is the old marketing concept, right. You don't bat a thousand. And IT needs to come out of its shell in that area and say I have to launch five, six, eight, 10 initiatives. Some of them will make it. Some of them won't. Can you imagine private equity or venture people trying to launch every company and be successful? We all know that in a market of opportunity, there are risks. And to establish that as an exciting thing So, you know what, it comes back to leadership in many ways. >> Great point, because if you're not having those failures, your returns are going to be minuscule. If you're only investing in things that are sure things, then it's pretty much guaranteed to have low single-digit returns, if that. >> Look what happened at Ford. They did everything pretty well. They never took any of the money, right, but they changed CEO's because they didn't get involved in driverless cars enough. I mean these are the things that we're-- If you're trying to catch up, it's already over. So how do you predict what's coming. And who has that? It's the data. It's the way we handle the data. It's the way we secure the data. Who's going to do that? >> So, that brings me to the dark side of all this enthusiasm, which is security. You see things like IOT, you know the bad guys have AI as well. Thoughts on security, discussions that are going on in the board room. How CIOs should be thinking about communicating to the board regarding security. >> I've done a lot of work in this area. And whether that falls into the CISO, the Chief Information Security Officer, and where they report. But the bottom line is how are they briefing their boards. And once again, anybody that knows anything about security knows that you're not going to keep 'em out. It's going to be an ongoing process. It's going to be things like okay what do we do when we have these type >> response >> How do we respond to that? How do we predict things? How do we stay ahead of that? And that is the more of the norm. And what we see, and I can give you sort of an analogy, You know when the President comes to speak in a city, what do they, you know, they close down streets, don't they? They create the unpredictability. And I think one of the marvelous challenges for IT is to create architectures, and I've been writing about this, which change so that those that are trying to attack us and they're looking for the street to take inside of the network. We got to kind of have a more dynamic architecture. To create unpredictability. So these are all of the things that come into strategy, language, how to educate our boards. 
How to prepare the next generation of those board members. And where will the technology people sit in those processes. >> Yeah, we've had the chance to interview some older companies. Companies 75, 150 years old, that are trying to become software companies. And they're worried about the AirBnB's of the world disrupting what they're doing. How do you see the older companies keeping pace and trying to keep up with some of young software companies? >> Sure, how do you move 280 thousand people at a major bank, for example. How do you do that? And I think there's several things that people are trying. One is investing in startups with options to obtain them and purchase them. The other is to create, for lack of a better word, labs. Parts of the company that are not as controlled, or part of the predominant culture. Which as we know historically will hold back the company. Because they will just typically try to protect the domain that has worked for them so well. So those are the two main things. Creating entities within the companies that have an ability to try new things. Or investing entrepreneurially, or even intrapreneurally with new things with options to bring them in. And then the third one, and this last one is very difficult, sort of what Apple did. One of the things that has always haunted many large companies is their install base. The fact that they're trying to support the older technologies because they don't want to lose their install base. Well remember what Steve Jobs did. He came in with a new architecture and he says either you're with me or you're not. And to some extent, which is a very hard decision, you have to start looking at that. And challenge your install base to say this is the new way, we'll help you get there, but at some point we can't support those older systems. >> One of my favorite lines in the cube, Don Tapps, God created the world in six days, but he didn't have an install base. Right, because that handcuffs companies and innovation, in a lot of cases. I mean, you saw that, you've worked at big companies. So I want to ask you, Dr. Langer, we had this, for the last 10 years, this consumerization of IT. The Amazon effect. You know, the whole mobile thing. Is technology, is IT specifically, getting less complex or more complex? >> I think it's getting far more complex. I think what has happened is business people sometimes see the ease of use. The fact that we have an interface with them, which makes life a lot easier. We see more software that can be pushed together. But be careful. We have found out with cybersecurity problems how extraordinarily complicated this world is. With that power comes complexities. Block chain, other things that are coming. It's a powerful world, but it's a complicated one. And it's not one where you want amateurs running the back end of your businesses. >> Okay, so let's talk about the role of those guys running. We've talked a lot about data. You've seen the emergence of the chief data officer, particularly in regulated industries, but increasingly in non-regulated businesses. Who should be running the technology show? Is it a business person? Is it a technologist? Is it some kind of unicorn blend of those? >> I just don't think, from what we've seen by trying marketing people, by trying business people, that they can really ultimately grasp the significance of the technical aspects of this. It's almost like asking someone who's not a doctor to run a hospital. 
I know theoretically you could possibly do that, but think about that. So you need that technology. I'm not caught up on the titles, but I am concerned, and I wrote an article in the Wall Street Journal a couple of years ago, that there are just too many C-level people floating around owning this thing. And I think, whether you call it the chief technologist, or the executive technical person, or the chief automation individual, that all those people have to be talking to each other, and have to lead up to someone who's not only understanding the strategy, but really understands the back end of keeping the lights on, and the security and everything else. The way I've always said it, the IT people have the hardest job in the world. They're fighting a two-front war. Because both of those don't necessarily mesh nicely together. Tell me another area of an organization that is a driver and a supporter at the same time. You look at HR, they're a supporter. You look at marketing, they're a driver. So the complexities of this are not just who you are, but what you're doing at any moment in time. So you could have a support person that's doing something, but at one moment, in that person's function, could be doing a driving, risk-taking responsibility. >> So what are some of the projects you're working on now? What's exciting you? >> Well, the whole idea of how to drive that strategy, how to take risks, the digital disruption era, is a tremendous opportunity. This is our day for the-- because most companies are not really clear what to do. Socially, I'm looking very closely at smart cities. This is another secret wave of things that are happening. How a city's going to function. Within five, seven years, they're predicting that 75% of the world's population will live in major cities. And you won't have to work in the city and live there. You could live somewhere else. So cities will compete. And it's all about the data, and automation. And how do organizations get closer with their governments? Because our governments can't afford to implement these things. Very interesting stuff. Not to mention the issues of the socially excluded. And underserved populations in those cities. And then finally, how does this mesh with cyber risk? And how does that come together to the promotion of that role in organizations? Just a few things, and then, a little bit behind, there's of course blockchain. How is that going to affect the world that we live in? >> Just curious, your thoughts on the future of jobs. You know, look at what's happening with automation, kind of the hollowing out of the middle class. The opportunities and risks there. >> I think it has to do with the world of what I call supply chain. And it's amazing that we still see companies coming to me saying I can't fill positions. Particularly in the five-year range. And an inability to invest in younger talent to bring them in there. Our educational institutions obviously will be challenged. We're in a skills-based market. How do they adapt? How do we change that? We see programs like IBM launching new collar. Where they're actually considering non-degreed people. How do universities start working together to get closer, in my opinion, to corporations. Where they have to work together. And then there is, let's be careful. There are new horizons. Space, new things to challenge that technology will bring us. 20 years ago I was at a bank, which I won't mention, talking about the closing of branch banks.
Because we thought that technology would take over online banking. Well, 20 years later, online banking's done everything we predicted, and we're opening more branches than ever before. Be careful. So, I'm a believer that, with new things come new opportunities. The question is how do governments and corporations and educational institutions get closer together. This is going to be critical as we move forward. Or else the have-nots are going to grow, and that's a problem. >> Alright, we have to leave it there. Dr. Arthur Langer, sir, thanks very much for coming in to theCUBE. >> It's always a pleasure to be here. >> It's a pleasure to have you. Alright, keep it right there everybody, we'll be back with our next guest. Dave Vellante, Stu Miniman, be right back.

Published Date : Jun 28 2017

Day 1 Wrap - SAP SAPPHIRE NOW - #SAPPHIRENOW #theCUBE


 

(bombastic music) >> Narrator: It's theCUBE, covering Sapphire Now 2017. Brought to you by SAP Cloud Platform and HANA Enterprise Cloud. >> Lisa Martin: Journey to the Cloud requires empathy, requires transparency, and we've both kind of chatted about... Empathy is kind of an interesting thing. >> George Gilbert: Yeah. >> We don't necessarily hear a lot of CEOs talk about that. They also really talked about and drove home the point that software is now a strategy. Being open is a game-changer. So, a couple of things I kind of wanted to recap with you was there was two initiatives that they, SAP, launched, or announced, today, reinforcing the pledge to listen to customers. And one of them is the SAP Cloud Trust Center, this public website that offers real-time information on the current operations of Cloud solutions from SAP. Along the lines of empathy and transparency and really listening to the customers, what, in your take, is the SAP Cloud Trust Center, and what does it really mean? >> Okay, maybe start with an analogy. We used to call people who did not want to outsource their infrastructure, we called them "server-huggers," you know, they wanted to own their infrastructure. And part of allowing your software, mission critical software, to migrate off your... out of your data centers, off-prem, requires a certain amount of trust that takes awhile... takes awhile to earn, because you're going from infrastructure that you've tuned and that only supports your app to infrastructure in the Cloud that's shared. And that's a big change. So, essentially, SAP is saying, "We'll give you a window onto how we operate this, so that we can earn your trust over time." You know, sort of like a marriage: through thick and thin, richer or for poorer, because there are going to be hiccups and downtimes. But ideally, SAP is taking responsibility and risk off the customer. And over time, that should be... Since they know better how to run their software than anyone else, that should work. So they're taking what they believe is a very reasonable risk in saying, "We'll show you how well we do, and we'll show you we do it better than you." >> So there are, right now, there will be three operations, three services, that will be visible, where customers can see planned maintenance schedules, four weeks of historical data, as well as real-time availability, security, and data to privacy. You brought up a great point that I think in many, many contexts, this transcends industries. This transcends peoples. That trust has to be earned. Does this set SAP apart, or differentiate them, in the market? >> Gilbert: I actually think that this was the sincerest form of flattery in terms of copying Salesforce.com. >> Martin: Ah. >> Because they've had this for awhile. And SAP is far more mission-critical, because it's sort of your system of record. It keeps track of everything that happens in your business, whereas Salesforce, it's not really a transactional system. It's more of keeping track of your opportunities, you know, and your customers. If SAP goes down, your business goes down. >> Right. Right. So another thing that they announced regarding, or along the same lines of, this pledge to customers about being empathetic, about being transparent, is the Transformation Navigator. Now, this came actually directly out of comments that Bill McDermott made at SAP Sapphire 2016, where SAP really wanted to start looking at the world through the customer's perspective, through their lens. 
So talk to us about the Transformation Navigator. Who is it for, what does it do, and what can people or companies expect to get from it? >> I think that one way to look at it is SAP made a bunch of very large and very important acquisitions, like Concur for expense reporting, SuccessFactors for... HR measurement and talent management, and Ariba for procurement. And I don't think they had put together a compelling case for why you buy them all together. And I think that was the first objective of the Transformation Navigator, because it says that it outlines the business value, helps you with transformation services, explains how all the Cloud apps, which were the ones they bought, integrate with the existing ERP, whether on-prem or in the Cloud, and shows you a roadmap. So it sounds to me like it's their first comprehensive attempt to say, "Buy our product family." I would say that the empathy part, the Cloud Trust Center, is a much deeper attempt to say, "Hey, we're going to make all this stuff work together." The first is a value proposition. >> Martin: Right. We should mention that there are two sessions at SAP Sapphire Now that attendees can take advantage of under the auspices of the SAP Transformation Navigator. There is a session on digital transformation, a concept session, and there's also digital transformation deep-dive sessions. So if you're around and you've got time, check those out. Another thing that we talked a lot about today, and that we heard a good amount of today, George, was this expanded Leonardo. That was brought up in the keynote on main stage this morning. And we know that Leonardo was really the brand for IoT, but now it's got new ingredients, it's got these new systems of intelligence, machine learning, artificial intelligence, analytics, blockchain. What are the keys of getting value from these technologies with this new, expanded Leonardo capability? >> I guess one way to think about it is... So the SAP core, which they call, I believe they call the... either "digital core" or just "core," which is the old system of record, and then all these new capabilities around it, which is how to extend that system of record into a system of intelligence. Again, used to be just... Last year, it was IoT, but now there's so much more richness that goes around it. These are all building blocks that customers can sort of ultimately mix and match. Like, you could use blockchain as a way of ensuring that there's no tampering or fraud from the bananas in Peru, all the way till the grocery store in New Jersey. But if you use that in conjunction with supply chain, machine learning, replenishment, you get much better asset utilization. I guess... they're trying to say, "We have your system of record. We have your mission-critical data and business processes." Now it's easy to build around on the edges, around the edge of that, to add the innovative processes. >> So it sounds like, from a value perspective, by embedding Leonardo into business applications... >> Gilbert: Yeah. >> There are innovations that customers can achieve, asset management, you talked about that, so there's clear business value. As you mentioned, it's maybe like a pick-and-choose that customers can decide which of these new systems of intelligence that they need, but there's clearly a business value derivation there. >> You could think of... Yeah, where all these new services enable transformative business outcomes, the old system of record was more, as we've talked about before, was about efficiency. 
So it makes sense to position these capabilities as transformative. And to say that they leverage the system of record, core, makes SAP appear to be the more natural provider of these new services. >> So in this route, they did announce that they are partnering with Deloitte. What do you think they're doing here? What's the advantage that provides to SAP's install base? >> When you're... embarking on these transformational business outcomes, there is... severe, challenging change management that has to be done. It's not just that it's... We always have products, processes, and technologies, or people, products, and technologies. Here, your processes and your people have to go through much more radical change than they would in an efficiency application, which was the old system of record. We all remember back when SAP R/3 was taking off, the big system integrators got spectacularly wealthy over the change management requirements to do the efficiency roll-outs. Now, to do the transformational ones are far more challenging right now. >> So, another thing that we chatted about earlier was that SAP has embedded machine learning into a new wave of applications. What are those applications, and what is this really for SAP as a business? >> Well, my favorite analogy is something I guess I heard from one of the SIs back in the heyday of the original SAP R/3, which was, you know... Traditional business intelligence and reporting was really about steering a ship by looking backwards at its wake. And machine learning is all about predictive... answers and solutions. So you pivot now, and we've heard a lot about this concept of "software's eating the world," but now data is eating software, because it's the data that programs the software about how to look forward. And some of those forward-looking things are figuring out how to route a service ticket, like, if something goes wrong, where does it go into the support organization? A really important top-line one is customer retention, where you predict if a customer is about to churn, what type of offer do you have to make? >> Martin: Right. >> Then there's a cash application, which, to me, is kind of administrative, where it makes it easy to match a receivable, like an invoice, with a bank statement. Still kind of clerical, and yes, you get productivity out of it, but it's not a top-line thing like the customer churn function. There's a brand impact one where it's like, "I've spent x amount to promote my brand at a sporting event, used machine vision to find out how many logos were out there, and did it have impact that I can measure?" There are a whole bunch of applications like this, and there will be more. And when I say more, I think the more impactful ones that relate to, like, supply chain, where it's optimizing the flow of goods, choosing strategic suppliers... >> So this may be, with SAP embedding machine learning into this new wave of apps, is, like, a positive first step, entry level, for them to get up the chain of value? >> Gilbert: Yeah. The first... Yes. Yes. Yes. The first ones look to be sort of like baby steps, but SAP is in a position to implement more impactful ones. But it's worth saying, though, that in the spirit of "data is eating software," the people who have the most data are not the enterprise application vendors. They're the public Cloud vendors. >> Martin: Right. >> And they are the... sort of... unacknowledged future competitors, mortal competitors, for machine learning apps. >> Okay. Interesting. 
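For readers who want a concrete picture of the churn-prediction use case George describes, here is a minimal, hypothetical sketch using synthetic data and a generic scikit-learn classifier. It is not SAP Leonardo code, and every feature name and threshold in it is invented purely for illustration.

```python
# Toy churn-risk scorer: illustrative only, not any vendor's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic customer features: [months_since_last_order, open_support_tickets, avg_order_value]
X = rng.normal(loc=[3.0, 2.0, 500.0], scale=[2.0, 1.5, 150.0], size=(500, 3))
# Synthetic label: customers who have gone quiet and filed many tickets tend to churn.
y = ((X[:, 0] > 4.0) & (X[:, 1] > 2.0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new customer and decide whether to trigger a retention offer.
new_customer = np.array([[6.0, 4.0, 320.0]])
churn_probability = model.predict_proba(new_customer)[0, 1]
if churn_probability > 0.5:
    print(f"High churn risk ({churn_probability:.2f}): queue a retention offer")
else:
    print(f"Low churn risk ({churn_probability:.2f})")
```

The point is the shape of the workflow, historical behavior in and a forward-looking score out, rather than the specific model; a production system would train on real transaction history from the system of record.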
So, another thing that I wanted to switch gears, see if we could get a couple more topics in before we wrap here... The digital twin for IoT devices. So the relaunching of Leonardo as SAP's digital brand, they've expanded this definition. What does that mean? What is the digital twin? >> Okay, so digital twin is probably the most brilliant two-word marketing term that's come out of our industry in awhile. >> (chuckling) >> Because GE came up with it to describe, with their industrial Internet of Things, any industrial asset or device where, you took a physical version, and then you created a very high-fidelity software representation of it, or digital representation. I don't want to say replica, because it'll never be that perfect. >> Martin: Okay. >> But they would take the design information from a piece of CAD software, like maybe PTC or Autodesk. So that's as designed. There would be information from how it was manufactured. That particular instance, in addition to, let's say all aircraft engines of this... (sudden musical interlude) ...track, each instance. >> (coughing) Excuse me. >> Then, how it was shipped or who it was sent to, how it was operated, how it was maintained, so then you could... The aircraft engine manufacturer could provide proactive fleet maintenance for all the engines. It would be different from the... very different from having the airlines looking in their manuals, saying, "Okay, every 50,000 miles I got to change the oil." Here, the sensors and the data go back to the aircraft engine manufacturer. And they can say, "Well, the one that's been flying in the Middle East is exposed to sand." So that needs to be proactively maintained at a much shorter interval. And the one that's been flying across the Atlantic, that gets very little gunk in it, can have a much larger maintenance window. So you can optimize things in a way that the current capabilities wouldn't allow you to. >> And they showed an example of that with the Arctic Wind pilot project, which is very interesting. >> Yeah. Where it showed windmills, and not just the wind farm. You saw the wind farm, but you also see the different wear and tear, or the different optimizations of individual windmills. >> Martin: Right. >> And that's pretty interesting. Because you can also reorient them based on climate conditions, microclimate conditions. >> Exactly. So last topic I wanted to dig in with you today is blockchain. So you and I chatted about this, kind of chatted about... What is blockchain, this distributed ledger technology? In the simplest definition, a reliable record of who owns what, and who transacts what. So from what we heard today, and from our conversation, it seems like maybe SAP is dipping a toe into the water here. Give us a little bit of insight about what it is they're doing with blockchain, and maybe a couple of key use cases that they shared in supply chain, for example. >> Okay. So the definition you gave, I think distills it really well, with one caveat. Which is, if it's a record of who owns what, who's done what, in the past we needed an intermediary to do that. The bank. Like, when you're closing on your house, you know, someone puts the money in, you know, someone signs the contract. And only when both are done does it exchange hands. With a blockchain, you wouldn't need someone in the middle because the transaction's not complete until, on one part of the ledger, someone has put the money in, and, on the other part, someone's put the title in. 
And, not to sound too grandiose, but I've heard people refer to this as the biggest change in how finance and trust operates since Italian double-entry bookkeeping was invented in, like, the 1300s, or somewhere way, way back. And so, if we take it to a modern usage scenario, we could take... foodstuffs that are grown, let's say in Southeast Asia, they get put in a container that's locked. And then we can know that it's tamper-proof, because any attempt to open that would be reflected as a transaction in the blockchain. There are other, probably better, examples, but the idea is, we can have trust in so many more scenarios without having a middleman. And so the transaction costs change dramatically. And that allows for much more friction-free transactions and business processes than we ever thought possible. Because having someone like a bank or a lawyer in the middle is expensive. >> Right. And I'm glad that you kind of brought that back to trust as we wrap up. That was kind of the key theme that we heard today. >> Gilbert: Yeah. >> And a lot of great announcements. So George, thanks so much for spending the day with me, analyzing day one of SAP Sapphire Now 2017. >> Gilbert: Thank you, Lisa. >> And we thank you for watching. George and I will be back tomorrow analyzing day two and talking about great things that are going on, again, coverage from SAP Sapphire Now 2017. For George Gilbert, I'm Lisa Martin. We'll see you next time. (fanfare)
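As a rough companion to the ledger discussion above, the following toy sketch shows the tamper-evidence property in miniature: each entry carries the hash of the previous one, so altering any past record invalidates the rest of the chain. It is a single-process illustration only; a real blockchain adds distribution and consensus, which this deliberately leaves out.

```python
# Minimal hash-chained ledger: a toy illustration of tamper evidence,
# not a distributed blockchain (no peers, no consensus).
import hashlib
import json


def block_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def append_block(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": block_hash(record, prev_hash)})


def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["record"], block["prev_hash"]):
            return False
        prev_hash = block["hash"]
    return True


ledger: list = []
append_block(ledger, {"event": "container sealed", "location": "Southeast Asia"})
append_block(ledger, {"event": "container arrived", "location": "New Jersey"})
print(verify(ledger))                              # True

ledger[0]["record"]["event"] = "container opened"  # tamper with history
print(verify(ledger))                              # False: the chain no longer validates
```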

Published Date : May 16 2017

K Young, Datadog | AWS Summit SF 2017


 

>> Voiceover: Live from San Francisco, it's The Cube. Covering AWS Summit 2017. Brought to you by Amazon Web Services. >> Hi, welcome back to The Cube. We are live in San Francisco at the AWS Summit. We've had a great day so far. I'm Lisa Martin here with my co-host George Gilbert. We are very excited to be joined by Datadog. K Young, the Director of Strategic Alliances from Datadog, welcome to The Cube. >> Thank you, hi. Glad to be here. >> So, tell us, besides loving your shirt, as I've already told you, tell us and our viewers a little bit about who Datadog is and what you do. >> Alright, so Datadog does infrastructure monitoring and application performance monitoring. So what that means is we're able to not only look at your hosts and the resources they have available to them, meaning CPU and memory and that sort of thing, but also all the software that's running on top of it. So, if it's off-the-shelf software, like a database, like Postgres, or maybe it's NGINX, we understand over 200 different off-the-shelf types of software, integrate with them directly so all you have to do is turn on those integrations, and we can tell you whether those pieces of software are performing at the rate that they ought to, with a sufficiently low number of errors. That's the infrastructure monitoring side of things. Then application performance monitoring is where you can actually trace execution of requests, individual requests, across different services, or microservices, and tell where time is being spent and track metadata so that in a forensic case, you can go back and determine, oh this type of call is producing a lot of errors. Oh, and those errors are coming from here, and then, you know, maybe a lot of time is being spent here, and then because Datadog also does infrastructure monitoring, drill down into, okay well, what's happening under the hood? Maybe we're having problems because our infrastructure itself is misbehaving in some way. >> You have some pretty big customers: Salesforce, Airbnb, Samsung. I was just reading yesterday, an article that was published, that you've been, Datadog, in the top five businesses profiled by IDC as the multi-cloud management vendors to look out for. So, some pretty big accolades, some pretty big customers. How long have you been in business? >> K Young: Since 2010. >> Lisa: 2010. And tell us about what you're doing with Amazon. >> What we're doing with Amazon. So, let's see, where to begin. Amazon, a lot of people come to Datadog when they have complex systems to manage, meaning highly dynamic, or high scale, or they've adopted Docker, and their infrastructure is changing frequently. More frequently than infrastructure used to change ten years ago. Because Datadog makes it easy or ... Easy, possible even, to make sense of what's happening, even as your infrastructure changes on an hourly basis. So, a lot of customers come to us around the time they're interested in using dynamic infrastructure. Sometimes that's on Amazon, and sometimes that's when you're On-Prem but you're adopting Docker, for example, or microservices. We get a lot of business on Amazon. I think it's fair to say Amazon loves us, because it makes it so much easier to use their service and to adopt their service. And we're sort of the de facto infrastructure monitoring service for Amazon. >> So, you're talking about containers, microservices, hyperscale.
Is there a break with earlier monitoring and management software that didn't handle the ephemeral nature of applications and infrastructure? Is that the change? >> Yeah, that's basically it. Ten years ago, you as an assistant administrator or operations person, would have known the names of every one of your servers, and you kind of treat them affectionately. "Oh, you know, old Roger is misbehaving again, we got to give it a reboot." These days you don't know, in many cases, how many servers you have, much less what's running on them. So, it used to be that you could set up monitoring where you say, "Okay, I need to look at these things. They should be doing these set of tasks." And you set it up and basically forget it for six months or a year. Now, what's happening on any given machine or what's inside of a container, is churning very, very frequently. And so, to make sense of that, you have to use tags. So to tag all of your infrastructure with what it's doing, maybe what environment it is, like if it's staging or production, whether it's in AWS or On-Prem. Maybe it's a part of a build. And then you can look at your infrastructure and its performance through those lenses. You don't have to think in advance, "Oh, I'm going to want to know what's happening in US-East-1 in production with build number 1180." You can just do that on the fly with Datadog. And that's the sort of thing that we make possible. It's necessary for modern applications and modern services, that really wasn't possible before. >> So, it sounds like it's fairly straightforward at the infrastructure level to know what metrics and events you want to collect, in the sense that, you know, CPU utilization, memory utilization and, you know, maybe even a database number of connections and query time, but as you move up at the application level, the things that you want to ask could become very different between apps. >> K Young: Yeah. >> And then very different across Cloud or On-Prem. >> Yeah, that's right. So, there's sort of two classes of different things you could want to ask. Datadog accepts totally custom metric, so we know about, as I said, 200 different technologies, and we can collect everything automatically. But then, you're going to have your own application and you're going to want to send us things that are specific to your business. We take those just as well. So, for example, I think we have one customer who tracks when cash register drawers open or close. You know, that's not built in, but they can send those metrics to us. They get graphed the same way. We can set alerts on it the same way. We can use sophisticated machine learning to make projections about how we expect those patterns to be in the future, and if the cash registers don't open at the right rate, we can let somebody know that something has gone wrong. So, we can collect any kind of metrics. Then on top of that, we've got application performance monitoring. Right, so that's where you've written custom code, and Datadog, since it's already running on all of your servers, can track requests as it moves from service to service, or between microservices, and recompile that request into a visualization that will show you everything that happened, how long it took, and allows you to drill in and get metadata about each thing. So, you can actually reconstruct where time is going or whether there are problems. >> Why don't I ask you about some of the trends? 
As I mentioned a minute ago reading that article, or the mention of Datadog by IDC as one of the top five multi-cloud management vendors. What are some of the trends that you were seeing with respect to hypercloud, multi-cloud? You know, we've heard some conversation today from AWS, but I'd love to get your feedback, as the Director of Strategic Initiatives, what are you seeing? >> So, the trend that ... I'm going to answer this, but the trend that we were seeing a few years ago was more and more people were adopting Cloud, period. And that's continued and continued and continued. 18 months ago, if you went and talked to a large financial services organization and you told them, we do monitoring. Okay, they're interested. Well, we run only in the Cloud, so you actually have to send your data to the Cloud. They'd show you the door very politely. And now, they say, "Oh well, we're going to the cloud, now, too." It's a great place to be. Now, we're seeing organizations of all sizes, all types, are in the Cloud. So, the next leading trend is containerization and microservices. So, we actually published a Docker adoption report. We've done it three times now. We refreshed it yesterday. We do it about every six months, and we take a look at all of the usage that we can see. Because we have this somewhat unique vantage point of being able to see tens of thousands of customer's usage, real usage, of infrastructure, and look at, okay, which percent are using Docker? When they use it, do they dabble with it? Do they fully adopt it? Do they eventually abandon it? What are they running on it? So, we published a very long report. Anyone who's interested can actually Google "Docker adoption" and we'll be the top hit there. We've got eight different fact that talk about how quickly it's being adopted. Docker adoption is really quite remarkable. We're seeing a 40% growth in true adoption, not just dabbling, since last year. At the same time, we've seen a more than 100% increase, a more than doubling, of the companies that use Docker, that are using orchestrators, like Kubernetes, to manage even more sophisticated and rapidly changing fleets of machines. And that's really meaningful, because orchestration with containers really enables microservices, which enables Devox, which enables people to move quickly with very little friction and own specific parts of a stack. >> Does that mean that their On-Prem operations are beginning to look more and more in terms of processes like the Clouds? That it's not just a VM, but they're actually orchestrating things? >> Yes, it does. And people will run orchestration on top of the Cloud, or they'll run it On-Prem. But yeah, it's exactly the same. It's the same idea. If you're On-Prem you have a physical machine, you're running several containers in it, and they can just be very fluid and dynamic. >> And then how does machine learning ... How do you fit machine learning into the, whether it's at the infrastructure level or at the application performance management level, do you run it and get a baseline of what's normal? Or ... >> So there's some very deep math behind what we do, so we're able to project where metrics ought to be in the future. Across any number of different categories or tags that you give us, it's important that we do that very accurately 'cause we don't have false positives in our alerts, meaning we don't want to wake people up unnecessarily. We also don't want to have false negatives, meaning we don't want not alert when we should have. 
So there's a lot of math that goes into that and we can take care of very complex periodicity even while trends are happening within metrics, and doing that at scale, so it happens in real time is a challenge, but one that we're very proud of our solution. >> So you've been able to really derive some differentiation in the market. One of the things I was also reading was that a lot of the business, I mentioned some of those great brands, is in the U.S. and your CIO has been quite vocal about wanting to change that. What's happened in the last year, maybe with big rounds of Fund-Me raise, that's going to help you get more global as even Amazon was talking about expansion and geographies this morning? >> Well so it's even been a while since we've raised money, a year and a half now, I guess, but the company is doing so well. It's a great place to be. The company's doing so well that we're just able to expand our operations and look bigger and bigger. Our two founders are actually French, or they were born in France, at any rate. And so we have a Paris office and we're moving pretty aggressively into Europe now. >> Lisa: Fantastic. >> One question on, again, the hybrid-cloud migration. Whether it's On-Prem to, say, Azure, or On-Prem to Azure and Amazon, would the use of Datadog make it easier for the customer to, essentially, run the same workloads on either of the Clouds? >> Absolutely. So we see a lot of people coming to Datadog at the moment when they need to move from pure On-Prem to maybe hybrid or maybe fully into the Cloud. Because you can set up Datadog to look at both those environments and understand the performance characteristics and then move over bytes of into the Cloud and make sure that nothing's falling apart and that everything is behaving exactly as you expect. >> And then how about for those who say, "Well, we want to be committed to two Clouds, because we don't want to be beholden." >> K Young: Right. >> Do you help with that? >> Yeah, we don't help with literally, like, data movement, which is sometimes one of the challenges. >> But in managing, it's sort of pane of glass? >> Yes, exactly. It's all one pane of glass and you can take ... Once metrics are in Datadog, it doesn't really matter where they came from, you can overlay requests per second or latency and frame Google's Cloud right alongside latency that you're seeing in AWS on the same graph or next to each other, but you can set alerts if they deviate too much from each other. >> So it's kind of an abstraction layer or at least a commonality that customers would be able to have those applications and different clouds from different providers and be able to see the performance of the application and the infrastructure. And so one last question for you, as we're getting ready up to wrap here, you know there's a lot of debate about hybrid-cloud and there's reports that say in the next few years, companies will have to be multi-cloud, just look at the Snap and IPO filing from a couple months ago. Big announcement. Two billion dollars over five years with Google. And then, revise that S1 filing to announce a billion dollar deal with Amazon. >> K Young: Yeah. >> So I'm just curious. Are you seeing that maybe with the enterprises, like a Snap, more and more that, by default, whether it's for redundancy of infrastructure operations, is that a trend that you're also seeing? That you're quite well-positioned to be able to facilitate? >> Yeah, we're definitely seeing ... 
You know, it's clear that Amazon is in the commanding position, for sure, but we are definitely seeing more and more interest in actual action and other Clouds as well. >> Fantastic. Well, we thank you first of all for being on the program today. Great. Congratulations on the success that you've had with Amazon, with others, and with the market differentiation. Congrats on expanding globally as well, and we look forward to having you back on the program. >> Right. Well, thanks very much for having me. >> Excellent. So K Young, Director of Strategic Alliances from Datadog. On behalf of K, my co-host George Gilbert, I'm Lisa Martin. You're watching The Cube live from the AWS Summit in San Francisco, but stick around 'cause we're going to be right back. (techno music) (dramatic music)
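The tagged custom-metric workflow K Young describes above (the cash-register example) can be sketched roughly as follows, assuming the open-source datadogpy client and a locally running Datadog agent listening on the default DogStatsD port. The metric and tag names here are made up for illustration.

```python
# Illustrative only: emits a custom metric with tags via DogStatsD.
from datadog import initialize, statsd

# Assumes a Datadog agent is running locally with DogStatsD on its default port.
initialize(statsd_host="127.0.0.1", statsd_port=8125)


def record_drawer_open(store_id: str, env: str = "production") -> None:
    # Tags let the same metric be sliced later by environment, store, build, etc.
    statsd.increment(
        "retail.cash_register.drawer_opened",
        tags=[f"env:{env}", f"store:{store_id}"],
    )


record_drawer_open("store-042")
```

Once a metric like this is flowing, the graphing, alerting, and forecasting K Young mentions apply to it the same way they do to the built-in integrations.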

Published Date : Apr 20 2017

Bradley Wong, Docker & Kiran Kamity, Cisco - DockerCon 2017 - #theCUBE - #DockerCon


 

>> Narrator: From Austin, Texas, it's theCUBE covering DockerCon 2017, brought to you by Docker and support from its ecosystem partners. (upbeat music) >> Hi, and we're back, I'm Stu Miniman, and this is SiliconANGLE's production of theCUBE, here at DockerCon 2017, Austin, Texas. Happy to have on the program Kiran Kamity, who was CEO of ContainerX, which was acquired by Cisco. And you're currently the senior director and head of container products at Cisco. And also joining us is Brad Wong, who is the director of product management at Docker. Gentlemen, thank you so much for joining us. >> Brad: Thanks for having us. >> Kiran: Thank you, Stu. >> So Kiran, talk a little bit about ContainerX, you know, bring us back to, why containers, you know, why you helped start a company with containers, and then to be acquired by a big company like Cisco. >> Yeah, it was actually late 2014 when Pradeep and I, my co-founder from ContainerX, started brainstorming about, you know, what do we do in the space and the fact that the space was growing, and my previous company called RingCube, which was sold to Citrix, where we had actually built a container between 2006 and 2010. So we wanted to build a management platform for containers, and in a way there was a little bit of an overlap with Docker Datacenter, but we were focusing mostly on the tenancy aspects of it. Bringing in concepts like VMware DRS into containers, et cetera. And we were acquired by Cisco about eight months ago now, and the transition in the last eight months has been fantastic. >> Great, and Brad, it's your first time on theCUBE, so give us your background, what brought you to Docker? >> Yeah, so actually before Docker I was, actually, a veteran of Cisco, interestingly enough. Many different ventures in Cisco, most recently I was actually part of the Insieme Networks team, focusing on software-defined networking, and Application Centric Infrastructure. Obviously I saw a pretty big trend in the infrastructure space, that the future of infrastructure is being led by applications and developers. With that I actually got to start digging around with Docker quite a lot, found some good interest, and we started talking, and essentially that's how I ended up at Docker, to look at our partner ecosystem, how we can evolve that. Two years ago now, actually. >> I think two years ago Docker networking was a big discussion point. Cisco's been a partner there, but bring us up to speed if you would, both of you, on where you're engaging, on the engineering side, customer side, and the breadth and depth of what you're doing. >> You're right, two years ago, networking was in quite a different place. We kicked it off with acquiring a company back then called SocketPlane, which helped us really define-- >> Yeah and we know actually, ---- and ----, two alums, actually I know those guys, from the idea to starting the company, to doing the acquisition was pretty quick for you and for them. >> Right, and we felt that we really needed to bring on board a good solid networking DNA into the company. We did that, and they helped us define what a successful model would be for networking, which is why they came up with things like the container networking model, and libnetwork, which then actually opened the door for our partners to then start creating extensions to that, and be able to ride on top of that to offer more advanced networking technologies like Contiv for example.
>> Contiv was actually an open source project that was started within Cisco, even before the ContainerX acquisition. Right after the acquisition happened, that team got blended into our team and we realized that there were some real crown jewels in Contiv that we wanted to productize. We've been working with Docker for the last six months now trying to productize that, and we went from alpha to beta to GA. Now Contiv is GA today, and it was announced in a blog post today, and it's actually a 100% open-source networking product that Cisco TAC and Cisco Advanced Services have offered commercial support and services support for. It's actually a unique moment, because this is the first 100% open-source project that Cisco TAC has actually offered commercial support for, so it's a pretty interesting milestone I think. >> I think also with that, we also have it available on Docker Store as well. It's actually the first Docker networking plug-in that's been certified as well. We're also pretty happy to have that on there as well. >> Yeah. >> Anything else for the relationship we want to go into beyond those pieces? >> We also saw that there were a lot of other great synergies between the two companies as well. The first thing we wanted to do was to look at how we can also make it a much better experience for joint customers to get Docker up and running, Docker Enterprise Edition up and running on infrastructure, specifically on Cisco infrastructure, so Cisco UCS. So we also kicked off a series of activities to test and validate and document how Docker Enterprise Edition can run on Cisco UCS, Nexus platforms, et cetera. We went ahead with that and a couple months later we brought out, jointly, two Cisco Validated Designs for Docker Enterprise Edition. One on Cisco UCS infrastructure alone, and the other one jointly with NetApp as well, with the FlexPod Solution. So we're also very very happy with that as well. >> Great. Our community, I'm sure, knows the CVDs and what they are out there. UCS was originally designed to be the infrastructure for virtualized environments. Can you walk me through what the significant differences are there, or anything kind of changing, to move to containers versus UCS for virtualized environments? >> The goal with that, UCS is essentially considered a premium kind of server infrastructure for our customers. Not only can they run virtual environments today, but our goal is, as containers become mainstream and evolve to being a first-class citizen alongside VMs, we have to provide our customers with the solution that they need. And a turnkey solution from a Cisco standpoint is to take something like a Docker stack, or other stacks that our customers adopt, such as Kubernetes or other stacks as well, and offer them a turnkey kind of experience. So with Docker Data Center what we have done is the CVD that we've announced so far has Docker Data Center, and the recipe provides an easy way for customers to get started with UCS on Docker Data Center so that they get that turnkey experience. And with the MTA program that was announced today at the keynote, that allows Cisco and Docker to work even more closely together to have not just the products, but also provide services to ensure that customers can completely sort of get started very very easily with support from advanced services and things like that. >> Great, I'm wondering if you have any customer examples that you can talk through.
If you can't talk about a specific logo, maybe you can talk about... Or if there are key verticals that you see that you're engaging first, or what can you share? >> We've been working on joint customer evals, actually a couple of them. Once again I don't think we can point out the names yet. We haven't fully disclosed, or cleared it with their PR. Definitely in financials. Especially the online financials, a significant company that we've been working with jointly that has actually adopted both Contiv, and is actually seeing quite a lot of value in being able to take Docker, and also leverage the networking stack that Contiv provides. And be able to not just orchestrate networking policies for containers, but the other thing that they want to do is to have those same policies be able to run on cloud infrastructure, like AWS for example. So they obviously see that Docker is a great platform to enable their portability between on-premises and also public cloud. But at the same time be able to leverage these kinds of tools that make that transition and that move a lot easier, so they don't have to re-think their security and networking policies all over again. That's actually been a pretty good use case, I thought, of the joint work that we did together with Contiv. >> Some of the customers that we've been talking to, in fact we have one customer, that I don't think I'm supposed to say the name just yet, that has rolled out Contiv with the Docker runtime. In five production data centers already. And these are the kind of customers that actually take to the advanced networking capabilities that Contiv offers so that they can get comprehensive L2 networking, L3 networking. The monitoring tools that they currently use will be able to address the containers, because the L2, the L3 networking capabilities allow each container to have an IP address that is externally addressable, so that the current monitoring tools that you use for VMs et cetera can completely stay relevant, and be applicable in the container world. If you have an ACI fabric, that continues to work with containers. So those are some of the reasons why these customers seem to like it. >> Kiran, you're relatively new into Cisco, and you were at a software company. Many people still think of Cisco as a networking company. I've heard people say, derogatorily, it's like, "Oh, they made hardware-defined networking when they rolled out some of this stuff." Tell us about, you talk about an open source project that you guys are doing. I've talked to Lou Tucker a number of times. I know some of the software things you guys are doing. Give us your viewpoint as to your new employer, and how they might be different than people think of as the Cisco that we've known for decades. >> Cisco, of course, has, you know, several billion dollars of revenue coming in from hardware and infrastructure. And networking and security have been the bread and the butter for the company for many, many years now. But as the world moves to Cloud-Native becoming a first-class citizen, the goal is really to provide complete solutions to our customers. And if you think of complete solutions, those solutions include things like networking, things like security. Including analytics, and complete management platforms. At the same time, at the end of the day, the customers want to come to peace with the fact that this is a multi-cloud world. Customers have data centers on premises, or on hosted private cloud environments.
They have workloads that are running on public clouds. So with products like CloudCenter, our goal is to make sure that whatever they, the applications that they have, can be orchestrated across these multiple clouds. We want to make sure that the pain points the customers have around deploying whole solutions include easy set-up of products on infrastructure that they have, and that includes partnerships like UCS, or running on ACI or Nexus. We want to make sure that we give that turnkey experience to these customers. We want to make sure that those workloads can be moved across and run across these different clouds. That's where products like CloudCenter come in. We want to make sure that these customers have top-grade analytics, which is completely software. That's where the AppDynamics acquisition comes in. And we want to make sure that we provide that turnkey experience with support in terms of services. With our massive services organization, partners, et cetera. We view this as our job is to provide our customers what they need in terms of the end solution that they're looking for. And so it's not just hardware, it's just a part of it. Software, services, et cetera, complement it. >> Alright, Brad, last question that I have for you. In the keynote yesterday, I couldn't count how many times the word ecosystem was used. I think it was loud and clear that everybody there, I think it was like, you know, Docker will not be successful unless its partners are successful, kind of vice versa. When you look at kind of the product development piece of things, how does that resonate with you and the job that you're doing? >> We basically are seeing Docker become more of a, more and more of a platform as evidenced by yesterday's keynote. Every platform, the only way that platform's going to be successful is if we can do great, we have great options for our partners, like Cisco, to be able to integrate with us on multiple different levels, not just in one place. The networking plug-in is just one example. Many, many other places as well. Yesterday we announced two new open source initiatives, LinuxKit and also the Moby project. You can imagine that there's probably lots of great places where partners like Cisco can actually play in there, not just only in the service fees, but maybe also in things like IoT as well, which is also a fast-emerging place for us to be. And all the way up to day-two type of monitoring, type of environment as well, where we think there's a lot of great places where once again, options like AppDynamics, Tetration Analytics can fit in quite nicely with how you take applications that have been migrated or modernized into containers, and start really tracking those using a common tool set. So we think those are really, really good opportunities for our ecosystem partners to really innovate in those spaces, and to differentiate as well. >> Kiran, I want to give you the final word, take-aways that you want the users here, and those out watching the show, to know about, you know, Cisco, and the Docker environment. >> I want to let everybody know that Cisco is not just hardware. Our goal is to provide turnkey complete solutions and experiences to our customers.
And as they walk through this journey of embracing Cloud-Native workloads and containerized workloads, there are various parts of the problem, ranging all the way from hardware, to analytics, to networking, to security, and to help with services. Cisco as a company is here to offer that help, and to make sure that customers can walk away with turnkey solutions and experiences. >> Kiran and Brad, thank you so much for joining us. We'll be back with more coverage here. Day two, DockerCon 2017, you're watching theCUBE.
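To make the point above about externally addressable, per-container IP addresses concrete, here is a minimal sketch using the Docker SDK for Python. It uses the built-in macvlan driver as a stand-in for what a fabric-integrated plugin like Contiv provides; the subnet, gateway, parent interface, and image names are illustrative assumptions that would need to match the local network, and this is not Contiv's actual configuration syntax.

```python
import docker

# Connect to the local Docker engine (assumes the Docker SDK for Python is installed).
client = docker.from_env()

# Create a network whose addresses come straight from the physical LAN, so each
# container gets an IP that existing VM-era monitoring tools can reach directly.
# The macvlan driver stands in here for a fabric-integrated plugin such as Contiv;
# subnet, gateway, and parent interface are placeholders for the local environment.
ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.1.2.0/24", gateway="10.1.2.1")]
)
net = client.networks.create(
    "prod-net",
    driver="macvlan",
    options={"parent": "eth0"},
    ipam=ipam,
)

# Attach a container to that network and read back its externally routable address.
web = client.containers.run("nginx:alpine", name="web", network="prod-net", detach=True)
web.reload()
print(web.attrs["NetworkSettings"]["Networks"]["prod-net"]["IPAddress"])
```

The design point from the discussion above is that because the address is routable at L2/L3, monitoring and policy tooling built for VMs can keep working unchanged against containers.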

Published Date : Apr 19 2017

ENTITIES

Entity | Category | Confidence
Brad | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Kiran | PERSON | 0.99+
Kiran Kamity | PERSON | 0.99+
Brad Wong | PERSON | 0.99+
Lou Tucker | PERSON | 0.99+
100% | QUANTITY | 0.99+
two companies | QUANTITY | 0.99+
Contiv | ORGANIZATION | 0.99+
ContainerX | ORGANIZATION | 0.99+
2006 | DATE | 0.99+
Docker | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Austin, Texas | LOCATION | 0.99+
RingCube | ORGANIZATION | 0.99+
2010 | DATE | 0.99+
both | QUANTITY | 0.99+
late 2014 | DATE | 0.99+
Stu | PERSON | 0.99+
SocketPlane | ORGANIZATION | 0.99+
EWS | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
DockerCon 2017 | EVENT | 0.99+
today | DATE | 0.99+
#DockerCon | EVENT | 0.99+
each container | QUANTITY | 0.99+
Two years ago | DATE | 0.99+
one customer | QUANTITY | 0.99+
two years ago | DATE | 0.99+
UCS | ORGANIZATION | 0.99+
ACI | ORGANIZATION | 0.99+
yesterday | DATE | 0.98+
Citrix | ORGANIZATION | 0.98+
Cisco TAC | ORGANIZATION | 0.98+
USC | ORGANIZATION | 0.98+
one example | QUANTITY | 0.98+
Yesterday | DATE | 0.98+
Docker | TITLE | 0.98+
one place | QUANTITY | 0.98+
Docker Enterprise Edition | TITLE | 0.98+
Day two | QUANTITY | 0.97+
two alums | QUANTITY | 0.97+
Pradeep | PERSON | 0.97+
yesterdays | DATE | 0.97+
Insieme Networks | ORGANIZATION | 0.97+
five production data centers | QUANTITY | 0.97+
One | QUANTITY | 0.97+

Solomon Hykes, Docker - DockerCon 2017


 

>> Voiceover: Live from Austin, Texas. It's theCUBE, covering DockerCon 2017, brought to you by Docker and support from its Ecosystem partners. >> Hi, I'm Stu Miniman and joining me, my co-host for the second day of theCUBE's program, Jim Kobielus. Really excited to have not only the founder of Docker, Solomon Hykes, he's also the CTO, Chief Product Officer, did some keynotes here, all over the place. So, Solomon, thank you so much, thanks for havin' us. Congratulations on all the progress and welcome back to theCUBE. >> Thanks a lot! It's a lot of fun! >> So many things to talk about, but let's start with you. How ya doin'? I'm sure there's so much that went into this week. What are you most proud of? What are you most excited about these days? >> Where to start? The cool thing, for me, about DockerCon is I focus on the keynote. We just package up the nice story, try to explain what we're doing, where we're going, and that's a pretty massive team effort. I think it's 30 of us for months preparing, deciding what we want to talk about, working on demos, pulling all-nighters. It's just really fun to see a keynote go from nothing to a really nice, fun story. Then I get to show up and discover all the other cool stuff. I'm like everyone else. I just marvel at the organization, the crowd, the energy. I'm a happy camper right now. >> It's interesting, some of the dynamics in the industry. Okay, what's the important part? Who contributes to what? What fits where? Two years ago we had the hugging it out as to the runtime and had the Open Source Foundation step in. Big thing at the keynote yesterday, two big things: it was the Moby project and LinuxKit. Can you, maybe, unpack for our audience a little bit? What is Docker, the company? What's the open source? Who are some of the main players? It was the whole keynote, so we don't have time to get into it. What's real, and what was there? >> You're right, that was the big announcement, the Moby Project. Basically, in a nutshell, we launched Docker and we made it a product and an open source project, all rolled into one. We just kind of adopted this hybrid model, building a product that would just help people be more efficient, developers and ops, and at the same time, we would develop that in the open. That really helped us. It played a part in the emergence of this huge ecosystem. It was a big decision for us. Over time, both grew. Docker grew as a product, and it grew as an open source project. So over time we had to adapt to that growth. On the open source side that meant gradually splitting smaller projects out of the main one. Now we have dozens of projects, literally. We got containerd. We got SwarmKit. We got InfraKit. We got all these components, and each of those is a project. Then we integrate them. What we're doing now is we're completing that transformation and making sure there's a place for open source collaboration, free-for-all, openness, modularity, try new things, move fast, break things maybe. Then there's the product that integrates, takes the best parts, integrates them together, makes sure they're tested, they're solid, and then ships that to developers and customers. Basically we're saying, Moby is for open source collaboration. It's our project and all of it. And Docker is the product that integrates that open project into something that people can consume that's simple. It's two complementary parts to our platform. >> Could you talk a little bit about, there's kind of that composable nature of what you're building there.

There's what Docker will build from it, and I think you've got a couple of examples of some of your partners. What's going to happen in the Cloud? What's going to happen with some of these others? Walk us through one of those. >> Everything about Docker's modular. So really, if you installed Docker for your favorite platform, whether it's the Mac, Windows, your favorite Cloud provider, a Linux server, etc., you're actually installing a product that's an assembly of lots of components. Like I said, these components are developed in the open and then they're assembled. Now with the Moby Project, there's a place to assemble in the open, start the assembly in the open, so that other companies, the broader ecosystem, can collaborate in the assembly, kind of experiment with how things fit together. The really cool thing about that is it makes it way easier to port the platform, to expand it and customize it. So if you're a Cloud provider and you see all the pieces and you think, "Well, I could optimize that. I could add a little bit of magic to make it work even better in my Cloud or in my hardware." Then you can do that in the open. You can do that with a community. Then you can partner with Docker to test it, and certify it, and distribute it as an easy-to-use product. Everything can go faster. >> You mentioned open a lot there. Does that mean that Docker is now closed? There's certain people that are very dogmatic when it comes to open source, so maybe you can parse that for us. >> I think it's the same people that were complaining before that we were confusing our product and an open project. We think of ourselves as having a lot to learn, and there's an ecosystem that's made of a lot of people and companies and projects that have had a lot of experience with openness in the past. We spend most of our time listening, figuring out what the next step should be, and then taking that next step. People told us, "Clarify the relative place of open source collaboration and your product." That's what we did. Now, I'm sure someone's going to say, "I preferred it before." Well, we just have to, at some point, choose. The key thing to remember is, Docker does everything in the open, and then integrates it into a product that you can use. If you don't like the product, if you want an alternative, then you still have all the pieces in the open right now. I would say, no. Not only is Docker not going closed, we're actually accelerating the rate at which we're opening up stuff. >> Personally, I felt it was a nice maturation of what you've done before, which was batteries are included but swappable. But we've taken the next step. It reminds me of those cool little science kits my kids get. Where it's like, oh okay, I could free-build it or I can do it or I could do some other things. >> We use that tagline. It used to be, Docker has batteries included, but swappable. You can make other batteries and we'll swap them into the product. We'll decide what's in there. Now everyone can do the swapping. It's a big free-for-all. Honestly, it's fun to watch.

>> Is there any piece of Docker, the project, outside of core Docker, that Docker the company will refrain from building and will rely on ISVs to build? Or will Docker the company get involved, or reserve for itself the latitude to get involved, in the development of more peripheral pieces of the overall project going forward? >> We spent a lot of time thinking about that. Honestly, there's so many different constraints, we just decided we're going to follow the users, follow the customers. We just want a platform that works and solves people's problems. That's the starting point. From there, we work out the implementation details, what technology to use, the order in which to build things. Also, what makes more sense in the core platform and what makes more sense as an add-on. It's kind of on a case-by-case basis. >> Is there a grand vision document or functional service layered architecture that all of these components of the project are implementing or enabling? In other words, will Docker, as a project, ever be complete, or will it always be open-ended? Will it constantly evolve and possibly broaden in scope continuously, indefinitely? >> If you look at the Moby Project on the one side, with experimentations and all the building blocks, I think that's going to just continuously expand. Really, openness is all about scale. There's only so much one company can build on their own, but if you really show the ecosystem you're serious about really welcoming everybody and allowing for different opinions and approaches, then, honestly, I think there's no limit to how large that project can scale. I think Moby can go into tens of thousands of contributors as open source becomes easier and more accessible, which we're really working on; I think it can go into hundreds of thousands. That's going to take a while. That will, I think, never end growing. For Docker, the product, the company, the reason we've been so successful is that we've worked really hard to focus and be disciplined in what problems we want to solve, so it's a more iterative approach. We would rather solve fewer problems, but solve them really, really well, so that if you're using Docker for developing or going to production, you're really delighted. Just every detail kind of fits together. There's a roadmap, of course. We're going to do more and more. But we don't want to rush trying to do everything. >> Solomon, great progress on all of these pieces. I've got the tough one for you. In the last year or so, Kubernetes has really exploded out there. Lots of your ecosystem is heavily using it. Is it that Docker Swarm and Kubernetes will just be options out there? I look at Microsoft Azure and they're very supportive of both initiatives. Many of your partners are there. How do you guys look at that dynamic and how would you like people to think of that going forward? >> It's a great case study of why we're transitioning to this open project model with Moby. The whole point is that at any given time, Docker, the product, will not be using all of the building blocks out there. It's just not possible. There's too many permutations. So we have to choose. One of these building blocks is orchestration. A year ago when we decided to build an orchestrator, we had really specific opinions on what it should look like, as product builders. We looked around and we decided it needed to be a new kind of building block. So we built SwarmKit for our own use and we integrated it. Now that there's an open project for collaboration, we're throwing SwarmKit in there so that everyone can modify it, extend it, and also replace it with something else. I think the big change now is that you can look at something like Kubernetes, or Rocket as a container runtime. Honestly, I could make a super long list of all the components out there that are really cool and that we don't use in Docker.

Now you can combine them all in Moby in custom assemblies. And we actually demoed that on stage yesterday. We showed taking some pieces from Docker and taking Kubernetes as a piece and plugging it together and saying, "Look, there you go! Weekend project." I think we're going to see a lot of convergence and reuse of ideas and code, especially in the orchestration piece. I think over time the differences between Kubernetes, SwarmKit, and others will really diminish. We'll just integrate the bits and pieces that make the most sense. I don't really think of Kubernetes as a competitor or a problem. I think of it as another cool component in the Moby ecosystem. Yeah, I think it's a lot of cool stuff. >> I tell ya, the Kubernetes community is just so thrilled that containerd is now open source. It really solves that issue, and it hasn't been something I've heard a lot coming into the show. It's one of the themes we wanted to look at, and it hasn't been something that is like, oh boy, fight, war, anything like that. Hey, congrats on that! I want to turn back to your roots there. I think about dotCloud to Docker. It's a lot about the application modernization. Fast forward to today, Ben's up on stage talking about the journey. How do we take your legacy applications and wrap them in? What do you think about that kind of progression? We like that spectrum out there to help customers, at least partially, and be able to make changes. But I can't imagine that, when you started Docker, that was one of the use cases that you really thought you'd see. What surprised you? What's changed how you built things? What do you see from customers? >> Actually, you'll find this surprising, but this actually was a use case that we had in mind from the very beginning. I think that was lost in the noise for the first few years in the life of Docker because it became this exciting, new thing. >> Come on, Cloud native, Cloud native! >> Yeah, exactly! Docker has a huge developer community now. We spent a lot of time making it great for devs. The truth is, I used to be a sysadmin. I used to be on call. I'm an ops guy first and we learned how to help developers. Developers are the customer. Docker came out of our ops roots and then it evolved to help the developers. That's something that's now lost in the noise of history. It's a really pragmatic tool. It's built to solve real problems. One design opinion we baked in from the beginning is that it has to allow you to do things incrementally. If Docker forces you to throw away what you have just to get the benefits, then we screwed up. The whole point is that Docker can adapt to what you're doing. For example, you'll see a lot of details in how Docker's designed to allow for stateful applications to run in there, to allow for your own network model to fit. Before Docker, all the container solutions, all the PaaSes, required you to change your app. Even things like port discovery. You had to change the source code. Docker did not require that. It gives you extra things you can do if you want to go further. But the starting point is incremental. Honestly, I'm really glad that now that's resonating, that we're reaching that point in the community where there's a lot of people using Docker interested in that, because for a few years I was worried that that would be missed in the noise of early adopters that don't mind rewriting everything. From the beginning, Docker was not just for Cloud-Native, microservices, Twelve-Factor, etc.
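As a minimal sketch of the incremental path just described, the snippet below packages an existing service as-is and publishes the port it already listens on, using the Docker SDK for Python. The directory name, image tag, and port are hypothetical placeholders.

```python
import docker

client = docker.from_env()

# Build an image from an existing application directory. The Dockerfile there only
# copies the app and declares how to start it; source code, config files, and the
# port the app listens on are left exactly as they were before containerization.
image, build_logs = client.images.build(path="./legacy-app", tag="legacy-app:1.0")

# Run it and publish the same port the app already uses. No service discovery,
# orchestrator, or code changes are needed for this first increment.
container = client.containers.run(
    "legacy-app:1.0",
    name="legacy-app",
    ports={"8080/tcp": 8080},  # container port -> host port
    detach=True,
)
print(container.short_id)
```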
I'm, personally, as a designer of products, as a pragmatist, I'm just happy that we're there. >> How do you see Docker evolving to support more complex orchestrations for data? For hybrid data clouds, environments private and public? You've got the likes of Microsoft, Oracle, and IBM as partners and so forth. They have these complex scenarios now, their customers are at petabyte scale and so forth. Where do you see that going, the data and persistent-storage side of containerization under Docker? >> I think there's a lot of work to do. I think over time we're going to see specialized solutions for different uses of data. Data is such a big word. It's like computing. Just like computing now is no longer considered one category but has specialized, I think data will be the same. I think it's a great fit for this modular Lego approach to the Docker ecosystem. We're going to see different approaches to different data models, and I think we're going to see a lot of modularization and a lot of different assemblies. Again, I think a lot of that will happen in Moby and we'll see a lot of cool, open stuff. We, ourselves, are facing a lot of data-related questions in requests from customers. There's stuff in there already. You've got data volumes. And I think you're going to see a lot more on the data topic in the next year. >> Like containerization of artificial intelligence and deep learning and all that. Clearly, that's very incognito so far because, yeah. >> We're seeing a lot of really cool machine learning use cases using Docker already. OpenAI is all on Docker. We watch what they're doing with great interest. >> Are you a member of that consortium? >> Let's say friends and family (laughs). So OpenAI came out of the Y Combinator ecosystem and Docker is a Y Combinator company. We spend a lot of time with them. I think AI on Docker is a really cool use case. I'm a big fan of that. >> Jim: Cool! Us too! >> Solomon, unfortunately, we're runnin' low on time. The last question I have for you is, there are so many things we can do with Docker now. Here's a bunch of the use cases like, "Oh, I can run lots of applications." Everything from Oracle is in the Store now, things like that. What is the quick win when you're talking to customers, and let's get started? What's the thing that gets them the most excited, that impacts their business the fastest? >> Ya know, it's-- >> And it never comes down to one thing, but, ya know. >> Honestly, we keep talking about Lego. I think it's like asking, what's your favorite Lego toy? I think we're maturing in the model. I think Lego is just the perfect analogy because it's a lot of building blocks. There's more and more, but there's also the sets. I think we're consolidating around a few different sets. There's maybe a dozen main use cases. We're seeing people identify with one, and then we're helping them see a starting point there. Here's a starter set for your problem, and then it clicks. >> Yeah, I hear that, and I can't help but think back. You're the big green platform that all my Legos build on. I can have my space stuff. I can have my farm set. Maybe the Duplos don't quite fit on it. It's the platform helping me to modernize a lot of what we're doing. Solomon Hykes, always a pleasure to catch up. >> Likewise! >> Congratulations on all the progress here, and we look forward to catching up with you the next time! We'll be back. Jim and I will be back with lots more coverage here from DockerCon 2017. You're watching theCUBE. (electronic music)
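And as a small illustration of the "batteries included but swappable" idea discussed earlier in this interview: with the Docker SDK for Python, the network and storage back ends are selected by driver name at creation time, so swapping a battery means changing a string rather than the application. The names below are just the stock in-box drivers; a third-party plugin name could be substituted in the same spots. This is a sketch under those assumptions, not a statement of how any particular plugin is configured.

```python
import docker

client = docker.from_env()

# Networking battery: the default bridge driver; an overlay driver (in swarm mode)
# or any installed third-party network plugin could be named here instead.
net = client.networks.create("app-net", driver="bridge")

# Storage battery: the local volume driver, or a plugin backed by shared storage.
data = client.volumes.create(name="app-data", driver="local")

# The workload itself does not change no matter which batteries were chosen.
app = client.containers.run(
    "nginx:alpine",
    name="app",
    network=net.name,
    volumes={data.name: {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    detach=True,
)
print(app.name, app.status)
```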

Published Date : Apr 19 2017

ENTITIES

Entity | Category | Confidence
Jim Kobielus | PERSON | 0.99+
Solomon | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Jim | PERSON | 0.99+
Solomon Hykes | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Docker | ORGANIZATION | 0.99+
Y Combinator Ecosystem | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Lego | ORGANIZATION | 0.99+
Legos | ORGANIZATION | 0.99+
Austin, Texas | LOCATION | 0.99+
yesterday | DATE | 0.99+
OpenAI | ORGANIZATION | 0.99+
A year ago | DATE | 0.99+
Open Source Foundation | ORGANIZATION | 0.99+
last year | DATE | 0.99+
One | QUANTITY | 0.99+
Docker | TITLE | 0.99+
hundreds of thousands | QUANTITY | 0.99+
Two years ago | DATE | 0.98+
Oracles | ORGANIZATION | 0.98+
today | DATE | 0.98+
each | QUANTITY | 0.98+
both | QUANTITY | 0.98+
DockerCon 2017 | EVENT | 0.97+
one | QUANTITY | 0.97+
second day | QUANTITY | 0.97+
this week | DATE | 0.97+
next year | DATE | 0.97+
both initiatives | QUANTITY | 0.97+
Kubernetes | TITLE | 0.97+
DockerCon | EVENT | 0.96+
a dozen main use cases | QUANTITY | 0.95+
Linux Kit | TITLE | 0.95+
two big things | QUANTITY | 0.95+
one category | QUANTITY | 0.95+
first few years | QUANTITY | 0.95+
Moby | ORGANIZATION | 0.95+
Linux | TITLE | 0.94+
dozens of projects | QUANTITY | 0.94+
Y Combinator | ORGANIZATION | 0.94+
Ben | PERSON | 0.93+
One design opinion | QUANTITY | 0.93+
two complementary parts | QUANTITY | 0.9+
Mac | COMMERCIAL_ITEM | 0.89+
Windows | TITLE | 0.87+
tens of thousands of contributors | QUANTITY | 0.86+
one thing | QUANTITY | 0.86+
dotCloud | TITLE | 0.86+
first | QUANTITY | 0.84+
Swarm Kit | COMMERCIAL_ITEM | 0.83+
Docker - | EVENT | 0.83+
SwarmKit | TITLE | 0.81+
30 of | QUANTITY | 0.81+
theCube | ORGANIZATION | 0.76+
theCUBE | ORGANIZATION | 0.76+

Joe Selle | IBM CDO Strategy Summit 2017


 

>> Announcer: Live from Fisherman's Wharf in San Francisco. It's theCUBE. Covering IBM Chief Data Officer Strategy Summit Spring 2017. Brought to you by IBM. >> Hey, welcome back everybody. Jeff Frick with theCUBE, along with Peter Burris from Wikibon. We are in Fisherman's Wharf in San Francisco at the IBM Chief Data Officer Strategy Summit Spring 2017. Coming to the end of a busy day, running out of steam. Blah, blah, blah. I need more water. But Joe's going to take us home. We're joined by Joe Selle. He is the global operations analytic solution lead for IBM. Joe, welcome. >> Thank you, thank you very much. It's great to be here. >> So you've been in sessions all day. I'm just curious to get kind of your general impressions of the event and any surprises or kind of validations that are coming out of these sessions. >> Well, the general impression is that everybody is thrilled to be here, and the participants, the speakers, the audience members all know that they're at the cusp of a moment of great change in business history. And that is as we graduate from regular analytics, which are descriptive and dashboarding, into the world of cognitive, which is taking the capabilities to a whole other level, many levels actually advanced from the basic things. >> And you're in a really interesting position because IBM has accepted the charter of basically consuming your own champagne, drinking your own champagne, whatever expression you want to use. >> I'm so glad you said that, 'cause most people say eating your dog food. >> Well, if we were in Germany we'd talk about beer, but you know, we'll stick with the champagne analogy. But really, trying not only to build and demonstrate the values that you're trying to sell to your customers within IBM, but then actually documenting it and delivering it, basically it's called the blueprint, in October. We've already been told it's coming in October. So what a great opportunity. >> Part of that is the fact that Ginni Rometty, our CEO, had her start in IBM in the consulting part of IBM, GBS, Global Business Services. She was all about consulting to clients and creating big change in other organizations. Then she went through a series of job roles and now she's CEO, and she's driving two things. One is the internal transformation of IBM, which is where I am, or part of my role is, I should say, reporting to the chief data officer and the chief analytics officer, and their jobs are to accelerate the transformation of Big Blue into the cognitive era. And Ginni also talks about showcasing what we're doing internally for the rest of the world and the rest of the economy to see, because parts of this, other companies can do. They can emulate our road map, the blueprint rather, sorry, that Inderpal introduced, which is going to be presented in the fall. That's our own blueprint for how we've been transforming ourselves, so some part of that blueprint is going to be valid and relevant for other companies. >> So you have a dual reporting relationship, you said. The chief data officer, which is this group, but also the chief analytics officer. What's the difference between the chief data officer and the chief analytics officer, and how does that combination drive your mission? >> Well, the difference really is the chief data officer is in charge of making some very long-term investments, including short-term investments, but let me talk about the long-term investment. Anything around an enterprise data lake would be considered a long-term investment.

This is where you're creating an environment where users can go in, these would be internal to IBM or whatever client company we're talking about, where they can use some themes around self-service, get at this information, create analysis; everything's available to them. They can grab external data. They can grab internal data. They can observe Twitter feeds. They can look at Weather Company information. In our case we get that because we're partnered with The Weather Company. That's the long-term vision of the chief data officer: to create a data lake environment that serves to democratize all of this for users within a company, within IBM. The chief analytics officer has the responsibility to deliver projects that are sort of the leading projects that prove out the value of analytics. So on that side of my dual relationship, we're forming projects that can deliver a result literally in a 10 or 12 week time period. Or half a year. Not a year and a half, but short term, and we're sprinting to the finish, we're delivering something. It's quite minimally scaled. The first project is always a minimally viable product or project. It's using as few data sources as we can and still getting a notable result. >> The chief analytics officer is at the vanguard of helping the business think about use cases, going after those use cases, asking problems the right way, finding data with effectiveness as well as efficiency, and leading the charge. And then the chief data officer is helping to accrete that experience and institutionalize it in the technology, the practices, the people, et cetera. So the business builds a capability over time. >> Yes, scalable. It's sort of an issue of whether it can scale. Once Inderpal and the chief data officer come into the equation, we're going to scale this thing massively. So, high volume, high speed, that's all coming from a data lake, and the early wins and the medium-term wins maybe will be more in the realm of the chief analytics officer. So on your first summary a second ago, you're right in that the chief analytics officer is going around, and the team that I'm working with is doing this, to each functional group of IBM. HR, Legal, Supply Chain, Finance, you name it, and we're engaging in cognitive discovery sessions with them. You know, what is your roadmap? You're doing some dashboarding now, you're doing some first-generation analytics or something, but what is your roadmap for getting to cognitive? So we're helping to burst the boundaries of what their roadmap is, really build it out into something that is bigger than they had been conceiving of, adding the cognitive projects and then program managing this giant portfolio so that we're making some progress and milestones that we can report to various stakeholders like Ginni Rometty or Jim Kavanaugh, who are driving this from a senior, senior executive standpoint. We need to be able to tell them, in one case every couple of weeks, what have you gotten done. Which is a terrible cadence, by the way; it's too fast. >> So in many respects-- >> But we have to get there; every couple of weeks we've got to deliver another few nuggets. >> So in many respects, analytics becomes the capability and data becomes the asset. >> Yes, that's true. Analytics has assets as well, though. >> Paul: Sure, of course. >> Because we have models and we have techniques, and we bake the models into a business process to make it real so people actually use it. It doesn't just sit over there as this really nifty science experiment.

>> Right, but kind of where are we on the journey? It's really still early days, right? Because, you know, we hear all the time about machine learning and deep learning and AI and VR and AI and all this stuff. >> We're patchy, every organization is patchy, even IBM, but I'm learning from being here, so this is the end of day one, I'm learning. I'm getting a little more perspective on the fact that we at IBM are actually, 'cause we've been investing in this heavily for a number of years. I came up through the ranks in supply chain. We've been investing in these capabilities for six or seven years. We were some of the early adopters within IBM. But I would say that maybe 10% of the people at this conference are sort of in the category of, I'm running fast and I'm doing things. So that's 10%. Then there's maybe another 30% that are jogging or fast walking. And then there's the rest of them, so maybe 50%, if my math is right, it's been a long day, who are kind of looking and saying, yeah, I've got to get that going at some point, and I have two or three initiatives, but I'm really looking forward to scaling it at some point. >> Right. >> I've just painted a picture for you of the fact that the industry in general is just starting this whole journey and the big potential is still in front of us. >> And then on the champagne. So you've got the cognitive, you've got the brute and then you've got the Watson. And you know, there's a lot of, from the outside looking in at IBM, there's a lot of messaging about Watson and a lot of messaging about cognitive. How do the two mesh, and do they mesh within some of the projects that you're working on? Or how should people think of the two of them? >> Well, people should know that Watson is a brand, and there are many specific technologies under the Watson brand. So then think of it more as capabilities instead of technologies. Things like being able to absorb unstructured information. So you've heard, if you've been to any conferences, whether they're analytics or data, any company, any industry, 80% of your data is unstructured and invisible, and you're probably working with 20% of your data on an active basis. So, do you want to go after the 80%-- >> With 40% shrinking. >> As a percentage. >> That's true. >> As a percentage. >> Yeah, because the volumes are growing. >> Tripling in size but shrinking as a percentage. >> Right, right. So, just, you know, think about that. >> Is Watson really then kind of the packaging of cognitive, a more specific application? Because there's Watson for Health or. >> I'll tell you, Watson is a mechanism and a tool to achieve the outcome of cognitive business. That's a good way to think of it. And Watson capabilities, that I was just about to get to, are things like reading, if you will. In Watson Health, it reads oncology articles and, you know, once one of them has been read, it's never forgotten. And by the way, you can read 200 a week and you can create the smartest doctor there is on oncology. So, a Watson capability is absorbing information, reading. It's improving its abilities in an automated fashion. So these are concepts around deep learning and machine learning. So the algorithms are either self-correcting or people are providing feedback to correct them. So there's two forms of learning in there. >> Right, right. >> But these are kind of capabilities all around Watson. I mean, there are so many more. Optical character recognition. >> Right. >> Retrieve and rank. >> Right.

>> So giving me a strategy and telling me there's an 85% chance, Joe, that your best move right now, given all these factors, is to do x. And then I can say, well, x wouldn't work because of this other constraint which maybe the system didn't know about. >> Jeff: Right. >> Then the system will tell me, in that case, you should consider y, and it's still an 81% chance of success versus the first, which was at 85. >> Jeff: Right. >> So retrieving and ranking, these are capabilities that we call Watson. >> Jeff: Okay. >> And we try to work those into all the job roles. >> Jeff: Okay. >> So again, whether you're in HR, legal, intellectual property management, environmental compliance. You know, regulations around the globe are changing all the time. Trade compliance. And if you violate some of these rules and regs, then you're prohibited from doing business in a certain geography. >> Jeff: Right. >> It's devastating. The stakes are really high. So these are the kind of tools we want. >> So I'm just curious, from your perspective, you've got a corporate edict behind you at the highest level, and your customers, your internal customers, have that same edict to go execute quickly. So given that you're not in that kind of slow-moving or walking or observing half, what are the biggest challenges that you have to overcome, even given the fact that you've got the highest-level, most senior edict both behind you as well as your internal customers? >> Yeah, well, guess what, it comes down to data. Often, a lot of times, it comes down to data. We can put together an example of a solution that is a minimally viable solution, which might have only three or four or five different pieces of data, and that's pretty neat and we can deliver a good result. But if we want to scale it and really move the needle so that it's something that Ginni Rometty sees and cares about, or a shareholder, then we have to scale. Then we need a lot of data, so then we come back to Inderpal and the chief data officer role. So the constraint on many of the programs and projects is, if you want to get beyond the initial proof of concept, >> Jeff: Right. >> You need to access and be able to manipulate the big data, and then you need to train these cognitive systems. This is the other area that's taking a lot of time. And I think we're going to have some technology and innovation here, but you have to train a cognitive system. You don't program it. You do some painstaking back and forth. You take a room full of your best experts in whatever the process is and they interact with the system. They provide input, yes, no. They rank the efficacy of the recommendations coming out of the system and the system improves. But it takes months. >> That's even the starting point. >> Joe: That's a problem. >> And then you train it over, often, an extended period of time. >> Joe: A lot of it gets better over time. >> Exactly. >> As long as you use this thing, like a corpus of information is built and then you can mine the corpus. >> But a lot of people seem to believe that you roll all this data up, you run a bunch of algorithms and suddenly, boom, you've got this new way of doing things. And it is a very, very deep set of relationships between people who are being given recommendations, as you said, weighing them, voting on them, et cetera. This is a highly interactive process. >> Yeah, it is. If you're expecting lightning-fast results, you're really talking about a more deterministic kind of solution. You know, if/then.

If this is, then that's the answer. But we're talking about systems that understand, and they reason, and they tap you on the shoulder with a recommendation and tell you that there's an 85% chance that this is what you should do. And you can talk back to the system, like my story a minute ago, and you can say, well, it makes sense, but, or, great, thanks very much Watson, and then go ahead and do it. Those systems that are expert systems, that have expertise just woven through them, you cannot just turn those on. But, as I was saying, one of the things we talked about on some of the panels today was that there are new techniques around training. There are new techniques around working with these corpuses of information. Actually, I'm not sure what the plural of corpus is. Corpi? It's not corpi. >> Jeff: I can look that up. >> Yeah, somebody look that up. >> It's not corpi. >> So anyway, I want to give you the last word, Joe. So you've been doing this for a while, what advice would you give to someone kind of in your role at another company who's trying to be the catalyst to get these things moving? What kind of tips and tricks would you share, you know, having gone through it and worked on this for a while? >> Sure. The first thing I would do is, in your first move, keep the projects tightly defined and small, with a minimum of input, and contain your risk and your risk of failure, and make sure that if you do three projects, at least one of them is going to be a hands-down winner. And then once you have a winner, tout it through your organization. A lot of folks get so enamored with the technology that they start talking more about the technology than the business impact. And what you should be touting and bragging about is not the fact that I was able to simultaneously read 5,000 procurement contracts with this tool; you should be saying, it used to take us three weeks in a conference room with a team of one dozen lawyers, and now we can do that whole thing in one week with six lawyers. That's what you should talk about, not the technology piece of it. >> Great, great. Well, thank you very much for sharing, and I'm glad to hear the conference is going so well. Thank you. >> And it's corpora. >> Corpora? >> The answer to the question? Corpora. >> Peter: Not corpuses. >> With Joe, Peter, and Jeff, you're watching theCUBE. We'll be right back from the IBM Chief Data Officer Strategy Summit. Thanks for watching.

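As a toy sketch of the recommend-and-refine loop Joe describes above (a top suggestion with a confidence score, an expert adding a constraint the system didn't know about, and the system falling back to the next-best option), here is a small self-contained example. The candidate actions, scores, and the vetoed constraint are invented for illustration; this is not Watson's actual API or scoring method.

```python
# Toy recommend-and-refine loop: candidates are ranked by confidence, the expert can
# veto one with a constraint, and the system re-ranks among the options that remain.
candidates = [
    {"action": "x", "confidence": 0.85},
    {"action": "y", "confidence": 0.81},
    {"action": "z", "confidence": 0.62},
]

def recommend(options, vetoed):
    """Return the highest-confidence option not ruled out by expert feedback."""
    allowed = [o for o in options if o["action"] not in vetoed]
    return max(allowed, key=lambda o: o["confidence"]) if allowed else None

vetoed = set()
best = recommend(candidates, vetoed)
print(f"Recommend {best['action']} ({best['confidence']:.0%} chance of success)")

# Expert feedback: "x won't work because of a constraint the system didn't know about."
vetoed.add("x")
best = recommend(candidates, vetoed)
print(f"Then consider {best['action']} ({best['confidence']:.0%} chance of success)")
```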
Published Date : Mar 30 2017

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Joe | PERSON | 0.99+
Jeff | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Ginni Rometty | PERSON | 0.99+
Joe Selle | PERSON | 0.99+
GBS | ORGANIZATION | 0.99+
October | DATE | 0.99+
two | QUANTITY | 0.99+
Jim Kavanaugh | PERSON | 0.99+
20% | QUANTITY | 0.99+
one week | QUANTITY | 0.99+
Peter | PERSON | 0.99+
three weeks | QUANTITY | 0.99+
Paul | PERSON | 0.99+
10% | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
85% | QUANTITY | 0.99+
50% | QUANTITY | 0.99+
six lawyers | QUANTITY | 0.99+
six | QUANTITY | 0.99+
first | QUANTITY | 0.99+
Germany | LOCATION | 0.99+
81% | QUANTITY | 0.99+
four | QUANTITY | 0.99+
Global Business Services | ORGANIZATION | 0.99+
12 week | QUANTITY | 0.99+
40% | QUANTITY | 0.99+
One | QUANTITY | 0.99+
two forms | QUANTITY | 0.99+
seven years | QUANTITY | 0.99+
three projects | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
Ginni | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
one dozen lawyers | QUANTITY | 0.99+
one case | QUANTITY | 0.99+
85 | QUANTITY | 0.99+
today | DATE | 0.99+
three | QUANTITY | 0.98+
two things | QUANTITY | 0.98+
a year | QUANTITY | 0.98+
5,000 procurement contracts | QUANTITY | 0.98+
both | QUANTITY | 0.98+
first project | QUANTITY | 0.98+
Twitter | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
Watson | PERSON | 0.98+
Corpa | ORGANIZATION | 0.98+
Fisherman's Wharf | LOCATION | 0.98+
200 a week | QUANTITY | 0.97+
three initiatives | QUANTITY | 0.97+
Watson | TITLE | 0.96+
five different pieces | QUANTITY | 0.96+
first summary | QUANTITY | 0.95+
Wikibon | ORGANIZATION | 0.93+