HORSEMAN and HANLEY Fixed
(upbeat music)
>> Hello everyone, welcome to this special Cube conversation. I'm John Furrier, host of theCube. We're here in Palo Alto. We've got some remote guests. Going to break down the Fortinet vulnerability, which was confirmed last week as a critical vulnerability that exposed a zero-day flaw in some of their key products, obviously FortiOS and FortiProxy, for remote attacks. So we're going to break this down. It's a real vulnerability that was discovered in real time in the industry. Horizon3.ai is one of the companies that was key in identifying this, and they have a product that helps companies detect and remediate and a bunch of other cool things you've heard on theCube here. We've got James Horseman, an exploit developer. Love the title. Got to say, I'm not going to lie, I like that one. And Zach Hanley, who's the chief attack engineer at Horizon3.ai. Gentlemen, first, thank you for joining the Cube conversation.
>> Thank you. It's good to be here.
>> Yeah, thank you so much for having us.
>> So before we get into the whole Fortinet, this vulnerability that was exposed and how you guys are playing into this, I just got to say I love the titles. Exploit Developer, Chief Attack Engineer, you don't see that every day. Explain the titles. Zach, let's start with you. Chief Attack Engineer, what do you do?
>> Yeah, sure. So the gist of it is that there is a lot to do in the cybersecurity world, and we made up a new engineering title called Attack Engineer, because there are so many different things an attacker will actually do over the course of an attack. So we just named them engineers, and I lead the team that helps develop the offensive capabilities for our product.
>> Got it. James, you're the Exploit Developer, exploiting. What are you exploiting? What's going on there?
>> So what I'll do day to day is we'll take N-days, which are vulnerabilities that have been disclosed to a vendor but not necessarily publicly patched yet, or for which no public POC exists. And I'll try to reverse engineer and find them, so we can integrate them into our product and our customers can use them to make sure that they're actually secure. And then if there are no interesting N-days to go after, we'll sometimes search for zero-days, which are vulnerabilities in products that the vendor doesn't yet know about.
>> Yeah, and those are the most critical. Those things can be really exploited and cause a lot of damage. Well James, thanks for coming on. We're here to talk about the vulnerability that happened with Fortinet and their products, the zero-day vulnerability. But first, for the folks, for context: Horizon3.ai is a new startup, rapidly growing. They've been on theCube. The CEO, Snehal, and team have described their product as autonomous pen testing. But as part of that, they also have a different approach to testing environments. So they're constantly putting companies under pressure. Let's get into it. Let's get into this hack. So you guys are kind of like, I call it, the early warning detection system. You're seeing things early because your product's constantly testing infrastructure, okay? Over time, all the time, always on. How did this come about? How did you guys see this? What happened? Take us through.
>> Yeah, sure. I'll start off.
So on Friday, we saw on Twitter, which is actually a really good source of threat intelligence these days, we saw a person release details that Fortinet had sent an advance warning email that a critical vulnerability had been discovered and that an emergency patch was released. And from the details that we saw, we saw that it was an authentication bypass, and we saw that it affected FortiOS, FortiProxy and the FortiSwitch Manager. And we knew right off the bat those are some of their most heavily used products. And for us to understand how this vulnerability worked, and for us to actually help our clients and other people around the world understand it, we needed to get after it. So after that, James and I got on it, and then James can tell you what we did after we first heard.
>> Yeah. Take us through play by play.
>> Sure. So we saw it was a 9.8 CVSS, which means it's easy to exploit, low complexity, and it also kind of gives you the keys to the kingdom. So we like to see those, because they're easy to find, easy to go after; they're big wins. So as soon as we saw this come out, we downloaded some firmware for FortiOS. And the first few hours were really about unpacking the firmware, seeing if we could even get it to run. We got it running as a VMware VMDK file, and then we started to unpack the firmware to see what we could find inside, and that was probably at least half of the time. There seemed to be maybe a little bit of obfuscation in the firmware. We were able to analyze the VMDK files and get them mounted, and we saw that their operating system was compressed. And when we went to decompress it, we were getting some strange decompression errors, corruption errors. And we were kind of scratching our heads a little bit, like, you know, "What's going on here?" "These look like they're legitimately compressed files." And after a while we noticed they had what seemed to be a different decompression tool than what we had on our systems, also in that VMDK. And so we were able to get that running and decompress the firmware. And from there we were off to the races to dive deeper into the differences between the vulnerable firmware and the patched firmware.
>> So the compressed files were hidden. They basically hid the compressed files.
>> Yeah, we're not so sure if they were intentionally obfuscated, or maybe it was just a really old version of that compression algorithm. It was the XZ compression tool.
>> Got it. So what happens next? So take us through. So you discovered, you guys tested. What do you guys do next? How did this thing... I mean, I saw the news, it hit heavily. You know, everyone updated their catalogs for patching. So this kind of hangs out there. There's a time lag out there. What's the state of the security at that time? Say Friday it breaks; over the weekend, potentially a lot of attacks might have happened.
>> Yeah, so they chose to release this emergency pre-warning on Friday, which is a terrible day, because most people are probably already swamped with work or checking out for the weekend. And by Sunday, James and I had actually figured out the vulnerability, well, to make the timeline a little shorter. But generally what we do between when we discover or hear news of the CVE and when we actually have a POC is a lot of what we call patch diffing. That's when we take the patched version and the unpatched version and we run them through a tool that kind of shows us the differences. And those differences are really key insight into, "Hey, what was actually going on?"
"How did this vulnerability happen?" So between Friday and Sunday, we were kind of scratching our heads and had some inspiration Sunday night and we actually figured it out. So Sunday night, we released news on Twitter that we had replicated the exploit. And the next day, Monday morning, finally, Fortinet actually released their PSIRT notice, where they actually announced to the world publicly that there was a vulnerability and here are the mitigation steps that you can take to mitigate the vulnerability if you cannot patch. And they also release some indicators of compromise but their indicators of compromise were very limited. And what we saw was a lot of people on social media, hey asking like, "These indicators of compromise aren't sufficient." "We can't tell if we've been compromised." "Can you please give us more information?" So because we already had the exploit, what we did was we exploited our test Fortinet devices in our lab and we collected our own indicators of compromise and we wrote those up and then released them on Tuesday, so that people would have a better indication to judge their environments if they've been already exploited in the wild by this issue. Which they also announced in their PSIRT that it was a zero-day being exploited in the wild It wasn't a security researcher that originally found the issue. >> So unpack the difference for the folks that don't know the difference between a zero-day versus a research note. >> Yeah, so a zero-day is essentially a vulnerability that is exploited and taken advantage of before it's made public. An N-day, where a security researcher may find something and report it, that and then once they announce the CVE, that's considered an N-day. So once it's known, it's an N-day and once if it's exploited before that, it's a zero-day. >> Yeah. And the difference is zero-day people can get in there and get into it. You guys saw it Friday on Twitter you move into action Fortinet goes public on Monday. The lag between those days is critical time. What was going on? Why are you guys doing this? Is this part of the autonomous pen testing product? Is this part of what you guys do? Why Horizon3.ai? Is this part of your business model? Or was this was one of those things where you guys just jumped on it? Take us through Friday to Monday. >> James, you want to take this one? >> Sure. So we want to hop on it because we want to be able to be the first to have a tool that we can use to exploit our customer system in a safe manner to prove that they're vulnerable, so then they can go and fix it. So the earlier that we have these tools to exploit the quicker our customers can patch and verify that they are no longer vulnerable. So that's the drive for us to go after these breaking exploits. So like I said, Friday we were able to get the firmware, get it decompressed. We actually got a test system up and running, familiarized ourself with the system a little bit. And we just started going through the patch. And one of the first things we noticed was in their API server, they had a a dip where they started including some extra HTTP headers when they proxied a connection to one of their backend servers. And there were, I believe, three headers. There was a HTTP forwarded header, a Vdom header, and a Cert header. And so we took those strings and we put them into our de-compiled version of the firmware to kind of start to pinpoint an area for us to look because this firmware is gigantic. There's tons of files to look at. 
And so having that patch is really critical to being able to quickly reverse engineer what they did to find the original exploit. So after we put those strings into our firmware, we found some interesting parts centered around authorization and authentication for these devices. And what we found was that when you set a specific Forwarded header, the system, for lack of a better term, thought that you were on the inside. So a lot of these systems will have kind of two methods of entry. One is through the front door, where if you come in you have to provide some credentials. They don't really trust you; you have to provide a cookie or some kind of session ID in order to be allowed to make requests. And the other side is kind of through the back door, where it looks like you are part of the system itself. So if you want to ask for a particular resource, and you look like you're part of the system, they're not going to scrutinize you too much. They'll just let you do whatever you want to do. So really the nature of this exploit was that we were able to manipulate some of those HTTP headers to trick the system into thinking that we were coming in through the back door when we were really coming in through the front.
>> So take me through that impact. That means remote execution. I can come in remotely and anonymously and act like I'm on the inside of the system.
>> Yeah.
>> And that's the keys to the kingdom, as you said earlier, right?
>> Yeah. So the crux of the vulnerability is it allows you to make any kind of request you want to this system as if you were an administrator. So it lets you control the interfaces, set them up or down, lets you create packet captures, lets you add and remove users. And what we tried to do, which surprisingly the exploit didn't let us do, was to create a new admin user. So there was some kind of extra code in there to stop somebody that did get that extra access from creating an admin user. And so that kind of bummed us out. So after we discovered the exploit, we were kind of poking around to see what we could do with it, couldn't create an admin user, and we were like, "Oh no, what are we going to do?" And eventually we came up with the idea to modify the existing administrator user, and that, the exploit did allow us to do. So our initial POC took some SSH keys, added them to an existing administrative user, and then we were able to SSH into the system.
>> Awesome. Great description. All right, so Zach, let's get to you for a second. So how does this happen? What does this... How did we get here? What was the motivation? If you're the chief attacker and you want to make this exploit happen, take me through what the other guy's thinking and what he or she did.
>> Sure. So you mean from, like, the attacker's perspective, why are they doing this?
>> Yeah. How'd this exploit happen?
>> Yeah.
>> And what was it motivated by? Was it a mistake? Was it intentional?
>> Yeah, ultimately, like, I don't think any vendor purposefully creates vulnerabilities, but as you create a system and it builds and builds, it gets more complex, and naturally logic bugs happen. And this was a logic bug. So there's no blaming Fortinet for having this vulnerability, or saying it's, like, a back door. It just happens. You saw throughout this last year, F5 had a very similar vulnerability, VMware had a very similar vulnerability, all introducing authentication bypasses.
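The class of logic bug described here is easy to picture in miniature. The sketch below is not Fortinet's code; it's a generic, hypothetical example of an authorization check that trusts a client-controllable header to decide whether a request came from "inside":

```python
# Hypothetical marker an internal proxy would set. The bug: a remote
# client can set it too, because HTTP headers are attacker-controlled.
TRUSTED_MARKER = "for=127.0.0.1"

def is_authorized(headers: dict, has_valid_session: bool) -> bool:
    # Front door: outside users must present valid credentials.
    if has_valid_session:
        return True
    # Back door: requests that *look* like they come from the system
    # itself skip the credential check entirely. This is the logic bug.
    return TRUSTED_MARKER in headers.get("Forwarded", "")

# An unauthenticated attacker simply supplies the header themselves:
spoofed = {"Forwarded": 'for=127.0.0.1;by="[127.0.0.1]:80"'}
print(is_authorized(spoofed, has_valid_session=False))  # True -> bypass
```

Once a request passes a check like this, every administrative action the "inside" is allowed to take is open to an unauthenticated attacker, which is why adding an SSH key to an existing admin user, as James describes above, was enough for full control.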
So from the attacker's mindset, why they're actually going after this: a lot of these devices that Fortinet has are on the edge of corporate networks, and for ransomware and whatever else, if you're an APT, you want to get into organizations. You want to get from the outside to the inside. So these edge devices are super important, and they're going to get a lot of eyes from attackers trying to figure out different ways to get into the system. And as you saw, this was exploited in the wild, and that's how Fortinet became aware of it. So obviously there are some attackers out there doing this right now.
>> Well, this highlights your guys' business model. I love what you guys do. I think it's a unique and needed approach. You take on the role of, I guess, white hat hacker as a service, I don't know what to call it. You guys are constantly penetrating, testing, creating value for the customers, in this case around a popular product that just had this situation and needed it resolved. And the hard part is how do you do it, right? So again, there's all these things going on. This is the future of security, where you need to have these, I won't say simulations, but constant kind of testing at scale.
>> Yeah.
>> I mean, you've got the edge; it takes one little entry point to get into the network. It could be anywhere.
>> Yeah, definitely, security has to be continuous these days. Because if you're only doing a pen test once a year or twice a year, you have a year to six months of risk just building and building. And there are countless vulnerabilities and countless misconfigurations that can be introduced into your network as time goes on.
>> Well, autonomous pen testing-
>> Just because you're-
>> ... is great. That's awesome stuff. I think it just frees up the talent in the organization to do other things and, again, get on the real important stuff.
>> Just because your network was secure yesterday doesn't mean it's going to be secure today. So in addition to your defense in depth and making sure that you have all the right configurations, you want to be continuously testing the security of your network to make sure that no new vulnerabilities have been introduced.
>> And with the cloud native, modern application environment we have now, hardware's got to keep up. More logic means more potential vulnerabilities could emerge. You just never know when that one N-day vulnerability is going to be there. And so constantly looking out for them is a really big deal.
>> Definitely. Yeah, the switch to cloud and moving into hybrid cloud has introduced a lot more complexity in environments, and it's definitely another hole attackers are going after.
>> All right. Well, I've got you guys here. I really appreciate the commentary on this vulnerability and this exploit opportunity that Fortinet had to move fast on, and you guys helped them and the customers. In general, as you guys see the security business now and the practitioners out there, there's a lot of pain points. What are the most powerful, acute pain points that the security ops guys (laughing) are dealing with right now? Is it just the constant barrage of attacks? What's the real pain right now?
>> I think it really depends on the organization. If you're looking at it from an in-the-news level, you're constantly seeing all these security products being offered. The reality is that the majority of companies in the US actually don't have a security staff. They maybe have an IT guy, just one, and he's not a security guy.
So he's having to manage helping his company have the resources it needs, but then he's also overwhelmed with all the security things that are happening in the world. So I think really time and resources are the pain points right now.
>> Awesome. James, any comment?
>> Yeah, just to add to what Zach said, these IT guys are put under pressure. These Fortinet devices could be used in a company that just recently transitioned to a lot of work from home because of COVID and whatnot. And they put these devices online, and now they're under pressure to keep them up to date, keep them configured and keep them patched. But anytime you make a change to a system, there's a risk that it goes down. And if the employees can't VPN or log in from home anymore, then they can't work, and the company can't make money. So it's really a balancing act for that IT guy to make sure that his environment is up to date, while also making sure it's not taken down for any reason. So it's a challenging position to be in, and prioritizing what you need to fix and when is definitely a difficult problem.
>> Well, this is a great example; this Fortinet news highlights the Horizon3.ai advantage and what you guys do. I think this is going to be the table stakes for security in the industry, as people have to build their own, I call it, militia. (laughing) You've got to have your own testing. You've got to have your own way to help protect yourself. And one of them is to know what's going on all the time, every day, today and tomorrow. So congratulations, and thanks for sharing the exploit here on this zero-day flaw that was exposed. Thanks for coming on.
>> Yeah, thanks for having us.
>> Thank you.
>> Okay. This is theCube here in Palo Alto, California. I'm John Furrier. You're watching a security update, security news, breaking down the exploit, the zero-day flaw that was exploited in at least one attack that was documented. Fortinet devices now identified and patched. This is theCube. Thanks for watching. (upbeat music)
Steven Mih, Ahana & Girish Baliga, Uber | CUBE Conversation
(bright music)
>> Hey everyone, welcome to this CUBE conversation featuring Ahana. I'm your host, Lisa Martin. I've got two guests here with me today. Steven Mih joins us, Presto Foundation governing board member and co-founder and CEO of Ahana, and Girish Baliga, Presto Foundation governing board chair and senior engineering manager at Uber. Guys, thanks for joining us.
>> Thanks for having us.
>> Thanks for having us.
>> So we're going to dig into and unpack Presto in the next few minutes or so, but Steven, let's go ahead and start with you. Talk to us about some of the challenges in the open data lakehouse market. What are some of those key challenges that organizations are facing?
>> Yeah, just pulling up the slide, you know, what we see is that many organizations are dealing with a lot more data and very different data types, and putting that all into, traditionally, the data warehouse, which has been the workhorse for BI and analytics, becomes very, very expensive, and there's a lot of lock-in associated with that. And so what's happening is that people are putting the semistructured and unstructured data, for example, in cloud data lakes or other data lakes, and they find that they can query it directly with a SQL query engine like Presto. And that lets you have a much more open approach to getting insights out of your data. And that's what this is all about, and that's why companies are moving to a modern architecture. Girish, maybe you can share some of your thoughts on how Uber uses Presto for this.
>> Yeah, at Uber we use Presto in our internal deployments. So at Uber we have our own data centers, we store data locally in our data centers, but we have made the conscious choice to go with an open data stack. Our entire data stack is built around open source technologies like Hadoop, Hive, Spark and Presto. And so Presto is an invaluable engine that is able to connect to all these different storage and data formats and allows us to have a single entry point for our users to run their SQL queries and get insights rather quickly, compared to some of the other engines that we have at Uber.
>> So let's talk a little bit about Presto so that the audience gets a good overview of it. Steven, starting with you: you talked about the challenges of the traditional data warehouse application. Talk to us about why Presto was founded, the open source project; give us that background information, if you will.
>> Absolutely. So Presto was originally developed out of the biggest hyperscaler out there, which is Facebook, now known as Meta. And they open sourced it and donated it to the Linux Foundation. And so Presto is a SQL query engine that runs directly on open data lakes, so you can put your data into open formats like Parquet or ORC and get insights directly from that at a very good price-performance ratio. The Presto Foundation, which Girish and I are both part of, we're all working together as a consortium of companies that all want to see Presto continue to get bigger and bigger. Kind of like Kubernetes has an organization called CNCF, Presto has the Presto Foundation, all under the umbrella of the Linux Foundation. And so there's a lot of exciting things that are coming on the roadmap that make Presto very unique.
You know, RaptorX is a multilevel caching system that's been fantastic. Aria optimizations are another area. We, Ahana, have developed some security features, donating the integrations with Apache Ranger, and that's the type of thing that we do to help the community. But maybe Girish can talk about some of the exciting items on the roadmap that you're looking forward to.
>> Absolutely. I think from Uber's point of view, it's just the sheer scale of data and our volume of query traffic. So we run about half a million Presto queries a day, right? And we have thousands of machines in our Presto deployments. So at that scale, in addition to functionality, you really want a system that can handle traffic reliably, that can scale, and that is backed by a strong community which guarantees that if you pull in the new version of Presto, you won't break anything, right? So all of those things are very important to us. So I think that's where we are relying on our partners, particularly folks like Facebook and Twitter and Ahana, to build and maintain this ecosystem that gives us those guarantees. So that is on the reliability front, but on the roadmap side we are also excited to see where Presto is extending. So in addition to the projects that Steven talked about, we are also looking at things like Presto on Spark, right? So take the Presto SQL and run it as a Spark job, for instance, or running Presto on real-time analytics applications, something that we built and contributed from the Uber side. So we are all taking it in very different directions, we all have different use cases to support, and that's the exciting thing about the foundation: it allows us all to work together to get Presto to a bigger and better and more flexible engine.
>> You guys mentioned Facebook, and I saw on the slide I think Twitter as well. Talk to me about some of the organizations that are leveraging the Presto engine and some of the business benefits. Steven, you talked about insights; obviously being able to get insights from data is critical for every business these days.
>> Yeah, a major, major use case is around ad hoc and interactive queries, and being able to drive insights from doing so. And so, as I mentioned, there's so much data that's being generated and stored, and being able to query that data in place, with very, very high performance, meaning that you can get answers back in seconds, lets you have the interactive ability to drill into data and innovate your business. And so this is fantastic, because it's been developed at hyperscalers like Uber, and that allows you to take open source technology, pick it up, and just download it right from prestodb.io, and then start to run with this and join the community. I think from an open source perspective, this project, under the governance of the Linux Foundation, gives you the confidence that it's fully transparent, and you'll never see any licensing changes, by the Linux Foundation charter. And therefore that means the technology remains free forever, without limitations occurring later on, which would perhaps favor commercialization by any one vendor. That's not the case. So maybe, Girish, your thoughts on how we've been able to attract industry giants to collaborate and innovate further.
>> Yeah, so one of the interesting things I've seen in the space is that there is a bifurcation of companies in this ecosystem.
So there are these large internet scale companies like Facebook, and Uber, and Twitter, which basically want to use something like Presto for their internal use cases. And then there is a second set of companies, enterprise companies like Ahana, which basically want to take Presto and provide it as a service for other companies to use, as an alternative to things like Snowflake and other systems, right? And the foundation is a great place for both sets of companies to come together and work. The internet scale companies bring in the scale, the reliability, the different kinds of ways in which you can challenge the system, optimize it, and so forth, and then companies like Ahana bring in the flexibility and the extensibility. So you can work with different clouds, different storage formats, different engines, and I think it's a great partnership that we can see happening, primarily through the foundation. Which you would be hard pressed to find in a single vendor or a, you know, a single-source system that is there on the market today.
>> How long ago was the Presto Foundation initiated?
>> It's been over three years now, and it's been going strong. We're over a dozen members, and it's open to everyone. And it's all governed like the Linux Foundation, so we use best practices from that, and you can just check it out at prestodb.io, where you can get the software or you can hear about how to join the foundation. It includes members like Intel and HPE as well, and we're really excited for new members to come, contribute and participate.
>> Sounds like you've got good momentum there in the foundation. Steven, talk a little bit about the last two years. Have you seen an acceleration in use cases, in the number of users, as we've been in such an interesting environment where the need for real-time insights is essential for every business, initially, a couple of years ago, to survive, but now to really thrive? Have you seen that acceleration in Presto in that timeframe?
>> Absolutely. We see there's an acceleration of being more data-driven, and especially moving to cloud and having more data in the cloud. We think that innovation, digital innovation, is happening very fast, and Presto is a major enabler of that. Again, being able to drive insights from the data, and this is not just your typical business data; it's now getting into really clickstream data, knowing about how customers are operating today. Uber is a great example of all the different types of innovations they can drive, whether it be, you know, knowing in real time what's happening with rides, or offering you a subscription for special deals to use the service more. So, you know, at Ahana we really love Presto, and we provide a SaaS managed service of the open source and provide free trials, and help people get up to speed who may not have the same type of skills as Uber or Facebook does. And we work with all companies in that way.
>> Think about consumers these days, we're very demanding, right? I think one of the things that was in short supply during the last two years was patience. And if I think of Uber as a great example, if I'm asking for a ride I want to know exactly, in real time, what's coming for me. Where is it now? How many more minutes is it going to take? I mean, that need to fulfill real-time insights is critical across every industry, but have you seen anything in the last couple years that's been more leading edge, like e-commerce or retail, for example?
I'm just curious.
>> Girish, you want to take that one, or?
>> Yeah, sure. So I can speak from the Uber point of view. Real-time insights has really exploded as an area, particularly, as you mentioned, with this just-in-time economy, right? Just to talk about it a little bit from the Uber side, there are some of the insights that you mentioned, about when is your ride coming, and things of that nature, right? Look at it from the driver's point of view, or, now that we have Uber Eats, look at it from the restaurant manager's point of view, right? They also want to know how their business is going. How many customer orders are coming in, for instance? What is the conversion rate? And so forth, right? And today these are all insights that are powered by a system which has Presto as a front-end interface at Uber. And these queries run, like, you have tens of thousands of queries every single second, and the queries run in like a second, and so forth. So you are really talking about production systems running on top of Presto, production serving systems. So coming to other use cases like e-commerce, we have definitely seen some of that uptake happen as well. In the broader community, for instance, we have companies like Stripe and other folks who are also using this stack, which is very similar to ours, based on another open source technology called Pinot, using Presto as an interface. And so we are seeing this whole open data lakehouse move from just being about interactive analytics to driving all different kinds of analytics, anything to do with data and insights in this space.
>> Yeah, sounds like the evolution has been kind of on a rocket ship the last couple years. Steven, one more time, we're out of time, but can you mention that URL where folks can go to learn more?
>> Yeah, prestodb.io, and that's the Presto Foundation. And, you know, just want to say that we'll be sharing the use case at the Startup Showcase coming up with theCUBE. We're excited about that, and really welcome everyone to join the community. It's a real vibrant, expanding community, and we look forward to seeing you online.
>> Sounds great, guys. Thank you so much for sharing with us what the Presto Foundation is doing, all of the things that it is catalyzing. Great stuff. We look forward to hearing that customer use case. Thanks for your time.
>> Thank you.
>> Thanks, Lisa, thank you.
>> Thanks everyone.
>> For Steven and Girish, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (bright music)
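As a closing illustration of the ad hoc querying discussed throughout this conversation, here's a minimal sketch using the open source presto-python-client package. The coordinator host, catalog, schema, and table names are placeholders, not Uber's or Ahana's actual setup:

```python
import prestodb  # pip install presto-python-client

# Connect to a Presto coordinator; connection details are illustrative.
conn = prestodb.dbapi.connect(
    host="presto.example.com",
    port=8080,
    user="analyst",
    catalog="hive",    # data lake tables registered in a Hive metastore
    schema="default",
)
cur = conn.cursor()
# An ad hoc query directly over open-format (e.g. Parquet/ORC) files in the lake.
cur.execute("""
    SELECT order_date, count(*) AS orders
    FROM orders
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
""")
for row in cur.fetchall():
    print(row)
```

The point of the sketch is the shape of the workflow: one SQL entry point, no loading step, and results back fast enough for interactive drill-down.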
HelloFresh v2
>> Hello, and we're here at theCUBE Startup Showcase, made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architecture. HelloFresh was going into hyper growth mode and realized that in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of the last decade, HelloFresh relied on a monolithic data architecture, and the internal team had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture which possessed many principles of so-called data mesh, even though they didn't use that term. Specifically, the company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture, many of whom are listed here on the slide; two of them, highlighted in red, are joining us today. We're really excited to welcome into theCube Clemens, the Global Senior Director for Data at HelloFresh, and Christoph, who's also a Global Senior Director of Data, of course, at HelloFresh. Folks, welcome. Thanks so much for making some time today and sharing your story.
>> Thank you very much.
>> Hey Dave.
>> All right, let's start with HelloFresh. You guys are number one in the world in your field. You deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision.
>> Yeah, should I start, or Clemens, maybe you take over the first piece, because Clemens has actually had the longer trajectory yet at HelloFresh.
>> Yeah, go ahead, Clemens.
>> I mean, yes, approximately six years ago I joined HelloFresh, and I didn't think the startup I was joining would eventually IPO. And just two years later, HelloFresh went public, and approximately three years and 10 months after HelloFresh was listed on the German stock exchange, just last week, HelloFresh was included in the DAX, Germany's leading stock market index. And that, to my mind, is a great, great milestone, and I'm really looking forward, and I'm very excited for the future of HelloFresh and all our data teams. The vision that we have is to become the world's leading food solution group, and there are a lot of attractive opportunities. So recently we did launch and expand in Norway, this was in July, and earlier this year we launched the US brand Green Chef in the UK as well. We're committed to launching continuously in different geographies in the coming years, and we have a strong pipeline ahead of us, with the acquisition of ready-to-eat companies like Factor in the US and the planned acquisition of Youfoodz in Australia. We're diversifying our offer, now reaching even more and more untapped customer segments and increasing our total addressable market. So by offering customers a growing range of different alternatives to shop for food and consume meals, we are charging towards this vision and the goal to become the world's leading integrated food solutions group.
>> Love it. You guys are on a rocket ship, you're really transforming the industry, and as you expand your TAM, it brings us to data as a core part of that strategy.
So maybe you guys could talk a little bit about your journey as a company, specifically as it relates to your data journey. You began as a startup. You had a basic architecture, like everyone. You made extensive use of spreadsheets. You built a Hadoop-based system that started to grow, and when the company IPO'd, you really started to explode. So maybe describe that journey from a data perspective.
>> Yes. So HelloFresh, by approximately 2015, had evolved into a classical centralized data management setup. We grew very organically over the years, and there were a lot of very smart people around the globe really building the company and building our infrastructure. This also means that there was a small number of internal and external data sources, and a centralized BI team with a number of people producing different reports, different dashboards and products for our executives, for example, or for our different operations teams, querying the company's performance. And knowledge was transferred just via talking to each other, face-to-face conversations, and the people in the data warehouse team were considered the data wizards or the ETL wizards. Very classical challenges. And those ETL wizards held kind of a silent knowledge of data management, right? So a central data warehouse team was then responsible for different types of verticals, different domains, different geographies, and all this setup gave us, in the beginning, the flexibility to grow fast as a company.
>> Christoph, anything you might add to that?
>> Yes. Um, not much to add to that one, but as Clemens says it, right, this was the kind of setup that actually worked for us for quite a while. And then in 2017, when HelloFresh went public, the company also grew rapidly. And just to give you an idea how that looked like: the tech department itself actually increased from about 40 people to almost 300 engineers, and in the same way the business units, as Clemens has described, also grew substantially. So we continued to launch HelloFresh in new countries, launching brands like EveryPlate, and also acquired other brands, like, for example, Factor. And with that growth, also from a data perspective, the number of data requests that we were getting centrally became more and more, and also more and more complex. So for the team, that meant that they had a fairly high mental load. They had to basically get a very deep understanding of the business, and they also suffered a lot from this context switching back and forth, essentially having to prioritize across requests from our physical product, our digital product, from the marketing perspective and also from the central reporting teams. And in a nutshell, this was very hard for these people, and this led also to a situation where, let's say, the solutions that we had became not really optimal. So the central function became a bottleneck and slowed down all the innovation of the company.
>> It's a classic case, isn't it? I mean, Clemens, you see the central team becomes a bottleneck, and so the lines of business, the marketing team, sales say, okay, we're going to take things into our own hands. And then of course IT and the technical team is called in later to clean up the mess. Maybe I'm overstating it, but that's a common situation, isn't it?
>> Yeah, this is exactly what happened, right?
So, um, we had a bottleneck, we had the central teams, and there was always a little tension. Analytics teams in the business domains, like marketing, supply chain, finance, HR and so on, then really started to build their own data solutions; at some point you have to get the ball rolling, right, and then continue the trajectory. Which meant that the data pipelines didn't meet the engineering standards, and there was an increased need for maintenance and support from the central teams. Hence, over time, the knowledge about those pipelines, and how to maintain a particular infrastructure, for example, left the company, such that most of those data assets and data sets turned into a huge swamp with decreasing data quality, an increasing lack of trust and decreasing transparency. And this was an increasing challenge, where the majority of time was spent in meeting rooms to align on data quality, for example.
>> Yeah. And the point you were making, Christoph, about context switching, and this is a point that Zhamak makes quite often, is that we've contextualized our operational systems, like our sales systems, our marketing systems, but not our data systems. So you're asking the data team: okay, be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain, and it's start, stop, start, stop, a paper cut environment, and it's just not as productive. But on the flip side of that is, when you think about a centralized organization, you think, hey, this is going to be a very efficient way, a cross-functional team, to support the organization, but it's not necessarily the highest velocity, most effective organizational structure.
>> Yeah, so I agree with that, in that up to a certain scale, a centralized function has a lot of advantages, right? That's clear for everyone who would go to some kind of expert team. However, if you see that you actually would like to accelerate, specifically in this hyper growth, right, you want to actually have autonomy in certain teams and move the teams, or let's say the data, to the experts in these teams. And this, as you have mentioned, right, increases mental load, and you can either internally start splitting your team into different kinds of sub-teams focusing on different areas; however, that is then again just adding another piece where collaboration actually needs to happen via external interfaces. So why not bridge that gap immediately and actually move these teams end to end into the functions themselves? So maybe just to continue what Clemens was saying, and this is actually where Clemens' and my journey started to become one joint journey. Clemens was coming actually from one of these teams that built their own solutions; I was basically leading the platform team, called data warehouse in those days. And in 2019, the situation became more and more serious, I would say, so more and more people recognized that this model doesn't really scale. In 2019, basically, the leadership of the company came together and identified data as a key strategic asset. And what we mean by that is that if we leverage data in a proper way, it gives us a unique competitive advantage which could help us to support and actually fully automate our decision-making process across the entire value chain. So what we're trying to do now, or what we should be aiming for, is that HelloFresh is able to build data products that have a purpose.
We're moving away from the idea that data is just a byproduct of our products; we have a purpose for why we would like to collect this data. There's a clear business need behind that. And because it's so important for the company as a business, we also want to provide it as a trustworthy asset to the rest of the organization, we'd say with the best customer experience, but at least in a way that users can easily discover, understand and securely access high-quality data.
>> Yeah, so, Clemens, when you see Zhamak's writing, you see, you know, she has the four pillars and the principles; as practitioners, you look at that and say, okay, hey, that's pretty good thinking, and now we have to apply it, and that's where the devil meets the details. So it's the four, you know: decentralized data ownership; data as a product, which we'll talk about a little bit; self-serve, which you guys have spent a lot of time on; and, Clemens, your wheelhouse, which is governance, and a federated governance model. And it's almost like, if you achieve the first two, then you have to solve for the second two; it almost creates new challenges. But maybe you could talk about that a little bit as it relates to HelloFresh.
>> Yes. So Christoph mentioned that we identified the challenge beforehand: how can we actually decentralize and actually empower our different colleagues? We realized that it was more an organizational or a cultural change, and this is something that, I think, Zhamak also mentioned in one of the white papers: it's more of an organizational or cultural impact. And we kicked off a phased reorganization, different phases that we're currently still in the middle of, but we kicked off different phases of organizational restructuring, of reorganization, to try to unlock this data at scale. And the idea was really moving away from ever-growing, complex matrix organizations or matrix setups and splitting between two different things. One is the value creation, so basically when people ask the question, what can we actually do, what shall we do? This is value creation. And the how, which is capability building. And both are equal in authority. This actually then creates a high urge for collaboration, and this collaboration breaks up the different silos that were built. And of course this also includes different needs of staffing: staffing those teams with more, let's say, data scientists or data engineers, data professionals, in the business domains, and hence also more capability building. Um, okay...
>> Go ahead. Sorry.
>> So back to Zhamak Dehghani. The idea also then crossed over when she published her papers in May 2019, and we thought, well, the four pillars that she described were around decentralized data ownership, a data-as-a-product mindset, a self-service infrastructure and, as you mentioned, federated computational governance. And this suited very much our thinking at that point in time to reorganize the different teams, and this then led to not only an organizational restructure, but also a completely new approach to how we need to manage data, share data.
>> Got it. Okay, so your business is exploding. Your data team would have to become domain experts in too many areas, constantly context switching, and, as we said, people started to take things into their own hands. So again, as we said, classic story, but you didn't let it get out of control, and that's important.
So we actually have a picture of kind of where you're going today and how it's evolved into this. Pat, if you could bring up the picture with the elephant, here we go. So let's talk a little bit about the architecture. It doesn't show the spreadsheet era here, but Christoph, maybe you can talk about that. It does show the Hadoop monolith, which exists today; I think that's in a managed hosting service, but you preserved that piece of it. But if I understand it correctly, everything is evolving to the cloud; I think you're running a lot of this, or all of it, in AWS. You've got, everybody's got, their own data sources, you've got a data hub, which I think is enabled by a master catalog for discovery, and all this underlying technical infrastructure that is really not the focus of this conversation today. But the key here, if I understand it correctly, is that these domains are autonomous, and not only did this require technical thinking, but a really supportive organizational mindset, which we're going to talk about today. But Christoph, maybe you could address, you know, at a high level, some of the architectural evolution that you guys went through.
>> Yeah, sure. Yeah, maybe it's also a good summary of the entire history. So as you have mentioned, right, we started in the very beginning with a monolith on the operational plane, right? Actually, it wasn't just one monolith, it was two: one for the back end and one for the front end. And our analytical plane was essentially a couple of spreadsheets, and I think there's nothing wrong with spreadsheets, right? A spreadsheet allows you to store information, it allows you to transform data, it allows you to share this information, it allows you to visualize this data, but all of that without actually separating concerns, right? Everything in one tool. And this is obviously not scalable, right? You reach the point where this kind of data management setup in one tool reaches its limits. So what we started with is we created our data lake, as you have seen here, on Hadoop. And at the very beginning this actually very much reflected our operational plane. On top of that, we used Impala as a data warehouse, but there was not really a distinction between what is our data warehouse and what is our data lake; Impala was used as kind of the engine to create a warehouse-and-data-lake construct itself. And this organic growth actually led to a situation, as I think is clear now, where we had two centralized monoliths for all the domains, and they were really loose with Kimball modeling standards. There was no uniformity; we actually built in-house ways of building materialized views that we used for the presentation layer. There was a lot of duplication of effort, and in the end, essentially, there were missing feedback loops which would have helped us to improve what we had built. So in the end, in a nutshell, as we have said: the lack of trust. And that was basically the starting point for us to understand, okay, how can we move away from this? And there are a lot of different things that you can discuss apart from this organizational structure. We have said, okay, we have these three or four pillars from Zhamak. However, there's also the next question around how we implement it architecturally, right? What are the implications on that level? And I think that is something we are currently still in progress on.
>> Got it.
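The data hub and master catalog in that picture suggest a simple pattern: the domains own and register their data products, while a central layer handles only discovery. Here's a deliberately tiny, hypothetical Python sketch of that idea; the names and the S3 location are invented for illustration, and none of this is HelloFresh's actual implementation:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class DataProduct:
    name: str
    domain: str        # owning business domain, e.g. "supply_chain"
    owner: str         # accountable team; ownership stays in the domain
    location: str      # where the data lives, e.g. an S3 prefix
    description: str = ""

class Catalog:
    """Central discovery layer: registration is federated, lookup is shared."""
    def __init__(self) -> None:
        self._products: Dict[str, DataProduct] = {}

    def register(self, product: DataProduct) -> None:
        self._products[product.name] = product

    def discover(self, domain: Optional[str] = None) -> List[DataProduct]:
        return [p for p in self._products.values()
                if domain is None or p.domain == domain]

catalog = Catalog()
catalog.register(DataProduct(
    name="orders_daily",
    domain="supply_chain",
    owner="sc-analytics",
    location="s3://datalake/supply_chain/orders_daily/",
    description="Daily order volumes per region",
))
print([p.name for p in catalog.discover("supply_chain")])
```

The design point the sketch tries to capture is the split discussed above: autonomy lives in the domains, while the catalog gives everyone one place to discover and understand what exists.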
Okay, so I wonder if we could switch gears a little bit and talk about the organizational and cultural challenges that you faced. What were those conversations like? Let's dig into that a little bit. I want to get into governance as well.
>> The conversations were on the cultural change. I mean, yes, we went through hyper growth over the last years; obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company, which then resulted in collaboration getting a bit more difficult. Of course, with time and changes, you have different artifacts that were created and documentation that was flying around. So we had to build the company from scratch, right? Of course, this then always resulted in this tension which I described before, but the most important part here is that data has always been a very important factor at HelloFresh, and we collected more of this data and continued to use data to improve the different key areas of our business. Even with the organizational struggles, the central organizational struggles, data somehow always helped us to go through this kind of change, right? In the end, those decentralized teams in our local geographies started with solutions that served the business, which was very, very important, otherwise we wouldn't be at the place where we are today, but they didn't abide by all the best practices and standards. And I always use a sports analogy, Dave. Like in any sport, there are different rules and regulations that need to be followed. These rules are defined by, let's call it, the sports association, and this is how you can think about the data governance and compliance team. Now we add the players, who need to follow those rules and abide by them. This is what we then call data management. Now, these different players and professionals need to be trained and understand the strategy and its rules before they can play. And this is what I then call data literacy. So we realized that we needed to focus on helping our teams to develop those capabilities and teach the standards for how work is being done, to truly drive functional excellence in the different domains. And one mission of our data literacy program, for example, is to really empower every employee at HelloFresh, everyone, to make the right data-informed decisions by providing data education that scales beyond a central team. This can be different things, like building data capabilities with learning paths, for example, right? So help them to create and deploy data products, connect data producers and data consumers, and create a common sense and more understanding of each other's dependencies, which is important, for example, for SLOs, data contracts, etcetera. People are getting more of a sense of ownership and responsibility. Of course, we have to define what that means, what does ownership mean, what does responsibility mean, but we're teaching this to our colleagues via individual learning paths and helping them upskill to use, also, the shared infrastructure and those self-service applications. And overall, to summarize, we're still in this process of learning; we are still learning as well. Learning never stops at HelloFresh, but we are really trying to make it as much fun as possible, and in the end we all know user behavior is changed through positive experience.
So instead of having massive training programs with endless courses and workshops, leaving our new joiners and colleagues confused and overwhelmed,
>> we're applying
gamification, right? We split certification into different levels that our colleagues can work through; they can earn points and badges along the way, which simplifies the process of learning and keeps users engaged. And this is what we see in surveys, for example: our employees like the gamification approach a lot and are even competing to collect those learning-path badges to become number one on the leaderboard.
>> I love the gamification — we've seen it work so well in so many different industries, not the least of which is crypto. So, you've identified some of the process gaps that you saw. Sometimes people gloss over them — I call it paving the cow path. You didn't try to force the new architecture into the legacy processes; you really had to rethink your approach to data management. So what did that entail?
>> To rethink the way of data management — 100%. If I take the example of the Industrial Revolution, or the classical supply-chain revolution: just imagine that you have been riding a horse your whole life and suddenly you can operate a car — a completely new way of transporting assets from A to B. We needed to establish a new set of cross-functional business processes to run faster, drive faster, more robustly, and deliver data products which can be trusted and used by downstream processes and systems. Hence we had a set of new standards and procedures that fall into internal data governance and compliance — with "internal" I'm always referring to data operations — around new things like the data catalog: how to identify ownership, how to change ownership, how to certify data assets, and everything around classical software development, which we now apply to data. This is a similar new way of thinking: deployment, versioning, QA, ingestion policies — all the things software development has been doing, we do now with data as well. In simple terms, it's a whole redesign of the supply chain of our data, with new procedures and processes for data creation, data management and data consumption.
>> So data has become kind of the new development kit, if you will. I want to shift gears and talk about the notion of a data product, and we have a slide that we pulled from your deck that I'd like to unpack a little bit. If you can bring that up, I'll read it: "A data product is a product whose primary objective is to leverage on data to solve customer problems, where customers are both internal and external." So, pretty straightforward — I know you've gone much deeper in your thinking and into your organization — but how do you think about that, and how do you determine, for instance, who owns what? How did you get everybody to agree?
>> I can take that one. Maybe let me start with the data product. I think that's an ongoing debate, right? And I think the debate itself is an important piece here, because through the debate you clarify what you actually mean by a data product and what the mindset actually is. So, just from a definition perspective, right?
I think we found the common denominator in saying that a data product is something which is important for the company and contributes to its value. What do we mean by that? It's a solution to a customer problem that delivers, ideally, maximum value to the business — and yes, it leverages the power of data. We have a couple of examples at HelloFresh: the historical and classical ones, like dashboards, for example to monitor error rates, but also more sophisticated ones, for example incorporating machine-learning algorithms into our recipe recommendations. However, I think the important aspects of a data product are these. First, there is an owner: someone accountable for making sure that the product we are providing is actually served and maintained, and that it keeps delivering the value it promises. Combined with that is the idea of proper documentation — like a product description, so that people understand how to use it and what it is about. And related to that is the idea of purpose: you need to ask yourself, okay, why does this thing exist? Does it provide the value that you think it does? That leads into a good understanding of the life cycle of the data product — and by life cycle we mean: from the creation onwards you need a good understanding, you need to collect feedback, you need to learn and rework, and finally you also have to think about the time to decommission the piece. So overall, I think the core of the data product is product thinking 101: the starting point needs to be the problem, and not the solution. And this is essentially what was missing — what brought us to this data spaghetti that we had built. Essentially, we built certain data assets developed in isolation and continuously patched the solution just to fulfill the requests we got, without a real understanding of the stakeholders' needs. The interesting piece is that this results in duplication of work, and that is not just frustrating and probably not the most efficient way for a company to work: if I build the same data asset, but with slightly different assumptions, across the company and multiple teams, that leads to data inconsistencies. And imagine the following: from a management perspective, you ask a specific question, and you get from a couple of different teams different kinds of graphs, different kinds of data and numbers, and in the end you do not know which ones to trust. So there is actually much more ambiguity: you do not know whether it's noise you're observing, or whether there is actually a signal you're looking for. And the same applies if I'm running an A/B test: I have a new feature, I would like to understand the business impact of this feature, and I run that test against a specific source. In an unfortunate scenario, your production system is actually running on a different source, you see different numbers — what you've seen in the A/B test is not what you then see in production. The typical next step is that you ask some analytics team to do a deep dive to understand where the discrepancies are coming from. Worst case, again, they use a different kind of source.
So in the end it's a pretty frustrating scenario, and it's actually a waste of the time of the people who have to identify the root cause of this divergence. So, in a nutshell: the highest degree of consistency is achieved when people are simply reusing data assets. In the talk we have given on this, we described how we started to establish this approach for A/B testing: we have a team that owns the target metrics associated with the business teams, and they provide those metrics as a product, also to other services, including the A/B testing team. The A/B testing team can use this information and define an interface — okay, I'm joining this information with the metadata of an experiment — and in the end, after the assignment and the data-collection phase, they can easily add a graph to the dashboard, just grouping by the experiment variant. And we have seen that in other companies too, so it's not just a nice dream that we have. I have actually worked at other companies where we worked on search and established a complete KPI pipeline that computed all this information; that information was hosted by one team, and it was used for everything — A/B tests, deep dives, and regular reporting. The second important piece — and why I'm coming back to that — is that this requires treating the data as a product: if you want multiple people to use the things that I am owning and building, we have to provide them as trustworthy assets, in a way that makes it easy for people to discover and actually work with them.
>> Yeah. And coming back to that — this is why I get so excited about data mesh, because I really do think it's the right direction for organizations. When people hear "data product" they say, well, what does that mean? But when you start to define it, as you did, it's using data to add value: that could be cutting costs, that could be generating revenue, it could be a product that you directly monetize — so it's sort of in the eyes of the beholder. But I think the other point is one you made earlier, too, about context. When you have a centralized data team and you have all these P&L managers, a lot of times they'll question the data because they don't own it — if it doesn't agree with their agenda, they'll attack the data. But if they own the data, then they're responsible for defending it, and that is a mindset change that's really important. And I'm curious how you got to that ownership: was it top-down, with somebody providing leadership? Was it more organic, bottom-up? A combination? How did you decide who owned what — in other words, how did you get the business to take ownership of the data, and what does owning the data actually mean?
>> That's a very good question, Dave. I think this is one of the pieces where we have a lot of learnings, and basically, if you ask me how we would start again, I think that would be the first piece: really thinking through how ownership should be approached from the start. Ownership means, somehow, that the team has the responsibility to host and serve the data assets to minimum acceptable standards, with dependencies up- and downstream. The interesting piece, looking backwards, is what actually happened.
Under that definition, the process we had to go through was not actually transferring ownership from the central team to the distributed teams, but in most cases establishing ownership in the first place. I make this distinction because saying we had to "transfer" ownership would erroneously suggest that the data set was owned before. The platform team, yes, had the capability to make changes on the data pipelines; but the analytics teams were always the ones who had the business understanding and the use cases — and no one actually owned the data, it was just implicitly expected. So we had to go through this very lengthy process of establishing ownership, and we did it, in the beginning, very naively: we said, here's a document, here are all the data assets, who is probably the nearest neighbor who can take care of this one — and then we moved it over. But the problem is that all these things carry technical debt, right? They're not really properly documented, pretty unstable, built very inconsistently over the years, and the people who built them have already left the company. So it's not a nice thing to receive, and people built up a certain resistance, even if they had actually bought into this idea of domain ownership. So if you ask me about these learnings: what needs to happen first is that the company really understands what its core business concepts are. You need the mapping — these are our core business concepts, these are the domain teams who own those concepts — and then you actually link that to the data assets, combined with a good understanding both of how we can evolve the data assets and build new things in the domain, and of how we can reduce technical debt and stabilize what we already have.
>> Thank you for that, Christoph. So I want to turn direction here and talk about governance, and I know that's an area you're passionate about. I pulled this slide from your deck — which I kind of messed up a little bit, sorry for that — but, by the way, we're going to publish a link to the full video that you guys did, so we'll share that with folks. It's one of the most challenging aspects of data mesh: if you're going to decentralize, you quickly realize this could be the Wild West, as we talked about, all over again. So how are you approaching governance? There are a lot of items on this slide that underscore the complexity, whether it's privacy, compliance, etcetera. So how did you approach this?
>> Yeah, it's about connecting those dots, right? The aim of the data governance program is the autonomy of every team, while still ensuring that everybody has the right interoperability. When we want to move from the Wild West — riding horses — to a civilised way of transport, you can take the example of modern street traffic: all participants can manoeuvre independently, and as long as they follow the same rules and standards, everybody remains compatible with each other and can understand and learn from each other, so we can avoid car crashes. When I go from country to country, I understand what the street infrastructure means and how to drive my car; I can also read the traffic lights and the different signals.
Likewise, as a business, at HelloFresh we operate autonomously and consequently need to follow the external and internal rules and standards set forth by the jurisdictions in which we operate. So, in order to prevent a car crash, we need to at least ensure compliance with regulations, to account for society's and our customers' increasing concern with data protection and privacy. Teaching this, advocating it, and making everyone in the company realize its advantages was a key communication strategy. And of course — I mentioned data privacy as an external factor — the same goes for internal regulations and processes, to help our colleagues adapt to this very new environment. When I mentioned before the new way of thinking, the new way of dealing with and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, this means data governance provides a framework for managing our people, processes, technology and culture around our data. And those components must come together in order to have an effective program, providing at least a common denominator — which is especially critical for shared datasets, which we have across our different geographies, and for shared applications on shared infrastructure, which are then consumed by centralized processes: master data management, for example, and all the metrics and KPIs, which are also used for central steering. It's a big change, Dave, and our ultimate goal is to have this non-invasive, federated and computational governance. And for that, we can't just talk about it — we actually have to go deep, use case by use case and PoC by PoC, and generate learnings with the different teams. This would be a classical approach: identify the target state, match it with the current state by running, together with the business teams in the different domains, a risk assessment, for example, to increase transparency — because a lot of teams might not even know what kind of situation they're in. And this is where the training and the data literacy piece comes into place: we go in and train, based on the findings and based on the most valuable use cases, and help our teams make this change, increasing their capability bit by bit — less hand-holding, more guidance.
>> Can I add something quickly, Dave, if you'll allow me? There's a lot to the governance piece, but I think this is important. If we're talking about documentation, for example: yes, we can go from team to team and tell people how to document their data in the data catalog, or that they have to establish data contracts and so on and so forth. But if we would like to build data products at scale, following actual governance, we need to think about automation, right? We need to think about a lot of things we can learn from engineering. And that starts with simple things: if we want to build up trust in our data products and apply the same rigor and best practices we know from engineering, there are things we can copy. One example might be service level agreements, service level objectives and service level indicators — on an engineering level, if we're providing services, these represent the promises we make to our customers or consumers; the objectives are the internal targets that help us keep those promises, and they're how we track how we are doing. That's just one example. This is where federated computational governance comes into play: in an ideal world, we should not just talk about data as a product, but about the data product as code. As much as possible, give the engineers the tools they are familiar with, and don't ask the product managers, for example, to document their data assets in the data catalog by hand — make it part of the configuration. Have this in a CI/CD continuous delivery pipeline, as we typically see in other engineering tasks and services: there is configuration, and we can think about PII, about data quality monitoring, about ingestion, the data catalog, and so on and so forth. Ideally, data products are built from certain templates that can be deployed and are verified — or rejected — at build time, before we deploy them to production.
>> Yeah, so it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phases — call it phase zero: there's learning, there's literacy, there's training and education, there's kind of self-governance with some oversight and a lot of manual work; then you become process builders; and then you codify it and can automate it. Is that fair?
>> Yeah — though I would rather think about automation as early as possible along the way. Yes, there need to be certain rules, but then start, use case by use case: is there any small piece that we can already automate? If so, roll that out and then extend it step by step.
>> Is there a role, though, that adjudicates that? Is there a central chief data officer who is responsible for making sure people are complying, or how do you handle that?
>> From a platform perspective, yes, we have a centralized team to implement certain pieces that we deem important and want to implement. However, that team works very closely with the governance department — so it's Clemens's piece to understand and define the policies that need to be implemented.
>> So, Clemens, essentially it's your responsibility to make sure the policy is being followed, and then, as you were saying, Christoph, you try to compress the time to automation as fast as possible.
>> What needs to be really clear is that it's always a split effort, right? You can't just do one thing or the other; everything really goes hand in hand, because for the right automation and the right engineering tooling, we need to have transparency first, and the rules need to be codified — we need to operate on the same level, with the right understanding. So there are two things that are important: one is the policies and guidelines; but equally important is to align with the end users and the tech and engineering teams — to really bridge between the business teams and the engineering teams.
>> Got it.
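To make the "data product as code" idea concrete, here is a minimal, assumed sketch of the kind of quality gate a CI/CD pipeline could run before promoting a data product. The table, column, schema and freshness threshold are hypothetical illustrations, not HelloFresh's actual checks:

```sql
-- Hypothetical freshness SLO check for a data product, run by CI/CD
-- before promotion; table and threshold are illustrative placeholders.
SELECT CASE
         WHEN MAX(updated_at) < NOW() - INTERVAL '24 hours'
           THEN 'FAIL: freshness SLO violated'
         ELSE 'PASS'
       END AS freshness_check
FROM analytics.orders_data_product;
```

The point is not this particular query but the pattern: the promise (the SLO) lives next to the product definition and is verified mechanically at build time rather than policed by hand.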
So, just a couple more questions, because we've got to wrap. I want to talk a little bit about the business outcome — I know it's hard to quantify, and I'll talk about that in a moment — but first, major learnings. We've got some of the challenges that you cited; I'll just put them up here. We don't have to go into detail on this, but I wanted to share it with folks. My question — and this is the advice-for-your-peers question — is: if you had to do it differently, if you had a do-over, or a Mulligan as we like to say for you golfers, what would you do differently?
>> Maybe I can start with the transformational challenge: understanding that it also means a high degree of cultural change. I think this is important — a proper communication strategy needs to be put in place, and people really need to be supported. It's not enough to go in and say, well, we have to change towards data mesh; it's simply human nature to be resistant to change, right? Change is uncomfortable. So we need to take that away by training and by communicating. Christoph, do you want to add something to that?
>> Definitely — the point that I have also made before: we need to acknowledge that data mesh is an architecture for scale. It's something needed by huge companies that are building data products at scale. I mean, Dave, you mentioned it: there are a lot of advantages to having a centralized team, but at some point it may make sense to decentralize. And at that point, if you think about data mesh, you have to recognize that you're not building on a green field. I think there's a big learning, which is also reflected here on the slide: don't underestimate your baggage. Typically you come to a point where the old model doesn't work anymore — at HelloFresh we lost trust in our data, and we saw certain risks slowing down our innovation, and that triggered the need to actually change something. This transition implies that you typically have a lot of technical debt accumulated over years, and I think what we have learned is that we potentially decentralized some assets too early, without taking into account the maturity of the teams we were distributing them to — and we are now in the phase of correcting pieces of that. So if you start from scratch, you have to understand: are my teams actually ready to take on these new capabilities? And you have to make sure that, before the decentralization, you build up those
capabilities in the
teams. And, as Clemens mentioned, make sure you take the people along on your journey. These are the pieces — it comes with a knowledge gap, right? So we need to think about hiring and literacy, and the technical debt I just talked about. And the last piece I would add, which is not here on the slide deck: from our perspective, we started on the analytical layer, because that's where things were exploding — that's where people felt the pain. But through a lot of the efforts we have started, to modernize the current state towards data products, towards data mesh,
we've understood that it always comes down, basically, to a proper shape of our operational plane. We went through a lot of pain, but the learning is: there needs to be a real commitment from the company that this needs to happen, and to act on it.
>> I think that last point you made is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant. But through data product thinking, decentralized data organizations really have to operationalize in order to scale. So these decisions around data architecture and organization are fundamental and lasting — it's not necessarily about an individual project or its ROI; there are going to be projects and sub-projects within this architecture, but the architectural decision itself is organizational, it's cultural: what's the best approach to support your business at scale? It really speaks to who you are as a company, how you operate; and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts, and then we'll wrap. Maybe quickly, please.
>> Yeah, maybe just jumping on the piece you mentioned — the target architecture. When we talk about these pieces, people often have this picture in mind: okay, there are different stages, we have sources, we have an ingestion layer, a historical layer, a transformation and presentation layer, and then we basically put a lot of technology on top of that — and that's our target architecture. However, I think what we really need to make sure of is that we have these different views: we need to understand what the capabilities are that we need for our new goals, how it looks and feels from the different personas' and experience point of view, and only then should that flow into the target architecture from a technical perspective. Maybe, just to give an outlook on what we're planning to do and how we want to move forward: based on our strategy, we would like to increase data maturity as a whole across the entire company, and we've built a framework around the business strategy that breaks down into four pillars as well. People — meaning data culture, data literacy, the data organizational structure, and so on. Governance — as Clemens mentioned: compliance, governance, data management, and so on. Technology — and I think we could talk for hours about that one: the data platform, the data science platform. And finally, enablement through data — meaning data quality, data accessibility, data science, and data monetization.
>> Great, thank you, Christoph. Clemens, you bring us home — give us your final thoughts.
>> I can only agree with Christoph that it's important to understand what maturity level the company, the people, the organization is at, and to really understand what kind of change applies to those four pillars — what needs to be tackled first. And this is not very clear from the very beginning; of course, it's kind of like greenfield — you come up with must-wins, with things you really want to do, out of theory and out of different white papers. Only once you really start conducting the first initiatives do you understand: okay, here is where we have to connect the dots, and here is where I missed out on one of those four pillars — people, process, technology and governance — and their integration. Going step by step, small steps by small steps, not boiling the ocean, you become able to identify the gaps and see where you can fill them, or where you have to increase maturity first and train people, or improve your tech stack.
>> You know, HelloFresh is an excellent example of a company that is innovating — and it was not born in Silicon Valley, which I love. It's a global company. And I've got to ask you guys: it seems like this is an amazing place to work — are you guys hiring?
>> Yes,
>> definitely. We do.
>> We are hiring as an entire company, and specifically for data — I think there are a lot of open roles. Seriously, please visit our careers page: from data engineering to data product management; and Clemens has a lot of roles he can speak about as well.
>> Guys, thanks so much for sharing with theCube audience — you're pioneers — and we look forward to collaborations in the future to track your progress. I really want to thank you for your time.
>> Thank you very much.
>> Thank you very much, Dave.
>> And thank you for watching theCube's startup showcase, made possible by AWS. This is Dave Vellante. We'll see you next time.
The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL
Hello everybody, and thank you for joining us today for the virtual Vertica Big Data Conference 2020. Today's breakout session is entitled "The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL". I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Maurizio Felici, Vertica product engineers joining us from the EMEA region. Before we begin, I encourage you to submit questions or comments during the virtual session — you don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time; any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session — our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower-right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started — over to you, Marco.
>> Hello everybody, this is Marco speaking, a sales engineer from EMEA, and I'll just get going. This is the agenda: part one will be done by me, part two will be done by Maurizio. The agenda is, as you can see: big bang or piece by piece; the migration of the DDL; migration of the physical data model; migration of ETL and BI functionality; what to do with stored procedures; what to do with any existing user-defined functions; and the migration of the data itself, which will be covered by Maurizio. Maurizio, do you want to introduce yourself?
>> Yeah, hello everybody, my name is Maurizio Felici and I'm a Vertica pre-sales engineer. Like Marco said, I'm going to talk about how to optimize the data warehouse using some specific Vertica techniques, like table flattening and live aggregate projections. Let me start with a quick overview of the data warehouse migration process we are going to talk about today. We normally suggest starting by migrating the current data warehouse from the old databases with limited or minimal changes in the overall architecture: clearly we will have to port the DDL and redirect the data access tools to the new platform, but in this initial phase we should minimize the amount of changes, in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such; during this optimization phase we can create, for example, ad hoc projections for some specific queries, optimize encodings, or change some of the physical structures — this is something we do if and when needed. And finally — again, if and when needed — we go through the architectural redesign, using the full set of Vertica techniques in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we go back to some of the specific features before moving back to the architecture and design. We will go through this process in the next few slides.
>> OK. In order to encourage everyone to keep using their common sense when migrating to a new database management system — people are often afraid of it — it's often useful to use the analogy of a house move.
In your old home, you might have developed solutions for your everyday life that make perfect sense there. For example, if your old Saint Bernard dog can't walk anymore, you might be using a forklift to heave him in through the window of the old home. Well, in the new home, consider the elevator — and don't complain that the window is too small to fit the dog through. It's very much the same with a database migration, so start by making the transition gentle. Again, to remain in my analogy with the house move: picture your new house as your new holiday home. Begin to install everything you miss and everything you like from your old home, and once you have everything you need in your new house, you can shut down the old one. So move bit by bit, and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, or you're really on a sinking ship. Otherwise — again — identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, use the gained reputation for further funding, and if you find that nobody is using the old platform anymore, you can shut it down. If you really have to, you can still go big bang in one go — but only if you absolutely have to; otherwise, migrate by subject area and group all similar or related subject areas together.
Having said that, you start off by migrating objects — objects in the database; that's one of the very first steps. It consists of first migrating the places where you can put the other objects into: that is, owners and locations, which usually means users, roles and schemas. Then you extract the tables and views, convert the object definitions, and deploy them to Vertica. And mind that you shouldn't do it manually: never type what you can generate; automate whatever you can. For users and roles, usually there are system tables in the old database that contain all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard. Catalogs and schemas should be relatively straightforward, with maybe one difference: Vertica does not restrict you by defining a schema as the collection of all objects owned by a user, but it can emulate that for old times' sake. And Vertica does not need a catalog; if the old tools that you use absolutely need one, it is always set to the name of the database in the case of Vertica.
Having now the schemas, the catalogs, the users and the roles in place, move on to the data definition language, the DDL. If you are allowed to, it's best to use a tool that translates the data types in the generated DDL. You will, by the way, see the odb tool mentioned several times in this presentation — we are very happy to have it. It can actually export the old database's table definitions, because it works with ODBC: it takes what the old database's ODBC driver translates to ODBC, and then it has internal translation tables to several target DBMS flavors, the most important of which is obviously Vertica. If they force you to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, etc. — each DBMS should have a set of tools to extract the object definitions to be deployed in another instance of the same DBMS.
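As a hedged illustration, the generated user-and-role script mentioned above might look like this on the Vertica side — the names and password are placeholders, not from the talk:

```sql
-- Generated from the old database's system tables; names are examples.
CREATE ROLE analyst;
CREATE USER jane IDENTIFIED BY '********';
GRANT analyst TO jane;
ALTER USER jane DEFAULT ROLE analyst;
```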
As for views, you will usually find the view definitions in the old database catalog as well. One thing that needs a bit of special care is synonyms. Synonyms get emulated in different ways depending on the specific needs — sometimes you can set up a view on the table to be referred to — but Vertica has something really neat that other databases don't have: the search path. It works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and Vertica searches for it first in the first entry of the search path, then in the second, then in the third — which makes synonyms hugely, completely unneeded.
When you generate DDL — to remain in the analogy of moving house — dust off and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, it is usually the corpse of a bad migration in the past already: an ID is usually an integer, not some floating-point data type; a first name hardly ever has 256 characters; and if a column is called HIRE_DT, it's not necessarily needed to store the second when somebody was hired. So take good care, while you are moving, to dust off your stuff and use better data types.
The same applies especially to strings. How many bytes does a string contain? For a Euro sign, it's not one: it's actually three bytes in UTF-8, which is the way Vertica encodes strings — an ASCII character is one byte, but the Euro sign takes three. That means that when you have a single-byte character set at the source, you very often have to pay attention and oversize the columns first, because otherwise data gets rejected or truncated. The most promising approach is to initially dimension strings in multiples of their original length — odb has a command option for this that will, for example, double the length of what would otherwise be single-byte characters, multiplied by the byte width of the wide characters of traditional databases — then load a representative sample of your source data, profile it with the tools we use to find the actual longest values, and then make the columns shorter again.
You might be wondering about the issues of having too long and too big data types on projection design. We live and die with our projections, and you might remember the rules on how default projections come to exist. The way we do it initially: just like for the profiling, load a representative sample of the data, collect a representative set of already-known queries, and run the Vertica Database Designer. You don't have to decide immediately — you can always amend things — and otherwise follow the laws of physics: avoid moving data back and forth across nodes, avoid heavy I/O; if you can, design your projections initially by hand. Encoding matters: you know that the Database Designer is a very tight-fisted thing — it optimizes to use as little space as possible — but you have to keep in mind that if you compress very well, you might end up spending more time reading the data back. Here is a test we ran once using several encoding types, and you can see that RLE — run-length encoding — on sorted data is barely even visible in the measurements, while the others are considerably slower. You can look at the details later; I won't go into them now.
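A small before/after sketch of the data-type cleanup just described — the "bad" table mirrors the ID / first-name / HIRE_DT example above, and all sizes are illustrative:

```sql
-- The "corpse of a bad migration": oversized and mistyped columns.
CREATE TABLE employees_bad (
    id         NUMERIC(18,4),   -- an ID stored as a numeric/floating type
    first_name VARCHAR(256),    -- wildly oversized
    hire_dt    TIMESTAMP        -- nobody needs the second somebody was hired
);

-- After profiling a representative sample of the source data.
CREATE TABLE employees (
    id         INT,
    first_name VARCHAR(40),
    hire_dt    DATE
);

-- Byte versus character length matters when sizing: in UTF-8 the Euro
-- sign occupies three bytes but is a single character.
SELECT OCTET_LENGTH('€') AS bytes, CHARACTER_LENGTH('€') AS chars;  -- 3, 1
```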
Now, about BI migrations. Usually you can expect 80% of everything to be able to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections; many BI tools have specialized query objects for the dimensions and the facts; and we have the possibility to use flattened tables, which are going to be talked about later — you might have to write those by hand. You will be able to switch off caching, because Vertica speeds up everything with live aggregate projections; and if you have worked with MOLAP cubes before, you very probably won't need them at all.
ETL tools: what you will have to do is, if you do it row by row in the old database, consider changing everything to very big transactions; and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of mass inserts. Yeah, the COPY command — that's what I have here.
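A minimal sketch of the named-pipe COPY pattern just mentioned — the path, table name and delimiter are illustrative:

```sql
-- Writers stream large batches into the pipe (created on the shell side,
-- e.g. with mkfifo); Vertica bulk-loads from it in one transaction.
-- DIRECT sends the data straight to disk (ROS), which suits big batches.
COPY sales FROM '/tmp/sales_pipe' DELIMITER '|' DIRECT;
```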
As to custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database — we compare them regularly — by far, compared to any other database. You might find that many of the functions you have written won't be needed in the new database, so look at the Vertica catalog instead of trying to migrate a function that you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have; very rarely will you have to actually write a procedure that involves a loop — in our experience it's really very, very rare — and usually you can just switch to standard scripting. And since this is basically repeating what Maurizio said, in the interest of time I will skip this slide.
Look at this one here: most of the data warehouse migration tasks should be automated. You can automate DDL migration using odb, which is crucial. Data profiling is not crucial, but game-changing. The encoding is the same thing — you can automate it using our Database Designer. The physical data model optimization in general is game-changing: you have the Database Designer. Use the provisioning, and use the old platform's tools to generate the SQL. Having no objects without their owners is crucial. And as to functions and procedures: they are only crucial if they embody the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now.
>> Thank you, Marco. We will now continue the presentation by talking about some of the Vertica optimization techniques that we can implement in order to improve the overall efficiency of the data warehouse. Let me start with a few simple messages. The first one is that you are supposed to optimize only if and when it is needed: in most cases, just the simple lift and shift from the old data warehouse to Vertica will provide the performance you were looking for, or even better, so in that case it's probably not really needed to optimize anything. If you want to optimize, or you need to optimize, then keep in mind some of the Vertica peculiarities: for example, implement deletes and updates in the Vertica way; use live aggregate projections in order to avoid — or better, to limit — the GROUP BY executions at query time; use table flattening in order to avoid or limit joins; and then you can also implement some Vertica-specific extensions, for example time series analysis or machine learning, on top of your data.
We will now start by reviewing the first of these bullets: optimize if and when needed. If, when you migrate from the old data warehouse to Vertica without any optimization, the performance level is okay from the start, then probably you don't need to touch anything. If this is not the case, one very easy optimization technique is to ask Vertica itself to optimize the physical data model, using the Vertica Database Designer. The DBD — the Vertica Database Designer — has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. With other databases you might need to hire experts to look at your data, your data warehouse, your table definitions, creating indexes or whatever; in Vertica, all you need is to run something as simple as six single SQL statements to get a very well optimized physical data model. You see that we start by creating a new design; then we add to the design the tables and the queries — the queries that we want to optimize; we set our target — in this case we are tuning the physical data model in order to maximize query performance, which is why we use the query objective; another possible goal would be to tune in order to reduce storage, or a mix between tuning storage and tuning queries; and finally, we ask Vertica to produce and deploy this optimized design. In a matter of literally minutes, what you get is a fully optimized physical data model — something very, very easy to implement.
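The six statements look roughly like this — the design name and file paths are placeholders, and exact argument lists vary a bit across Vertica versions, so treat this as a sketch rather than copy-paste:

```sql
SELECT DESIGNER_CREATE_DESIGN('mig_design');
SELECT DESIGNER_ADD_DESIGN_TABLES('mig_design', 'public.*');
SELECT DESIGNER_ADD_DESIGN_QUERIES('mig_design', '/home/dbadmin/queries.sql', TRUE);
SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('mig_design', 'QUERY');  -- or LOAD / BALANCED
SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('mig_design',
       '/tmp/design.sql', '/tmp/deploy.sql', TRUE, TRUE, FALSE, FALSE);
SELECT DESIGNER_DROP_DESIGN('mig_design');
```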
Keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations, and Vertica writes ROS containers to disk. A ROS container is a group of files, and we will never, ever change the content of these files. The fact that ROS container files are never modified is one of the Vertica peculiarities, and this approach leads to minimal locking: we can run multiple load operations in parallel against the very same table — assuming we don't have a primary or unique key constraint enforced on the target table — because the parallel loads will simply end up in different ROS containers. A SELECT in READ COMMITTED requires no lock at all and can run concurrently with an INSERT...SELECT, because the SELECT will work on a snapshot of the catalog taken when the transaction starts: this is what we call snapshot isolation. And recovery, because we never change our ROS files, is very simple and robust. So we get a huge amount of advantages from the fact that we never change the content of the ROS containers — but, on the other side, deletes and updates require a little attention.
So, what about deletes? First: when you delete in Vertica, you basically create a new object, a delete vector — it will appear a bit later in ROS, or in memory — and this vector points to the data being deleted, so that when a query is executed, Vertica will just ignore the rows listed in the delete vector. And it's not just about the delete: an update in Vertica consists of two operations, delete and insert, and a merge consists of either an insert or an update (which in turn is made of delete and insert). So basically, if we tune how the delete works, we will also have tuned the update and the merge.
What should we do in order to optimize deletes? Well, remember what we said: every time we delete, we actually create a new object, a delete vector. So avoid committing deletes and updates too often — this reduces the work for the mergeout and the other Tuple Mover activities that run afterwards. Be sure that all the interested projections contain the columns used in the delete predicate: this lets Vertica directly access the projection without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster. And finally, another very interesting optimization technique is to try to segregate the update and delete operations from the normal query workload, in order to reduce lock contention, and this can be done using partition operations. That is exactly what I want to talk about now.
Here you have a typical data warehouse architecture: data arrives in a landing zone, where it is loaded from the data sources; then a transformation layer writes into a staging area, which in turn feeds partitioned blocks of data into the green data structures we have at the end — those green data structures are the ones used by the data access tools when they run their queries. Sometimes we might need to change old data, for example because we have late records, or maybe because we want to fix some errors that originated in the feeds. What we do in this case is simply copy the partition we want to adjust from the green query area at the end back to the staging area — a very fast partition-copy operation. Then we run our updates, our adjustment procedures, whatever we need in order to fix the errors in the data, in the staging area — and at the very same time, users continue to query the green data structures at the end, so we never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap of partitions between tables, in order to swap the data we just finished adjusting in the staging zone into the query area — the green one at the end. This swap of partitions is very fast: it is an atomic operation, and basically all that happens is that we exchange the pointers to the data. This is a very, very effective technique, and a lot of customers use it.
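A sketch of the copy-fix-swap cycle just described, using Vertica's partition functions — the table names, the correction, and the partition-key range are hypothetical:

```sql
-- 1. Copy the partition to be fixed from the query table to staging.
SELECT COPY_PARTITIONS_TO_TABLE('public.fact_sales', '2020-03', '2020-03',
                                'public.fact_sales_staging');

-- 2. Run the corrections against staging while queries keep hitting
--    public.fact_sales undisturbed.
UPDATE public.fact_sales_staging SET amount = 0 WHERE amount IS NULL;
COMMIT;

-- 3. Atomically exchange the fixed partition back into the query table.
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('public.fact_sales_staging',
                                      '2020-03', '2020-03',
                                      'public.fact_sales');
```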
So, why flattened tables and live aggregate projections? Basically, we use flattened tables to minimize or avoid joins — that is what flattened tables are used for — and live aggregate projections to minimize or avoid GROUP BYs — that is what live aggregate projections are used for. Now, compared to traditional data warehouses, Vertica can store, process, aggregate and join orders of magnitude more data: it is a true columnar database, and joins and GROUP BYs are normally not a problem at all — they run faster than on any traditional data warehouse. But there are still scenarios where the datasets are so big — we are talking about petabytes of data, and quickly growing — that we need something to boost GROUP BY and join performance. And this is why you can use live aggregate projections to perform aggregations at load time and limit the need to group data at query time, and flattened tables to combine information from different entities at load time and — again — avoid running joins at query time.
OK, so: live aggregate projections. At this point in time, we can build live aggregate projections using four built-in aggregate functions: SUM, MIN, MAX and COUNT. Let's see how this works. Suppose you have a normal table — in this case a table UNIT_SOLD with three columns, PID, DTIME and QUANTITY, which has been segmented in a given way — and on top of this base table (we call it the anchor table) we create a projection. We create the projection using a SELECT that will aggregate the data: we take the PID, the date portion of DTIME, and the SUM of QUANTITY from the base table, grouping on the first two columns — PID and the date portion of DTIME.
What happens in this case when we load data into the base table? All we have to do is load data into the base table. When we do, we will of course fill the base projections: assuming we are running with K-safety 1, we will have two (buddy) projections, and we will load into those two projections all the detail data we are loading into the table — PID, DTIME and QUANTITY. But at the very same time — without having to perform any particular operation or run any ETL procedure — we will also get, automatically, in the live aggregate projection, the data pre-aggregated by PID and the date portion of DTIME, with the sum of QUANTITY, in the projection named TOTAL_QTY. You see, this is something we get for free, without having to run any specific procedure, and it is very, very efficient. The key concept is that the loading operation, from a DML point of view, is executed against the base table: we do not explicitly aggregate the data, there is no ETL — the aggregation is automatic, and Vertica brings the data to the live aggregate projection every time we load into the base table.
You see the two SELECTs we have, one on the left side, and those two SELECTs will produce exactly the same result: running SELECT PID, date, SUM(QUANTITY) against the base table, or running SELECT * from the live aggregate projection, returns exactly the same data. This is of course very useful, but what is much more useful is what we can observe if we run an EXPLAIN. If we run the SELECT against the base table asking for this grouped data, what happens behind the scenes is that Vertica sees that there is a live aggregate projection with the data already aggregated during the loading phase, and rewrites your query to use the live aggregate projection. This happens automatically: here, a query ran a GROUP BY against UNIT_SOLD, and Vertica decided to rewrite it as something to be executed against the live aggregate projection, because this saves a huge amount of time — and effort during the ETL cycle. And it is not limited to just the information you chose to aggregate: for example, another query like a SELECT COUNT(DISTINCT ...) — you might know that COUNT DISTINCT is basically a GROUP BY — will also take advantage of the live aggregate projection. Again, this happens automatically; you don't have to do anything to get it.
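In SQL, the UNIT_SOLD example looks roughly like this — the segmentation clause and the projection and column names are illustrative:

```sql
CREATE TABLE unit_sold (
    pid      INT,
    dtime    TIMESTAMP,
    quantity INT
) SEGMENTED BY HASH(pid) ALL NODES;

-- Live aggregate projection: Vertica maintains the aggregation at load time.
CREATE PROJECTION total_qty AS
    SELECT pid, DATE(dtime) AS sale_date, SUM(quantity) AS total_quantity
    FROM unit_sold
    GROUP BY pid, DATE(dtime);

-- Both queries return the same result; the optimizer rewrites the first
-- one to read the pre-aggregated projection instead of the base table.
SELECT pid, DATE(dtime), SUM(quantity) FROM unit_sold GROUP BY pid, DATE(dtime);
SELECT * FROM total_qty;
```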
If you insert data row by row, this live aggregate projection technique is not very useful, because for every row that you insert you will have one aggregation, so basically the live aggregate projection will end up containing the same number of rows that you have in the base table. But if you insert a large chunk of data every time, the number of aggregations that you will have in the live aggregate structure is much less than the base data. So this is a key concept. You can see how this works by counting the number of rows that you have in a live aggregate projection. You see that if you run the select count star from the live aggregate projection — the query on the left side — you will get four rows, but if you explain this query, you will see that it was actually reading six rows. This is because each of those two inserts that we previously ran inserted a few rows — three rows each — into the live aggregate projection. So this is a key concept: live aggregate projections keep partially aggregated data, and the final aggregation will always happen at runtime. Okay, another object which is very similar to the live aggregate projection is what we call a Top-K projection. We do not actually aggregate anything in a Top-K projection; we just keep the latest rows, or limit the amount of rows that we collect, using the LIMIT ... OVER (PARTITION BY ... ORDER BY) clause. In this case we create, on top of the base table, two Top-K projections: one to keep the last quantity that has been sold, and the other one to keep the max quantity. In both cases it is just a matter of ordering the data — in the first case using the datetime column, in the second case using quantity — and in both cases we fill the projection with just the last row. And again, this is something that happens when we insert data into the base table, and it happens automatically. If we now run, after the insert, our select against either the max quantity or the last quantity, we will get just the very latest values; you see that we have much fewer rows in the Top-K projections, as in the sketch below.
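Again the slide DDL is not in the transcript; a minimal sketch of the two Top-K projections being described, reusing the illustrative sales table from above, might look like this:

```sql
-- Keep, per product, only the most recent sale (ordered by time).
CREATE PROJECTION sales_last_qty (pid, last_time, last_quantity) AS
SELECT pid, dtime, quantity
FROM sales
LIMIT 1 OVER (PARTITION BY pid ORDER BY dtime DESC);

-- Keep, per product, only the largest sale (ordered by quantity).
CREATE PROJECTION sales_max_qty (pid, max_quantity) AS
SELECT pid, quantity
FROM sales
LIMIT 1 OVER (PARTITION BY pid ORDER BY quantity DESC);
```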
We said at the beginning that we can use four built-in functions — you might remember: min, max, sum and count. What if I want to create my own specific aggregation on top of the live aggregate projection? Our customers asked for this, because they have very specific needs in terms of live aggregate projections. Well, in this case you can code your own live aggregate projection user-defined functions: you can create a user-defined transform function to implement any sort of complex aggregation while loading data. Basically, after you have implemented this UDTF, you can deploy it using the pre-pass approach, which means the data is aggregated at loading time, during the data ingestion, or the batch approach, which means that the data is aggregated when a query runs on top of it. Things to remember on live aggregate projections: they are limited to the built-in functions — again, sum, max, min and count — but you can code your own UDTFs, so you can do whatever you want; they can reference only one table; and for Vertica versions before 9.3 it was impossible to update or delete on the anchor table. This limit has been removed in 9.3, so you can now update and delete data from the anchor table. Okay, a live aggregate projection will follow the segmentation of the GROUP BY expression, and in some cases the optimizer can decide to pick the live aggregate projection or not, depending on whether the aggregation is convenient or not. Remember that if we insert and commit every single row into the anchor table, then we will end up with a live aggregate projection that contains exactly the same number of rows as the base table; in that case, reading the live aggregate projection or using the base table would be the same. Okay, so this is one of the two fantastic techniques that we can implement in Vertica: the live aggregate projection, which basically lets us avoid or limit group-bys. The other one, which we are going to talk about now, is the flattened table, which we use in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to get this problem fixed regardless of the amount of data we are dealing with. So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is, so there is no need to repeat the related theory in this slide: the main scope of a normalized schema is to reduce data redundancy. And the fact that we reduce data redundancy is a good thing, because we obtain fast writes — we only have to write small chunks of data into the right tables. The problem with these normalized schemas is that when you run your queries, you have to put together the information that arrives from different tables, and you are required to run joins. Again, Vertica is normally very good at running joins, but sometimes the amount of data makes joins not easy to deal with, and joins are sometimes not easy to tune. What happens in the normal, let's say traditional, data warehouse is that we denormalize the schemas, normally either manually or using an ETL. So basically we have on one side — in this slide, on the left — the normalized schemas, where we get very fast writes, and on the other side, on the right, we have the wide table, where we have run all the joins and pre-aggregations in order to prepare the data for the queries. So we will have fast writes on the left and fast reads on the right side of this slide. The problem sits in the middle, because we have pushed all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. And the way we normally implement this in a traditional data warehouse is to code an ETL layer — either manually, using procedures that we code ourselves, or using ETL tools — in order to run the insert-selects that read from the normalized schema and write into the wide table, which at the end is the one used by the data access tools we are going to run our queries with. This approach is costly, because of course someone has to code this ETL; it is slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything was okay with the batch; it is resource intensive, of course, and it is also human intensive, because of the people that have to code and check the results; it is error prone, because it can fail; and it introduces a latency, because there is a gap on the time axis between the time t0, when you load the data into the normalized schema, and the time t1, when you finally get the data ready to be queried. What Vertica offers to facilitate this process is the flattened table. With the flattened table, first, you avoid data redundancy, because you don't need the wide table side by side with the normalized schema on the left; second, it is fully automatic — you don't have to do anything, you just insert the data into the wide table, and the ETL that you would otherwise have coded is transformed into an insert-select that Vertica performs for you automatically, like the one sketched below.
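For concreteness, this is the kind of hand-written batch statement a flattened table eliminates — a sketch only, with assumed table names (orders_fact, customer_dimension, orders_wide) standing in for the ones on the slide:

```sql
-- The nightly ETL step in a traditional warehouse: join the normalized
-- tables and rewrite the wide table that the reporting tools query.
INSERT INTO orders_wide (o_id, customer_id, total, o_name, o_city)
SELECT f.o_id, f.customer_id, f.total, d.name, d.city
FROM orders_fact AS f
JOIN customer_dimension AS d ON d.customer_id = f.customer_id;
```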
With a flattened table you don't have to do anything; it's robust, and the latency is zero: as soon as you load the data into the wide table, you get all the joins executed for you. So let's have a look at how it works. In this case we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that there is one column here, dimension value 1, which can be defined with DEFAULT and then the select, or with SET USING. Okay, the difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load the data into the base table; if we use SET USING, we will have to run a refresh. But everything is there — I mean, you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, and it's for free, and of course the latency is zero: as soon as you load the other columns, you will have the dimension value populated as well. Okay, let's see an example here. Suppose we have a dimension table, customer dimension, on the left side, and we have a fact table on the right. You see that the fact table uses columns like o_name or o_city, which are basically the result of a select on top of the customer dimension. So this is where the join is executed: as soon as we load data into the fact table — directly into the fact table, without of course loading the data that arrives from the dimension — all the data from the dimension will be populated automatically. So let's run an example here. Suppose that we are running this insert: as you can see, we are running the insert directly into the fact table, and we are loading o_id, customer_id and total. We are not loading name or city; name and city will be automatically populated by Vertica for you, because of the definition of the flattened table. You see how it behaves? This is all you need in order to have your wide table built for you — your flattened table — and this means that at runtime you won't need any join between the base fact table and the customer dimension that we used in order to calculate name and city, because the data is already there. This was using DEFAULT; the other option is using SET USING. The concept is absolutely the same: you see that in this case, on the right side, we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The concept, as I said, is the same, but in this case, with SET USING, we will have to refresh: you see that we have to run this select of REFRESH_COLUMNS with the name of the table — in this case all columns will be refreshed, or you can specify only certain columns — and this will bring in the values for name and city, reading from the customer dimension. So this technique is extremely useful. Just to summarize the most important difference between DEFAULT and SET USING: remember that DEFAULT will populate your target when you load, SET USING when you refresh. And in some cases you might need to use them both: in this example here, you see that we define o_name using both DEFAULT and SET USING, and this means that the data gets populated either when we load the data into the base table or when we run the refresh; a sketch of both clauses follows below.
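Here is a minimal sketch of the flattened-table DDL just described, again with assumed names; the shape of the DEFAULT and SET USING clauses is the point, not the specific columns:

```sql
-- o_name is filled at load time (DEFAULT); o_city at refresh time (SET USING).
CREATE TABLE orders_fact (
    o_id        INTEGER,
    customer_id INTEGER,
    total       NUMERIC(12,2),
    o_name VARCHAR(64) DEFAULT (
        SELECT name FROM customer_dimension
        WHERE customer_dimension.customer_id = orders_fact.customer_id),
    o_city VARCHAR(64) SET USING (
        SELECT city FROM customer_dimension
        WHERE customer_dimension.customer_id = orders_fact.customer_id)
);

-- Load only the fact columns; o_name is joined in immediately.
INSERT INTO orders_fact (o_id, customer_id, total) VALUES (1, 42, 99.90);

-- SET USING columns are populated on demand, for all or some columns.
SELECT REFRESH_COLUMNS('orders_fact', 'o_city');
```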
This is a summary of the techniques that we can implement in Vertica in order to make our data warehouse even more efficient. And, well, this is basically the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.
A Technical Overview of Vertica Architecture
>> Paige: Hello, everybody and thank you for joining us today on the Virtual Vertica BDC 2020. Today's breakout session is entitled A Technical Overview of the Vertica Architecture. I'm Paige Roberts, Open Source Relations Manager at Vertica and I'll be your host for this webinar. Now joining me is Ryan Role-kuh? Did I say that right? (laughs) He's a Vertica Senior Software Engineer. >> Ryan: So it's Roelke. (laughs) >> Paige: Roelke, okay, I got it, all right. Ryan Roelke. And before we begin, I want to be sure and encourage you guys to submit your questions or your comments during the virtual session while Ryan is talking as you think of them as you go along. You don't have to wait to the end, just type in your question or your comment in the question box below the slides and click submit. There'll be a Q and A at the end of the presentation and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to get back to you offline. Now, alternatively, you can visit the Vertica forums to post your question there after the session as well. Our engineering team is planning to join the forums to keep the conversation going, so you can have a chat afterwards with the engineer, just like any other conference. Now also, you can maximize your screen by clicking the double arrow button in the lower right corner of the slides and before you ask, yes, this virtual session is being recorded and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now, let's get started. Over to you, Ryan. >> Ryan: Thanks, Paige. Good afternoon, everybody. My name is Ryan and I'm a Senior Software Engineer on Vertica's Development Team. I primarily work on improving Vertica's query execution engine, so usually in the space of making things faster. Today, I'm here to talk about something that's more general than that, so we're going to go through a technical overview of the Vertica architecture. So the intent of this talk, essentially, is to just explain some of the basic aspects of how Vertica works and what makes it such a great database software and to explain what makes a query execute so fast in Vertica, we'll provide some background to explain why other databases don't keep up. And we'll use that as a starting point to discuss an academic database that paved the way for Vertica. And then we'll explain how Vertica design builds upon that academic database to be the great software that it is today. I want to start by sharing somebody's approximation of an internet minute at some point in 2019. All of the data on this slide is generated by thousands or even millions of users and that's a huge amount of activity. Most of the applications depicted here are backed by one or more databases. Most of this activity will eventually result in changes to those databases. For the most part, we can categorize the way these databases are used into one of two paradigms. First up, we have online transaction processing or OLTP. OLTP workloads usually operate on single entries in a database, so an update to a retail inventory or a change in a bank account balance are both great examples of OLTP operations. Updates to these data sets must be visible immediately and there could be many transactions occurring concurrently from many different users. OLTP queries are usually key value queries. The key uniquely identifies the single entry in a database for reading or writing. 
Early databases and applications were probably designed for OLTP workloads. This example on the slide is typical of an OLTP workload. We have a table, accounts, such as for a bank, which tracks information for each of the bank's clients. An update query, like the one depicted here, might be run whenever a user deposits $10 into their bank account. Our second category is online analytical processing, or OLAP, which is more about using your data for decision making. If you have a hardware device which periodically records how it's doing, you could analyze trends of all your devices over time to observe what data patterns are likely to lead to failure, or if you're Google, you might log user search activity to identify which links helped your users find the answer. Analytical processing has always been around, but with the advent of the internet, it happened at scales that were unimaginable even just 20 years ago. This SQL example is something you might see in an OLAP workload. We have a table, searches, logging user activity. We will eventually see one row in this table for each query submitted by users. If we want to find out what time of day our users are most active, then we could write a query like this one on the slide, which counts the number of unique users running searches for each hour of the day; both styles are sketched below.
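The queries on the slides are not reproduced in the transcript, so the following is a plausible reconstruction of the two workloads; the table and column names (accounts, searches, user_id, query_time) are assumptions for illustration:

```sql
-- OLTP: a point update to a single row, located by key, visible immediately.
UPDATE accounts
SET balance = balance + 10
WHERE account_id = 12345;

-- OLAP: scan many rows and aggregate to find when users are most active.
SELECT DATE_PART('hour', query_time) AS hour_of_day,
       COUNT(DISTINCT user_id)       AS unique_users
FROM searches
GROUP BY DATE_PART('hour', query_time)
ORDER BY hour_of_day;
```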
So now let's rewind to 2005. We don't have a picture of an internet minute in 2005; we don't have the data for that. We also don't have the data for a lot of other things. The term Big Data is not quite yet on anyone's radar, and The Cloud is also not quite there, or it's just starting to be. So if you have a database serving your application, it's probably optimized for OLTP workloads. OLAP workloads just aren't mainstream yet, and database engineers probably don't have them in mind. So let's innovate. It's still 2005 and we want to try something new with our database. Let's take a look at what happens when we do run an analytic workload in 2005. Let's use as a motivating example a table of stock prices over time. In our table, the symbol column identifies the stock that was traded, the price column identifies the new price, and the timestamp column indicates when the price changed. We have several other columns which, we should know that they're there, but we're not going to use them in any example queries. This table is designed for analytic queries. We're probably not going to make any updates or look at individual rows, since we're logging historical data and want to analyze changes in stock price over time. Our database system is built to serve OLTP use cases, so it's probably going to store the table on disk in a single file like this one. Notice that each row contains all of the columns of our data in row major order. There's probably an index somewhere in the memory of the system which will help us do point lookups. Maybe our system expects that we will use the stock symbol and the trade time as lookup keys. So an index will provide quick lookups for those columns to the position of the whole row in the file. If we did have an update to a single row, then this representation would work great. We would seek to the row that we're interested in — finding it would probably be very fast using the in-memory index — and then we would update the file in place with our new value. On the other hand, if we ran an analytic query like we want to, the data access pattern is very different. The index is not helpful because we're looking up a whole range of rows, not just a single row. As a result, the only way to find the rows that we actually need for this query is to scan the entire file. We're going to end up scanning a lot of data that we don't need, and it won't just be the rows that we don't need: there are many other columns in this table, with information about who made the transaction, and we'll also be scanning through those columns for every single row in this table. That could be a very serious problem once we consider the scale of this file. Stocks change a lot; we probably have thousands or millions or maybe even billions of rows that are going to be stored in this file, and we're going to scan all of these extra columns for every single row. If we tried out our stocks use case behind the desk of a Fortune 500 company, then we're probably going to be pretty disappointed. Our queries will eventually finish, but it might take so long that we don't even care about the answer anymore by the time that they do. Our database is not built for the task we want to use it for. Around the same time, a team of researchers in the North East had become aware of this problem, and they decided to dedicate their time and research to it. These researchers weren't just anybody. The fruits of their labor, which we now like to call the C-Store Paper, was published by eventual Turing Award winner Mike Stonebraker, along with several other researchers from elite universities. This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. That sounds exactly like what we want for our stocks use case. Reasoning about what makes our query's execution so slow brought our researchers to the Memory Hierarchy, which essentially is a visualization of the relative speeds of different parts of a computer. At the top of the hierarchy, we have the fastest data units, which are, of course, also the most expensive to produce. As we move down the hierarchy, components get slower but also much cheaper, and thus you can have more of them. Our OLTP database's data is stored in a file on the hard disk. We scanned the entirety of this file, even though we didn't need most of the data, and now it turns out that is just about the slowest thing that our query could possibly be doing, by over two orders of magnitude. It should be clear, based on that, that the best thing we can do to optimize our query's execution is to avoid reading unnecessary data from the disk, and that's what the C-Store researchers decided to look at. The key innovation of the C-Store paper does exactly that. Instead of storing data in row major order, in a large file on disk, they transposed the data and stored each column in its own file. Now, if we run the same select query, we read only the relevant columns. The unnamed columns don't factor into the table scan at all, since we don't even open the files. Zooming out to an internet scale sized data set, we can appreciate the savings here a lot more. But we still have to read a lot of data that we don't need to answer this particular query. Remember, we had two predicates, one on the symbol column and one on the timestamp column. Our query is only interested in AAPL stock, but we're still reading rows for all of the other stocks. So what can we do to optimize our disk read even more? Let's first partition our data set into different files based on the timestamp date. This means that we will keep separate files for each date.
When we query the stocks table, the database knows all of the files we have to open. If we have a simple predicate on the timestamp column, as our sample query does, then the database can use it to figure out which files we don't have to look at at all. So now all of the disk reads we have to do to answer our query will produce rows that pass the timestamp predicate. This eliminates a lot of wasteful disk reads. But not all of them. We do have another predicate on the symbol column, where symbol equals AAPL. We'd like to avoid disk reads of rows that don't satisfy that predicate either. And we can avoid those disk reads by clustering all the rows that match the symbol predicate together. If all of the AAPL rows are adjacent, then as soon as we see something different, we can stop reading the file. We won't see any more rows that can pass the predicate. Then we can use the positions of the rows we did find to identify which pieces of the other columns we need to read. One technique that we can use to cluster the rows is sorting. So we'll use the symbol column as a sort key for all of the columns. And that way we can reconstruct a whole row by seeking to the same row position in each file. It turns out, having sorted all of the rows, we can do a bit more. We don't have any more wasted disk reads, but we can still be more efficient with how we're using the disk. We've clustered all of the rows with the same symbol together, so we don't really need to bother repeating the symbol so many times in the same file. Let's just write the value once and say how many rows we have. This run-length encoding technique can compress large numbers of rows into a small amount of space. In this example, we de-duplicate just a few rows, but you can imagine de-duplicating many thousands of rows instead. This encoding is great for reducing the amount of disk we need to read at query time, but it also has the additional benefit of reducing the total size of our stored data. Now our query requires substantially fewer disk reads than it did when we started. Let's recap what the C-Store paper did to achieve that. First, we transposed our data to store each column in its own file. Now, queries only have to read the columns used in the query. Second, we partitioned the data into multiple file sets so that all rows in a file have the same value for the partition column. Now, a predicate on the partition column can skip non-matching file sets entirely. Third, we selected a column of our data to use as a sort key. Now rows with the same value for that column are clustered together, which allows our query to stop reading data once it finds non-matching rows. Finally, sorting the data this way enables high compression ratios, using run-length encoding, which minimizes the size of the data stored on the disk. The C-Store system combined each of these innovative ideas to produce an academically significant result. And if you used it behind the desk of a Fortune 500 company in 2005, you probably would've been pretty pleased. But it's not 2005 anymore, and the requirements of a modern database system are much stricter. So let's take a look at how C-Store fares in 2020. First of all, we have designed the storage layer of our database to optimize a single query in a single application. Our design optimizes the heck out of that query, and probably some similar ones, but if we want to do anything else with our data, we might be in a bit of trouble. What if we just decide we want to ask a different question?
For example, in our stock example, what if we want to plot all the trades made by a single user over a large window of time? How do our optimizations for the previous query measure up here? Well, our data's partitioned on the trade date; that could still be useful, depending on our new query. If we want to look at a trader's activity over a long period of time, we would have to open a lot of files. But if we're still interested in just a day's worth of data, then this optimization is still an optimization. Within each file, our data is ordered on the stock symbol. That's probably not too useful anymore; the rows for a single trader aren't going to be clustered together, so we will have to scan all of the rows in order to figure out which ones match. You could imagine a worse design, but as it becomes crucial to optimize this new type of query, we might have to go as far as reconfiguring the whole database. The next problem is one of scale. One server is probably not good enough to serve a database in 2020. C-Store, as described, runs on a single server and stores lots of files. What if the data overwhelms this small system? We could imagine exhausting the file system's inode limit with lots of small files due to our partitioning scheme. Or we could imagine something simpler, just filling up the disk with huge volumes of data. But there's an even simpler problem than that. What if something goes wrong and C-Store crashes? Then our data is no longer available to us until the single server is brought back up. A third concern, another one of scalability, is that one deployment does not really suit all of the use cases we could imagine. We haven't really said anything about being flexible. A contemporary database system has to integrate with many other applications, which might themselves have pretty restricted deployment options. Or the demands imposed by our workloads have changed, and the setup you had before doesn't suit what you need now. C-Store doesn't do anything to address these concerns. What the C-Store paper did do was lead very quickly to the founding of Vertica. Vertica's architecture and design are essentially all about bringing the C-Store designs into an enterprise software system. The C-Store paper was just an academic exercise, so it didn't really need to address any of the hard problems that we just talked about. But Vertica, the first commercial database built upon the ideas of the C-Store paper, would definitely have to. This brings us back to the present, to look at how an analytic query runs in 2020 on the Vertica Analytic Database. Vertica takes the key idea from the paper — can we significantly improve query performance by changing the way our data is stored? — and gives its users the tools to customize their storage layer in order to heavily optimize really important or commonly run queries. On top of that, Vertica is a distributed system, which allows it to scale up to internet-sized data sets, as well as have better reliability and uptime. We'll now take a brief look at what Vertica does to address the three inadequacies of the C-Store system that we mentioned. To avoid locking into a single database design, Vertica provides tools for the database user to customize the way their data is stored. To address the shortcomings of a single node system, Vertica coordinates processing among multiple nodes.
To acknowledge the large variety of desirable deployments, Vertica does not require any specialized hardware and has many features which smoothly integrate it with a Cloud computing environment. First, we'll look at the database design problem. We're a SQL database, so our users are writing SQL and describing their data in the SQL way, with the Create Table statement. Create Table is a logical description of what your data looks like, but it doesn't specify the way that it has to be stored. For a single Create Table, we could imagine a lot of different storage layouts. Vertica adds some extensions to SQL so that users can go even further than Create Table and describe the way that they want the data to be stored. Using terminology from the C-Store paper, we provide the Create Projection statement. Create Projection specifies how table data should be laid out, including column encoding and sort order. A table can have multiple projections, each of which could be ordered on different columns. When you query a table, Vertica will answer the query using the projection which it determines to be the best match. Referring back to our stock example, here's a sample Create Table and Create Projection statement. Let's focus on our heavily optimized example query, which had predicates on the stock symbol and date. We specify that the table data is to be partitioned by date. The Create Projection statement here is excellent for this query. We specify, using the order by clause, that the data should be ordered according to our predicates. We'll use the timestamp as a secondary sort key. Each projection stores a copy of the table data. If you don't expect to need a particular column in a projection, then you can leave it out. Our average price query didn't care about who did the trading, so maybe our projection design for this query can leave the trader column out entirely. If the question we want to ask ever does change, maybe we already have a suitable projection, but if we don't, then we can create another one. This example shows another projection which would be much better at identifying trends of traders, rather than identifying trends for a particular stock; a sketch of this kind of DDL follows below.
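The statements themselves are on the slides, not in the transcript, so the following is a sketch of the pattern described; column names and sizes are assumptions:

```sql
CREATE TABLE stocks (
    symbol VARCHAR(10),
    price  NUMERIC(10,2),
    ts     TIMESTAMP,
    trader VARCHAR(40)   -- plus the other columns we never query here
)
PARTITION BY ts::DATE;

-- Projection tuned for the average-price query: sorted on the predicate
-- columns, with run-length encoding on the low-cardinality sort key, and
-- the unused trader column left out entirely.
CREATE PROJECTION stocks_by_symbol (
    symbol ENCODING RLE,
    price,
    ts
) AS
SELECT symbol, price, ts
FROM stocks
ORDER BY symbol, ts;
```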
Next, let's take a look at our second problem — or, excuse me — how should you decide what design is best for your queries? Well, you could spend a lot of time figuring it out on your own, or you could use Vertica's Database Designer tool, which will help you by automatically analyzing your queries and spitting out a design which it thinks is going to work really well. If you want to learn more about the Database Designer tool, then you should attend the session Vertica Database Designer: Today and Tomorrow, which will tell you a lot about what the Database Designer does and some recent improvements that we have made. Okay, now we'll move to our next problem. (laughs) The challenge that one server does not fit all. In 2020, we have several orders of magnitude more data than we had in 2005, and you need a lot more hardware to crunch it. It's not tractable to keep multiple petabytes of data in a system with a single server. So Vertica doesn't try. Vertica is a distributed system, so we'll deploy multiple servers which work together to maintain such a high data volume. In a traditional Vertica deployment, each node keeps some of the data in its own locally-attached storage. Data is replicated so that there is a redundant copy somewhere else in the system. If any one node goes down, then the data that it served is still available on a different node. We'll also have it so that in the system, there's no special node with extra duties. All nodes are created equal. This ensures that there is no single point of failure. Rather than replicate all of your data, Vertica divvies it up amongst all of the nodes in your system. We call this segmentation. The way data is segmented is another parameter of storage customization, and it can definitely have an impact upon query performance. A common way to segment data is by using a hash expression, which essentially randomizes the node that a row of data belongs to, but with a guarantee that the same data will always end up in the same place. Describing the way data is segmented is another part of the Create Projection statement, as seen in this example, sketched below as well. Here we segment on the hash of the symbol column, so all rows with the same symbol will end up on the same node. For each row that we load into the system, we'll apply our segmentation expression. The result determines which segment the row belongs to, and then we'll send the row to each node which holds the copy of that segment. In this example, our projection is marked KSAFE 1, so we will keep one redundant copy of each segment. When we load a row, we might find that its segment has copies on Node One and Node Three, so we'll send a copy of the row to each of those nodes. If Node One is temporarily disconnected from the network, then Node Three can serve the other copy of the segment, so that the whole system remains available.
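Again as a sketch rather than the slide's exact text, the segmentation clause described here hangs off the same Create Projection statement:

```sql
-- Spread rows across all nodes by hashing the symbol, and keep one
-- redundant (buddy) copy of every segment for fault tolerance.
CREATE PROJECTION stocks_segmented AS
SELECT symbol, price, ts
FROM stocks
ORDER BY symbol, ts
SEGMENTED BY HASH(symbol) ALL NODES KSAFE 1;
```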
The last challenge we brought up from the C-Store design was that one deployment does not fit all. Vertica's cluster design neatly addresses many of our concerns here. Our use of segmentation to distribute data means that a Vertica system can scale to any size of deployment. And since we lack any special hardware or nodes with special purposes, Vertica servers can run anywhere, on premise or in the Cloud. But let's suppose you need to scale out your cluster to rise to the demands of a higher workload. Suppose you want to add another node. This changes the division of the segmentation space. We'll have to re-segment every row in the database to find its new home, and then we'll have to move around any data that belongs to a different segment. This is a very expensive operation, not something you want to be doing all that often. Traditional Vertica doesn't solve that problem especially well, but Vertica Eon Mode definitely does. Vertica's Eon Mode is a large set of features which are designed with a Cloud computing environment in mind. One feature of this design is elastic throughput scaling, which is the idea that you can smoothly change your cluster size without having to pay the expenses of shuffling your entire database. Vertica Eon Mode had an entire session dedicated to it this morning. I won't say any more about it here, but maybe you already attended that session, or if you haven't, then I definitely encourage you to listen to the recording. If you'd like to learn more about the Vertica architecture, then you'll find on this slide links to several of the academic conference publications: these four papers here, as well as the Vertica Seven Years Later paper, which describes some of the Vertica designs seven years after the founding, and also a paper about the innovations of Eon Mode. And of course, the Vertica documentation is an excellent resource for learning more about what's going on in a Vertica system. I hope you enjoyed learning about the Vertica architecture. I would be very happy to take all of your questions now. Thank you for attending this session.
Migrating Your Vertica Cluster to the Cloud
>> Jeff: Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's break-out session has been titled, "Migrating Your Vertica Cluster to the Cloud." I'm Jeff Healey, and I'm in Vertica marketing. I'll be your host for this break-out session. Joining me here are Sumeet Keswani and Chris Daly, Vertica product technology engineers and key members of our customer success team. Before we begin, I encourage you to submit questions and comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer them offline. And alternatively, you can visit Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Sumeet. >> Sumeet: Thank you, Jeff. Hello everyone, my name is Sumeet Keswani, and I will be talking about planning to deploy or migrate your Vertica cluster to the Cloud. So you may be moving an on-prem cluster or setting up a new cluster in the Cloud, and there are several design and operational considerations that will come into play. Some of these are cost, which industry you are in, or what expertise you have in which Cloud platform. And there may be a personal preference, too. After that, there will be some operational considerations, like VM and cluster sizing, and what Vertica mode you want to deploy, Eon or Enterprise — it depends on your use case. What are the DevOps skills available, what elasticity and separation you need, what is your backup and DR strategy, what do you want in terms of high availability. And you will have to think about how much data you have and where it's going to live. In order to understand the cost and the benefit of a deployment, you will have to understand the access patterns, and how you are moving data from and to the Cloud. So, things to consider before you move a Vertica deployment to the Cloud. One thing to keep in mind is that virtual CPUs, or CPUs in the Cloud, are not the same as the usual CPUs that you've been familiar with in your data center. A vCPU is half of a CPU, because of hyperthreading. There is definitely the noisy neighbor effect: depending on what other things are hosted in the Cloud environment, you may occasionally see performance issues. There are I/O limitations on the instance that you provision, and what that really means is that you can't always scale up — you might have to scale out, basically, adding more instances rather than getting bigger or right-sized instances. Finally, there is an important distinction here: virtualization is not free. There can be significant overhead to virtualization — it could be as much as 30% — so when you size and scale your clusters, you must keep that in mind.
Now the other important aspect is that where you put your Vertica cluster is important: the choice of the region, how far it is from your various office locations, and where the data will live with respect to the cluster. And remember, popular locations can fill up, so if you want to scale out, additional capacity may or may not be available. These are things you have to keep in mind when picking or choosing your Cloud platform and your deployment. So at this point, I want to make a plug for Eon mode. Eon mode is the latest mode, a Cloud mode from Vertica. It has been designed with Cloud economics in mind. It uses shared storage, which is durable, available, and very cheap, like S3 storage or Google Cloud storage. It has been designed for quick scaling, like scale out, and highly elastic deployments. It has also been designed for high workload isolation, where each application or user group can be isolated from the other ones, so that they can be billed and monitored separately, without affecting each other. But there are some disadvantages, or perhaps, you know, there's a cost for using Eon mode. Accessing data in S3 is neither cheap nor efficient: there is a high latency of I/O when accessing data from S3, and there are API and data access costs associated with accessing your data in S3. Vertica in Eon mode has a pay-as-you-go model, which works for some people and does not work for others, and so it is important to keep that in mind. And performance can be a little bit variable here, because it depends on cache — it depends on the local depot, which is a cache — and it is not as predictable as EE mode, so that's another trade-off. So let's spend about a minute and see what a Vertica cluster in Eon mode looks like. A Vertica cluster in Eon mode has S3 as the durability layer where all the data sits. There are subclusters, which are essentially just groups of separated compute, which will service different workloads. So in this example, you may have two subclusters, one servicing the ETL workload and the other one servicing (mic interference obscures speaking). These subclusters are isolated, and they do not affect each other's performance. This allows you to scale them independently and isolate workloads. So this is the new Vertica Eon mode, which has been specifically designed by us for use in the Cloud. But beyond this, you can use EE mode or Eon mode in the Cloud; it really depends on what your use case is. Both of these are possible, and we highly recommend Eon mode wherever possible. Okay, let's talk a little bit about what we mean by Vertica support in the Cloud. Now as you know, a Cloud is a shared data center, right? Performance in the Cloud can vary. It can vary between regions, availability zones, time of the day, choice of instance type, what concurrency you use, and of course the noisy neighbor effect. You know, we in Vertica, we performance, load, and stress test our product before every release. We have a bunch of use cases, we go through all of them, make sure that we haven't regressed any performance, and make sure that it works up to standards and gives you the high performance that you've come to expect. However, your solution or your workload is unique to you, and it is still your responsibility to make sure that it is tuned appropriately. To do this, one of the easiest things you can do is pick a tested operating system and allocate a virtual machine with enough resources.
It's something that we recommend, because we have tested it thoroughly, and it goes a long way in giving you predictability. So after this, I would like to go into the various Cloud platforms that Vertica has worked on. I'll start with AWS, and my colleague Chris will speak about Azure and GCP, and our thoughts going forward. So without further ado, let's start with the Amazon Web Services platform. So this is Vertica running on the Amazon Web Services platform. As you probably are all aware, Amazon Web Services is the market leader in this space, and indeed really the biggest platform by far, and they have been here for a very long time. And Vertica has a deep integration in the Amazon Web Services space. We provide a marketplace offering which has both a pay-as-you-go and a bring-your-own-license model. We have many knowledge base articles, best practices, scripts, and resources that help you configure and use a Vertica database in the Cloud. We have had several customers in the Cloud for many, many years now, and we have managed and console-based point-and-click deployments for ease of use in the Cloud. So Vertica has a deep integration in the Amazon space, and has been there for quite a bit now, so we have accumulated a lot of experience here. So let's talk about sizing on AWS. Sizing on any platform comes down to these four or five different things: picking the right instance type, picking the right disk volume and type, tuning and optimizing your networking, and finally some operational concerns like security, maintainability, and backup. So let's go into each one of these in the AWS ecosystem. The choice of instance type is one of the important choices that you will make. In Eon mode, you don't really need persistent disk; you should probably choose ephemeral disk, because it gives you extra speed and it comes with the instance type. We highly recommend the i3.4x instance types, which are very economical and have a big, 4 terabyte depot or cache per node. The i3.metal is similar to the i3.4, but has got significantly better performance, for those subclusters that need this extra oomph. The i3.2 is good for scale-out of small ad hoc clusters; they have a smaller cache and lower performance, but are cheap enough to use very indiscriminately. If you are in EE mode, well, we don't use S3 as the layer of durability there; your local volumes are where we persist the data. Hence you do need an EBS volume in EE mode. In order to make sure that the instance or the deployment is manageable, you might have to use some sort of a software RAID array over the EBS volumes. The most common instance types you see in EE mode are the r4.4x, the c4, or the m4 instance types. And then of course, for temp space and depot, we always recommend instance volumes; they're just much faster. Okay. So let's talk about optimizing or tuning your network. The best thing you can do about tuning your network, especially in Eon mode but in other modes too, is to get a VPC S3 endpoint. This is essentially a route table that makes sure that all traffic between your cluster and S3 goes over an internal fabric. This makes it much faster, and you don't pay for egress cost, especially if you're doing external tables or using communal storage, but you do need to create it. Many times people will forget doing it, so you really do have to create it. And best of all, it's free.
It doesn't cost you anything extra; you just have to create it during cluster creation time, and there's a significant performance difference when using it. The next thing about tuning your network is sizing it correctly. Pick the geographical region closest to where you'll consume the data, and pick the right availability zone. We highly recommend using cluster placement groups; in fact, they are required for the stability of the cluster. A cluster placement group essentially implements the notion of a rack: nodes in a cluster placement group are physically closer to each other than they would otherwise be, and this allows a 10 Gbps, bidirectional, TCP/IP flow between the nodes. This makes sure that you get a high number of gigabits per second. As you probably are all aware, the Cloud does not support broadcast, or UDP broadcast. Hence you must use point-to-point UDP for spread in the Cloud, or in AWS. Beyond that, point-to-point UDP does not scale very well beyond 20 nodes, so as your cluster sizes increase, you must switch over to large cluster mode. And finally, use instances with enhanced networking or SR-IOV support. Again, it's free; it comes with the choice of the instance type and the operating system. We highly recommend it; it makes a big difference in terms of how your workload will perform. So let's talk a little bit about security, configuration, and orchestration. As I said, we provide CloudFormation scripts for ease of deployment, and you can use the MC for point and click. With regard to security, Vertica does support instance profiles out of the box in Amazon, and we recommend you use them. This is highly desirable, so that you're not passing access keys and secret keys around. If you use our marketplace image, we have picked the latest operating systems, we have patched them, and Amazon actually validates everything on the marketplace and scans it for security vulnerabilities. So you get that for free. We do some basic configuration, like disabling root ssh access, disallowing any password access, and turning on encryption. And we run a basic set of security checks to make sure that the image is secure. Of course, it could be made more secure, but we try to balance out security, performance, and convenience. And finally, let's talk about backups. Especially in Eon mode I get the question, "Do we really need to back up our system, since the data is in S3?" And the answer is yes, you do. Because S3's not going to protect you against an accidental drop table. S3 has a finite amount of reliability, durability, and availability, and you may want to be able to restore data differently. Also, backups are important if you're doing DR, or if you have an additional cluster in a different region; the other cluster can be considered a backup. And finally, why not create a backup or a disaster recovery cluster — storage is cheap in the Cloud, so we highly recommend you use it. So with this, I would like to hand it over to my colleague Christopher Daly, who will talk about the other two platforms that we support, that is, Google and Azure. Over to you, Chris, thank you. >> Chris: Thanks, Sumeet, and hi everyone. So while there's no argument that we here at Vertica have a long history of running within the Amazon Web Services space, there are other alternative Cloud service providers where we do have a presence, such as Google Cloud Platform, or GCP.
For those of you who are unfamiliar with GCP, it's considered the third-largest Cloud service provider in the marketspace, and it's priced very competitively to its peers. It has a lot of similarities to AWS in the products and services that it offers, but it tends to be the go-to place for newer businesses or startups. We officially started supporting GCP a little over a year ago with our first entry into their GCP marketplace, a solution that deployed a fully-functional and ready-to-use Enterprise mode cluster. We followed up on that with the release and the support of Google storage buckets, and now I'm extremely pleased to announce that with the launch of Vertica 10, we're officially supporting Eon mode architecture in GCP as well. But that's not all, as we're adding additional offerings into the GCP marketplace. With the launch of version 10 we'll be introducing a second listing in the marketplace that allows for the deployment of an Eon mode cluster, all driven by our own Management Console. This will allow customers to quickly spin up Eon-based clusters within the GCP space. And if that wasn't enough, I'm also pleased to tell you that very soon after the launch we're going to be offering Vertica by the hour in GCP as well. And while we've done a lot to automate the solutions coming out of the marketplace, we recognize the simple fact that for a lot of you, building your cluster manually is really the only option. So with that in mind, let's talk about the things you need to understand in GCP to get that done. So stop me if you think this slide looks familiar. Well nope, it's not an erroneous duplicate slide from Sumeet's AWS section, it's merely an acknowledgement of all the things you need to consider for running Vertica in the Cloud. In Vertica, the choice of the operational mode will dictate some of the choices you'll need to make in the infrastructure, particularly around storage. Just like with on-prem solutions, you'll need to understand the disk and networking capacities to get the most out of your cluster. And one of the most attractive things in GCP is the pricing, as it tends to run a little less than the others, but it does translate into fewer choices and options within the environment. If nothing else, I want you to take one thing away from this slide — and Sumeet said this about AWS earlier: VMs running in the GCP space run on top of hardware that has hyperthreading enabled, and a vCPU doesn't equate to a core, but rather to a processing thread. This becomes particularly important if you're moving from an on-prem environment into the Cloud, because a physical Vertica node with 32 cores is not the same thing as a VM with 32 vCPUs. In fact, with 32 vCPUs, you're only getting about 16 cores' worth of performance. GCP does offer a handful of VM types, which they categorize by letter, but for us, most of these don't make great choices for Vertica nodes. The M series, however, does offer a good core-to-memory ratio, especially when you're looking at the high-mem variants. Also keep in mind that performance in I/O, such as network and disk, is partially dependent on the VM size, so customers in the GCP space should be focusing on 16 vCPU VMs and above for their Vertica nodes. Disk options in GCP can be broken down into two basic types: persistent disks, and local disks, which are ephemeral.
For Vertica in Eon mode, we recommend that customers use persistent SSD disks for the catalog, and either local SSD disks or persistent SSD disks for the depot and the temp space. A couple of things to think about here, though. Persistent disks are provisioned as a single device with a settable size. Local disks are provisioned as multiple disk devices with a fixed size, requiring you to use some kind of software RAIDing to create a single storage device. So while local SSD disks provide much more throughput, you're using CPU resources to maintain that RAID set — so it's a little bit of a trade-off. Persistent disks offer redundancy, either within the zone they exist in or within the region, and if you're selecting regional redundancy, the disks are replicated across multiple zones in the region. This does have an effect on the performance of the VM, so we don't recommend it. What we do recommend is zonal redundancy when you're using persistent disks, as it gives you that redundancy level without actually affecting the performance. Remember also, in the Cloud space, all I/O is network I/O, as disks are basically block storage devices. This means that disk actions can and will slow down network traffic. And finally, the storage bucket access in GCP is based on GCP interoperability mode, which means that it's basically compliant with the AWS S3 API. In interoperability mode, access to the bucket is granted by a key pair that GCP refers to as HMAC keys. HMAC keys can be generated for individual users or for service accounts. We recommend that when you're creating HMAC keys, you choose a service account, to ensure that the keys are not tied to a single employee. When thinking about storage for Enterprise mode, things change a little bit. We still recommend persistent SSD disks over standard ones. However, the use of local SSD disks for anything other than temp space is highly discouraged. I said it before: local SSD disks are ephemeral, meaning that the data's lost if the machine is turned off or goes down. So not really a place you want to store your data. In GCP, multiple persistent disks placed into a software RAID set do not create more throughput like you can find in other Clouds. The I/O saturation usually hits the VM limit long before it hits the disk limit. In fact, the performance of a persistent disk is determined not just by the size of the disk, but also by the size of the VM. So a good rule of thumb in GCP to maximize your I/O throughput for persistent disks: the size tends to max out at two terabytes for SSDs and 10 terabytes for standard disks. Network performance in GCP can be thought of in two distinct ways: there's node-to-node traffic, and then there's egress traffic. Node-to-node performance in GCP is really good within the zone, with typical traffic between nodes falling in the 10-15 gigabits per second range. This might vary a little from zone to zone and region to region, but usually it's only limited by the existing traffic where the VMs exist — so, kind of a noisy neighbor effect. Egress traffic from a VM, however, is subject to throughput caps, and these are based on the size of the VM. So the speed is set by the number of vCPUs in the VM, at two gigabits per second per vCPU, and tops out at 32 gigabits per second.
So some things to consider in the networking space for your Vertica cluster. Pick a region that's physically close to you, even if you're connecting to the GCP network from a corporate LAN as opposed to the internet. The further the packets have to travel, the longer it's going to take. Also, GCP, like most Clouds, doesn't support UDP broadcast traffic on their virtual networks, so you do have to use the point-to-point flag for spread when you're creating your cluster. And since the network cap on VMs tops out at 32 gigabits per second per VM, which at two gigabits per second per vCPU is exactly what a 16 vCPU VM reaches, to maximize your network egress throughput, don't use VMs that are smaller than 16 vCPUs for your Vertica nodes. And that gets us to the one question I get asked the most often: how do I get my data into and out of the Cloud? Well, GCP offers many different methods to support different speeds and different price points for data ingress and egress. There's the obvious one, right, across the internet, either directly to the VMs or into the storage bucket, or you can, you know, light up a VPN tunnel to encrypt all that traffic. But additionally, GCP offers direct network interconnects from your corporate network. These get provided either by Google or by a partner, and they vary in speed. They also offer things called direct or carrier peering, which is connecting the edges of the networks between your network and GCP, and you can use a CDN interconnect, which creates, I believe, an on-demand connection from your network to the GCP network, provided by a large host of CDN service providers. So GCP offers a lot of ways to move your data around, in and out of the GCP Cloud. It's really a matter of what price point works for you, and what technology your corporation is looking to use. So we've talked about AWS, we've talked about GCP, and it really only leaves one more Cloud. So last, but by far not the least, there's the Microsoft Azure environment. Holding on strong to the number two place in the major Cloud providers, Azure offers a very robust Cloud offering that's attractive to customers that already consume services from Microsoft. But what you need to keep in mind is that the underlying foundation of their Cloud is based on the Microsoft Windows products, and this makes their Cloud offering a little bit different in the services and offerings that they have. The good news here, though, is that Microsoft has done a very good job of getting their virtualization drivers baked into the modern kernels of most Linux operating systems, making running Linux-based VMs in Azure fairly seamless. So here's the slide again, but now you're going to notice some slight differences. First off, in Azure we only support Enterprise mode. This is because the Azure storage product is very different from Google Cloud storage and S3 on AWS. So while we're working on getting this supported, and we're starting to focus on this, we're just not there yet. This means that since we're only supporting Enterprise mode in Azure, getting the local disk performance right is one of the keys to success of running Vertica here, with the other major key being making sure that you're getting the appropriate networking speeds. Overall, Azure's a really good platform for Vertica, and its performance and pricing are very much on par with AWS. But keep in mind that the newer versions of the Linux operating systems like RHEL and CentOS run much better here than the older versions.
Okay, so first things first again. Just like GCP, in Azure VMs are running on top of hardware that has hyperthreading enabled. And because of the way Hyper-V, Azure's virtualization engine, works, you can actually see this, right? So if you look down into the CPU information of the VM, you'll actually see how it groups the vCPUs by core and by thread. Azure offers a lot of VM types, and is adding new ones all the time, but for us, we see three VM types that make the most sense for Vertica. For customers that are looking to run production workloads in Azure, the Es_v3 and the Ls_v2 series are the two main recommendations. While they differ slightly in the CPU to memory ratio and the I/O throughput, the Es_v3 series is probably the best recommendation for a generalized Vertica node, with the Ls_v2 series being recommended for workloads with higher I/O requirements. If you're just looking to deploy a sandbox environment, the Ds_v3 series is a very suitable choice that really can reduce your overall Cloud spend. VM storage in Azure is provided by a grouping of four different types of disks, all offering different levels of performance. Introduced at the end of last year, the Ultra Disk option is the highest-performing disk type for VMs in Azure. It was designed for database workloads where high throughput and low latency are very desirable. However, the Ultra Disk option is not available in all regions yet, although that's been changing slowly since its launch. The Premium SSD option, which has been around for a while and is widely available, can also offer really nice performance, especially at higher capacities. And just like other Cloud providers, the I/O throughput you get on VMs is dictated not only by the size of the disk, but also by the size of the VM and its type. So a good rule of thumb here: VM types with an S in the name will have a much better throughput rate than ones that don't, and the larger VMs will have higher I/O throughput than the smaller ones. You can expand the VM disk throughput by using multiple disks in Azure and using a software RAID. This overcomes limitations of single disk performance, but keep in mind, you're now using CPU cycles to maintain that RAID, so it is a bit of a trade-off. The other nice thing in Azure is that all their managed disks are encrypted by default on the server side, so there's really nothing you need to do here to enable that. And of course I mentioned this earlier: there is no native access to Azure storage yet, but it is something we're working on. We have seen folks using third-party applications like MinIO to access Azure's storage as an S3 bucket. So it might be something you want to keep in mind and maybe even test out for yourself; there's a small sketch of what that looks like at the end of this section. Networking in Azure comes in two different flavors, standard and accelerated. In standard networking, the entire network stack is abstracted and virtualized. So this works really well, however, there are performance limitations. Standard networking tends to top out around four gigabits per second. Accelerated networking in Azure is based on single root I/O virtualization (SR-IOV) of the Mellanox adapter. This is basically the VM talking directly to the physical network card in the host hardware, and it can produce network speeds up to 20 gigabits per second, so much, much faster. Keep in mind, though, that not all VM types and operating systems actually support accelerated networking, and you know, just like disk throughput, network throughput is based on VM type and size.
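On that MinIO idea, here's a rough sketch of ours, not an official Vertica or Microsoft recipe. It assumes a MinIO server has been set up in gateway mode in front of an Azure Blob storage account (a mode MinIO offered at the time) and is listening locally; the endpoint, credentials, and bucket names below are all placeholders. Once the gateway is up, any S3 client sees Azure Blob containers as buckets.

```python
import boto3

# Assumes a MinIO gateway fronting an Azure Blob storage account is
# listening at this address; the keys are whatever the gateway was
# started with (placeholder values here).
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="MINIO_ACCESS_KEY",      # placeholder
    aws_secret_access_key="MINIO_SECRET_KEY",  # placeholder
)

# Azure Blob containers show up as S3 buckets through the gateway.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# And objects can be read or written with ordinary S3 calls.
s3.put_object(Bucket="scratch", Key="hello.txt", Body=b"hello azure via s3")
```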
So what do you need to think about for networking in the Azure space? Again, stay close to home. Pick regions that are geographically close to your location. Yes, the backbones between the regions are very, very fast, but the more hops your packets have to make, the longer it takes. Azure offers two types of groupings of their VMs, availability sets and availability zones. Availability zones offer good redundancy across multiple zones, but this actually increases the node-to-node latency, so we recommend you avoid them. Availability sets, on the other hand, keep all your VMs grouped together within a single zone, but make sure that no two VMs are running on the same host hardware, for redundancy. And just like the other Clouds, UDP broadcast is not supported, so you have to use the point-to-point flag when you're creating your database to ensure that spread works properly. Spread timeout, okay, this is a good one. So recently, Microsoft has started monthly rolling updates of their environment. What this looks like is that VMs running on top of hardware that's receiving an update can be paused, and this becomes problematic when the pausing of the VM exceeds eight seconds, as the unpaused members of the cluster now think the paused VM is down. So consider adjusting the spread timeout for your clusters in Azure to 30 seconds, and this will help you avoid a little of that. If you're deploying a large cluster in Azure, more than 20 nodes, use large cluster mode, as point-to-point for spread doesn't really scale well with a lot of Vertica nodes. And finally, you know, pick VM types and operating systems that support accelerated networking. The difference in the node-to-node speeds can be very dramatic. So how do we move data around in Azure, right? So Microsoft views data egress a little differently than other Clouds, as it classifies any data being transmitted by a VM as egress. However, it only bills for data egress that actually leaves the Azure environment. Egress speed limits in Azure are based entirely on the VM type and size, and then they're limited by your connection to them. While not offering as many pathways to access their Cloud as GCP, Azure does offer a direct network-to-network connection called ExpressRoute. Offered by a large group of third-party partners, ExpressRoute offers multiple tiers of performance that are based on a flat charge for inbound data and a metered charge for outbound data. And of course you can still access the environment via the internet, and securely through a VPN gateway. So on behalf of Jeff, Sumeet, and myself, I'd like to thank you for listening to our presentation today, and we're now ready for Q&A.
Amy Chandler, Jean Younger & Elena Christopher | UiPath FORWARD III 2019
>> Live, from Las Vegas, it's theCUBE covering UiPath Forward Americas 2019. Brought to you by UiPath. >> Welcome back to the Bellagio in Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. Day one of UiPath Forward III, hashtag UiPathForward. Elena Christopher is here. She's the senior vice president at HFS Research, and Elena, I'm going to recruit you to be my co-host here. >> Co-host! >> On this power panel. Jean Younger's here, CUBE alum, VP, a Six Sigma Leader at Security Benefit. Great to see you again. >> Thank you. >> Dave: And Amy Chandler, who is the Assistant Vice President and Director of Internal Controls, also from Security Benefit. >> Hello. >> Dave: Thanks for coming on theCUBE. >> Thank you. >> Alright Elena, let's start off with you. You follow this market, you have for some time, you know HFS is sort of anointed as formulating this marketplace, right? >> Elena: We like to think of ourselves as the voice-- >> You guys were early on. >> The voice of the automation industry. >> So, what are you seeing? I mean, process automation has been around forever, RPA is a hot recent trend, but what have you been seeing over the last year or two? What are the big trends and rip currents that you see in the marketplace? >> I mean, I think one of the big trends that's out there, I mean, RPA's come on to the scene. I like how you phrase it, Dave, because you refer to it as, rightly so, automation is not new, and so we sort of say the big question out there is, "Is RPA just flavor of the month?" RPA is definitely not, and I come from a firm, we put out a blog earlier this year called "RPA is dead. Long live automation." And that's because, when we look at RPA, and when we think about what its impact is in the marketplace, to us the whole point of automation in any form, regardless of whether it's RPA, whether it be good old, old-school BPM, whatever it may be, its mission is to drive transformation, and so the HFS perspective, and what all of our research shows and sort of justifies, is that the goal, what everyone is striving towards, is to get to that transformation. And so, the reason we put out that piece, the "RPA is dead. Long live integrated automation platforms," is to make the point that, 'cause what does RPA allow? It affords an opportunity for change to drive transformation. So, if you're not actually looking at your processes within your company and taking this opportunity to say, "What can I change, what processes are just bad, and we've been doing them, I'm not even sure why, for so long? What can we transform, what can we optimize, what can we invent?" If you're not taking that opportunity as an enterprise to truly embrace the change and move towards transformation, that's a missed opportunity. So I always say, RPA, you can kind of couch it as one of many technologies, but what RPA has really done for the marketplace today is it's given business users and business leaders the realization that they can have a role in their own transformation. And that's one of the reasons why it's actually become very important, but a single tool in its own right will never be the holistic answer. >> So Jean, Elena's bringing up a point about transformation. We, Stew Bennett and I, interviewed you last year and we've played those clips a number of times, where you sort of were explaining to us that it didn't make sense before RPA to try to drive Six Sigma into business processes; you couldn't get the return. >> Jean: Right.
>> Now you can do it very cheaply. And Six Sigma or better is what you use for airplane engines, right? >> Right. >> So, now you're bringing up the business process. So, you're a year in, how's it going? What kind of results are you seeing? Is it meeting your expectations? >> It's been wonderful. It has been the best, it's been probably the most fun I've had in the last fifteen years of work. I have enjoyed it, partly because I get to work with this great person here, and she's my COE, and helps stand up the whole RPA solution, but you know, we have gone from finance into investment operations, into operations. You know, we've got one sitting right now where we're going to be looking at statements; it's going to be fourteen thousand hours saved, in both turnaround time and staff hours, and it's going to touch our customer directly, in that they're not going to get a bad statement anymore. And so, you know, it has just been an incredible journey for us over the past year, it really has. >> And so okay Amy, your role is, you're the hardcore practitioner here, right? >> Amy: That's right. >> You run the COE. Tell us more about your role, and I'm really interested in how you're bringing it out, RPA, to the organization. Is that led by your team, or is it kind of this top-down approach? >> Yeah, this last year, we spent a lot of time trying to educate the lower levels and go from a bottom-up perspective. Pretty much, we implemented our infrastructure, we had a nice solid change management process, we built in logical access, we built in good processes around that so that we'd be able to scale easily over this last year, which kind of sets us up for next year, and everything that we want to accomplish then. >> So Elena, we were talking earlier on theCUBE about, you know, RPA, in many ways, I called it cleaning up the crime scene, where stuff is kind of really sort of a mess, and huge opportunities to improve. So, my question to you is, it seems like RPA is, in some regards, successful because you can drop it into existing processes, you're not changing things, but in a way, there's this concern that, oh well, I'm just kind of paving the cow path. So how much process reinvention has to occur in order to take advantage of RPA? >> I love that you use that phrase, "paving the cow path." As a New Englander, as you know, the roads in Boston are in fact paved cow paths, so we know that can lead to some dodgy roads, and I say it because that's part of what the answer is, because the reinvention, and honestly the optimization, has to be part of what the answer is. I said it just a little bit earlier in my comments: you're missing an opportunity with RPA and broader automation if you don't take that step to actually look at your processes and figure out if there's just essentially deadwood that you need to get rid of, things that need to be improved. One of the sort of guidelines, because not all processes are created equal, and you guys should chime in on this, is that you don't want to spend the time and effort to optimize a process if it's not critical to your business, if you're not going to get lift from it, or some ROI. It's a bit of a continuum, so one of the things that I always encourage enterprises to think about is this idea of, well, obviously, what business problem are you trying to solve? But as you're going through the process optimization, what kind of user experience do you want out of this?
And your users, by the way, you tend to think of your user as, it could be your end customer, it could be your employee, it could even be your partner, but trying to figure out what the experience is that you actually want to have, and then you can actually look at the process and figure out, do we need to do something different? Do we need to do something completely new to actually optimize that? And then again, align it with what you're trying to solve and what kind of lift you want to get from it. But I'd love to, I mean, hopping over to you guys, you live and breathe this, right? And so I think you have a slightly different opinion than me, but-- >> We do live and breathe it, and every process we look at, we take into consideration. But you've also got, you have a continuum, right? If it's a simple process and we can put it up very quickly, we do, but we've also got ones where one process'll come into us, and a perfect example is our rate changes. >> Amy: Rate changes. >> It came in, and there was one process at the very end, and we ended up doing a wing to wing of the whole thing, followed the data all the way back through the process, and I think it hit, what, seven or eight-- >> Yeah. >> Different areas-- >> Areas. >> Of the business, and once we got done with that whole wing to wing to see what we could optimize, it turned into what, sixty? >> Amy: Yeah, sixty plus. Yeah. >> Dave: Sixty plus what? >> Bot processes from one entry. >> Yeah. >> And so, right now, we've got 189 to 200 processes in the backlog. And so if you take that, and exponentially increase it, we know that there's probably actually 1,000 to 2,000 more processes, at minimum, that we can hit for the company, and we need to look at those. >> Yeah, and I will say, the wing to wing approach is very important because you're following the data as it's moving along. So if you don't do that, if you only focus on a small little piece of it, you don't know what's happening to the data before it gets to you and you don't know what's going to happen to it when it leaves you, so you really do have to take that wing to wing approach. >> So, internal controls is in your title, so talking about scale, it's a big theme here at UiPath, and these days, things scale really fast, and boo-boos can happen really fast. So how are you ensuring, you know, that the edicts of the organization are met, whether it's security, compliance, governance? Is that part of your role? >> Yeah, we've actually kept internal audit and internal controls, and in fact, our external auditors, EY. We've kept them all at the table when we've gone through processes, when we've built out our change management process, our logical access. When we built our whole process from beginning to end, they kind of sat at the table with us and went over everything to make sure that we were hitting all the controls that we needed to. >> And actually, I'd like to piggyback on that comment, because just that inclusion of the various roles, that's what we found as an emerging best practice, in all of our research and all of the qualitative conversations that we have with enterprises and service providers, because if you do things, I mean it applies on multiple levels, because if you do things in a silo, you'll have siloed impact. If you bring the appropriate constituents to the table, you're going to understand their perspective, but it's going to have broader reach.
So it helps alleviate the silos, but it also supports the point that you just made, Amy, about looking at the processes end to end, because you've got the necessary constituents involved so you know the context, and then, I believe, I mean I think you guys shared this with me, that particularly when audit's involved, you're perhaps helping cultivate an understanding of how even their processes can improve as well. >> Right. >> That is true, and from an overall standpoint with controls, I think a lot of people don't realize that a huge benefit is your controls, 'cause if you're automating your controls, from an internal standpoint, you're not going to have to test as much, from the associate process owner paying attention to their process to the internal auditors, who are not going to have to test as much either, and then your external auditors, and that's revenue. I mean, that's savings. >> You lower your auditing bill? >> Yeah. Yeah. >> Well, we'll see, right? >> Yeah. (laughter) >> That's always the hope. >> Don't tell EY. (laughter) So I got to ask you, you're in a little over a year. So I don't know if you golf, but you know a mulligan in golf. If you had a mulligan, a do-over, what would you do over? >> The first process we put in place. At least for me, it breaks a lot, and we did it because at the time, we were going through decoupling and trying to just get something up, to make sure that what we stood up was going to work and everything, and so we kind of slammed it in, and we pay for that every quarter, and so actually it's on our list to redo. >> Yeah, we automated a bad process. >> Yeah, we automated a bad process. >> That's a really good point. >> So we pay for it in maintenance every quarter, we pay for it, 'cause it breaks inevitably. >> Yes. >> Okay, so what has to happen? You have to reinvent the process, to Elena's point? >> Yes, you know, we relied on a process that somebody else had put in place, and in looking at it, it was kind of an up and down and through the hoop and around this way to get what they needed, and you know, there's much easier ways to get the data now. And that's what we're doing. In fact, we've built our own, we call it a bot mart. That's where all our data goes. They won't let us touch the other data marts and so forth, so they created us a bot mart, and anything that we need data for, they dump in there for us, and then that's where our bot can hit, and our bot can hit it at any time of the day or night when we need the data, and so it's worked out really well for us. And so the bot mart kind of came out of that project, of there's got to be a better way. How can we do this better, instead of relying on these systems that change and upgrade, and then we run the bot and it's working one day, and the next day somebody has gone in and tweaked something, and when all's I really need out of that system is data, that's all I need. I don't need, you know, a report. I don't need anything like that, 'cause the reports change and they get messed up. I just want the raw data, and so that's what we're starting to do. >> How do you ensure that the data is synchronized with your other marts and warehouses, is that a problem? >> Not yet. >> No, not yet! (laughter) >> I'm wondering, 'cause I was thinking the exact same question, Dave, because on one hand it's a nice step, I think, from a governance standpoint.
You have what you need, perhaps IT or whomever your data curators are, they're not going to have a heart attack that you're touching stuff that they don't want you to, but then there is that potential for synchronization issues, 'cause that whole concept of golden source implies one copy, if you will. >> Well, and it is. It's all coming through, we have a central data repository that the data's going to come through, and it's all sitting there, and then it'll move over, and to me, what I most worry about, like I mentioned on the statement once, okay, I get my data in, is it the same data that got used to create those statements? And as we're doing the testing and as we're looking at going live, that's one of our huge test cases. We need to understand what time that data comes in, when will it be in our bot mart, so when can I run those bots? You know, 'cause they're all going to be unattended on those, so you know, the timing is critical, and so that's why I said not yet. >> Dave: (chuckle) >> But you want to know what, we can build the bot to do that compare of the data for us. >> Haha, all right. I love that. >> I saw a stat the other day. I don't know where it was, on Twitter or maybe it was your data, that by 2023 or whatever, more money is going to be spent on chat bots than on mobile development. >> Jean: I can imagine, yes. >> What are you doing with chat bots? And how are you using them? >> Do you want to answer that one, or do you want me to? >> Go ahead. >> Okay so, part of the reason I'm so enthralled by the chat bot, or personal assistant or anything, is because with the unattended robots that we have, we have problems making sure that people are doing what they're supposed to be doing in prep. We have some in finance, and you know, in finance you have a very fine line between what you can automate and where you need the user to still understand what they're doing, right? And so we felt like we had a really good, you know, combination of that, but in some instances, they forget to do things, so things aren't there and we get the phone call that the bot broke, right? So part of the thing I'd like to do is I'd like to move that back to an unattended bot, and I'm going to put a chat bot in front of it, and then all's they have to do is type in "run my bot" and it'll come up, and if they have more than one bot, it'll say "which one do you want to run?" They'll click it and it'll go. Instead of having to go out on their machine, figure out where to go, figure out which button to push, and in the chat I can also send them a little message, "Did you run your other reports? Did you do this?" You know, so, I can use it for the end user, to make that experience for them better. And plus, we've got a lot of IT, we've got a lot of HR stuff that can fold into that, and then RPA all in behind it, kind of the engine on a lot of it. >> I mean, you've child-proofed the bot. >> Exactly! There you go. There you go. >> Exactly. Exactly. And it also provides a means to be able to answer those commonly asked questions for HR, for example. You know, how much vacation time do I have? When can I change my benefits? Examples of those that they answer frequently every day. So that provides another avenue for utilization of the chat bot. >> And if I may, Dave, it supports a concept that I know we were talking about yesterday. At HFS it's our "Triple-A Trifecta," but it's taking the baseline of automation, it intersects with components of AI, and then potentially with analytics.
This is starting to touch on some of the opportunities to look at other technologies. You say chat bots. At HFS we don't use the term chat bot, just because we like to focus on and emphasize the cognitive capability, if you will. But in any case, you guys essentially are saying, well, RPA is doing great for what we're using RPA for, but we need a little bit of extension of functionality, so we're layering in the chat bot or cognitive assistant. So it's a nice example of some of that extension, of really seeing how it's, I always call it the power of "and," if you will. Are you going to layer these things in to get what you need out of it? What best solves your business problems? Just a very practical approach, I think. >> So Elena, Guy has a session tomorrow on predictions. So we're going to end with some predictions. So, "RPA is dead," (chuckle) will it be resuscitated? What's the future of RPA look like? Will it live up to the hype? I mean, so many initiatives in our industry haven't. I always criticize enterprise data warehousing, and ETL and big data did not live up to the hype. Will RPA? >> It's got a hell of a lot of hype to live up to, I'll tell you that. So, back to some of our causality about why we even said it's dead. As a discrete software category, RPA is clearly not dead at all. But unless it's helping to drive forward with transformation, and even some of the strategies that these fine ladies from Security Benefit are utilizing, which is layering in additional technology, that's part of the path there. But honestly, the biggest challenge that you have to go through to get there, and it cannot be underestimated, is the change that your organization has to go through. 'Cause think about it: if we look at the grand big vision of where RPA and broader intelligent automation takes us, the concept of creating a hybrid workforce, right? So what's a hybrid workforce? It's literally our humans complemented by digital workers. So it still sounds like science fiction. To think that any enterprise could try and achieve some version of that, and that it would be A, fast, or B, not take a lot of change management, is absolutely ludicrous. So it's just a very practical approach to be eyes wide open, recognize that you're solving problems, but you have to want to drive change. So to me, the sort of HFS perspective continues to be that if RPA is not going to die a terrible death, it needs to really support that vision of transformation. And I mean honestly, we're here at a UiPath event, and they had many announcements today that they're doing a couple of things. Supporting core functionality of RPA, literally adding in process discovery and mining capabilities, adding in analytics to help enterprises actually track what the benefit is. >> Jean: Yes. >> These are very practical cases that help RPA live another day. But they're also extending functionality, adding in their whole announcement around AI fabric, adding in some of the cognitive capability to extend the functionality. And so prediction-wise, RPA as we know it three years from now is not going to look like RPA at all. I'm not going to call it AI, but it's going to become a hybrid, and it's honestly going to look a lot like that Triple-A Trifecta I mentioned. >> Well, and UiPath, and I presume other suppliers as well, are expanding their markets. They're reaching, you hear about citizen developers and 100% of the workforce. Obviously you guys are excited, and you see a long-run way for RPA. >> Jean: Yeah, we do. >> I'll give you the last word.
>> It's been a wonderful journey thus far. After this morning's event where they showed us everything, I saw a sneak peek yesterday during the CAB, and I had a list of things I wanted to talk to her about already when I came out of there. And then she saw more of 'em today, and I've got a pocketful of notes of stuff that we're going to take back and do. I really, truly believe this is the future and we can do so much. Six Sigma has kind of gotten a rebirth. You go in and look at your processes and we can get those to perfect. I mean, that's what's so cool. It is so cool that you can actually tell somebody, I can do something perfect for you. And how many people get to do that? >> It's back to the user experience, right? We can make this wildly functional to meet the need. >> Right, right. And I don't think RPA is the end all solution, I think it's just a great tool to add to your toolkit and utilize moving forward. >> Right. All right we'll have to leave it there. Thanks ladies for coming on, it was a great segment. Really appreciate your time. >> Thanks. >> Thank you. >> Thank you for watching, everybody. This is Dave Vellante with theCUBE. We'll be right back from UiPath Forward III from Las Vegas, right after this short break. (technical music)
Amy Chandler, Security Benefit, Jean Younger, Security Benefit & Elena Christopher, HFS Research | U
>> Live, from Las Vegas, it's theCUBE covering UiPath Forward Americas 2019. Brought to you by UiPath. >> Welcome back to the Bellagio in Las Vegas, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. Day one of UiPath Forward III, hashtag UiPathForward. Elena Christopher is here. She's the senior vice president at HFS Research, and Elena, I'm going to recruit you to be my co-host here. >> Co-host! >> On this power panel. Jean Youngers here, CUBE alum, VP, a Six Sigma Leader at Security Benefit. Great to see you again. >> Thank you. >> Dave: And Amy Chandler, who is the Assistant Vice President and Director of Internal Controls, also from Security Benefit. >> Hello. >> Dave: Thanks for coming on theCUBE. >> Thank you. >> Alright Elena, let's start off with you. You follow this market, you have for some time, you know HFS is sort of anointed as formulating this market place, right? >> Elena: We like to think of ourselves as the voice-- >> You guys were early on. >> The voice of the automation industry. >> So, what are you seeing? I mean, process automation has been around forever, RPA is a hot recent trend, but what are you seeing the last year or two? What are the big trends and rip currents that you see in the market place? >> I mean, I think one of the big trends that's out there, I mean, RPA's come on to the scene. I like how you phrase it Dave, because you refer to it as, rightly so, automation is not new, and so we sort of say the big question out there is, "Is RPA just flavor of the month?" RPA is definitely not, and I come from a firm, we put out a blog earlier this year called "RPA is dead. Long live automation." And that's because, when we look at RPA, and when we think about what it's impact is in the market place, to us the whole point of automation in any form, regardless of whether it's RPA, whether it be good old old school BPM, whatever it may be, it's mission is to drive transformation, and so the HFS perspective, and what all of our research shows and sort of justifies that the goal is, what everyone is striving towards, is to get to that transformation. And so, the reason we put out that piece, the "RPA is dead. Long live integrated automation platforms" is to make the point that if you're not- 'cause what does RPA allow? It affords an opportunity for change to drive transformation so, if you're not actually looking at your processes within your company and taking this opportunity to say, "What can I change, what processes are just bad, "and we've been doing them, I'm not even sure why, "for so long. What can we transform, "what can we optimize, what can we invent?" If you're not taking that opportunity as an enterprise to truly embrace the change and move towards transformation, that's a missed opportunity. So I always say, RPA, you can kind of couch it as one of many technologies, but what RPA has really done for the market place today, it's given business users and business leaders the realization that they can have a role in their own transformation. And that's one of the reasons why it's actually become very important, but a single tool in it's own right will never be the holistic answer. >> So Jean, Elena's bringing up a point about transformation. We, Stew Bennett and I interviewed you last year and we've played those clips a number of times, where you sort of were explaining to us that it didn't make sense before RPA to try to drive Six Sigma into business processes; you couldn't get the return. >> Jean: Right. 
>> Now you can do it very cheaply. And for Six Sigma or better, is what you use for airplane engines, right? >> Right. >> So, now you're bringing up the business process. So, you're a year in, how's it going? What kind of results are you seeing? Is it meeting your expectations? >> It's been wonderful. It has been the best, it's been probably the most fun I've had in the last fifteen years of work. I have enjoyed, partly because I get to work with this great person here, and she's my COE, and helps stand up the whole RPA solution, but you know, we have gone from finance into investment operations, into operations, you know we've got one sitting right now that we're going to be looking at statements that it's going to be fourteen thousand hours out of both time out as well as staff hours saved, and it's going to touch our customer directly, that they're not going to get a bad statement anymore. And so, you know, it has just been an incredible journey for us over the past year, it really has. >> And so okay Amy, your role is, you're the hardcore practitioner here right? >> Amy: That's right. >> You run the COE. Tell us more about your role, and I'm really interested in how you're bringing it out, RPA to the organization. Is that led by your team, or is it kind of this top-down approach? >> Yeah, this last year, we spent a lot of time trying to educate the lower levels and go from a bottom-up perspective. Pretty much, we implemented our infrastructure, we had a nice solid change management process, we built in logical access, we built in good processes around that so that we'd be able to scale easily over this last year, which kind of sets us up for next year, and everything that we want to accomplish then. >> So Elena, we were talking earlier on theCUBE about you know, RPA, in many ways, I called it cleaning up the crime scene, where stuff is kind of really sort of a mass and huge opportunities to improve. So, my question to you is, it seems like RPA is, in some regards, successful because you can drop it into existing processes, you're not changing things, but in a way, this concerns that, oh well, I'm just kind of paving the cow path. So how much process reinvention should have to occur in order to take advantage of RPA? >> I love that you use that phrase, "paving the cow path." As a New Englander, as you know the roads in Boston are in fact paved cow paths, so we know that can lead to some dodgy roads, and that's part of, and I say it because that's part of what the answer is, because the reinvention, and honestly the optimization has to be part of what the answer is. I said it just a little bit earlier in my comments, you're missing an opportunity with RPA and broader automation if you don't take that step to actually look at your processes and figure out if there's just essentially deadwood that you need to get rid of, things that need to be improved. One of the sort of guidelines, because not all processes are created equal, because you don't want to spend the time and effort, and you guys should chime in on this, you don't want to spend the time and effort to optimize a process if it's not critical to your business, if you're not going to get lift from it, or from some ROI. It's a bit of a continuum, so one of the things that I always encourage enterprises to think about, is this idea of, well what's the, obviously, what business problem are you trying to solve? But as you're going through the process optimization, what kind of user experience do you want out of this? 
And your users, by the way, you tend to think of your user as, it could be your end customer, it could be your employee, it could even be your partner, but trying to figure out what the experience is that you actually want to have, and then you can actually then look at the process and figure out, do we need to do something different? Do we need to do something completely new to actually optimize that? And then again, line it with what you're trying to solve and what kind of lift you want to get from it. But I'd love to, I mean, hopping over to you guys, you live and breathe this, right? And so I think you have a slightly different opinion than me, but-- >> We do live and breathe it, and every process we look at, we take into consideration. But you've also got to, you have a continuum right? If it's a simple process and we can put it up very quickly, we do, but we've also got ones where one process'll come into us, and a perfect example is our rate changes. >> Amy: Rate changes. >> It came in and there was one process at the very end and they ended up, we did a wing to wing of the whole thing, followed the data all the way back through the process, and I think it hit, what, seven or eight-- >> Yeah. >> Different areas-- >> Areas. >> Of the business, and once we got done with that whole wing to wing to see what we could optimize, it turned into what, sixty? >> Amy: Yeah, sixty plus. Yeah. >> Dave: Sixty plus what? >> Bot processes from one entry. >> Yeah. >> And so, right now, we've got 189 to 200 processes in the back log. And so if you take that, and exponentially increase it, we know that there's probably actually 1,000 to 2,000 more processes, at minimum, that we can hit for the company, and we need to look at those. >> Yeah, and I will say, the wing to wing approach is very important because you're following the data as it's moving along. So if you don't do that, if you only focus on a small little piece of it, you don't what's happening to the data before it gets to you and you don't know what's going to happen to it when it leaves you, so you really do have to take that wing to wing approach. >> So, internal controls is in your title, so talking about scale, it's a big theme here at UiPath, and these days, things scale really fast, and boo-boos can happen really fast. So how are you ensuring, you know that the edicts of the organization are met, whether it's security, compliance, governance? Is that part of your role? >> Yeah, we've actually kept internal audit and internal controls, and in fact, our external auditors, EY. We've kept them all at the table when we've gone through processes, when we've built out our change management process, our logical access. When we built our whole process from beginning to end they kind of sat at the table with us and kind of went over everything to make sure that we were hitting all the controls that we needed to do. >> And actually, I'd like to piggyback on that comment, because just that inclusion of the various roles, that's what we found as an emerging best practice, and in all of our research and all of the qualitative conversations that we have with enterprises and service providers, is because if you do things, I mean it applies on multiple levels, because if you do things in a silo, you'll have siloed impact. If you bring the appropriate constituents to the table, you're going to understand their perspective, but it's going to have broader reach. 
So it helps alleviate the silos but it also supports the point that you just made Amy, about looking at the processes end to end, because you've got the necessary constituents involved so you know the context, and then, I believe, I mean I think you guys shared this with me, that particularly when audit's involved, you're perhaps helping cultivate an understanding of how even their processes can improve as well. >> Right. >> That is true, and from an overall standpoint with controls, I think a lot of people don't realize that a huge benefit is your controls, cause if you're automating your controls, from an internal standpoint, you're not going to have to test as much, just from an associate process owner paying attention to their process to the internal auditors, they're not going to have to test as much either, and then your external auditors, which that's revenue. I mean, that's savings. >> You lower your auditing bill? >> Yeah. Yeah. >> Well we'll see right? >> Yeah. (laughter) >> That's always the hope. >> Don't tell EY. (laughter) So I got to ask you, so you're in a little over a year So I don't know if you golf, but you know a mulligan in golf. If you had a mulligan, a do over, what would you do over? >> The first process we put in place. At least for me, it breaks a lot, and we did it because at the time, we were going through decoupling and trying to just get something up to make sure that what we stood up was going to work and everything, and so we kind of slammed it in, and we pay for that every quarter, and so actually it's on our list to redo. >> Yeah, we automated a bad process. >> Yeah, we automated a bad process. >> That's a really good point. >> So we pay for it in maintenance every quarter, we pay for it, cause it breaks inevitably. >> Yes. >> Okay so what has to happen? You have to reinvent the process, to Elena's? >> Yes, you know, we relied on a process that somebody else had put in place, and in looking at it, it was kind of a up and down and through the hoop and around this way to get what they needed, and you know there's much easier ways to get the data now. And that's what we're doing. In fact, we've built our own, we call it a bot mart. That's where all our data goes, they won't let us touch the other data marts and so forth so they created us a bot mart, and anything that we need data for, they dump in there for us and then that's where our bot can hit, and our bot can hit it at anytime of the day or night when we need the data, and so it's worked out really well for us, and so the bot mart kind of came out of that project of there's got to be a better way. How can we do this better instead of relying on these systems that change and upgrade and then we run the bot and its working one day and the next day, somebody has gone in and tweaked something, and when all's I really need out of that system is data, that's all I need. I don't need, you know, a report. I don't need anything like that, cause the reports change and they get messed up. I just want the raw data, and so that's what we're starting to do. >> How do you ensure that the data is synchronized with your other marts and warehouses, is that a problem? >> Not yet. >> No not yet! (laughter) >> I'm wondering cause I was thinking the exact same question Dave, because on one hand its a nice I think step from a governance standpoint. 
You have what you need, perhaps IT or whomever your data curators are, they're not going to have a heart attack that you're touching stuff that they don't want you to, but then there is that potential for synchronization issues, cause that whole concept of golden source implies one copy if you will. >> Well, and it is. It's all coming through, we have a central data repository that the data's going to come through, and it's all sitting there, and then it'll move over, and to me, what I most worry about, like I mentioned on the statement once, okay, I get my data in, is it the same data that got used to create those statements? And as we're doing the testing and as we're looking at going live, that's one of our huge test cases. We need to understand what time that data comes in, when will it be into our bot mart, so when can I run those bots? You know, cause they're all going to be unattended on those, so you know, the timing is critical, and so that's why I said not yet. >> Dave: (chuckle) >> But you want to know what, we can build the bot to do that compare of the data for us. >> Haha all right. I love that. >> I saw a stat the other day. I don't know where it was, on Twitter or maybe it was your data, that more money by whatever, 2023 is going to be spent on chat bots than mobile development. >> Jean: I can imagine, yes. >> What are you doing with chat bots? And how are you using them? >> Do you want to answer that one or do you want me to? >> Go ahead. >> Okay so, part of the reason I'm so enthralled by the chat bot or personal assistant or anything, is because the unattended robots that we have, we have problems making sure that people are doing what they're supposed to be doing in prep. We have some in finance, and you know, finance you have a very fine line of what you can automate and what you need the user to still understand what they're doing, right? And so we felt like we had a really good, you know, combination of that, but in some instances, they forget to do things, so things aren't there and we get the phone call the bot broke, right? So part of the thing I'd like to do is I'd like to move that back to an unattended bot, and I'm going to put a chat bot in front of it, and then all's they have to do is type in "run my bot" and it'll come up if they have more than one bot, it'll say "which one do you want to run?" They'll click it and it'll go. Instead of having to go out on their machine, figure out where to go, figure out which button to do, and in the chat I can also send them a little message, "Did you run your other reports? Did you do this?" You know, so, I can use it for the end user, to make that experience for them better. And plus, we've got a lot of IT, we've got a lot of HR stuff that can fold into that, and then RPA all in behind it, kind of the engine on a lot of it. >> I mean you've child proofed the bot. >> Exactly! There you go. There you go. >> Exactly. Exactly. And it also provides a means to be able to answer those commonly asked questions for HR for example. You know, how much vacation time do I have? When can I change my benefits? Examples of those that they answer frequently every day. So that provides another avenue for utilization of the chat bot. >> And if I may, Dave, it supports a concept that I know we were talking about yesterday. At HFS it's our "Triple-A Trifecta", but it's taking the baseline of automation, it intersects with components of AI, and then potentially with analytics. 
This is starting to touch on some of the opportunities to look at other technologies. You say chat bots. At HFS we don't use the term chat bot, just because we like to focus and emphasize the cognitive capability if you will. But in any case, you guys essentially are saying, well RPA is doing great for what we're using RPA for, but we need a little bit of extension of functionality, so we're layering in the chat bot or cognitive assistant. So it's a nice example of some of that extension of really seeing how it's, I always call it the power of and if you will. Are you going to layer these things in to get what you need out of it? What best solves your business problems? Just a very practical approach I think. >> So Elena, Guy has a session tomorrow on predictions. So we're going to end with some predictions. So our RPA is dead, (chuckle) will it be resuscitated? What's the future of RPA look like? Will it live up to the hype? I mean so many initiatives in our industry haven't. I always criticize enterprise data warehousing and ETL and big data is not living up to the hype. Will RPA? >> It's got a hell of a lot of hype to live up to, I'll tell you that. So, back to some of our causality about why we even said it's dead. As a discrete software category, RPA is clearly not dead at all. But unless it's helping to drive forward with transformation, and even some of the strategies that these fine ladies from Security Benefit are utilizing, which is layering in additional technology. That's part of the path there. But honestly, the biggest challenge that you have to go through to get there and cannot be underestimated, is the change that your organization has to go through. Cause think about it, if we look at the grand big vision of where RPA and broader intelligent automation takes us, the concept of creating a hybrid workforce, right? So what's a hybrid workforce? It's literally our humans complemented by digital workers. So it still sounds like science fiction. To think that any enterprise could try and achieve some version of that and that it would be A, fast or B, not take a lot of change management, is absolutely ludicrous. So it's just a very practical approach to be eyes wide open, recognize that you're solving problems but you have to want to drive change. So to me, and sort of the HFS perspective, continues to be that if RPA is not going to die a terrible death, it needs to really support that vision of transformation. And I mean honestly, we're here at a UiPath event, they had many announcements today that they're doing a couple of things. Supporting core functionality of RPA, literally adding in process discovery and mining capabilities, adding in analytics to help enterprises actually track what your benefit is. >> Jean: Yes. >> These are very practical cases that help RPA live another day. But they're also extending functionality, adding in their whole announcement around AI fabric, adding in some of the cognitive capability to extend the functionality. And so prediction-wise, RPA as we know it three years from now is not going to look like RPA at all. I'm not going to call it AI, but it's going to become a hybrid, and it's honestly going to look a lot like that Triple-A Trifecta I mentioned. >> Well, and UiPath, and I presume other suppliers as well, are expanding their markets. They're reaching, you hear about citizens developers and 100% of the workforce. Obviously you guys are excited and you see a long-run way for RPA. >> Jean: Yeah, we do. >> I'll give you the last word. 
>> It's been a wonderful journey thus far. After this morning's event where they showed us everything, I saw a sneak peek yesterday during the CAB, and I had a list of things I wanted to talk to her about already when I came out of there. And then she saw more of 'em today, and I've got a pocketful of notes of stuff that we're going to take back and do. I really, truly believe this is the future and we can do so much. Six Sigma has kind of gotten a rebirth. You go in and look at your processes and we can get those to perfect. I mean, that's what's so cool. It is so cool that you can actually tell somebody, I can do something perfect for you. And how many people get to do that? >> It's back to the user experience, right? We can make this wildly functional to meet the need. >> Right, right. And I don't think RPA is the end all solution, I think it's just a great tool to add to your toolkit and utilize moving forward. >> Right. All right we'll have to leave it there. Thanks ladies for coming on, it was a great segment. Really appreciate your time. >> Thanks. >> Thank you. >> Thank you for watching, everybody. This is Dave Vellante with theCUBE. We'll be right back from UiPath Forward III from Las Vegas, right after this short break. (technical music)
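Jean's "run my bot" flow above is easy to picture as a small chat handler sitting in front of the unattended bots. Here is a minimal sketch; the user registry, bot names, and orchestrator call are invented stand-ins for illustration, not any specific RPA vendor's API.

```python
# Hypothetical sketch of a chat front-end for unattended RPA bots.
# The user registry, bot names, and orchestrator call are invented
# stand-ins, not any specific RPA vendor's API.

BOTS_BY_USER = {
    "jsmith": ["finance-prep-bot", "month-end-report-bot"],
}

def start_bot(user, bot_name):
    # In a real deployment this would call the RPA orchestrator's
    # job-start endpoint; here we just simulate the hand-off.
    print(f"Queued unattended job '{bot_name}' for {user}")

def handle_message(user, text):
    if text.strip().lower() != "run my bot":
        return "Type 'run my bot' to start one of your bots."
    bots = BOTS_BY_USER.get(user, [])
    if not bots:
        return "No bots are registered for you."
    if len(bots) > 1:
        # More than one bot: ask which one, as Jean describes.
        return "Which one do you want to run? " + ", ".join(bots)
    start_bot(user, bots[0])
    # The chat channel doubles as a nudge about prep steps.
    return f"Started {bots[0]}. Did you run your other reports?"

print(handle_message("jsmith", "run my bot"))
```

The same handler is where the "did you run your other reports?" nudge would live, which is what makes the chat front-end more than a launcher.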
Power Panel | VMworld 2019
>> Narrator: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCube! Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Hello everyone and welcome to the Cube's coverage here in San Francisco, California, of VMworld 2019. I'm John Furrier with my cohost Dave Vellante. Dave, 10 years covering VMworld since 2010, it's been quite a ride, lot of changes. >> Dave: Sure has. >> John: We're going to do a Power Panel, our format where we normally do it with remote guests in our Palo Alto and Boston studios, in person, because we're here. Why not do it? Of course, Keith Townsend, CTO Advisor, friend of the Cube, Cube host sometimes, and Sarbjeet Johal, cloud architect, cloud expert, friends on Twitter. We're always jammin' on Twitter. So we'll have to take it to the video. Guys, thanks for joining us on the Power Panel. >> Good to see you, Gents. >> Good seein' ya. >> Good to be here. >> Yeah, I, I hope we don't come to blows, Sarbjeet. I mean we've had some passionate conversations over the past couple months. >> Yeah, Santoro, yes, yes. >> John: The activity has been at an all time high. I mean, snark aside, there's real things to talk about. >> Yes. >> I mean we are talking about VMware, a software company, staying with their roots. We know what happened in 2016. The Amazon relationship cleared the air, so to speak, pun intended. vCloud Air kind of goes its way, stock prices go up and to the right, yeah, fluctuations happening, but still financially doing well. >> Keith: Yeah. >> Customers have clarity. They operate, they run; they target operators, not developers. We're living in a DevOps world, we talk about this all the time, dev and ops, this is the cloud world that they want. Michael Dell was on the Cube, Dell Technologies owns VMware, they put Pivotal on VMware, moves are being made. Keith, how do you make sense of it? What's your take? You've been on the inside. >> Well, you know, VMware has a tough time. Pat came in, 2013, we remember it. He said we are going to double down on virtualization. He is literally paying the cost for that hockey stick movement. VMware has had this reputation of being an operator-based company, infrastructure-based; you go into accounts, you're stuck in this IT infrastructure sales motion. VMware has done awesome over the past year, few years. I had to eat a little crow and say that the move to eject Pivotal was the right thing for the stock, but for the reputation, VMware is stuck. So Pat, what, tallied up 5 billion dollars in sales, in purchases, last week to get out of this motion of being stuck in the IT infrastructure realm. Will it pay off? I think it's going to be a good conversation because they're going to need those Pivotal guys to push this PKS vision of theirs, this PKS and Kubernetes vision that they have. >> Well they got to figure it out, but certainly it's a software world, and one of the things that's interesting we were talking about before we started is, they are stuck in that operator world, but it's part of DevOps, Dev and Ops. This is the world that they operate in. Google's cloud shows how to do it. You got SREs running things and developers programming infrastructure as code. This is the promise of this new generation. Sarbjeet, we talk about it all the time on Twitter, developers coding away, not dealing with the infrastructure, that's the goal. >> Yeah, traditionally, developers never sort of mucked around with infrastructure.
Gradually we are moving into where developers have to take care of infrastructure themselves; the teams are like two-person teams, we hear that all the time. They are responsible for running the show from beginning to end. Operations are under them; Dev and Ops are put together, right? But I'll speak from my own personal experience with working at VMware in the past, that of all the companies which are operations focused, that's HP, IBM, and Oracle to a certain extent, so portfolio and all that, and BMC, and CA, those are pure companies in the operations space, right? I think VMware is one of those which values software a lot. So inside, VMware is purely software-driven. But to the outside, what they produce, what they have produced in the past, that's all operations, right? So I think they can move that switch because of the culture, and then with the Pivotal acquisition I think it will make it much easier, because there's some following of the Pivotal stack, if you will. The only caveat I think on that side is it is kind of a little bit lock-in-ish, right? That is one of the fears I have. >> Who's not? Even RedHat these days is locking you in. >> Yeah, you know, I pulled some interesting stat metadata from a blog post from Paul Fazzone announcing the Pivotal acquisition. He mentioned Kubernetes 22 times. He mentioned Pivotal Cloud Foundry once. So VMware is all in on this OpenShift-type movement. I think VMware is looking at the RedHat, I mean, RedHat OpenShift acquisition by IBM and thinking, "Man, I wish we didn't have this sense of relationship with Pivotal so we could have went out and bought RedHat." >> Well that's a good point about Kubernetes, I think you're right on that. And remember, we've been covering OpenStack up until about a year ago, and they changed the name, it's now something else, but I remember when OpenShift wasn't doing well. >> Keith: I do too! >> And what really was a tipping point for them was they had all the elements, but it was Kubernetes that really put them in a position to take advantage of what they were trying to do, and I think you're right, I think VMware sees that, now that IBM owns RedHat and OpenShift, it's clear. But I think the vSphere deal with Project Pacific points out that they want to use Kubernetes as an abstraction layer for developers, and have a developer interface to vSphere. So they get the operators with vSphere, they put Kubernetes in there and they say, "Hey developers, use us." Now I think that's a hedge also against Pivotal, 'cause if that horse doesn't come across the track to the finish line, you know... >> It's definitely a hedge on containers. Just a finer point of what you were saying, there was a slight difference in the cash outlay: for RedHat, 34 billion, versus the cash outlay for Pivotal, which was 800 million. So they picked up an 800 million dollar asset, or a 4 billion dollar asset, for 2.7 billion. >> Hold on, explain that, because 2.7 billion was the number we reported. You're saying that VMware put out only 800 million in cash, which, what's that mean? >> That's correct. So they put out 800 million in cash to the existing shareholders of Pivotal, which is a minority of the shareholders. Michael Dell owns 70% of it, VMware owns 15% of it. So the public shareholders get the 800 million. >> John: They get taken out, yep. >> Michael Dell gets more VMware stock, so now he owns more of VMware. VMware already owns 15% of Pivotal, so for 800 million, they get Pivotal.
>> So, the VMware independent shareholders get... they get diluted. >> Right. >> Did they lose out in the deal, is the question, and I think the thing that most people are missing in this conversation is that Pivotal has an army of developers. Whether developers focus on PCF or Kubernetes is irrelevant. VMware has an army, a services army now, that they can point towards the industry and say, "We have the chops to have the conversation around why you should come to us for developing." >> So I want to come back to that, but just, a good question is, do the VMware shareholders get screwed? Near term, the stock drops, right? Which is what happens, right? Pivotal was up 77% on the day that the Dow dropped 800 points. Here's where I think it makes sense, and there are some external risks. Pivotal plus Carbon Black, the combination: they shelled out 2.7 billion in cash. They're going to add a billion dollars to VMware's subscription business next year. VMware trades at a 5x revenue multiple, so the shareholders will, in theory, get back 5 billion. In year two, it's going to be 3 billion that they're going to add to the subscription revenue, so in theory, that's 15 billion of value added. I think that goes into the thinking, so, now, are people going to flock to VMware? Are Kubernetes developers going to flock to VMware? I mean to your point, that to me, that's the value of Pivotal, is they can get VMware into the developer community. 'Cause where is VMware with developers? Nobody, no developers in this audience. >> That's true. >> What are your guys' thoughts on that? >> Yeah, I think that we have to dissect the workload of applications at the enterprise level, right? There are a variety of applications, right, from the SAPs and Oracles of the world; those are two heavyweights in the application space. And then there's a long trail of ISVs, right. And then there's homegrown applications. I think where Pivotal plays a big role is the homegrown applications. When you're shipping a lot as an ISV or within your enterprise, you're writing software, you're shipping applications to the user base. It could be internal, for partners, for customers, right. I think that's where Pivotal plays; Pivotal is pivotal, if you will. >> I think that's a good bet too. One of the things, we've been pulling the CISO data; when we went to re:Inforce we started polling the CISOs in our network, and it's interesting. They're under the gun to produce security solutions and manage the vendors and do all that stuff. They're all telling us, the majority of them are telling us, that they're building their own stacks internally to handle the crisis and the challenge of security, which I think's a leading indicator, versus the kind of slow, slower CIO, which loves multi-anything: multi-vendor, control, to deal with contracts. CISOs, they don't have the dogma because they can't have the dogma. They got to deliver, and they're saying, "We're going to build a stack on one cloud, have a backup cloud. I want all my developer resources on this cloud, not fork my team. And I'm going to build a stack, and then I'm going to ship APIs and say to my suppliers, in the RFP process, if you support these APIs, you could do business with us." >> Keith: So, if you don't -- >> That's kind of a cutting edge. If you don't, you can't, you can't. And that's the new normal.
We're seeing it with the Jedi deal, with Oracle not getting, playing, 'cause they're not certified at the level that Amazon is, and you're going to start to see these new requirements emerging; this is a huge point. I think that's where Pivotal could really shine, not being the, quote, developer channel for VMware. I think it's more of really writing apps. >> And John, I think people aren't even going to question that model. Capital One is probably the poster child for that model. They actually went out and acquired a start-up, a security, a container security start-up, integrated them into their operations, and they still failed. Security in the cloud is hard. I think we'll get into a multi-cloud discussion; this is one of the reasons why I'm not a big fan of multi-cloud from an architecture perspective, but from a practical challenge, security is one of the number one challenges. >> That's a great point on Capital One, in fact, that's a great example. In fact, I love to argue this point. On Twitter, I was heavily arguing this point, which is, yeah, they had a breach. But that was a very low-level one; it's like the equivalent of an S3 bucket not being configured, right? I mean it was so trivial of a problem, but still, it takes one hole-- (hearty laughing) One, one entry point for malware to get in. One entry point to get into any network, whether it's IoT. This is the huge challenge. So the question there is, automation. Do you do the, so, again, these are the, that's a solvable problem with Capital One. What we don't know is, what has Capital One done that we don't know that they've solved? So, again, I look at that breach as pretty, obviously, major, but it was a freakin' misconfigured firewall. >> So, come back to your comments on multi-cloud. I'm inferring from what you said, and I'd love to get your opinion, Sarbjeet, that multi-cloud is not an architectural strategy. I've said this. It's kind of a symptom of multiple vendors playing. But so, can multi-cloud become, because certainly VMware, IBM RedHat, Google with Anthos, maybe a little bit less Microsoft, but those three-- >> Dell Technologies. >> Cisco, Cisco and certainly Dell, all talking about multi-cloud as the clear strategy, that's where CIOs are going; you're not buying it. Will it ever become a clear strategy from an architectural standpoint? >> Multi-cloud is the NSX, and I don't mean NSX as in VMware NSX; it's the Acura NSX of enterprise IT. The idea of owning the NSX is great, it brings me into the showroom, but I am going to buy, I'm going to go over to the Honda side, or I'm going to go buy the MDX or something more reasonable. Multi-cloud, the idea, sure it's possible. It's possible for me to own an NSX sports car. But it's more practical for me to be able to shop around. I can go to Google via CloudSimple, I mean I can go via CloudSimple to Azure, GCP, or I can go VMC; I have options to where I land. But to say that I am going to operate across all three? That's the NSX. >> If you had an NSX sports car, by the way, to use the analogy, in my mind a great one, the roads aren't open yet. So, yeah, okay great. (hearty laughing) >> Or you go to Germany and you're in California. So, the transport, and again, in the applications, you could build tech for good applications all you want, and they're talking about tech for good here, but if it's insecure, those apps are going to create more entry points. Again, for cyber threats, for malware, so again, the security equation, and you're right, is super important, and they don't have it.
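As a quick back-of-the-envelope check on the deal math Dave walked through above: the 5x revenue multiple and the $1B and $3B subscription-revenue adds are the panel's own numbers, not figures from any filing.

```python
# Back-of-the-envelope check on the panel's numbers. The 5x revenue
# multiple and the $1B / $3B subscription adds come from the
# conversation above, not from any filing.

cash_out = 2.7e9          # Pivotal + Carbon Black cash outlay
multiple = 5              # assumed revenue multiple for VMware

for year, revenue_add in [(1, 1e9), (2, 3e9)]:
    implied_value = revenue_add * multiple
    print(f"Year {year}: +${revenue_add / 1e9:.0f}B subscription revenue "
          f"-> ${implied_value / 1e9:.0f}B implied value")

print(f"Against ${cash_out / 1e9:.1f}B of cash out")
```

Run as-is, this prints an implied $5B of value in year one and $15B in year two against the $2.7B outlay, which is the shape of the argument being made.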
>> Dave: What's your thought on all (mumble)? >> Sarbjeet: I think on multi-cloud you are, when you are going to use multi-cloud you going to expand the threat surface if you will 'cause you're putting stuff at different places. But I don't think it, like as you said Dave, the multi-cloud is not more of an architectural choice, it's more like a risk mitigation strategy from the vendor point of view. Like, Amazon, who they don't compete with or who they won't compete with in the future we don't know, right? So... >> You mean within the industry. >> Yeah, within the industry right-- >> Autos or healthcare or... >> Sarbjeet: Yeah, they will, they are talking about that, right? So if you put all, all sort of all your bets on that or Azure, let's say even Azure, right? They are not in that kind of category, but still if you go with one vendor, and that's mission critical and something happens like government breaks them up or they go under, sideways, whatever, right? And then your business is stuck with them and another thing is that the whole US business, if you think about it at a global scale, like where US stands and all that stuff and even global companies are using these hourglass providers based in US, these companies are becoming like they're becoming too big to fail, right? If you put everything on one company, right, and then something happens will we bail them out? Right, will the government bail them out? Like stuff like that. Like banks became too big to fail, I think. I think from that point of view, bigger companies will shift to multi-cloud for, to hedge, right, >> Risk Mitigation >> Risk mitigation. >> Yeah, that's, okay, that's fair. >> I mean, I believe in multi-cloud in one definition only. I think, for now, the nirvana of having different workload management across utility bases, that's fantasy. >> Keith: Yeah, that's fantasy. >> I think you could probably engineer it, but there might not be a workload for that or maybe data analytics I could see moving around as a use case, certainly, but I think-- >> D-R! >> The reality is, is that all companies will probably have multiple clouds, clearly like, if you're going to run Office 365, and it's going to be on Azure, you're an Azure customer, okay. You have Azure cloud. If you're building your security stack on Amazon, and got a development team, you're on Amazon. You got two clouds. You add Google in there, big tables, great for certain things you know, Big Query, you got Google. You might even have Alibaba if you're operating in China So, again, you going to have multiple clouds the question is, the workloads define cloud selection. So, I've been on this thing, if you got a workload, an app, that app should choose its best infrastructure possible that maximizes what the outcome is. >> And John, I think what people fail to realize, that users, when you give them a set of tools, they're going to do what users do, which is, be productive. Just like users went out and took credit cards swiped it and got Amazon. If you, if in your environment you have Amazon you have GCP, you have Azure, you have Salesforce, O-365, and a user has access to all five platforms, whether or not you built a multi-cloud application a user's going to find a way to get their work done with all five, and you're going to have multi-cloud fallout because users will build data sets and workloads across that, even if IT isn't the one that designed it. 
>> All right, guys, final question of the Power Panel. Dave, I want to include this for you too, and I'll weigh in as well. Take a minute to share what you're thinking right now about the industry. What's taking up your attention? What's dominating your Twittersphere right now? What's the bee in your bonnet? What's the hot-button issue that you're kicking the tires on, learning about, or promoting? Sarbjeet, we'll start with you. What's on top of the mind for you these days? >> I think we talk about multi-cloud all the time, that's in discussions all the time, and then blockchain is another, like, slow-moving train, if you will. I think it's arriving now, and we will see some solutions coming down the pike from different, like a platformization of the blockchain, if you will, that's happening. I think those are actually two things I keep my eyes on, and how developers are going to move, which side to take, and then how AWS's dominance is challenged by Microsoft and Google. There's one thing I usually talk about on the Twittersphere, is that there's a data gravity and there's a skills gravity, right? So people who are getting trained on Amazon, they will tend to stay with them, 'cause that's, at the end of the day, it's people using technology, right? So, moving from one to another is a challenge. Whoever throws in a lot of education at the developers and operators, they will win. >> Keith, what are you gettin' excited about? >> So, CTO Advisor has this theory about the data framework, or data infrastructure. Multi-cloud is the conversation about workloads going here, there; irrelevant, it's all about the data. How do I have a consistent data policy? A data protection policy, data management policy across SaaS, O365, Salesforce, Workday, my IaaS providers, my PaaS providers, and on-prem; how do I move that data and make sure... Another data management backup company won Best of VMworld this year. This is like the third or fourth year, and the reason, it's not because of backup. It's because CIOs, CDOs are concerned about this data challenge, and as much as we want to talk about multi-cloud, I think, well, the industry will discover the problem isn't in Kubernetes, the solution isn't in Kubernetes; it's going to be one of these cool start-ups or one of these legacy vendors, such as NetApp, Dell, EMC, that solves that data management layer. >> All right, great stuff. My hot button is cloud 2.0, as everyone knows. I think there's new requirements that are coming out, and what got my attention is this enterprise action: VMware, the CIA deal at Amazon, the Jedi deal show that there are new requirements that our customers are driving that the vendors don't have, and that's a function that cloud providers are going to provide, and I think that's the canary in the coal mine. >> I've got to chime in. I've got to chime in. Sorry, Lenard, but it's the combination; what excites me is the combination of data plus machine intelligence and cloud scale. A new scenario of disruption, moving beyond a remote set of cloud services to a ubiquitous set of digital services, powered by data, that are going to disrupt every industry. That's what I get excited about. >> Guys, great Power Panel. We'll pick this up online. We'll actually get the Power Panels working out of our Palo Alto studio. If you haven't seen the Power Panels, check them out. Search "Power Panels theCube" on Google, you'll see the videos. We talk about an issue, we get experts; it's an editorial product. You'll see more of that online.
More coverage here at VMWorld 2019 after this short break. (lively techno music)
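Keith's consistent-data-policy question from the panel above is easiest to picture as one policy object evaluated everywhere the data lives, whether that's SaaS, IaaS, PaaS, or on-prem. Everything in this sketch, the policy fields, the asset records, and the checks, is hypothetical, meant only to show the shape of the idea.

```python
# Hypothetical sketch: one data-protection policy checked against
# assets in SaaS, IaaS, and on-prem locations alike.

POLICY = {"encrypted": True, "backup_max_age_days": 7}

ASSETS = [
    {"name": "crm-export",  "where": "saas",    "encrypted": True,  "backup_age_days": 2},
    {"name": "vm-volume-7", "where": "iaas",    "encrypted": False, "backup_age_days": 1},
    {"name": "hr-db",       "where": "on-prem", "encrypted": True,  "backup_age_days": 12},
]

def violations(asset):
    out = []
    if POLICY["encrypted"] and not asset["encrypted"]:
        out.append("unencrypted")
    if asset["backup_age_days"] > POLICY["backup_max_age_days"]:
        out.append("stale backup")
    return out

for a in ASSETS:
    v = violations(a)
    status = ", ".join(v) if v else "ok"
    print(f"{a['where']:8} {a['name']:12} -> {status}")
```

The single policy applied across heterogeneous locations is the whole argument: the hard part in practice is the per-platform plumbing behind each field, not the policy itself.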
Randy Redmon & Jake Sager, DXC Technology | Cisco Live US 2019
>> Live from San Diego, California, it's the Cube. Covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Hi, welcome back to Cisco Live from sunny San Diego. I'm Lisa Martin with Dave Vellante, and Dave and I are joined by a couple of guests from DXC. To my right we've got Jake Sager, principal client executive TMT, Tech Media Telecom. Jake, great to have you on the program. >> Thank you. >> Now we're broadcasting from the sun. And Randy Redmon, the director of Security Services Product Management. Randy, welcome. >> Thank you very much. Glad to be here. >> So we're in the DevNet zone. You can imagine all of the exciting conversations going on behind us here. Guys, I just noticed that DXC, you guys have been around for a couple of years, an IT services company with 25 billion in annual revenue, but you were just named, I think it was this morning, number three on the CLUS 2019 solution provider list, up from number 10 last year. Pretty good momentum. Jake, we'll start with you. What are you seeing on the street, in the market, with respect to digital transformation? What are customers' pains, and how is DXC helping knock them out of the park? >> Well, I think, you know, DXC has a long legacy, a history of over 60 years of business together, from CSC, EDS, and obviously the HP heritage. So we've kind of seen it all, and seen the business transform from a heavily on-the-ground business to now a lot of things in the cloud. With that, obviously, customers are looking to do business in different ways. There's a lot of digital disruptors out there. So they're looking to find the new solution that's going to stave off the competition, kind of skirt it, find the newest best thing before they can, and find customer-driven solutions rather than just cost-driven solutions and other things like that. >> So when you say customer-driven solution, let's dig into that a little bit more. What does that mean? And how is it actually, how does it manifest? >> Well, I think the customer can be a lot of different things to a lot of different people. In retail, it can be somebody walking into your store; in banking, it can be somebody using an app. But what does that end consumer want? What's going to make their life easier and make them go to you versus another company? And that's really what companies need to be looking at. There's no one answer to anything. But it takes a lot of thought leadership to try to come up with something brand new that is not going to be disrupted by the next Airbnb or Uber. >> So your CEO, Michael, talks a lot about digital transformation. >> Right. >> And Randy, you're here on the security side of things, so we're going to dig into that a little bit. But in terms of the evolution of digital transformation, generally and specifically, how people are rethinking security as a result, because we often say, what's the difference between a business and a digital business? Well, it's how they use data. Okay, well, and that opens up a whole can of worms on security. So what are you seeing in terms of the evolution of the so-called digital transformation, but specifically how it's affecting their posture towards security? >> Yeah, absolutely, because in a digital environment, customers are completely rethinking both how their infrastructure is deployed and how their applications are deployed. And so really, it's opening up whole new avenues for security threats to enter their environments.
At the same time, there are so many individual security technologies, and customers are really struggling with what are the right technology choices to make, and then, more importantly, how to operate them effectively, how to implement appropriate security policies, how to actually monitor effectively for threats across the environment. So digital transformation is changing their business environment, but it's really completely opening up the sphere on the security side of the house. >> So Jake, we were talking and I had asked you what your favorite topics are; you said smart city, IoT, and connected cars. Sounds like a security nightmare. >> Yeah. >> But it's an opportunity as well for you guys. >> Absolutely. >> So you go in, what's the customer conversation like? I mean, pick one or all three, if you can generalize, in terms of, I mean, these are all new things, right? It's the Wild West right now. What's the customers' mindset? Like you said, they don't want to get disrupted. They're looking at new opportunities. What are they looking at? How are you guys helping them? >> Well, it depends industry by industry. You know, when it comes to healthcare, we can help with remote telemedicine, operating medical equipment remotely. But again, that's going to bring in a whole bunch of new security threats, which Randy is going to be more than equipped to talk about. But I think securing that is really a big problem. When you start talking about massive IoT, you're talking about thousands and thousands of sensors out there in a smart city or oil, mining, gas, utility, like they were talking about earlier today. You're talking about tons of different entry points, lots of different vulnerabilities. So that's definitely a huge issue for them. It's also a ton of new data that they don't know how to manage, that they don't know how to make sense out of, through artificial intelligence or other means. So for a company like us that really has strength in security, artificial intelligence, machine learning, as well as a strong background of data center, data lake management, helping them kind of figure out what data to use and how to use it most effectively, that's really where we shine. 'Cause we're not necessarily the company providing the hardware. We're not the company writing the software. But we're really the glue that integrates it all together, and brings all those multiple solutions together. 'Cause in IoT, it's an ecosystem. It's not a solution in a box. >> Let's dig into the Smart City concept. It's so fascinating. I've read up on Las Vegas, the city of Las Vegas, which has been on the Cube, and has done a lot to really transform that city. But to your point, Jake, about data, I think Chuck Robbins said this morning in the keynote that organizations are only really getting insight from less than 1% of their data. >> Right. >> It must be one of those, where do we start? >> Right. >> So you are talking about working with municipalities on becoming smart cities and being able to apply some of your expertise in AI. Where do you start that conversation? >> Well, I mean, the term's overused, I think, data is the new oil, right? So if you don't know where your data is coming from and you're only getting 10%, you're not doing a very good job as an oil producer, right? So our company is very good at identifying where the data is.
'Cause a lot of times, that's half the problem, is finding where that data resides, getting it into a place where you can actually ingest it, and then actually analyze it and get something useful out of it. Companies typically don't know where all their data is, they don't know how to analyze it, and they definitely don't know how to turn it into something useful. So that's something DXC does across the board. >> What about the partnership with Cisco? So Cisco, obviously, it's got the networks, it's got, you know, packets flying around. It's got to secure those. What's the partnership like? Are you leveraging their products? I'm sure you are. You guys use everybody's products. >> Right. >> What's the partnership like? And what specifically are you doing in the security area, Randy? >> Yeah, so in terms of the partnership with Cisco, we're certainly looking in several areas, frankly, because, right, we're looking with our clients at a solution-led approach, right. And that's one of the things that we like with Cisco, is the broad portfolio meshes with our broad portfolio. So certainly key areas of focus for us right now are in the Unified Communications space and how we're helping with collaboration for our clients, but also in the security area, technologies such as Cisco Stealthwatch, which is helping provide more visibility into what's happening in networks today. Because more and more, our view is that security, as we were just talking about, even in the IoT space, becomes more of an analytics exercise. It's less about really being able to detect what you already know; it's really about being able to drive detection from the unknown. And so the more data that we can get, the more visibility into network environments, the better. >> How do you work with Cisco? 25% of Cisco's revenue is what they call services. So, where do they leave off? I mean they're a product company. You guys are a services firm, but they have services. >> Right. >> How do you interact with them? You don't compete, I presume. At least there's maybe some overlap. But, where do they leave off and you guys pick up? >> Yeah, so certainly, we're not competing with Cisco from a services perspective. We're certainly relying on Cisco services for hardware and professional support around their technology. We're really there to provide overall solution design, architecture, installation, and we'll leverage Cisco professional services where that's appropriate. And then we provide managed services on the back end as well. >> So you're saying their role is to make sure it's architected properly and it's working in the way it's promised. Your role, I'll say it my way and you can correct me, is to help the customer figure out how to apply those technologies to create business value. >> Well, exactly, and also, typically in a client solution, Cisco may be one of several technologies that are involved in a broader solution-- >> you got to make it all work together tomorrow-- >> And part of our role is to act as that integrator, to bring the core Cisco elements with the DXC services and-- >> So your job's getting harder and harder and harder. >> It fully is, from a security perspective. >> Dave: As a consumer, things are getting easier, right? Oh, yeah, Google, Facebook, Instagram, it's so easy. But the back end, with, you know, cloud and DevOps, the pace of change. How have you seen that affect your business? How are you dealing with that rapid change?
One is that it's changing how we go about the process in terms of developing services and capabilities for our clients. Just as Agile has taken over actually in the application space, It's really driving how we think about actually developing offerings now around getting technology out into the market more quickly, evolving and growing capability from there. And so really, it's all about how we get proof of value for our clients quickly by getting technology into their hands as quickly as possible. >> Lisa: So let's talk about some of these waves of innovation Cisco was talking about this morning. Talking about this explosion of 5G, Wi-Fi 6 being able to have this access that works really well indoors outdoors, how that's changing even Jake you know, consumer demand. What opportunities, and Jake I'll start with you, what opportunities and some of the things that Cisco was talking about with respect to connectivity, AI with GPUs being everywhere, edge mobile, architectures becoming so a Morpheus opportunity for DXC to help customers really not just integrate the technologies but to excel and accelerate themselves to define new services, new business models. What's your differentiation point there? >> I mean, our main differentiation point from DXC is agnostic to the technology. We really specialize in being vendor agnostic, finding the best of breed companies out there and integrating it into our portfolio and offering it to our clients. If our client wants Azure, we're not going to try to sell them on Google Cloud. If they want one or the other, we're going to be hand in hand with the customer either way. With these new technologies that come around, it's just going to open the doors for so many new types of business, so many more disruptive businesses. No matter what comes along our goal is to have that portfolio in hand, which Cisco rounds out to be able to offer to our over 6000 enterprise clients. So we need to be able to manage every shape, size, variety, industry, anything you can think of. >> What's the trend? Is the trend, yeah, we want as you say, okay, we'll make it make it work for you or is the trend like, you guys figure it out. We're not sure what the right fit is. How much of that is going on? >> I'd say you probably see 50 50. (Jake laughs) >> I think we're seeing a lot of that. Certainly as clients are migrating applications to the cloud. They may be starting with a particular cloud platform, but clients are really frankly fairly agnostic in terms of the cloud platform they're migrating to. They're taking advantage of more and more SAS applications. So one of the trends that we're definitely seeing is how to address client security concerns in a hybrid cloud environment because that's more and more what we expect the future to be, even if clients are focusing on a particular cloud platform as their starting point today. >> So as data is traversing the network and one of the one of the things that I heard this morning from Chuck Robbins keynote was that the common denominator as all of these changes and waves in innovation are coming is the network. Data is traversing the network. Given that is a given and there's only going to be more and more data and more connected devices, more mobile data traffic. Randy question for you. 
How can DXC, how can you help customers leverage your expertise in, say, security and AI, as you mentioned, to extract more value from their data and allow them to become far more secure? As it's no longer acceptable, you can't just simply put a firewall around a perimeter that has so many amorphous points. >> Yeah, and absolutely. And as we mentioned, with all of the data that's available today, it really becomes more of an analytics problem. And one of the investments that DXC is making is specifically in our security platform, which allows us to ingest data from pretty much any infrastructure data source and be able to leverage capabilities to provide analytics, machine learning, and automation on top of that, to help clients leverage the power of the data, and specifically, from a security perspective, not just drive detection, because that's interesting; the question I get from clients is, well now, what do I do about it? >> Right. >> And we're leveraging investment in our platform automation to actually begin to take automated actions on behalf of our clients in order to solve security problems. >> Excellent, guys. Well, thank you so much, Jake and Randy, for stopping by the Cube and talking with Dave and me about what you guys are doing at DXC. The next time we'll have to talk about connected cars. >> Sure. >> Thank you. >> Alright. For Dave Vellante, I'm Lisa Martin; you're watching the Cube live from Cisco Live in sunny San Diego. Thanks for watching. (techy music)
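Randy's ingest-analyze-act loop above might look roughly like this in miniature: baseline each host's telemetry, flag sharp deviations, and trigger an automated response. The baseline math, the threshold, and the quarantine call are invented for the sketch and are not DXC's platform.

```python
# Miniature sketch of detect-then-act: flag hosts whose outbound
# traffic deviates sharply from their own baseline, then quarantine.
# Threshold, telemetry, and response action are all illustrative.

from statistics import mean, stdev

def anomalous(history_mb, latest_mb, sigmas=3.0):
    if len(history_mb) < 5:
        return False                      # not enough data to judge
    mu, sd = mean(history_mb), stdev(history_mb)
    return sd > 0 and (latest_mb - mu) / sd > sigmas

def quarantine(host):
    # Stand-in for an automated response: push a firewall rule,
    # open a ticket, notify the SOC, and so on.
    print(f"Quarantining {host} and paging the SOC")

telemetry = {
    "web-01": ([110, 95, 102, 99, 105, 101], 104),
    "db-02":  ([40, 42, 38, 41, 39, 43], 880),   # exfil-like spike
}

for host, (history, latest) in telemetry.items():
    if anomalous(history, latest):
        quarantine(host)
```

The detection half is ordinary statistics; the point Randy makes is that the value shows up only when the second half, the automated action, is wired in.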
Ashesh Badani, Red Hat | Red Hat Summit 2019
>> Announcer: Live, from Boston, Massachusetts, it's theCUBE covering Red Hat Summit 2019. Brought to you by Red Hat. >> Well, welcome back here in Boston. We're at the BCEC as we are starting to wrap up our coverage here of day two of the Red Hat Summit, 2019. Along with Stu Miniman, I'm John Walls, and we're now joined by Ashesh Badani, who is the senior vice president of Cloud Platforms at Red Hat. Been a big day for you, hasn't it, Mr. Badani? >> It sure has, thanks for having me back on! >> You bet! All right, so OpenShift 4, we saw the unveiling, your baby gets introduced to the world. What's the reaction been between this morning and this afternoon, in terms of people, what they're asking you about, what they're most curious about, and maybe what their best reaction is. >> Yeah, so it's not necessarily a surprise for the folks who have been following OpenShift closely; we put the beta out for a little while, so that's the good news, but let me roll back just a little. >> John: Sure. >> I think another part of the news that was really important for us is our announcement of a milestone that we crossed, which is a thousand customers, right? And it was at this very summit, and theCUBE definitely knows this well, right, because they've been talking for a while, at this very summit in 2015, four years ago, that we launched OpenShift version 3. Right, and so, you know, you fast forward four years, right, and now the diversity of use cases that we see, you know, spanning established apps, cloud native apps, we heard Exxon talking about AI/ML data science that they're putting on the platform, in a variety of different industries, is amazing. And I think the way OpenShift 4 has come along for us is us having the opportunity to learn what have all these customers been doing well, and what else do we need to do on the platform to make that experience a better one. How do we reimagine enterprise Kubernetes to take it to the next level? And I think that's what we're introducing to the industry. >> Ashesh, I think back four years ago, Kubernetes was not something that was on the tip of the tongues of most people here. Congratulations on 1,000. >> Thank you. >> I hear, what, 100, 150 new customers every quarter is the current rate there, but what I've really enjoyed, talked to a CIO and they're like, okay, we're talking about digital transformation, we're talking about how we're modernizing all of our environments, and OpenShift is the platform that we do it on. So, talk a little bit, from a customer's standpoint, the speeds, the feeds, the technical pieces, but that outcome, what is it an enabler of for your customers? >> Yeah, so, excellent points, Stu. We've seen wholesale, complete digital transformations underway with our customers. So whether it's Deutsche Bank, who came and talked about running thousands of containers now, moving a whole bunch of workload onto the platform, which is incredible to see, whether it's a customer like Volkswagen, who was talking yesterday, if you caught that, about building autonomous, self-driving sets of technologies on the platform. What we're seeing is not just what we thought we would only see in the beginning, which is, one, build cloud native apps, and digital apps, and so on, or modernize existing apps and bring them on the platform, but also technologies that are making a fundamental difference, and I'll call one out. So I'm a judge for The Innovation Awards, we do this every year, I have been for many years, I love it, it's one of my favorite parts of the show.
This year, we had one entry, which is one of the winners, which is HCA, which is a healthcare provider, talking about how they've been using the OpenShift platform as a means to make a fundamental difference in patients' lives. And when I say fundamental difference, actually saving lives. And you'll hear more about their story, but what they've done is be able to say, look, how can we detect early warning signals faster than we have been, take some AI technology, and correlate against that, and see how we can reduce sepsis within patients. It's a very personal story for me; my mother died of sepsis. And the fact that they've been able to do this, and I think they're reporting they've already saved dozens of lives based on this. That's when you know the things that you're doing are making a real difference, making a real transformation, not just in actual customers' lives, but in users and people around the world. >> You were saying earlier too, Ashesh, about looking at what customers are doing and then trying to improve upon that experience, and give them a more effective experience, whatever the right adjective might be, in terms of what you're doing with 4. If you had to look at it and say, okay, these are the two or three pillars of this where I think we've made the biggest improvement or the biggest change, what would those be? >> Yes, so, one is to look at the world as it is in some sense, which is what a customer's doing. Customers want to deploy to hybrid cloud, right? They want choice, they want independence with regard to which environments they run it on, whether it's physical, virtual, private, or any public cloud. Customers want one platform, to say I want to run these next generation, cloud native, microservices-based applications along with my established stateful applications. Customers want a platform for innovation, right? So for example, we have customers that say, look, I really need a modern platform because I want to recruit the next generation of developers from colleges; if I don't give them the ability to play with Go, or Python, or new databases, they're gonna go to some Silicon Valley company, and I'm going to deplete my pool of talent that I need to compete, right? 'Cause digital transformation is about taking existing companies and making them digitally enabled. Going forward, what we're also seeing is the ability for us to say, well, maybe the experience we've given existing customers can be improved. How do we, for example, give them a platform that's more autonomous in nature, more self-driving in nature, that can heal itself, based on, for example, a critical update that's required that we can send over the air to them. How can we bring greater automation into the platform? It's all of those ideas that we've got, based on how customers are using it today, is what we're bringing to bear going forward. >> Ashesh, one of the areas we have, trying to help customers parse through the language, is everybody's talking about platforms. If you look at the public clouds, everybody's all in on Kubernetes; a few weeks ago, we were at the Google Cloud event, talked to Red Hat there, there's Anthos, there's OpenShift; look at Azure, we saw Satya Nadella up on stage, and you're like, okay, they've got their own Kubernetes platform, but I've got OpenShift fully integrated there. >> Ashesh: Yeah. >> Can you help us kinda understand how those fit together, because it's an interesting and changing dynamic. >> Well it's a very Silicon Valley buzzword, right?
Everyone wants a platform, everyone wants to build a platform, Facebook's a platform, Uber's a platform, Airbnb is, everything's seemingly a platform, right? What I really want to focus on more is, in regard to, we want to be able to give folks literally an abstraction level, an ability for companies to say I want to embrace digital transformation. Before we get there, someone's like, what's digital transformation? I don't even understand what that means anymore. My simple definition is basically flipping the table. Typically companies spend 80% on maintenance, 20% on innovation; how do we flip that? So they're spending 80% on innovation, 20% on maintenance. So if we're still thinking in those terms, let me give you a way to develop those applications, spend more time and energy on innovation, and then allow for you to take advantage of what I'll call a pool of resources: compute, network, and storage, across the environment that you have in place. Some of which you might own, some of which some third parties might provide for you, and some of which you get from public cloud. And take advantage of innovation that's being done outside, innovative services that come from either public cloud providers, or ISVs, or separate providers, and then be able to do that in an iterative, rapid fashion, you know, develop, deploy, iterate quickly. So to me that is really fundamentally what we're trying to provide customers, and it takes different forms in terms of packaging. >> Maybe you can explain to me, the Azure OpenShift seems different than some of the other partnerships. Two years ago, when we were sitting in this building, we talked to you about AWS with OpenShift in that partnership, so what's differentiated and special about the Azure OpenShift integration. >> Yeah, so the Azure partnership, it's a good question, because we've now taken our partnering with the public cloud providers to the next level, if you will. With Azure there's a few things in play; first, it's a jointly offered managed service from Red Hat and Microsoft, where we're both supporting it together. So in the case of OpenShift and AWS, that's, you know, us delivering OpenShift directly as the service; in this case, it's right there with Microsoft, working closely together to make that happen. It's a native service to Azure, so if you saw in the keynote, you could use a command line to call OpenShift directly, integrated into the Azure command line. It's available within the interface of Microsoft Azure. So it feels like a native service; you can take advantage of other Azure services and bring those to bear, so obviously it increases the developer experience from that perspective. We also inherit all the compliances, certifications, that Microsoft Azure has, as well, for that service, as well as all the availability requirements that they put out there, so it's much more closely integrated together, much better developer experience, native to Azure, and then the ability for the Microsoft sales team to go out and sell it to their customers in conjunction. >> You talk a lot about different partnerships, and bringing this collaborative, open mindset to each and every relationship; how hard is that to do? Because you have your own way of doing things and it's worked very well, and yet, you go out and you have these new partnerships or extensions of partnerships, and not everybody with whom you work does things the same way, and so, everybody's gotta be malleable to a certain extent, but just in terms of being that flexible all the time, what does that do for you?
>> So, we take that for granted sometimes, the way we work. And I don't mean to say that to be boastful, or arrogant, in any fashion. I had an interview earlier today, and the reporter said, why don't you put on your page that you're 100% open source? And I said, we never put that on our page because that's just how we work; we assume that, we assume everyone knows that about us, and we're going forward. And he says, well, I don't know, perhaps there's others that don't know. And he's right. The world's changing, we're expanding our opportunities in front of folks. In the same way, we've only and always known how to collaborate with others in the community: before we fully embraced OpenStack, there were certain projects that Red Hat was investing in that were Red Hat driven, and we said maybe there wasn't as much community around them, so we went and embraced and fully participated in the OpenStack community. Same's the case, for example, in kubernetes too. It's not a project that we created on our own; it was in conjunction with Google, and many others in the community. And so that's something that's part of our DNA. I'm not sure we're doing anything different in engaging with communities; it's just how we work. >> So, Ashesh, I know your team's busy doing a lot of things. We've been hearing about what sessions are overflowing, down on the expo floor, so why don't you give us some visibility. But there was one specific one I wondered if you could start with. >> Ashesh: Sure. >> So down on the expo floor, it's a containerized environment and it has something to do with puppies, and therefore how does that connect with OpenShift 4, if we can start there. >> That's a tough one; you're gonna have to go and ask the puppies how to make a difference in the world. (laughing) >> John: So we go from kubernetes to canines, (laughing) that's what we're doing here. >> I do believe they're comfort dogs, but there was coding and some of the other stuff, so give us a little bit of the walk around the expo floor, the breakouts and the like, in some of the hot areas that your team's working on. >> Fair enough, fair enough. Maybe not puppies, but maybe we're trying to herd cats; close enough, right? >> John: Safer terrain. >> The amount of interest, the number of sessions with OpenShift, or container based technologies, cloud based technologies, it's tremendous to see that. So regardless of whether you see the breakouts that are in place, the customer sessions, I think we've got over 100 customers who are presenting on all aspects of their journey. So to me, that's remarkable. Lots of interest in our road map going forward, which is great to see, standing room only for OpenShift 4 and where we're taking that. Other technology that's interesting: the work, for example, we're doing in serverless. We announced an open source collaboration with Microsoft, something called KEDA, the Kubernetes event-driven autoscaling project, so it's interesting how customers can kind of engage around that as well. And then the partner ecosystem: you can walk around and see just a plethora of ISVs, who are all looking to build operators, or have operators, and are certifying operators within our ecosystem. And then it's ways for us to expose that to our joint customers. >> We're gonna cut you loose, and let you go, the floor's gonna be open for a few minutes, those puppies are just down behind Stu, we'll let you go check that out. >> Alright, thanks, I hear you can adopt them if you want to, as well.
>> Before we let you go see the comfort dogs, 1,000 customers, where do you see, when we come back a year from now, where you are, where you wanna see it go; show us a little bit looking forward. >> So there's been some news around Red Hat that has happened over the last few months, as people are hearing. I look at that as a great opportunity for us to expand our reach into markets, both in terms of industries perhaps we haven't necessarily gone into, that other companies have been. Perhaps we say it's manufacturing; perhaps this is the opportunity for us to cross the chasm, have a lot more trained consultants who can help get more customers on the journey, so I fully expect our reach increasing over a period of time. And then you'll see, if you will, iterations of OpenShift 4 and the progress we've made against that, and hopefully many more success stories on the stage. >> Alright, looking forward to catching up next year, if not sooner. >> Ashesh: Okay, excellent. >> John: And congratulations on today, and best of luck down the road. >> Thanks again for having me. >> And good to see you! >> Ashesh: Yeah, likewise! >> Back with more on theCube; you are watching our coverage live, here from Red Hat Summit, 2019, in Boston, Massachusetts. (upbeat techno music)
Inhi Cho Suh, IBM Watson Customer Engagement | CUBEConversation, March 2019
(upbeat pop music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CubeConversation. >> Hello, everyone, welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, co-host of theCUBE. We are here with Inhi Cho Suh, General Manager of IBM Watson Customer Engagement, and a longtime Cube alumni; I think she's been on dozens of times. Great to see you again. Welcome to our Palo Alto Studios. >> Yeah, great being here, John. >> So, we haven't chatted in awhile. IBM Think just happened, a little bit of a rainy event, here in February. Interesting changeover since we last talked, but first give an update on what you're up to these days; what group are you leading, what's new? >> Okay, well first of all, I'm here based in California, which I'm excited about, and I lead our Watson West office, which is our Watson headquarters, here on the west coast, in downtown San Francisco, and we hosted our Think conference. And at IBM I lead what we call our Watson Customer Engagement business unit, which is really the business applications of how we apply Watson and other disruptive tech to a line of business audiences, both SaaS and on premise software. So I'm really excited about the areas of applying AI and machine learning, as well as Blockchain, to things like supply chain and logistics, to order management, to the next generation of retail. A lot of new, exciting areas. >> Yeah, we've had many conversations over the years, from big data onward, as your career has spanned across IBM, and you have a much more horizontal view of things now. You're horizontally scalable, as we say in the cloud world. What's your observation of the trends these days? Because there's a lot of waves. Actually, the wave that you guys announced, Watson Anywhere with IBM Cloud Private, is now getting a big tailwind for IBM; Marvin and I had an amazing conversation, and that video went viral. What's your thoughts in general about the overall ecosystem? Because you're here in Silicon Valley, you've seen the big waves, you've got another big data world, cloud is here, multi cloud. What's your thoughts on the big mega-trends? >> Yeah, that's a good question. I think in the first chapter of cloud, everyone ran to public cloud. When you look at it through the lens of the enterprise, though, the hot topic right now in the second chapter is really about not just public cloud, but multi-cloud, hybrid cloud. Meaning, whether it's private or public, it's about thinking about the applications and the nature of the applications, and regardless of where the data sits, what are the implications of actually getting work done? Through, kind of, new container services, new ways of microservices in development, of how APIs are integrated. And so, the hot topic right now is definitely hybrid cloud, multi cloud. And the work we've done to certify what we call IBM Cloud Private really enables us to take any business application not just to our cloud but to any cloud, and actually to enable Watson and Watson-based applications across multi cloud environments as well. >> So, chapter two; Ginni mentioned that in her keynote. I want to dig into that, because we've been talking a lot about multi cloud architecture, and one of the big debates has been, in the industry, oh, don't pick a sole cloud.
I've been writing a bunch of content about that, with this DOD JEDI deal, with Amazon and Oracle fighting it out, but that's also happening at the enterprise. And the reality is, everyone has multiple clouds. If you've got Salesforce or if you've got this and that and the other thing, you probably have multiple clouds, so it's not so much sole cloud versus multi cloud as it is workloads having the right cloud for the job, and that seems to be validated at IBM Think, in talking to the top technical people in the industry. They all say, pick the right cloud for the job. And we've heard that before in Big Data: pick the right tool for the job. So, given that, workloads seem to be driving the demand for cloud. Since you're on the app side, how are you seeing that? Because the world's flipped. It used to be that infrastructure and software enabled the app's capabilities. Now the workloads, with infrastructure as code, enabled by cloud, are driving the requirements. This is a changeover. >> It is a big change, and part of, I would say, when people first ran to the cloud, a lot of the public cloud services were digital SaaS services, where people were wanting to stitch multiple applications across clouds, and that became a challenge. So in this next iteration, what I'm seeing is really a couple things. One is data gravity. So, where does the data actually reside for the workload that's actually happening? Whether it's the transactions, whether it's customer information, whether it's product information, that's one piece. The second piece is a lot more analytics, right? And the spectrum of analytics runs from traditional warehouse capabilities, to, let's say, larger scale big data projects, to full blown advanced algorithms and AI applications. People are saying, look, not only do I want to stitch these applications across multiple clouds; I also want to make sure I can actually tap into the data to apply new types of analytics and derive new services and new values out of relationships, understanding of how products are consumed, and so forth. So, for us, when we think about it, we want to be able to enable that fluid understanding of data across the clouds, as well as protect and be thoughtful about the data privacy rights around it, compliance around GDPR, as well as how we think about the security aspects, for the enterprise. >> That is a great point. I think I want to drill down on the data piece. Your background in data obviously is going to be key in your job now; it's pretty obvious with Watson. But David Floyer, a Wikibon research analyst, just posted a taxonomy of hybrid cloud research report that laid out the different kinds of cloud you could have. There's edge clouds, there's all kinds of things from public to edge, so when you look at that, you're thinking, okay, the data plane is the critical nature of the cloud. Now, depending on which cloud architecture, for the use case, the workload, whatever, the data plane seems to be this magical opportunity. AI is going to have a big part of that. Can you just talk about how you guys see that evolving? Because, obviously, AI is a killer part of your strategy. This data piece is inter-operating across the clouds. >> Yes. >> Data management, governance; you're smiling, cause there's a killer answer coming. >> Totally. This is such a great set up.
Actually, Ginni even said it in her keynote at Think, which was, you can't have an AI strategy without an information architecture strategy, an IA strategy. And information architecture is all about what you said: it's data preparation, understanding the foundation of it, making sure you've got the right governance structure, the integration of it, and then actually how you apply the more advanced analytics on top. So, information architecture and thinking about the data aspects, and all kinds of data. The majority of the data actually sits behind, what I would say, the traditional firewall. It sits behind the firewalls of our enterprise clients, like 80 plus percent of it. And many of the clients, we actually recently did a study with about 5,000 senior executives, across many, many thousands of organizations, and 85% of them want to apply AI to improve their customer service, to improve the way they engage their clients and their products and services. So this is a huge opportunity right now for pretty much every organization to think through their data strategy, their information architecture strategy, as part of their overall AI strategy. >> So, a question I got on Twitter, which comes up a lot and is also in my notes here, I wanted to ask you: how can companies increase transparency and trust and mitigate bias in AI? Because this comes up a lot, and that's the question that comes in from the community: hey, I got my site, my apps running in Germany, I've got users over there, I'm global. I have to manage compliance, I've got all this governance now over my shoulders, kind of a pain in the butt, but also I don't want to have the software be skewed by bias and other things. And then, I also get this whole Facebook dynamic going on, where it's like, I don't trust people holding my data. This is a big, huge issue. >> It is enormous. >> You guys are in the middle of it; what's your thoughts, what's the update, what's the dynamic and what's the solution? >> So, this is a big topic. I think we could do a whole episode just on this topic alone. So, trust and developing trust and transparency in AI should be a fundamental requirement across many, many different types of institutions. So, first of all, the responsibility doesn't sit only with the technology vendors; it's a shared responsibility across government institutions, the consumers, as well as the business leaders, in terms of how they're thinking about it. The more important piece, though, when you think about the population that's available, that really understands AI, and is actually coding and developing on it, is that we have to think about the diverse population that's participating in the governance of it, because you don't want just one tribe or one group that's coding and developing the algorithms, or deciding the decision models. >> Like the nerds or the geeks; there's a social aspect, a societal aspect as well, right? Social science. >> Exactly. I actually just did a recent conversational series with Northwestern's Kellogg business school, around the importance of developing trust and transparency, not only in the algorithms themselves, but in the methodology of how you think about culture and values, and how ethics come into play through different lenses, depending on the country you live in, as you kind of referenced, depending on your different values and religious backgrounds.
It may be because of different institutional and/or policy positions, depending on their nature, and so there has to be a general awareness of this that's thoughtful. Now, why I'm so excited about the work we're doing at IBM is we've actually launched a couple new initiatives. One is what we call AI OpenScale, which is really a platform and an opportunity to have the ability to begin to apply AI, and see how AI operations and models function in production. We have methodologies in terms of understanding fairness, so there's the AI Fairness 360 kit, which is actually available in the open source world; there's a set of tools to understand and train people on recognizing bias, so even just definitions of, what do you mean by bias? It could be things like group think, it could be that you're just self-selecting on certain data sets to reinforce your hypotheses, it could be at an unconscious level, and it's not just traditionally socially oriented types of bias. >> It could be data bias, too, right? >> Totally. Machine-generated biases in the IoT world, also. >> So, contextual and behavioral biases kind of kick into play here. >> Yeah, but it starts with transparency and trust. It also starts with thoughtful governance; it starts with understanding your position on policy around data privacy, and those things should be educational conversations across the entire industry. >> How far along are we on the progress bar there? I mean, it seems like it's early, and we seem to have been talking about it for awhile, but it seems even more early than most people think. Still a lot more work. Your thoughts on where the progress bar is on this whole mash up of tech and social issues around bias and data? Where are we? >> We're really at the early stages, and part of the reason we're at the early stages is I think people have, so far, really applied AI in very simple, task oriented applications. The more, what we call, broad AI, meaning multi-task workflow applications, are starting, and we're also starting to see that in the enterprise. Now, in the enterprise world, you can still have bias. So, for example, when you talked about data bias, one of the simple examples I use is, think about loan approvals. If one of the criteria is based on gender, you may have a sensitivity around the lack of women business owners in the data, and that could mean a scoring algorithm that says, hey, maybe this is a higher risk, when in fact it's not necessarily a higher risk; it's just that the sampling is off, right? So, that would be a detection to say, hey, maybe you have sensitivity around that data set, because you actually have an insufficient amount of data. So, part of it is detecting and understanding biases: where you have sampling of data that's incorrect, where your segmentation could be rethought, where it may just require additional supervision or decision-making criteria as part of your governance process.
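To make that loan-approval check concrete, here is a minimal sketch using the open source AI Fairness 360 toolkit (aif360) that the conversation references; the toy dataset, column names, and numbers are hypothetical, not from any IBM product:

```python
# Minimal sketch: measuring gender bias in toy loan-approval data with AIF360.
# 'gender' is the protected attribute (1 = privileged, 0 = unprivileged) and
# 'approved' is the binary label. All rows are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    'gender':   [1, 1, 1, 1, 0, 0, 0, 0],
    'income':   [60, 85, 40, 70, 65, 90, 45, 75],
    'approved': [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['approved'],
    protected_attribute_names=['gender'],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'gender': 1}],
    unprivileged_groups=[{'gender': 0}],
)

# Disparate impact: unprivileged approval rate divided by privileged approval
# rate; a common rule of thumb flags anything below 0.8 for review.
print('Disparate impact:', metric.disparate_impact())
print('Statistical parity difference:', metric.statistical_parity_difference())
```

A result well below 0.8 here would not prove discrimination on its own; as Inhi notes, it can simply flag under-sampling that the governance process then investigates.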
>> This is actually a great area for young people to get involved, whether at their universities or in their curriculum; this kind of seems to be, whether it's political science and/or data science, coming together, you kind of have a mash up. What's your advice to people watching that might be either in high school, college, or rethinking their career, because this seems to be a hot area. >> It is a hot area, and I would recommend it for every student at every age, quite frankly. We're at such an early stage that it's not too late to join, and you're not too young, nor are you too old, to actually get in the industry, so that's point one. This is a great time for everyone to get involved. The second piece is, I would just start with online courses that are available, as well as participate in communities and companies like IBM, where we actually make available a number of web based applications where you can do some online training and courses to understand the services that we have, to begin to understand the taxonomy and the language. So a very simple step would be, learn the language of AI first, and then, as you're learning coding, if you're more technically inclined, there's just a myriad of classes available. >> Final question, before I move on to the topic around inclusion and diversity: machine learning is impacting all verticals. I was just in an interview, talking with Don En-ju-bin-ski; she's got a company where it's neuroscience and machine learning coming together. Machine learning's having an impact all over. Machine learning meets blank: every vertical, every market is being impacted by machine learning, which will trigger some of the things you're seeing on the app side. Your thoughts, looking at where you've come from in your career at IBM to now, just the evolution of what machine learning has enabled; your thoughts on the impact of machine learning? >> Oh, it's exciting, and I'll give you a real simple example. So one of the great things my own team actually did was apply machine learning to, everyone loves the holiday shopping period, right? Between Thanksgiving and New Years. So we actually developed what we call Watson Order Optimizer, and one of my favorite brands is REI, the Recreational Equipment Inc. company; they actually applied our Watson Order Optimizer to optimize, in real time, the best place to ship from. Let's say you want to order a kayak or a T-shirt or a hiking boot; shipping from stores, for most retailers, is a high cost variable, because you don't know what the inventory positions are, you don't necessarily know the movement of traffic into that store, you may not even know what the price promotions are. So what was exciting about putting machine learning algorithms to this was, we could actually curate things like shipping and tax information, inventory positions of products in stores, pricing, and movement of goods as part of that calculation. So, this is like a set of business rules that are automatically developed, using Watson, in a way where it would be almost impossible for any human to actually come up with all of the possible business rules, right? Because this is such a complex situation, and then you're trying to do it at the peak time, which is, like, Black Friday, Cyber Monday weekend. So we were able to actually apply Watson Machine Learning to create the business rules for when it should be shipped from a warehouse or a particular store, in order to meet the customer requirement, which is the fulfillment of that brand experience, or the product experience. So my view is, there are so many different places across the industry that we could actually apply machine learning to, and my team is really excited about what we've been doing, especially in the next generation of supply chain.
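As a toy illustration of the kind of rule such an optimizer has to produce, here is a hedged sketch of fulfillment-source selection: pick the cheapest eligible location once shipping, handling, and overstock credits are folded together. The cost model, names, and numbers are invented for illustration; this is not IBM's or REI's actual logic:

```python
# Toy fulfillment-source selection: choose the lowest effective-cost location
# that has enough stock. Overstocked stores get a credit, since shipping from
# them avoids a likely future markdown. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    stock: int
    ship_cost: float      # carrier cost to this customer, per order
    handling_cost: float  # store picking usually costs more than warehouse picking
    overstock: bool

def effective_cost(src: Source) -> float:
    credit = 2.0 if src.overstock else 0.0
    return src.ship_cost + src.handling_cost - credit

def pick_source(qty: int, sources: list[Source]) -> Source | None:
    eligible = [s for s in sources if s.stock >= qty]
    if not eligible:
        return None  # a fuller system would fall back to split shipments or backorder
    return min(eligible, key=effective_cost)

sources = [
    Source('warehouse-east', stock=120, ship_cost=6.0, handling_cost=1.0, overstock=False),
    Source('store-phoenix',  stock=3,   ship_cost=4.5, handling_cost=3.0, overstock=True),
    Source('store-denver',   stock=0,   ship_cost=4.0, handling_cost=3.0, overstock=False),
]

print(pick_source(qty=2, sources=sources).name)  # store-phoenix: 5.5 vs. 7.0
```

The point of the machine learning layer described above is that these cost terms and credits get estimated and re-estimated continuously from data, rather than hand-maintained as static rules.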
>> And it's also causing students to be really attracted to computer science, both men and women. My daughter, who is a senior at Berkeley, is interested in it, so you're starting to see the impact of machine learning hitting the mainstream, which is a good segue to my next question. We've been very passionate, and I know one of your passions is inclusion and diversity, or diversity and inclusion; there's always debates: D before I or I before D? Some say inclusion and diversity, some say diversity and inclusion. It's all the same thing; there's just a lot of effort going on to bring the tech industry up to par with the reality of the world. And so you have a study out, I've got a copy here. Talk about this study: Women in Leadership and the Priority Paradox. Talk about the study; what was behind it and what were some of the findings? >> Sure, and I'm excited that your daughter, who's a senior in college, is going to be another woman entering the workforce, especially in tech. So, the priority paradox is that we actually looked at over 2,300 organizations; these are some of the top institutions around the world that are curating and attracting the best talent and skills. Now, when you look at that population, we were surprised to find out that, by 2018-2019, only 18% of those organizations actually had women in senior leadership positions. And what I categorize as senior leadership positions are in the C-suite: as vice presidents, maybe senior executives or senior managers, director level folks. So, that's one piece, which is, wow, given the size and the state of where we are in the industry, only 18%: we could do better. Now, why do we believe that? You want the full population of the human capacity to think and creatively solve some of the world's biggest, most complex problems; you don't want a small population of the world trying to do this. So, the second piece of the paradox, which was the most surprising, is that 79% of these companies actually said that formalizing or prioritizing gender, fostering that kind of inclusive culture, was not a business priority, and they had a harder time actually mapping that gap. Now, in the study, what we actually discovered, though, was that those companies that did make it a priority actually had first mover advantage. And making it a priority is quite simple: it's about understanding how to create that inclusive culture, to allow different perspectives and different experiences into the co-creation and development. >> So, first mover advantage, in terms of what? >> Performance, actual business performance. So even though 80% of the organizations that we interviewed said they've not made it a business priority, in the 20% that did, we actually saw higher performance in their outcomes, in terms of business performance. >> So, this is actually a business benefit, too. I think your point is, the first mover advantage is saying, those companies that actually brought in the leadership to create that different perspective had higher performance. >> Absolutely. >> We've talked about this before; one of the things I always say is that tech is now mainstream, and the target audience of tech isn't 18% of the market, it's 50/50, or 51; some say 51% women to men. So who's building the products for half the audience? So, again, this doesn't make any sense, so this is a good statistic.
It is, and if you think about the students that are actually graduating out of graduate school recently, there are actually more women graduating out of grad school than men. When you think about that population that's now entering the workforce, and what's actually happening through the pipeline, I think there's got to be thoughtful focus and programmatic improvements across the industry, around how to develop talent and make sure that different companies and organizations can move forward. Like you said, problem solve for creating new products that actually serve the world, not just certain populations, and also do it in a way that's thoughtful about, kind of, the makeup. >> And the mainstreaming of tech obviously makes it more attractive. I mean, you're seeing a lot more women thinking about machine learning, like my daughter. The question is, how do they come in and not lose their footing? Mentorship? So, what are the priorities that you see the industry needs to address? What are some of the imperatives to keep the pipeline and keep all the mentoring going; obviously mentoring is hot, we see the networks being built. >> Yeah, mentoring is huge. >> What's your thoughts on the best practices that you've been involved in? >> Some of the best practices: we've actually done a number within IBM. We've done a program called Tech Re-Entry, so for women that have decided to come back into the tech workforce, we actually have a 12 week internship program to do that. Another is a big initiative that we have around P-TECH, because the next generation of workers aren't just going to have formal college and/or master's and PhD type degrees. The next generation is not necessarily white collar or blue collar; what we're calling it is new collar, meaning these are students that are able to combine their equivalent of a high school degree and early college education in one, to be, if you think about it, the next generation of technical vocational schools, right? They quickly enter the workforce, and are able to do jobs in terms of web development, in terms of cloud management, cloud services; it could be the next generation of-- >> It's a huge skill gap opportunity; this is a big opportunity for people. >> It is, and we're seeing great adoption. We've seen it in a number of states across the US. This is an effort where we partner with the states and the governors of each state, because public education has got to be done in a systematic way, so that you can actually sustain it for many, many years, and this is something that we were excited about championing in the state of New York first. >> The Re-Entry program and other things, I always tell myself, the technology is so new now you can level up a lot faster; there's not that linear school kind of mentality, you don't need eight years to learn something. You could literally learn something pretty quickly these days, because the gap between you and someone else is so short now, because it's all new skills. >> It's true, it's true.
We talk about digital disruption through the lens of businesses, but there's a huge digital disruption through the lens of what you're talking about, which is our individual development and talent, and the ability to learn through so many different channels that are available now. And with the focus around micro degrees, micro skills, micro certifications, there's so many ways for everyone to get involved, but I really do encourage everyone, across every industry, to have some knowledge and basis and understanding of tech, because tech will redefine how services and products are delivered across every category. >> And that's not male or female; that's just everyone. Again, back to technology for good: we can solve technology problems, you guys have been doing it at IBM, but now the people problem is about getting people empowered, all genders, races, et cetera; people getting the skills, getting employed, working in cloud. This is an opportunity. >> This is a huge opportunity. I think this is an exciting time. We feel like we're entering this next phase, what I call chapter two of cloud; this is chapter two of digital reinvention of the enterprise, and digital reinvention of the individual, actually, and it's an opportunity for every country, every population group to get involved, in so many new and creative ways, and we're at the early foundation stages in terms of both AI development, as well as new capabilities like Blockchain. So, it's an exciting time for everybody. >> Well, that's a whole nother topic. We'll have to bring you back, Inhi. Great to see you; in fact, welcome to Palo Alto. First time in our studio. Let's co-host something together, me and you. We'll do a series: John and Inhi. >> I would love that. That would be fun. I'm excited to be here. >> You can drop by our studio anytime now that you live in Palo Alto; we're neighbors. Inhi Cho Suh here, general manager, IBM Watson Customer Engagement, friend of theCUBE, here inside our studios in Palo Alto. I'm John Furrier, thanks for watching. (upbeat music)
CUBEConversations Dell EMC Data Protection | February 2019
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody. This is Dave Vellante, and welcome to this CUBE conversation. I've been following trends in backup and recovery and data protection for decades, and I'll tell you, right now is one of the most exciting eras that I've ever seen. And with me here to talk about some of the trends and some hard news is Beth Phalen. She's the president and general manager of Dell EMC's data protection division. Beth, it's great to see you again. Thanks for coming on. >> It's great to be here, Dave. It's always good to talk to you. >> So, there's been a subtle change in IT. Even when you go back to the downturn in 2008, IT was largely a support function. It's really now becoming a fundamental enabler. Are you seeing that with your customers? >> Absolutely. The vision of IT being some back office that is segregated from the rest of the company is no longer true. What we find is customers want their application owners to be able to drive data protection, and then have that paired with central oversight, so they can still have that global overview. >> The other change is, for years data has been this problem that we have to manage. I got so much data, I got to back it up or protect it, move it. It's now become a source of value. Everybody talks about digital transformation; it's all about how you get value from data. >> Yeah. And it's so interesting, because it was there all the time, right? And suddenly people have realized, yes, this is an asset that has a huge impact on our business, on our customers, and again, that makes it even more important that they can rely on getting access to that data, because they're building their business on it. >> So as the head of the data protection division, it's interesting; even the parlance has changed. It used to be, when it was just tape, it was backup, and now it's data protection. So the mindset is shifting. >> It is, and it's continuing to shift with new threats and challenges out there, like the ones cyber recovery addresses; protecting data becomes the core of what we are offering our customers. >> So let's talk a little bit more about the catalysts for that change. You got tons of data, you're now able to apply machine intelligence like you never have before, and you got cloud, which brings scale. So this is changing the needs of customers in the way in which they protect data. >> As customers' data becomes more and more distributed across multiple cloud providers, multiple locations, it's even more important that they can answer the question, where is my data and is it protected? And that they can recover it as quickly as possible. >> And you're seeing things like DevOps, data protection strategies and data management strategies supporting DevOps and analytics applications. You also have new threats like ransomware. So it's a more fundamental component of cyber. >> Yeah, and you will hear us talking a little bit about cyber recovery, the new product that we introduced last year. We can't just think about data protection as backup. We have to think about it as the comprehensive way that customers can get access to their data even if they're attacked. >> So much has changed. Everything has changed. >> The level of innovation that we've been doing has been keeping up with that change. And that's one of the things that I'm most excited about as the president of this division.
We've been investing in enhancing the customer experience, and cyber recovery, as I mentioned, and expanding into new markets, driving a new level of reliability and resiliency, building on the foundation that we have. And of course, expanding into the cloud. >> So one of the things that hasn't changed is the fundamentals: I need to get my data back, and it needs to be trusted. You guys make a big deal out of being number one; you're number one in all the Gartner Magic Quadrants and so forth. Why is leadership so important to customers, and what are those fundamentals that haven't changed? >> So, two questions there. First, leadership is so important because we have more experience protecting data around the globe than anybody else. And that means all environments, right from the multi-petabyte major corporations to the shops that have maybe a terabyte, 2 to 4 terabytes. We're involved in it all. So that experience is important. And then those fundamentals you talked about, lowest cost to protect, fastest performance, fastest backups, and resiliency; those fundamentals have to be part of any data protection product. >> The way you guys are organized, you are in charge of R&D as well; you talked about innovation before. I wonder if you could talk a little bit more about how your R&D investments are translating into customer value in terms of price performance. So resiliency, speed, cost. What's going on there? >> The biggest thing that I wanna talk about and highlight here is how much our investment in cloud is enabling our customers to continue to have confidence that they can get the same level of digital trust that they've had with us on prem, but now, as they expand into the cloud for cloud disaster recovery, long-term retention, and data protection in the cloud, that confidence comes with them. And we're doing it in a way that allows them to seamlessly expand into the cloud without having to introduce additional gateways, additional hardware. It becomes an extension of their data protection infrastructure. >> So the cloud operating model is very important here. What are you guys doing for, for instance, admins and application owners, in terms of enabling self-service, for example? >> We have the broadest application support of any company, and what we're doing is integrating directly with those applications, whether it be Oracle, SAP; you can go down the list. And then of course, directly integrating with VMware for the VM admins. That's not enough, though, because if we just did that, you wouldn't be able to have one view of how your data protection policies are working. And so we pair that with centralized governance, to make sure that the person in charge of data protection for that company still can have confidence that all the right things are happening. >> So what does the data protection portfolio look like? How should we think about that? >> Three simple things: Data Domain, our new integrated appliances, and data protection suite. >> Okay. Follow up question on that is, how do you, for customers, abstract the complexity? How are you simplifying their world, especially in this cloud operating model? >> Simplifying comes in multiple stages. You have to simplify the first box-to-backup experience; we've cut that down to an hour and a half, two hours max. From there, you have to make sure the day-to-day tasks are simple. So things like two clicks to do cloud failover, three clicks to failback.
Things like a single step to restore a file in a VMware environment, and then live movement of that VM to another primary storage array. That kind of targeted, customer use case simplicity is core to what we've been doing to enhance the customer experience. >> Now, you guys aren't really a public cloud provider, so you gotta support multiple clouds. What are you doing there in terms of both cloud support, and what are you seeing in multi-cloud? >> Most customers have more than one cloud provider that they're working with. So what we do, as a specific example, is we allow the customers, right from within the Data Domain interface, to select which cloud they wanna tier to, and then they can also select other cloud providers through the same interface. So, it's not a separate experience; they can focus on the Data Domain but then interact with multiple clouds. >> Awesome. Beth, thanks for taking some time here to set this up. We're gonna hear about some hard news that you guys have today. We've got some perspectives from IDC on this, but right now let's take a look at what the customer says. Keep it right there. (chilled piano music) >> Phoenix Children's is a healthcare organization for kids. Everything that we do is about the kids. So we wanna make sure that all the critical data that a doctor or a nurse needs on the floors to be able to take care of a sick kid is available at any time. The data protection software that we're using from Dell EMC with Data Domain gives us that protection. Our critical data are well kept, and we can easily recover them. Before we moved to Data Domain, we were using Veritas NetBackup and some older technology. Our backup windows were taking upwards of 20 to 24 hours. Moving to Data Domain with de-duplication, we can finish our full backups in less than seven hours. The deployment of the data protection software and Data Domain was very easy for us. Our engineers had never worked with data protection software or Data Domain before. They were able to do some research, work a little bit with some Dell engineers, and we were able to implement the technology within a month, a month and a half. ECS for Phoenix Children's Hospital is a great technology. Simple to use, easy to manage. The benefits from a user perspective are tremendous. From an IT perspective, I can extract terabytes of data in less than an hour. When we get into a critical situation, we can rely 100% on ECS that we will get the information that the doctor or the nurse needs to take care of the kid. The data protection software and Data Domain benefits for Phoenix Children's Hospital are great. There is a solution that works seamlessly together. I have no worries that my backups will not run. I have no worries that I will not be able to recover critical applications. (chilled piano music) >> We're back with Ruya Barrett, who's the vice president of marketing for Dell EMC's Data Protection division. We got some hard news to get into. Ruya, let's get right into it. What are you guys announcing today? >> We are announcing a tremendous push with our data protection family, both in Data Domain and Integrated Data Protection Appliances, and the software that basically makes those two rock. >> So, you've got a few capabilities that you're announcing: cloud, performance. Take us through, sort of, at a high level, what are the three areas that you're focused on in this announcement? >> Exactly. You nailed it, Dave. So, three areas of announcement: exciting cloud capabilities and cloud expansion.
We've been investing in cloud over the last three years, and this announcement is just a furthering of those capabilities. A tremendous push around performance, for additional use cases and services that customers want. The last one, but not least, is expanded coverage and a push into the mid-market space with our Data Domain 3300 and IDPA 4400. >> And this comes in the form of software that I can install on my existing appliances? >> It's all software value that really enables our appliances to do what they do best, to drive efficiency and performance, but it's really the software layer that makes it sing. >> And if I'm a customer, I get that software, no additional charges? >> If you have the capabilities today, you'll be able to get the expanded capabilities. No charge. >> Okay. So one of the important areas is cloud. Let's get into some of the cloud use cases. You're focused on a few of those. What are they? >> Cloud has become a really prevalent destination. So when we look at cloud and what customers wanna do with regards to data protection in the cloud, it's really a lot of use cases. The three we're gonna touch on today start with cloud tiering, where our capabilities are really around long-term archival, so customers are leveraging cloud for long-term retention. The second one is really around cloud disaster recovery, to and from the cloud. So that's a really important use case that's becoming really important to our customers, and not just, God forbid, for a disaster, but for being able to test their disaster recovery capabilities and resiliency. And the last one is really in-cloud data protection. So those are the three use cases, and we have enhancements across all three. >> Let's go deeper into those. So, cloud tiering. We think of tiering; oftentimes you remember the big days of tiering, in-box tiering, hot data, cold data. What are you doing in cloud tiering? >> Well, cloud tiering is our way of really supporting object storage, both on premises and in the cloud, and we introduced it about two years ago. And what we're really doing now is expanding that coverage, making it more efficient, and giving customers the tools to be able to understand what the costs are gonna be. So one of the announcements is actually a free space estimator tool for our customers, which really enables them to understand the impact of taking an application and using long-term retention with cloud tier, both for their on-premise data protection capacity as well as what they need in the cloud, and the cost associated. So that's a big question before customers wanna move data. Second is really broadest coverage. I mean, right now, in addition to the usual suspects of AWS, Azure, and Dell EMC Elastic Cloud Storage, we now support Ceph, we support Alibaba, we support Google Cloud. So really, how do you build out that multi-cloud deployment that we see our customers wanting to do with regards to their long-term archival needs? So really expanding that reach. So we now have the broadest coverage with regards to archiving in the cloud and using cloud for long-term retention.
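The estimator itself is a Dell EMC tool, but the back-of-the-envelope sizing it automates looks roughly like the sketch below. The deduplication ratio and the per-gigabyte price are placeholder assumptions for illustration, not published product figures or Dell EMC's actual model:

```python
# Rough sketch of cloud-tier sizing math: given logical long-term-retention
# data, an assumed dedup ratio on cold data, and an object-storage price,
# estimate what actually lands in the cloud and what it costs per month.
# All inputs below are hypothetical placeholders.

def cloud_tier_estimate(logical_tb: float,
                        dedup_ratio: float,
                        price_per_gb_month: float) -> dict:
    physical_tb = logical_tb / dedup_ratio   # deduped data that reaches object storage
    monthly = physical_tb * 1024 * price_per_gb_month
    return {
        'physical_tb_in_cloud': round(physical_tb, 2),
        'monthly_cost_usd': round(monthly, 2),
        'yearly_cost_usd': round(monthly * 12, 2),
    }

# Example: 500 TB logical retained long term, 10:1 dedup, $0.004 per GB-month.
print(cloud_tier_estimate(logical_tb=500, dedup_ratio=10, price_per_gb_month=0.004))
# {'physical_tb_in_cloud': 50.0, 'monthly_cost_usd': 204.8, 'yearly_cost_usd': 2457.6}
```

A real estimate would also have to account for egress on restore, per-request charges, and metadata overhead, which is precisely why a vendor tool beats napkin math.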
>> Great. Okay. Let's talk about disaster recovery. I'm really interested in this topic, because the customers that we talk to wanna incorporate disaster recovery and backup as part of a holistic strategy. You also mentioned testing: not enough customers are able to test their DR. It's too risky, it's too hard, it's too complicated. What are you guys doing in the DR space? >> So one of the things that I think is huge and very differentiated with regards to how we approach this, whether it's archive, DR, or in-cloud, is the fact that from an appliance standpoint, you need no additional hardware or gateway to be able to leverage the capabilities. One of the things that we introduced over a year ago, again, is cloud DR, and we introduced it across our Data Domain appliances as well as our first entry for mid-sized companies, the IDPA DP4400. And now what we're doing is making it available across all our models, all our appliances. All of our appliances now have the ability to do fully orchestrated disaster recovery, either for test use cases or, God forbid, actual disasters. What they are able to do is the three click failovers and the two click failbacks from the cloud. So it's both for failback from the cloud or in the cloud; those are really big and important use cases for our customers right now. Again, with that, we're expanding use case coverage: we used to support AWS only, and now we also support Azure. >> Great. Okay. The third use case you talked about was in-cloud data protection. What do you mean by that, and what are you doing there?
So again, cloud is a big, everything we do, there's a cloud component to it. And that performance is no exception to that. >> The last thing I wanna touch on is mid-market. So you guys made an announcement this past summer. And so it sounds like you're doubling down on that space. Give us the update. >> Sure. So we introduced the Data Domain 3300 and our customers have been asking for a new capacity point. So one of the things we're introducing with this release is an eight terabyte version of Data Domain 3300 that goes and scales up to 32 terabytes. In addition to that, we're supporting faster networking with 10 gig E support as well as virtual tape libraries over Fiber Channels. So virtual tape libraries are also back and we're supporting with Data Domain 3300. So again, tremendous improvements and capabilities that we've introduced for mid-market in the form of Data Domain 3300 as well as the DP4400 which is our integrated appliance. So, again, how do we bring all that enterprise goodness to a much broader segment of the market in the right form factor and right capacity points. >> Love it. You guys are on a nice cadence. Last summer, we had this announcement, we got Dell Technologies World coming up in May, actually end of April, now May. So looking forward to seeing you there. Thanks so much for taking us through these announcements. >> Yeah, thank you. Thanks for having us. >> You're very welcome. Now, let's go Phil Goodwin. Phil Goodwin was an analyst at IDC. And IDC has done a ton of research on the economic impact of moving to sort of modern data protection environment, they've interviewed about a thousand customers and they had deep dive interviews with about a dozen. So let's hear from Phil Goodwin in IDC and we'll be right back. (chilled music) >> IDC research shows that 60% of organizations will be executing on a digital transformaion strategy by 2020, barely a year away. The purpose of digital transformation is to make the organization more competitive with faster, more accurate information and timely information driving driving business decisions. If any digital transformation effort is to be successful, data availability must be a foundational part in the effort. Our research also shows that 48.5% or nearly half of all digital transformation projects involve improvements to the organizations data protection efforts. Purpose-built backup appliances or PBBAs have been the cornerstone for many data protection efforts. PBBAs provide faster, more reliable backup with fewer job failures than traditional tape infrastructure. More importantly, they support faster data restoration in the event of loss. Because they have very high data de-duplication rates, sometimes 40 to one or more, organizations can retain data onsite longer at a lower overall cost thereby improving data availability and TCO. PBBAs may be configured as a target device or disk-based appliance that can be used by any backup software as a backup target or as integrated appliances that include all hardware and software needed for fast efficient backups. The main customer advantages are rapid deployment, simple management and flexible growth options. The Dell EMC line of PBBAs is a broad portfolio that includes Data Domain appliances and the recently introduced Integrated Data Protection Appliances. Dell EMC Data Domain appliances have been in the PBBA market for more than 15 years. 
According to IDC market tracker data as of December 20th, 2018, Dell EMC, with Data Domain and IDPA, currently holds a 57.5% market share of PBBA appliances across both target and integrated devices. Dell EMC PBBAs support cloud data protection, including cloud long-term retention, cloud disaster recovery, and protection for workloads running in the cloud. Recently, IDC conducted a business value study among Dell EMC data protection customers. Our business value studies seek to identify and quantify real-world customer experiences and the financial impact of specific products. This study surveyed more than 1000 medium-sized organizations worldwide, and we also conducted in-depth interviews with a number of them. We found several highlights in the study, including a 225% five-year ROI. In numerical terms, this translated to $218,928 of ROI per 100 terabytes of data per year. We also found a 50% lower cost of operating a data protection environment, a 71% faster data recovery window, 33% more frequent backups and 45% more efficient data protection staff. To learn more about IDC's business value study of Dell EMC data protection and measurable customer impact, we invite you to download the IDC white paper titled The Business Value of Data Protection in IT Transformation, sponsored by Dell EMC. (bouncy techno music) >> We're back with Beth Phalen. Beth, thanks again for helping us with this session and taking us through the news. We've heard from a customer, their perspective on some of the problems and challenges they face; we heard the hard news from Ruya; Phil Goodwin at IDC gave us a great overview of the customer research they've done. So, let's bring it home. What are the key takeaways of today? >> First and foremost, this market is hot. It is important and it is changing rapidly. So that's number one: data protection is a very dynamic and exciting market. Number two is, at Dell EMC, we've been modernizing our portfolio over the past three years, and now we're at this exciting point where customers can take advantage of all of our strengths, whether in a multi-cloud environment, in a commercial environment, or for cyber recovery. So we've expanded where people can take the value from our portfolio. And I would just want people to know that if they haven't taken a look at the Dell EMC data protection portfolio recently, it's time to take another look. We appreciate all of our customers and what they do for us. We have such a great relationship with our customer base, and we want to make sure that they know what's coming, what's here today, and how we're going to work with them in the future. >> Alright. Well, great. Congratulations on the announcement. You guys have been hard at work. It is a hot space, a lot of action going on. Where can people find more information? >> Go back to dellemc.com, it's all there. >> Great. Well, thank you very much Beth. >> Thank you Dave. >> And thank you for watching. We'll see you next time. This is Dave Vellante from theCUBE. (chilled music)
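To make the de-duplication and ROI figures in Phil Goodwin's segment concrete, here is a quick back-of-the-envelope sketch in Python. It is not IDC's model: the 40:1 de-duplication ratio and the $218,928-per-100-terabytes-per-year figure come from the segment above, while the 500 TB workload and the assumption of linear scaling are made up purely for illustration.

```python
# Illustrative only: how a de-duplication ratio translates into physical
# capacity, and how the study's per-100-TB ROI figure scales linearly.
# The 40:1 ratio and $218,928 figure are cited above; the 500 TB
# workload is a hypothetical input.

def physical_capacity_tb(logical_tb: float, dedup_ratio: float) -> float:
    """Physical disk needed to hold `logical_tb` of retained backup
    copies at a given de-duplication ratio (e.g. 40.0 for 40:1)."""
    return logical_tb / dedup_ratio

logical_backups_tb = 500.0  # hypothetical total of all retained backup copies
for ratio in (10.0, 20.0, 40.0):
    phys = physical_capacity_tb(logical_backups_tb, ratio)
    print(f"{ratio:4.0f}:1 dedup -> {phys:6.1f} TB physical "
          f"for {logical_backups_tb:.0f} TB logical")

# Scaling the study's headline ROI number the same naive way:
roi_per_100tb_per_year = 218_928
estimate = roi_per_100tb_per_year * logical_backups_tb / 100
print(f"Naive ROI estimate for {logical_backups_tb:.0f} TB: ${estimate:,.0f} per year")
```

The only point of the sketch is that retention economics move linearly with the de-duplication ratio; any real sizing or ROI exercise would of course use the vendor's and IDC's actual methodology.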
Russ Currie, NETSCOUT | AWS re:Invent 2018
>> Live from Las Vegas, it's the Cube, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel and their ecosystem partners. >> And welcome back to Las Vegas. Good afternoon to you, no matter where you're watching; here in the US, we know it is afternoon, as we wind up our coverage of day three at AWS re:Invent here on the Cube, one of seven venues we're in right now that are hosting various satellite events. Right now we're in the Sands Expo. Rebecca Knight, John Walls, with Russ Currie, the vice president of Enterprise Strategy at NetScout. Russ, good to see you sir. >> Nice to see you again. >> I've got to be careful. I've got two Bostonians of sorts here. >> Sorry, sorry. >> So if I don't get something on the accent, you just let me know. (laughs) >> We'll talk amongst ourselves. >> Sorry if your sports inferiority complex is... We're the champions, whatever. >> Russ, if you would, first off, your take on what you're seeing here. Because here we are, day three. As you know, you've been to a lot of shows. Day three usually hits a different gear, right? >> Right. >> Slows down a little bit. There's still a lot of excitement here, still a lot of people around. This show has a little different vibe to it. >> It really does. It's interesting because it becomes a little bit more serious, I think. At this point, day three of this show, people have been exposed to an awful lot in the last three days, and they're really saying, "Okay, now I really want to understand the nuts and bolts of it," and they're spending a little bit more time sitting and learning, understanding what you have to offer them in terms of your solutions. So it's been an awful lot of fun. Also, we did a couple of speaking engagements, so it's been really good getting the folks that went and saw our guys speak coming into the booth and saying, "I want to talk more about that." >> Well, I want to talk more about that, so you have a new marketing campaign; but first, for the viewers who are not familiar with NetScout, you're a Fortune 500 company, but tell us a little more about who you are and what you do. >> Right, so what we do is provide visibility into the communications between servers and clients, basically seeing all of the traffic that traverses the network, whether the network is in a public cloud, private cloud, or an on-prem environment. By looking at that traffic, we're able to understand the performance of the services being delivered and ensure the security of those services. With that perspective, we give IT the tools they need to get quicker to the mean time to knowledge, identifying where a problem might be or where a risk may exist, and being able to solve those tough problems. >> So it's been a year since you've been on the Cube; I know the esteemed John Walls interviewed you. >> We go way back, yes. >> What's new this year? What sort of advancements and progressions have you implemented? >> So last year was really our first entry into the public cloud environment and our first entry into AWS. Since then we've gotten a lot of really good traction, a lot of embracing of our technology. We partnered more closely with AWS and got ourselves onto the marketplace. We also enhanced our partnership with VMware and are now a part of their NetX integration, so we have high levels of integration into both of those platforms, which of course resonates strongly with this entire audience.
We've introduced new features and functionality into our product to provide greater visibility, going even deeper into understanding the way applications are functioning, and we've also added more security profiling into our products, being able to identify threats as they come into the enterprise network and also as they go out. So it's been interesting, it's been a lot of fun. >> You talk about the hybrid cloud, and obviously we're hearing that here this week, right? AWS is obviously shifting its perspective a bit in terms of on-prem and dealing with the public cloud as well, minding that merger. Your clients, is there any arm twisting that you still have to do, or are people buying into it a little more wholeheartedly now, in terms of the public cloud, given that you've addressed these security concerns? >> I think actually we've oftentimes become an enabler for our customers to move to the cloud with confidence, where they were a little bit concerned that as they move into the cloud: what kind of investment in tools are they going to have to make? What are they going to potentially lose as they put their workloads into the cloud? Do they lose a degree of visibility and control? What we've been able to do is ensure that they have that same experience no matter where they deploy. We were talking earlier about one of our customers that's in the travel and entertainment business; they have been using our gear on cruise ships and in their on-prem data center, but they expanded into AWS to extend their capabilities and provide a better user experience for those on the ship. And now what they really have is visibility from ship to shore to cloud, with that perspective and the confidence that they're delivering a high-quality experience to their customers. >> Ship to shore to cloud, not every company can say that. >> Exactly. >> But speaking of mottos, you have a new marketing campaign, Visibility Without Borders. What does that mean? What are you trying to evoke there with your customers? >> What we're really trying to get at there is the ability to provide visibility no matter where you're deployed. If you have a deployment in a public cloud environment like AWS, you want to have that same level of visibility in your on-prem environment; you want to have it no matter where you have a workload, wherever you have an instance that you want to manage. You want to be able to have that same perspective. One of the things we talk about a little bit is the idea of providing a single pane of glass into the service assurance experience, and I think that oftentimes, when people try to get to that single pane of glass, they end up with more of a single glass of pain. (laughing) You know, they're trying to aggregate so much stuff that it really doesn't come together too well. >> Right. >> But because we focus in on the data source itself, it just provides that continuity regardless of where they deploy. >> Alright, and the importance of visibility: obviously when you're talking about end to end right now, whether you're on-prem or in the public cloud, you're not particular, right? >> Right. >> As a user, I just want to see my operation from start to finish, and I don't care where it is.
>> Exactly, providing that end to end perspective and being able to understand how I'm delivering services and what the customer experiences, no matter where I deploy. And especially when we look at taking advantage of some of the elastic compute capabilities and the like that exist in the cloud, you want to ensure that you're actually getting what you paid for as well. You want to have those controls in place, knowing that what you're delivering is meaningful and impacting the business in a positive way as opposed to, potentially, a negative way, and that you're not spending too much for something that doesn't improve the customer experience. >> Right. >> So the customers, when they want to talk about return on investment, what excites them most? What kind of things are you showing them that is delighting them? >> Oftentimes it's really about mean time to knowledge. We spend a lot of time pointing fingers at each other when we're trying to solve a problem, instead of pointing our fingers at the problem. And that's really what we try to focus in on: really getting down into the real details of why something is not performing properly, not what might be performing improperly. So it's being able to really get down to that detail and get the right people working on the right problems. I often talk about it in terms of getting the right information to the right person at the right time to do the right thing. And if you're able to do that, you're going to provide a better user experience to your customers. >> You mention that a lot of your personnel, a lot of your folks here, have been speaking, talking to various groups. I always find that interesting, right, because it's usually the Q&A >> Yes. >> where things pop off. So, if you had to generalize about the kind of feedback you're getting from those sessions, in terms of the questions, the concerns, the challenges, what are you hearing from folks out there? >> It's kind of funny, one of the things that we get a lot of times is, "You really can do this?" You know, "Is this real, what you're showing us?" It's like, yes, this is actual traffic; we're showing you exactly what we're seeing. >> Yeah. >> And then they're often pretty amazed at our ability to bring this all into something that visualizes these complex applications they're delivering across multiple different environments, and they have that aha moment where they go, "Oh gosh, I really need this. This is really that end to end view that I've been looking for for so long but really can't get with what I use, a bunch of disparate tools, trying to bring that together." >> So that means what you just described is really the definition of innovation, which is providing customers with things that they want that they didn't even know they wanted. How do you stay innovative? I mean, here we are at AWS; Amazon, one of the most innovative companies on the planet and in the history of industry. >> Absolutely. >> How does a company, you're based in Westford, Massachusetts, how do you stay on the cutting edge?
We spend an enormous amount of time working with our customers and listening to them in terms of where they're going, what their plans are, what new technologies they might be implementing, what their major initiatives are. We regularly reach out and have constant contact with them to get that feedback and make sure that we're developing solutions that are meaningful to them. It's really about not what feature can I deliver, but what value can I provide that's going to make their lives better. Because as an IT person, it's a tough job, right? You're usually the person that people look at and say, "Why isn't this working?" >> Right. >> And not being able to have an answer to that is not a good position to be in. (laughter) So what we're really trying to do is provide them with that answer, and give them the ability to answer the tough questions and solve those tough problems. >> Talking about finger pointing, it happens, right? >> It does, you know. So what we're really all about is making sure that they're able to get the problem solved as quickly as possible. One of the interesting things I've been hearing from our customers too is that they're looking to this concept of a versatilist: rather than having just a straightforward specialist come in and work on problems, having people that are a little bit broader in terms of their capabilities, looking at things not only from one perspective. Say I'm a network guy; I'm going to look at the network, and the app guy's going to look at the app. >> Right. >> You've got to cross-pollinate a little bit and provide that ability to see both sides of the problem, so that's starting to happen. >> Certainly presents a challenge for your workforce, right? All of a sudden you've got to be a little smarter and wear a lot of different hats. >> Exactly. >> Versatilist, I like that. You heard it here first. >> Yeah. >> Well, Russ, if you're going to go ship to shore to cloud, you let us know, OK? >> OK. (laughing) >> Because we want to take that journey with you, alright? >> Love to. >> Thanks for being with us again, good to see you. >> Thank you, it was a pleasure. >> You bet, safe trip home. Back with more here from AWS re:Invent; you are watching us live on the Cube.
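For readers wondering what "seeing all of the traffic that traverses the network" looks like in its most basic form, here is a toy sketch of passive wire-data monitoring. It is emphatically not NetScout's technology, just a minimal illustration of deriving visibility from packets; it assumes the third-party scapy library and a network interface you are permitted to capture on.

```python
# Toy wire-data visibility: passively tally who talks to whom and how
# much. Requires scapy (pip install scapy) and capture privileges.
# This only sketches the general idea behind traffic-based monitoring.
from collections import Counter
from scapy.all import sniff, IP

bytes_per_conversation = Counter()

def tally(pkt):
    # Count bytes per (source, destination) pair for IP traffic.
    if IP in pkt:
        bytes_per_conversation[(pkt[IP].src, pkt[IP].dst)] += len(pkt)

# Capture 200 packets from the default interface, then summarize.
sniff(prn=tally, count=200, store=False)

for (src, dst), nbytes in bytes_per_conversation.most_common(10):
    print(f"{src:>15} -> {dst:<15} {nbytes:>8} bytes")
```

A real service-assurance product derives far richer signals (latency, errors, application-layer decodes) from the same vantage point; the sketch only shows why sitting on the wire gives a vendor-neutral view of every environment the traffic crosses.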
Sanjeev Vohra, Accenture | Informatica World 2018
>> Announcer: Live from Las Vegas, it's theCUBE! Covering Informatica World 2018. Brought to you by Informatica. >> Hello everyone, welcome back. This is theCUBE's exclusive coverage of Informatica World 2018, here live in Las Vegas at The Venetian Ballroom. I'm John Furrier, your host of theCUBE, with Peter Burris, my co-host this week, Analyst at Wikibon and Chief Analyst at SiliconANGLE and theCUBE. Our next guest is Sanjeev Vohra, Group Technology Officer at Accenture, in charge of incubating new businesses, growing new businesses, and handling the talent. Great to have you on; thanks for spending the time coming on. >> Pleasure, it's my pleasure to be here. >> So we have a lot of Accenture interviews; go to thecube.net, type in Accenture, and you'll see all the experts. And one of the things we love about talking with Accenture is you guys are on the front lines of all the action. You have all the customer deployments, global system integrator, but you've got to be on top of the new technology, and you've got really smart people, so thanks for spending the time. So I've got to ask you, looking at the landscape, at the timing of Informatica's opportunity: you've got data, which is not a surprise for some people, but you've got GDPR happening this Friday, you've got cloud scale on the horizon; a lot of interesting things are going on right now around data and the impact on customers, which is now pretty much front and center. What are you doing with Informatica, what are some of the things you're engaging with them on, and what's important to you? >> We have had a very deep relationship with Informatica for many years, and we have many, many joint clients in the market. We are helping them sustain their businesses and also grow their businesses in the future. Right? And I think there's a lot going on: there's a lot going on in sustaining the core of the business and improving it on a continuous basis by using new technologies, and, you know, today's keynote talked about the new stuff; there are a lot of things, actually, that clients or customers require just for sustaining their core. But then I'd call out something in the middle, which is basically: how are you building your new business models, how are you disrupting your industry, what's new around that? And in that piece, that's where we are now working with Informatica to see what other pieces we need to bring together to the market, so we can help clients or customers really leverage the power of technology. And I'll tell you, there are four areas of priority in these discussions; I'll give you a sense, and we can do a deep dive depending on what you want to see. The first one is, I think, that customers now have data warehouses which are Data 2.0, as was described in the morning, so these are still 15-year-old data warehouses; they are not in the new. So a lot of customers, a lot of organizations, large organizations, including some organizations like ours, are investing right now to make sure that they get to Data 3.0, which is what Anil was saying in the morning, which is around the new data supply chain, because without that, you cannot actually get real data analytics. Right?
So you can't generate insights or analytics unless you actually work on the data infrastructure layer below; that's one area where we are working with them, and that's where the cloud comes in, that's where the flexibility of cloud comes in. The second piece is around data compliance and governance because, guess what, there are regulations coming up now around data privacy and data protection. And the data infrastructures that were built 15 years back actually do not handle that so effectively. >> To be polite, yeah. I mean, it wasn't built for it; they didn't have to think about it. >> Sanjeev: It was not built for that, exactly. So now the point there is that there is a regulation coming in; one of them is GDPR, the General Data Protection Regulation, and it impacts all the global companies who deal with EU residents. Now they are looking at how they can address that regulation and be compliant with it. And we believe that's a great opportunity for them to actually invest, and see how they can not only comply with the regulation but actually make it a benefit for them, and make the next leap towards building the next level of infrastructure for their data, right? >> And that means doing a lot of the data engineering, actually getting data right. >> And that's the third piece. So the first two are this: one is infrastructure, the second is compliance, and the third, well, they're all interrelated in the end; it just depends on where you want to begin your journey, right? And the third piece is around, I think you got it right, the quality of data. But actually it is not just quality; we call it data veracity, and it's much beyond quality. We talk about completeness, and also things like provenance, integrity, and security along with it, and it's very much a business-contextual element. Because what's happening, you may have heard the story, is that clients have invested in data lakes for years now; the data lake concept has been there for like eight, nine years, and everybody talks about it-- >> John: Throw everything into the lake. >> And everybody says throw everything into the lake, and then they become a data swamp. (John laughing) - That was last year's theme. >> That was last year's theme, and the reason is, it's not IT's failure; IT is actually pretty advanced, the technology is very advanced. It's that the business is not as involved as it should be, and is not able to trust the data, and that's where your point comes in: whether you have the right data, and trusted data, with you. >> Though, well, we had Toyota on earlier, and they said, we had this 2008 post-crisis thing, and they had all this stuff in the channel, they had product in channel, and they had the data! They actually had the data; they just didn't have access to it! So again, this is like the new data center: data first, get it right, and so with GDPR we're seeing people saying, okay, we've got to get this right. So there's investment, engineering involved, governance, application integration; this is all now a new thing. How do you guys advise your clients? 'Cause this is super important and you guys are, again, on the front edge. As a CTO group, you've got to look at the new tech and say, okay, that's baked, that's not baked, that's new, that's old, throw a container around it, you know. (laughing) How are you sorting through the tools, the platforms? 'Cause there's a lot of stuff out there.
>> Oh yes, absolutely, there's a lot of stuff, and there's a lot of unproven things as well, in the market. So the first and foremost thing is that we should understand the context in the market right now. My first question is: is everybody ready for GDPR? The answer is no. (John laughs) Have they started the journey, have they started getting on the racetrack, right, on the road? Yes. But it depends on the maturity of that organization. Some people have just started building a small strategy around GDPR, some have actually started doing assessments to understand how complex this beast of a regulation is, and some have moved further along in the journey: they've done the assessment, and they're now putting changes into their infrastructure to handle remediation, right? Things like, for example, consent management, and things like deletion; that's going to be a very big deal to do, right? And so they are making changes to the infrastructure they have, or the IT systems, to manage it effectively. But I don't think there's any company which can properly claim to have gotten it fully right, end to end, right? So I think that's happening. Now, how are we addressing it? The first and foremost thing is that we need to assess the maturity of the customer, the organization, because we talk to them first and understand, right? Usually we have various ways of doing it: we can have a chit-chat and meet the person responsible in that company. It could be a Chief Data Officer, it could be a CIO, it could be a Chief Operating Officer, it could be a CSO, depending on who has the baton in the C-suite to handle this problem. >> So it's different per company, right? Every company has its own hierarchy or need, or entry point? >> Different companies have different entry points, but we are seeing more of the CSOs and CIOs playing that role in many of the large organizations, and our clientele, as you know, is very large companies. We see most of these players playing that role, asking for help, asking for a meeting, and starting with that. In some cases, where they have not invested initially, we talk to them and assess them very quickly and easily, probably in a day or a couple of days, and tell them: let's get into what we call an assessment as step one, and that takes four to six weeks, or eight weeks, depending on the size of their application suite and the organization. And we do it quite fast. I mean, initially we were also learning; if you had asked me this question 12 months back, we had one approach, and we've changed and evolved that approach now. We invested hugely in the approach itself, by using a lot of machine learning to do the assessment itself. So we now have a concept called data discovery, and another concept called the knowledge graph. >> And that's software driven, it's all machine learning, or? >> Sanjeev: It's largely machine driven. Obviously, human and machine work together, but it's not only human; a traditional approach would be done only with humans. >> John: Yeah, and that would've taken a long time.
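To give a flavor of what machine-assisted data discovery means in practice, here is a deliberately simplified sketch. Accenture's actual tooling, as described above, uses machine learning and knowledge graphs; this stand-in uses plain regular expressions, and every table name, column name, and threshold in it is hypothetical, chosen only for illustration.

```python
# Simplified stand-in for GDPR data discovery: flag columns whose
# sampled values look like personal data. Real assessments use ML and
# knowledge graphs; this regex version only sketches the idea.
import re

PII_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\s\-()]{7,15}$"),
}

def discover_pii(tables):
    """Yield (table, column, pii_type) where most sampled values match
    a PII pattern. `tables` maps table -> {column: [sample values]}."""
    for table, columns in tables.items():
        for column, samples in columns.items():
            for pii_type, pattern in PII_PATTERNS.items():
                hits = sum(bool(pattern.match(v)) for v in samples)
                if samples and hits / len(samples) > 0.8:
                    yield table, column, pii_type

# Hypothetical sampled data from two of a client's applications.
sampled = {
    "crm.contacts": {"email": ["a@x.com", "b@y.org"], "notes": ["called twice"]},
    "hr.employees": {"phone": ["+1 555-0100", "+1 555-0101"]},
}
for table, column, pii_type in discover_pii(sampled):
    print(f"{table}.{column}: looks like {pii_type}")
```

At real scale, the interesting part is exactly what the transcript points to: ranking thousands of applications by likely exposure so humans only review the ambiguous cases.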
And that has changed with the new era and technology advancement: even things like assessment can now be done by machines as well; machines are smart enough to do that work, so we are using that right now. But that's step one, and after that, once we get there, we build a roadmap for them, we ensure that their stakeholders agree with the roadmap, that they actually embrace the roadmap! (laughing) And once that's done, then we talk about remediation of their systems. >> So, you mention veracity, and you also mentioned, for example, because of GDPR, the idea of deletion, which is in itself a veracity thing; so it's also about having verifiable actions on data. So the challenge that you face, I think, when you talk to large customers, John mentioned Toyota, is the data's there, but sometimes it's not organized for new classes of problems. And that's an executive issue, 'cause a lot of executives don't think in terms of new problem, new data, new organization. You guys are speaking to the top executives, CSOs, CIOs often, but how are you encouraging your clients, your customers, to think differently, so that they become data-first? Which is kind of a predicate for digital business transformation anyway. >> So I think it's a great question. I think it depends again on who you're talking to in the organization. I have a very strong perspective; my personal view is that data is an intersection of business and technology. It is not a technology, it's not a business, right? It's an intersection of both, especially this topic: it has to be done in collaboration between business and technology, very closely, in terms of how you can drive metadata out of your data, how you can drive advantage out of your data. And having said that, I think the important thing to note is this: when you talk about data veracity, the single comment I will make is that it is very, very contextual to business. Data veracity is very, very contextual to the business that you're running. >> Well, but problems, right? Because, for example, going to Toyota, when the Toyota gentleman came on, and this is really important, >> Absolutely. >> the manufacturing people are doing a great job of using data; lean is very data-driven. The marketing people were doing a great job of using data, the sales people were doing a great job of using data. The problem was, the problems that Toyota faced in 2008, when the credit crunch hit, were not limited. They were not manufacturing problems, or marketing problems, or sales problems; they were a holistic set of problems. And he discovered, Toyota discovered, they needed to say: what's the problem, recast the problem, and what can we do to get the data necessary to answer some of these crucial questions that we have? >> So I think you hit the nail, I mean, I think you're spot on, and one way we are addressing that right now is through what we call our liquid studios. >> John: I'm just going to-- >> Peter: I'm sorry, what? >> Liquid studios. >> Peter: Liquid studios. >> We have this concept called liquid studios. >> John: Yeah, yeah. >> And actually, we started this concept, I don't know if you heard about this from Accenture before; we started this thing a couple of years back-- >> John: Well, take a minute to explain that, that's important; explain liquid studios.
>> Okay, so liquid studios. When we were thinking about these things, we talked to multiple clients, and they called us on exactly this point: they may be working in silos, and they may be doing a great job in their own department or function, but they aren't talking across the enterprise about how, if you are doing great work, I can use your work for my advantage, and vice versa, right? Because it's all about sharing data, even inside the enterprise, forget outside the enterprise, and you would be amazed to know how much sharing happens today within an enterprise, right? And you're smiling, right? So what we did was come to this concept. The technologies are very new and very advanced, and many of the technologies we are not using beyond experimentation; we are still in the COE concept, and that's different than enterprise-ready deployment. Like, if we talk about ERP today, that's not a COE, that's an enterprise-ready deployment; in most companies it's all there, like, you run your finance on ERP, right, most of the big companies. So we felt that technology is advancing, but the business and technology leaders still have to agree on a concept and define a problem together. And that's where the studio comes in. It's actually a central facility, a very innovative and creative space. It's unlike an office; it's a very differently organized structure, designed to generate creativity and good discussion. We bring core customers there, we have a workshop with them, we talk about the problem for one or two days, and we use design thinking for that, a very effective way. Because one thing we've learned is that design thinking brings everyone at the table to agreement on a problem. (laughing) (John and Peter laugh) In a very nice manner, without confrontation, in a very subtle manner. So through this timeframe we get to a good problem definition, and then the studio can actually help you do the POC itself. Because many times people say, well, I understand the problem, I think I kind of get your solution, or what you're proposing, but my people also tell me something else; they have a different option to propose. Can we do it together? Can I get the confidence? Because I don't want to go into an enterprise-ready deployment and put my money in unless I see some proof of the pudding, and the proof of the pudding is not a PowerPoint. It's the actual working model. >> Peter: It's not?! >> It's not! (all laughing) And that's where the studio comes into the picture, because you wouldn't believe that we do these two days of workshops without any PowerPoint; we aren't on a single slide. >> So it's creative, it's very agile, very? >> It's more whiteboarding, come and talk; it's more visualization, more human interaction, and that's where you open up everybody, saying: what is your view, what is your view? We use a lot of post-it stickies to kind of get the-- >> I think the business angle's super important; I want to get your thoughts. 'Cause there's a lot of problems that can be solved once you identify them. But we're hearing terms like competitive advantage, 'cause when you solve some of these problems, these holistic problems that have a lot of interplay, where data's shared internally and/or externally with APIs and cloud-native, you start thinking about competitive advantages, being the data-first company; we've heard these terms. What does that mean to you guys?
When you walk into an executive briefing and they say, look, we've done all this work, we've done this engineering, here's where we're at, we need help, but ultimately we want to drive top-line results, be more competitive, really move with the shift. This is more of a business discussion; what do you guys talk about when you have those conversations? >> So first of all, data was always a technical topic, do you agree? Like, if you just go back 10 years, data was always a CIO discussion. >> Well, >> Unless you're in a regulated industry like financial services or, >> Or I guess I'd say this: the notion of getting data out of a system, or getting data into a system, was a technical discussion. But, you know, we've always used data, for market share growth, etc. But that was relatively simple, straightforward data, and what you're talking about, I think, is getting into considerably greater detail about how the business is really operating, how the business is really working. Am I right? >> You're right: it's considering data as an asset, and discussing how you can leverage it effectively. That's what I was saying, so it has definitely gone up a level in the discussions happening inside companies and organizations. And the reason I was making that comment is because, if you have ever seen people explaining data 10 years back, it was a very complex explanation. >> Schemas, this, that, and the other thing. >> You got it, yeah. And it's very hard for a business guy to understand that; like, if I'm a supply chain lead, I don't get it, it's too complex for me. So I'm just letting you know how we start the discussion: the first and foremost thing is, we tell them we're going to solve the business problem. To your point, that's what we think, right? And every company nowadays wants to lead in their industry, and the leadership position is to be more intelligent. >> Yeah, and it's got to hit the mark. I mean, we had Graeme Thompson on, who's the CIO here at Informatica, and he was saying that if you go to a CFO and ask them, hey, where's the money, they'll go, oh, it's over here; they've got their stuff, they know where it's stored, the risk management. But ask them: where's the data? You mentioned asset; this is now becoming a conversation, and certainly GDPR is one shot across the bow that has people standing up and taking notice. It's happening now. This data-as-an-asset idea is a very interesting concept. Say I'm a customer of yours, and I say, hey Sanjeev, I have a need: I've got to move my organization to be data-first, but I've got to do some more work. What's my journey? I know it's different per customer, depending on whether it's top-down or bottom-up, we see that a lot, but how do you guys take them through the journey? Is it the workshop, as you mentioned, the assessment? Take us through the journey of how you help customers, because I'm sure a lot of them are sittin' out there goin', now they're going to be exposed with GDPR, saying, wow, were we really set up for this?
But I can tell you what client position we are having, in a very simplified manner, so that you understand the journey, but yes, when we engage with them, there's a process we follow, we have a discovery process, we have a studio process, together have a workshop, get into a POC, get into a large-scale deployment solution en route. That's a simple thing, that's more sequential in nature, but the condition is around four areas. The first and foremost area is, many companies actually don't have any particular data strategy. They have a very well articulated IT strategy, and when you go to a section of IT strategy, there's a data component in that, but that's all technology. About how do you load, how do you extract those things. It talks about data architectures, and talks about data integration, but it doesn't talk about data as a business, right? That's where it's not there, right? In some companies they do have, to your point, yes, some companies were always there in data, because of regulatory concerns and requirements, so they always had a data organization, a function, which thought of data as different from other industries. And those industries have more better strategy documents or, or they're more organized in that space. But, guess what, now companies are actually investing. They're actually asking for doing help in data strategies, that's one entry point which happens, which means, hey, I understand this, I understand governance is required, I understand privacy's required, and I understand this is required, I also understand that I need to move to new infrastructure, but I can't just make an investment in one or two areas, can you help my build my strategy and road map as to what should be my journey from now til next three years, right, how does it look like? How much money is required, how much investment is required, how do I save from something and invest here, help me save internal wealth, right? That's a new concept. Right, because I don't have so much that you're asking for, so help me gain some savings somewhere else. That's where cloud comes in. (laughs) So, that's one entry point, the second entry point is totally on, where the customers are very clear, they actually have thought through the process, in terms of where they want to go, they actually are asking, very specifically saying, I do have a problem in our infrastructure, help me move to cloud. Help me, that's a big decision right, help me move to cloud, right? But that's one, which I call is, new data supply chain, that's my language. Which means that-- >> John: I like that word actually. >> Yeah? I'm making your supply chain and my supply chain in business terms, if I have to explain business, it's different, technically it's different. Technology, I can explain all the things that you just mentioned, in business I explain that there are three Cs to a supply chain, capture it, curate it, consume it, and they so, oh I get it now, that's easy! >> Well, the data supply chain is interesting too, when you think about new data coming in, the system has to be reactive and handle new data, so you have to have this catalog thing. And that was something that we saw a lot of buzz here at the show, this enterprise catalog. What's your take on that, what's your assessment of the catalog, impact to customers, purpose at this point in time? >> I think it's very important, especially with the customers and large companies, who actually have data all over the place. 
I can share an example: we were talking to one of the customers, who had 2600 applications, and they wanted to go for GDPR. We had a chat with them, and they were more comfortable saying, no, no, let's not use any machine. Because when you talk about a machine, then you have to expose yourself a bit, right? And I said, look, the machine is not going to be in my place, it's going to be in yours, within the boundaries of your firewall. But they were a little concerned; they said, let's go with a manual approach. I said, fair enough, it's your call, we can do that as well. But guess what? 2600 applications you can't discover manually; it's just not possible. >> John: Yeah, you need help. A lot of data streaming and-- >> I'm just answering your question: the data catalog is extremely important if you really want to get a sense of where the data is residing, because data is not in one or two applications, it's all over the place. >> Well, I'm impressed by the data catalog positioning, but then also, when you look at the Azure announcement that Informatica had, you're essentially seeing hybrid cloud playing out as a real product. So that's an easy migration, bringing in some of those BI tools, bringing some democratization into the data discovery. Rajeev, thanks for coming on theCUBE, really appreciate it, love the work you do, and I just want you to take a minute to end the segment out: explain the work that you do, your two primary roles. You incubate new stuff, which is hard to do, but, I'm an entrepreneur, I love the hard problems, and also you're doing talent. Take a minute to kind of explain, real quickly, those two roles; super important.
The second thing which I am investing right now, which is, there is a few more ideas, but one more, which could be very useful for you to know, is, while companies are moving to the new, they have to also, they have to rely on their people. Ultimately the companies are made of people. Like us, right? And if you can, if you are not retooling yourself, you cannot reimagine the future of your organization as well. >> You're talking about the peoples, their own skills, their job functions, okay-- >> So I'm working on a concept called workforce of the future right, how can 44 companies, large companies, how can they transform their talent, and their, even leadership as well, so that they are ready for the future and they can be more relevant. >> Yeah, and this is the argument we always see on theCUBE, oh, automation's going to take jobs away, well, I mean certainly automating repetitive tasks, no one wants to do those, (laughing) but the value is going to shift, that's where the opportunities are, is that how you see that future workforce? >> Absolutely, it's one of the complimentary, we have Paul Daugherty, whom you know, who's the Chief Technology Officer of Accenture Technology. Accenture, Accenture as a firm, he, he's a Chief Technology and Innovation Officer for Accenture He has recently written a book called Human + Machine, exactly talked about the same concept that, we actually all believe, very, very strongly that, the future is all about augmenting humans together. So there are tasks which machines should be doing, and there are tasks where humans should be doing, and there are tasks which both of them do collaboratively, and that's what we are trying to boast. >> Cloud world, we're doing it here in theCUBE, here at Informatica World. Rajeev, thanks so much for spending time-- >> Sajeev. (laughing) Sajeev, I mean, thanks for coming on. Sorry my bad, a little late in the day. But we're bringing it out here at Informatica World, this is theCUBE, I'm John Furrier with Peter Burris, here with Accenture inside theCUBE, here at Informatica World in Las Vegas. Be right back with more coverage, after this short break. Thank you. (bubbly music)
Wrap Up | ServiceNow Knowledge18
>> Narrator: Live from Las Vegas, it's the CUBE, covering ServiceNow Knowledge 2018. Brought to you by ServiceNow. >> Welcome back everyone, we are wrapping up three big days of the CUBE's live coverage of ServiceNow Knowledge 18. I'm your host Rebecca Knight, along with my co-hosts Dave Vellante and Jeff Frick. It has been such fun co-hosting with you both; it's always a gas to be with you. So, three days: what have we learned? We've learned we're making the world of work work better for people. Beyond that, what do you think? >> That's the new branding, you know, which I think underscores ServiceNow's desire to get into the C-suite, become a strategic partner. Some of the things we heard this week: platform of platforms. The next great enterprise software company is what they aspire to. Just from a financial standpoint, this company literally wants to be a hundred-billion-dollar-valuation company, and I think they've got a reasonable shot at doing that. They're well on their way to four billion dollars in revenue. It's hard to be a software company and hit a billion; you know, the number of companies who get there is very limited, and they are the latest. We're also seeing many products, one platform, and platforms in this day and age beat products. Cloud has been a huge tailwind for ServiceNow. We've seen the SaaSification of industries, and now we're seeing significant execution on the original vision, and penetration deep into these accounts. And I've got to say, when you come to events like this and talk to customers, there's amazing enthusiasm, as much as if not more than at any show that we do. What's your take? >> We go to so many shows, and it's not hard to figure out the health of a show. Right, you walk around the floor: what's the energy, how many people are there, what's the ecosystem? I mean, even now, as I look around, we're at the very end of the third day and there is action at most of the booths still. So it's a super healthy ecosystem. I think it grew another 4,000 people this year, year over year, so it's clearly on the rise. SaaS is a big thing; it's a really interesting play, this kind of simple workflow. Not as much conversation really about the no-code and the low-code that we've heard in the past; maybe they're past that. But certainly a lot of conversation about the vertical stack applications they're building, and I think at the end of the day, and we've talked about this before, it's competition for your screen. What is it that you work in every day? If you use, I don't care what application, SalesForce or any SaaS application, which we all have a lot of on our desktops today, if you use it as a reporting tool, it's a pain. It's double entry, it's not good. But what is the tool that you execute your business on every day? That's really a smart strategy for them, to go after that. The other thing that I just think is ripe, and we talked about it a little bit, I don't know if they're downplaying it because they're not where they want to be yet or they're just downplaying it, but the opportunity for machine learning and artificial intelligence to more efficiently impact workflows, using the data from the workflow, is a huge opportunity. So what was a bunch of workflows and approvals and this and that should mostly just get knocked out via AI over a short period of time. So I think they're in a good spot, and then the other thing, which we hear over and over: you know, Frank Slootman's "IT, our homies," I still love that line.
>> Yeah, CJ just said IT will always be at our core. Rebecca, the keynote was interesting. It got mixed reviews, and I think part of that is they're struggling, we heard that from some of our guests. There's a hybrid audience now. You've got the IT homies, you've got the DevOps crowd, and then you've got the business leaders, and so the keynote on day one was really reaching an audience largely outside of the core audience. I think day two and day three were much more geared toward that direct hit. Now, I guess that's not a bad thing. >> No, and I think that, as you noted, it's a hybrid audience, so you're trying to reach and touch and inspire and motivate a lot of different partners, customers, analysts, people who are looking at your business in a critical way. The first day, John Donahoe, it struck me as very sort of aspirational, really talking about what is our purpose, what do we do as an organization, what are our values, what problems are we trying to solve. And I think that laying that out the way that he did was effective, because it really did bring it back to: here's what we're about. >> Yeah, the other thing I learned is succession has been very successful. Frank Slootman stepped down last year as CEO. He maintained his chairman title; he's now stepped down as chairman. Fred kind of went away for a little while; Fred's back now as chairman. John Donahoe came in. People don't really put much emphasis on this, but Fred Luddy was the chief product officer, Dan McGee was the COO, and CJ Desai took over for both of them. He said on theCUBE, you know, you texted me, you've got big shoes to fill. He said, "I kept that just to remind me," and he seems to have just picked up right where those guys left off. Pat Casey, I think, is understated and vital to the culture of this company. Jeff, you see that, he's like a mini Fred, and I think that's critical to maintain that cultural foundation. >> But as we said, going back to the way that Pat talked about the bifurcation in the keynote, the audiences in the building and out of the building, which I've never heard before, kind of an interesting way to cut it. The people that are here are their very passionate community, and they're all here, and they're adding 4,000 every single year. The people that are outside of the building maybe don't know as much about it, and really maybe that aspirational kind of messaging touched them a little bit more, because they're not into the nitty gritty. It's really interesting too, just because this week is such a busy week in technology, the competition for attention, eyeballs and time. I was struck this morning going through some of our older stuff where Fred would always say, you know, I'm so thankful that people will take the time to spend it with us this week. And when people had choices to go to Google I/O, Microsoft Build, of course we're at Nutanix .NEXT, Red Hat Summit, I'm sure I'm missing a bunch of other ones. >> Busy week. >> The fact that people are here for three days of conference, and again, they're still here, is a pretty good statement in terms of the commitment of their community.
>> Now, the other thing I want to mention is, four years ago, Jeff, or I think it might have been five years ago, we said on theCUBE this company's on a collision course with Salesforce, and you can really start to see it take shape with the customer service management piece. We know that Salesforce really isn't designed for CSM, customer service management, but he talked about it, so they are on a collision course there. They've hired a bunch of people from Salesforce. Salesforce is not going to roll over, you know; they're going to fight hard for that, Oracle's going to fight hard for that. So software companies believe that they should get their fair share of the spend, as long as that spend is 100%. That's the mentality of a software company, especially those run by Marc Benioff and Larry Ellison, and so it's going to be really interesting to see how these guys evolve. They're going to start bumping into people. This guy's got pretty sharp elbows though. >> Yeah, and I think the customer relationship is very different. We were at PagerDuty Summit, right, and talked to Nick Mehta, who just got nominated for entrepreneur of the year, I think for Inc., from Gainsight, and he really talked about what is customer management versus opportunity management. Once you have the customer and you've managed that sale and you've made that sale, that's really where Salesforce has thrived, and that's what we use it for in our own company. But once you're in the customer, like say you're in IBM or you're in Boeing, how do you actually manage your relationship in Boeing? Because it's not just Boeing and your salesperson. There are many, many relationships, there are many, many activities, there's somewhere you're winning, somewhere you're losing, somewhere you're new, somewhere you're old. And so the opportunity there is way beyond simply managing a lead, to an opportunity, to a closed sale. That's just the very beginning of a process of actually having a relationship with the customer. >> The other thing is, one of the measurements of progress: in 2013, 95% of this company's business was in IT, their core ITSM, change management, help desk, etc. Today that number's down to about two thirds, so a third of the business is outside of IT. We're talking about multi-hundreds of millions of dollars. So ITOM, HR, the security practice, they're taking these applications and they're becoming multi-hundred million dollar businesses. Some of them aren't there yet, but they're north of 50, 75, and we're talking about hundreds of customers, higher average prices, average contract values. They don't broadcast that here, but you peel back the numbers and you can see just a tremendous financial story. The renewal rates are really, really high, in the mid 90s, high 90s, which is unheard of. And so I think this company is going to be the next great enterprise software company, and their focus on the user experience I think is important, because if you think about the great enterprise software companies, Salesforce, Oracle, SAP, maybe put IBM in there because they sort of acquired their way to it, those three, they're not the greatest user experiences in the world. They're working on the UI, but, you know, Oracle, we use Oracle. It's clunky, it's powerful. >> They're solving such different problems. Right, when those companies came up they were solving a very different problem, Oracle on their relational database side, a very different problem.
You know, ERP was so revolutionary when SAP came out, and I still just think it's so funny that we get these massive gains in efficiency. We had it in the ERP days and now we're getting it again. So they're coming at it from a very different angle, and they're fortunate that there's a more modern architecture, a more modern UI. Unfortunately, if you're legacy, you're kind of stuck in your historical... >> In your old ways, right? >> Paradigm. >> So the go-to-market gets more complicated as they start selling to all these other divisions. You're seeing overlay sales forces; it's going to be interesting. IBM just consolidated its big six shows into one. You wonder what's going to happen with this. Are they going to have to create mini Knowledges for all these different lines of business? We'll see how that evolves. You'd think with the one platform maybe they keep it all together. I hope they don't lose that core. You think of VMworld, right, there's still a core technical audience, and I think that brings a lot of the energy and credibility to a show like this. >> They still do have some little regional shows, and there's a couple of different kinds of series that they're getting out, because as we know, once you get big, it's just different, right? AWS re:Invent was over 40,000 last year. Oracle runs, I don't even know what Oracle runs, 65,000, 75,000. Salesforce, a hundred thousand, but they kind of cheat, they give away a lot of tickets. But it is hard to keep that community together. We've had a number of people come up to us while we're off air to say hi, that we've had on before. The company's growing, things are changing, new leadership, so to maintain that culture, I think that's why Pat is so important, and the key is that connection to the past and that connection to Fred, that kind of carries forward. >> The other thing we have to mention is the ecosystem. When we first started covering ServiceNow Knowledge, it was, you know, Fruition Partners, Cloud Sherpas, I mean, who are these guys? And now you see the acquisitions: EY is here, Deloitte is here, Accenture is here. >> Got Fruition. >> PwC, you see Unisys is here. I mean, big name companies, Capgemini, KPMG, with big install bases, strong relationships. It's why you see the sales guys at ServiceNow bellying up to these companies, because they know it's going to drive more business for them. So, a pretty impressive story. I mean, it's hard to be critical of these guys. Your price is too high? Okay, I mean, alright. But the value's there, so people are lining up. >> Yeah, I mean, it's a smoking hot company, as you said. What do they need to do next? What do you need to see from them next? >> Well, I mean, the thing is, they laid out the roadmap. They announce twice a year, at different cities, each with a letter of the alphabet. They've got to execute on that. I mean, this is one of those companies where it's theirs to lose. It really is. They've got the energy. They've got to retain the talent, attract new talent; the street's certainly buying their story. Their free cash flow is growing faster than their revenue, which is really impressive. They're an extremely well run company. Their CFO is a rockstar stud behind the scenes. I mean, they've got studs in development, they've got a great CEO, they've got a great CFO, a really strong chief product officer, really strong general managers who've got incredible depth and expertise.
I mean, it's theirs to lose. They really just have to keep executing on that roadmap, keeping their customer focus, and hoping that there's not some external factor that blows everything up. >> Yeah, good point, good point. What about the messaging? We've heard, as you said, it's new branding, so it's making the world of work work better, there's this focus on the user experience, the idea that the CIO is no longer just so myopic in his or her portfolio and really has to think much more broadly about the business, a real business leader. I mean, are you hearing this at other conferences too? Is it jiving with the others? >> You know, everyone talks about the new way to work, the new way to work, and the consumerization of IT, and all the millennials that want to operate everything on their phone. That's all fine and dandy. Again, at the end of the day, where do people work? Because, excuse me, everyone has many, many applications, unfortunately, that we have to run to get our day jobs done, and so if you can be the one that people use as the primary way that they get work done, that's the goal... >> Rebecca: That's where the money is. >> That's the end game, right. >> Well, the messaging to me is interesting because IT practitioners as a community are some of the most underappreciated, overworked, and they only hear from the business when things go bad. For decades we've seen this. The thing that struck me at ServiceNow Knowledge 13, when we first came here, was, wow, these IT people are pumped. You walk around other shows and the IT folks are like this, kind of dragging their feet, heads down, and the ServiceNow customers are excited. They're leading innovation in their companies. They're developing new applications on these platforms. It's a persona that I think is being reborn, and it's exciting to see. >> It's funny you bring up that old chestnut, because before it was a lot about just letting IT, excuse me, do their work with a little bit more creativity, better tools, build their own store, build an IT services Amazon-like store. We're not hearing any of that anymore. >> Do more with less, squeeze, squeeze. >> If we're part of delivering value, as we've talked about with the banking application and the link from MoonsStar, you know, now these people are intimately involved with the forward-facing edge of the company. So it's not talking about, we'll have a cool service store. I remember, like 2014, that was a big theme. We're not hearing that anymore. We've moved way beyond that, in terms of being a strategic partner in the business, which we hear over and over, but these are people that are now the strategic partner for the business. >> Okay, customers have to make bets, and they're making bets on ServiceNow. They've obviously made a bunch of bets on Oracle. Increasingly they're making bets on Amazon, we're seeing that a lot. They've made big bets on VMware, obviously big bets on SAP. So CIOs, they go to shows like this to make sure that they made the right bet and they're not missing some blind spots, to talk to their peers, but you can see that they're laying the chips on the table. I guess pun intended, I mean, they're paying off. >> That's great, that's a great note to end on, I think. So again, a pleasure co-hosting with both of you. It's been a lot of fun, it's been a lot of hard work, but a lot of fun too. >> Thank you, Rebecca. And so, theCUBE season, Jeff, I've got to shout out to you and the team.
I mean, you guys, it's like so busy right now. >> I thought you were going to ask where we were going next. I was going to say, oh my god. >> Next week I know I'm in Chicago at VeeamON. >> Right, we have VeeamON, DON, we've got a couple of on-the-grounds. SAP Sapphire is coming up. >> Dave: Pure Accelerate. >> Pure Accelerate, OpenStack Summit, we're going back to Vancouver, haven't been there for a while. Informatica World, back down here in Las Vegas. Pure Storage, San Francisco... >> We've got the MIT CTO conference coming up. We've got Google Next. >> Women Transforming Technology. Just keep an eye on the website for what's upcoming. We can't keep it all straight, but... >> theCUBE.net, SiliconAngle.com, WikiBon.com, a bunch of free content. You heard it here first. >> There you go. >> For Rebecca Knight, Jeff Frick and Dave Vellante, this has been theCUBE's coverage of ServiceNow Knowledge 18. We will see you next time. >> Thanks everybody, bye bye.
Red Hat Summit 2018 | Day 2 | AM Keynote
[Music] ...that will be successful in the 21st century. [Music] Being open is really important because it comes with a lot of trust. The open-source community now has matured so much, and that contribution from the community is really driving innovation. [Music] But what's really exciting is the change that we've seen in our teams, not only the way they collaborate, but the way they operate and the way they work. [Music] I think ideas are everything. Ideas can change the way you see things. Open source is more than a license, it's actually a way of operating. [Music] Ladies and gentlemen, please welcome Red Hat president and chief executive officer Jim Whitehurst. [Music] All right, well, welcome to day two at the Red Hat Summit. I'm amazed to see this many people here at 8:30 in the morning, given the number of people I saw pretty late last night out and about, so thank you for being here. And I have to give a shout out, speaking of the power of participation: that DJ was Mike Walker, who is our global director of Open Innovation Labs. So really enjoyed that this morning; it was great to have him doing that. So, day one. Yesterday we had some phenomenal announcements, both around Red Hat products and things that we're doing, as well as some great partner announcements, which we found exciting. I hope they were interesting to you, and I hope you had a chance to learn a little more about that and enjoy the breakout sessions that we had yesterday. So yesterday was a lot about the what, with these announcements and partnerships. Today I wanted to spend this morning talking a little bit more about the how: how do we actually survive and thrive in this digitally transformed world? To some extent, the easy part is identifying the problem. We all know that we have to be able to move more quickly, we all know that we have to be able to react to change faster, and we all know that we need to innovate more effectively. All right, so the problem is easy, but how do you actually go about solving that? The problem is that's not a product that you can buy off the shelf. It is a capability that you have to build, and certainly it's technology enabled, but it also depends on process, culture, a whole bunch of things, to figure out how we actually do that. And the answer is likely to be different in different organizations, with different objective functions and different starting points. So this is a challenge that we all need to feel our way to an answer on, and so I want to spend some time today talking about what we've seen in the market and how people are working to address that, and it's one of the reasons that the summit this year, the theme is ideas worth exploring. To take us back on a little history lesson: two years ago, here at Moscone, the theme of the summit was the power of participation, and I talked a lot about the power of groups of people working together and participating to solve problems much more quickly and much more effectively than individuals, or even individual organizations, working by themselves, and how some of the largest problems that we face in technology, but more broadly in the world, will ultimately only be solved if we effectively participate and work together. Then last year the theme of the summit was the impact of the individual, and we took this concept of participation a bit further and talked about how participation has to be active. This isn't something where you can be passive, where you can sit back; you have to be involved, because the problem
in a more participative-type community is that there is no road map. You can't sit back and wait for an edict on high, or some central planning, or some central authority to tell you what to do. You have to take initiative, you have to get involved; this is an active participation sport. Now, one of the things that I talked about as part of that was that planning was dead, and I think my keynote was actually titled "Planning is Dead." The concept was that in a world that's less knowable, when we're solving problems in a more organic, bottom-up way, our ability to effectively plan into the future is much less than it was in the past, and this idea that you're going to be able to plan for success and then build to it really is being replaced by a more bottom-up, participative approach. Now, aside from my whole strategic planning team kind of being up in arms, saying, "What are you saying, planning is dead?", I have multiple times had people say to me, "Well, I get that point, but I still need to prepare for the future. How do I prepare my organization for the future? Isn't that planning?" And so I wanted to spend a couple of minutes talking in a little more detail about what I meant by that. But importantly, taking our own advice, we spent a lot of time this past year looking around at what our customers are doing, because what better place to learn than from large companies and small companies around the world, information technology organizations having to work to solve these problems for their organizations. And so, with our ability to learn from each other, and the power of participation and individual initiative that people and organizations have taken, there are just so many great learnings this year that I want to get a chance to share. I also thought, rather than listening to me do that, we could actually highlight some of the people who are doing this. So I do want to spend about five minutes kind of contextualizing what we're going to go through over the next hour or so, and some of the lessons learned, but then we want to share some real-world stories of how organizations are attacking some of these problems, under this question of how do we be successful in a world of constant change and uncertainty. So, just going back a little bit more to last year, talking about planning being dead: when I said planning, it's kind of planning writ large. If you think about the way traditional organizations work to solve problems and ultimately execute, you start off planning. What's the position you want to get to in X years, whether that's a competitive strategy and a position of competitive advantage, or a certain position you want an organizational function to reach? You kind of lay out a plan to get there. Then, typically, senior leaders or a planning team prescribe the sets of activities and the organization structure and the other components required to get there, and then ultimately execution is about driving compliance against that plan. And you look at that and say, well, that's all logical, right? We plan for something, we then figure out how we're going to get there, we go execute to get there. And in a traditional world that was easy, and still some of this makes sense; I don't say throw out all of this. But you have to recognize that in a more uncertain, volatile world, where you can be blindsided by orthogonal competitors coming in, you know the term, "uberized," you have to recognize that you can't always plan or know what the future is. And so, if you don't, well then, what replaces the traditional model, or certainly
how do you augment the traditional model, to be successful in a world that, you know, is ambiguous? Well, what we've heard from customers, and what you'll see examples of through the course of this morning: planning can be replaced by configuring, so you can configure for a constant rate of change without necessarily having to know what that change is. This idea of prescription, of "here are the activities people need to perform," laying out very, very crisply the job descriptions and what organizations are going to do, can be replaced by a greater degree of enablement: how do you enable people with the knowledge and things that they need to be able to make the right decisions? And then ultimately, this idea of execution as compliance can be replaced by a greater level of engagement of people across the organization, to ultimately be able to react at a faster speed to the changes that happen. So, just double-clicking on each of those for a couple of minutes. What do I mean by configure for constant change? Again, we don't know exactly what the change is going to be, but we know it's going to happen. Last year I talked a little bit about a process solution to that problem; I called it "try, learn, modify." What that model, try, learn, modify, was, for anybody in the app dev space, was basically taking the principles of agile and DevOps and applying those more broadly to business processes in technology organizations, and ultimately organizations broadly. This idea that you don't have to know what your ultimate destination is, but you can try and experiment, you can learn from those things, and you can move forward. And I do think in technology organizations we've seen tremendous progress, even over the last year, as organizations are adopting agile and DevOps, and so that still continues to be, I think, a great way for people to configure their processes for change. But this year we've seen some great examples of organizations taking a different tack to that problem, and that's literally building modularity into their structures themselves, actually building the idea that change is going to happen into how you're laying out your technology architectures. We've all seen the reverse of that: when you build these optimized systems for kind of one environment, you kind of flip over two years later, and what was the optimized system is now called a legacy system that needs to be migrated, an optimized system that now has to be moved to a new environment because the world has changed. So again, you'll see a great example of that in a few minutes here on stage. Next, this concept of enablement, double-clicking on that a little bit. So much of what we've done in technology over the past few years has been around automation: how do we actually replace things that people were doing with technology, or augment what people are doing with technology? That's incredibly important, and that's work that can continue to go forward; it needs to happen. It's not really what I'm talking about here, though. Enablement, in this case, is much more around how do you make sure individuals are getting the context they need, how are you making sure that they're getting the information they need, how are you making sure they're getting the tools they need to make decisions on the spot. So it's less about automating what people are doing, and more about how you can better enable people with tools and technology. Now, from a leadership perspective, that's around making sure people understand the
strategy of the company, the context in which they're working, making sure you've set the appropriate values, etc., etc. From a technology perspective, that's ensuring that you're building the right systems that allow the right information and the right tools, at the right time, to get to the right people. Now, to some extent, even that might not be hard, but when the world is constantly changing, that gets to be even harder, and I think that's one of the reasons we see a lot of traction in open source to solve these problems: to use flexible systems to help enterprises be able to enable their people, not just in IT today, but to be flexible going forward. And again, we'll see some great examples of that. And finally, engagement. So again, if execution can't be around driving compliance to a plan, because you no longer have this kind of crisp plan, well, what do leaders do? How do organizations operate? I'll broadly use the term engagement; several of our customers have used this term, and this is really saying, well, how do you engage your people in real time to make the right decisions, how do you accelerate the pace of cadence, how do you operate at a different speed, so you can react to change and take advantage of opportunities as they arise? And everywhere we look, IT is a key enabler of this. In the past, IT was often seen as an inhibitor, because the IT systems moved slower than the business might want to move, but we are seeing, with some of these new technologies, that literally IT is becoming the enabler and driving the pace of change back onto the business, and you'll again see some great examples of that as well. So again, rather than listen to me sit here and theoretically talk about these things, or refer to what we've seen others doing, I thought it'd be much more interesting to bring some of our partners and our customers up here to specifically talk about what they're doing. I'm really excited to have a great group of customers who have agreed to stand in front of 7,500 people, or however many are here this morning, and talk a little bit more about what they're doing. So, really excited to have them here, and I really appreciate all of them agreeing to be a part of this. And so to start, I want to start with T-Systems. We have the CEO of T-Systems here, and I think this is a great story because there are really two parts to it, because he has two perspectives: one as the CEO of a global company itself having to navigate its way through digital disruption, and one as a global cloud service provider, obviously helping its customers through this same type of change. So I'm really thrilled to have Adel Al-Saleh join me on stage to talk a little bit about T-Systems, what they're doing, and what we're doing jointly together. So, Adel. [Music] >> Jim, good to see you. >> Adel, thank you for being here. >> Thank you for having me. >> Please, join me. I loved the DJ, wasn't that fantastic? We may have to hire him for more events. >> He's well employed, he's well employed. >> It's great to have you here, really do appreciate it. Well, you're the CEO of a large organization that's going through this disruption in the same way we are. I'd love to hear a little bit about how, for your company, you're thinking about navigating this change that we're going through. >> Great. Well, you know, T-Systems, as an ICT service provider, we've been around for decades. Not different to many of our clients, we had to change: the whole disruption of the cloud and digitization and new skills and new capability and
agility, it's something we had to face as well. So over the last five years, and especially in the last three years, we invested heavily, invested over a billion euros, in building new capabilities, building new offerings, new infrastructures to support our clients. So it's been very disruptive for us as well. >> And so, with your customers themselves, they're going through this set of change and you're working to help them. How are you working to help enable your customers as they're going through this change? >> Well, you know, all of them are on this journey of changing the way they run their business, leveraging IT much more to drive business results, digitization, and they're all looking for new skills, new ideas. They're looking for platforms that take them away from traditional waterfall development, that takes a year or a year and a half before they see any results, to processes and ways of bringing applications in a week, in a month, etcetera. So we are part of that journey with them, helping them with that. >> And speaking of that, I know we're working together to help our joint customers with that. Can you talk a little bit more about what we're doing together? >> Sure. Well, you know, our relationship goes back years and years, with Enterprise Linux, but over the last few years we've invested heavily in OpenShift and OpenStack to build PaaS layers, to build flexible infrastructure for our clients. We've been working with you, we've tested many different technologies in the marketplace, and we've been most successful with Red Hat and the stack there. And I'll give you an example: several large European car manufacturers, who have connected cars now as a given, have been accelerating the applications that need to be in the car. In the past it took them years, if not, you know, scores, to get an application into the car, and today we're using OpenShift as the PaaS layer to enable DevOps for these companies, and they bring applications in less than a month. It's a huge change in the dynamics of the competitiveness in the marketplace, and we rely on your team in helping us drive that capability to our clients. >> Yeah, I do find it fascinating: so many of the stories that you hear, and that we've talked about with our customers, are about this need for speed, this ability to accelerate and enable a greater degree of innovation by simply accelerating what we're seeing with our customers. >> Absolutely, but with that, plus, you know, the speed is important, agility is really critical, but doing it securely, doing it in a way that is not going to destabilize the broader ecosystem, is really critical. Things like GDPR, which is a new data protection regulation in Europe, are something that a lot of our customers worry about and need help with, and we're one of the partners that know what that really is all about, and how to navigate within it, and not prevent them from using the new technologies. >> Yeah, I will say it isn't just the speed of the external, but the security and the regulation, especially GDPR; we have spent an hour on that with our board this week. >> There you go. >> Well, thank you so much for being here. I really do appreciate the work that we're doing together and look forward to continuing it. >> Same here. Thank you. >> Thank you. [Applause] We've had a great partnership with T-Systems over the years, and we've really taken it to the next level. What's really exciting about that is we've moved beyond just helping kind of host systems for our customers; we
really are jointly enabling their success, and it's really exciting, and we're really excited about what we're able to jointly accomplish. So next, I'm really excited that we have our Innovation Award winners here, and we'll have on stage with us our Innovation Award winners this year: BBVA, DNM, IAG, Lufthansa Technik and UPS. They're all being recognized for specific technology initiatives that really, really stand out and are really, really exciting. You'll have a chance to learn a lot more about those through the course of the event over the next couple of days, but in this context, what I found fascinating is they were each addressing a different point of this configure, enable, engage, and I thought it would be really great for you all to hear about how they're experimenting and working to solve these problems, real time, large organizations, happening now. Let's start with the video to see what they think about when they think about innovation. >> I define innovation as something that's changing the model, changing the way of thinking, not just a step-change improvement, not just making something better, but actually taking a look at what already exists and putting it together in new and exciting ways. >> Innovation is about building something nobody has done before. >> Historically we had a statement that business drives technology. We've flipped that equation around, and IT is now demonstrating to the business the power of technology innovation. >> [In Spanish] From the technology point of view, innovation means moving away from proprietary platforms to a cloud based on open source; it's the possibility that open source gives us. >> For me, open source stands for flexibility, speed, security, the community, and that contribution from the community is really driving innovation, innovation at a pace that I don't think any one individual organization could achieve by ourselves. >> Right. So first I'd like to talk with BBVA. I love this story because, as you know, financial services is going through a massive set of transformations, and BBVA really is at the leading edge of thinking about how to deploy a hybrid cloud strategy, and a kind of modular, layered architecture, to be successful regardless of what happens in the future. So with that, I'd like to welcome on stage Jose Maria Rosetta from BBVA. [Music] Thank you for being here, and congratulations on your Innovation Award. >> It's been a pleasure to be here with you. Hi, everybody. >> It's great to have you. So, Jose Maria, for those who might not be familiar with BBVA, can you give us a little bit of background on your company? >> Yeah, a brief description. BBVA is a bank, a financial institution with a diversified business model, that provides financial services to more than 73 million customers in more than 20 countries. >> Great. And I know we've worked with you for a long time, so we appreciate the partnership with you. So I thought I'd start with a really easy question for you: how will blockchain impact financial services in the next five years? >> I've got no idea, but if someone knows the answer, I've got a job for him. >> A pretty good job, indeed. >> All right, well, let me go a little easier then. How will the global payments industry change in the next, you know, four or five years? >> Five years? Well, I think you'd need a wizard. I'll try to make my best prediction: in five years, we'll probably be five years older. >> Good answer. I like that. >> I hope so. I hope so.
>> Yeah, I hope so, good point. So, immediately to the obvious question: you have a massive technology infrastructure as a global bank. How do you prepare yourself to enable the organization to be successful when you really don't know what the future is going to be? >> Well, global banks, and BBVA is a global bank, share certain common foundations. Today I would like to talk about risk and efficiency. So banks worldwide deal with risk: market, credit, operational, reputational risk, and so on. Risk control is part of our DNA. And when you've got millions of customers, efficiency is a must. So I think there's no problem with these foundations; the problem appears when banks translate these foundations and values into technology. Risk control, risk management, avoiding risk, usually means buying the most expensive proprietary technology in the market, you know, from one of the biggest software companies in the world. >> Probably a few people in the room were glad to hear you say that. >> Yeah, probably. My guess is the names of those companies are around San Francisco, most of them. And efficiency usually means, as every business unit, every department or country has its own specific needs, buying a specific solution for them. So imagine yourself working in a data center full of silos, with many different hardware, operating systems, different languages, and complex interfaces to communicate among them, not always documented, well, really never documented. So your life is not easy, you know. In this scenario, well, there's no room for innovation. >> So what has been your strategy to be ready to move forward in this new digital world? >> Well, we've chosen a different approach, which is quite simple: to replace all local proprietary systems with a global platform based on open source, with three main goals. The first one is to reduce the average transaction cost to one-third. The second one is to increase our developers' productivity five times. And the third is to enable the business to deliver solutions three times faster. So, not quite easy, and everything with the same reliability and security standards as we've got today. >> Wow, that is an extraordinary set of objectives, and I will say they're well on the path to making that successful, which is just amazing. >> Yeah, okay, but this is a long journey, sometimes a tough journey, to be honest. So we decided to partner with the best companies in the world, and we think Red Hat is one of these companies. We think your values and your knowledge are critical for BBVA. And well, as I mentioned before, our collaboration started some time ago, and just an example: today in BBVA Spain, one of the biggest banks in the country, and using Red Hat technology of course, our front-end architecture for mobile and internet channels runs ninety-five percent of our customers' requests, which is approximately 3,000 requests per second, and our back-end architecture executes 70 million business transactions a day, which is almost 50% of the total online transactions executed in the country. >> So it's all running? >> Yes, running, I hope so. >> You checked before you came on stage? >> I'll be flying otherwise, you know. >> Okay, good, there's no wood up here to knock on. It's been a really great partnership. >> It's been a pleasure. >> Thank you so much for being here. >> Thank you. Thank you. [Applause]
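Those throughput figures invite a quick back-of-the-envelope check, with the caveat that the 3,000 requests per second is presumably a peak or sustained front-end rate, while the 70 million transactions are a daily total:

```python
# Back-of-the-envelope check on the BBVA throughput figures quoted above.
# Assumption: 3,000 req/s is a sustained front-end rate; 70M is a daily
# back-end total. Both assumptions are ours, not BBVA's.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

front_end_rps = 3_000
front_end_requests_per_day = front_end_rps * SECONDS_PER_DAY
print(f"Front end at a sustained 3,000 req/s: "
      f"{front_end_requests_per_day:,} requests/day")   # 259,200,000

back_end_tx_per_day = 70_000_000
avg_back_end_tps = back_end_tx_per_day / SECONDS_PER_DAY
print(f"Back end at 70M transactions/day: "
      f"~{avg_back_end_tps:,.0f} transactions/s on average")  # ~810
```

So 70 million back-end transactions a day works out to roughly 810 per second averaged around the clock, which is plausibly consistent with a front end handling 3,000 requests per second, since many requests are reads that never become business transactions.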
I do love that story, because again, so much of what we talk about when we talk about preparing for digital is a process solution, things like agile and DevOps and modularizing components of work. But this idea of thinking about platforms broadly, and how they can run anywhere, and actually delivering at scale, it's just a phenomenal project and experience, and the progress they've made, it's a great team. So next up we have two organizations that have done an exceptional job of enabling their people with the right information and the tools they need to be successful. In both of these cases, these are organizations that are under constant change, and so they're leveraging the power of open source to help them build these tools, and you'll see the size and the scale of these in two very, very different contexts. It's great to see. And so I'd like to welcome on stage Osmar Alza with DNM and David Abrahams with IAG. [Music] Osmar, welcome, thank you so much for being here. Dave, great to see you, appreciate you being here, and congratulations to you both on winning the Innovation Awards. So, Osmar, I really found your story fascinating: how you're able to enable your people with data, which has significantly accelerated the pace with which they can make decisions, and your ability to act. Could you tell us a little more about the project and what you're doing? [Osmar answers in Spanish; the interpreter's English translation follows.] >> First of all, I want to thank you for the interest displayed in our project. The National Migration Administration, or DNM, records the entry and exit of people on Argentine territory. It grants residence permits to foreigners who wish to live in our country, through 237 entry points: land, air, sea and river ways. Jim, DNM registered over 80 million transits throughout last year. Argentine borders cover about 15,000 kilometers; just to give you an idea of the magnitude of our borders, this is greater than the distance on a highway between Mexico City and Alaska. Our department applies the mechanisms that prevent the entry and residence of people involved in crimes like terrorism, trafficking of persons, weapons, drugs and others. In 2016 we shifted to a more preventive and predictive paradigm. That is how Sam, the system for migration analysis, was created, with Red Hat's great assistance and support. This allowed us to tackle the challenge of integrating multiple and varied sources: legal issues, police databases, national and international security organizations like Interpol, API, advanced passenger information, and PNR, passenger name record. This involved standing up a private cloud with OpenShift, RHEV, Data Virtualization, CloudForms and Fuse, which were the basis to develop Sam, and implementing machine learning models and artificial intelligence. Before 2016, our analysts had to consult a number of systems and manual files, sometimes taking days, for each person entering or leaving the country, so this has allowed us to optimize our decisions, making them in real time. Each time Sam is consulted, it processes patterns across over two billion data entries. Sam's aim is to improve the quality of life of our citizens and visitors, making sure that crime doesn't pierce our borders, in an environment of analytic evolution and constant improvement. In essence, Sam contributes toward Argentina being one of the leaders in Latin America in terms of immigration, with our new system. >> Great, thank you.
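For a sense of what a real-time consultation against a platform like Sam might look like at its very simplest, here is a minimal sketch. Everything in it, the source names, features, weights and thresholds, is a hypothetical stand-in, not DNM's actual design:

```python
# Minimal sketch of a real-time border-crossing check that combines
# federated data sources with a simple risk score (hypothetical design;
# sources, features and thresholds are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Traveler:
    document_id: str
    name: str

# Stand-ins for virtualized data sources (watchlists, API/PNR feeds).
INTERPOL_NOTICES = {"X1234567"}
NATIONAL_WARRANTS = {"Z9990001"}
PNR_FLAGS = {"X1234567": ["cash ticket", "no-show history"]}

def risk_score(traveler: Traveler) -> float:
    """Combine hard watchlist hits with softer behavioral signals."""
    score = 0.0
    if traveler.document_id in INTERPOL_NOTICES:
        score += 0.7
    if traveler.document_id in NATIONAL_WARRANTS:
        score += 0.7
    score += 0.1 * len(PNR_FLAGS.get(traveler.document_id, []))
    return min(score, 1.0)

def check_crossing(traveler: Traveler) -> str:
    score = risk_score(traveler)
    if score >= 0.7:
        return "refer to secondary inspection"
    if score >= 0.3:
        return "flag for analyst review"
    return "clear"

print(check_crossing(Traveler("A0000001", "ordinary traveler")))    # clear
print(check_crossing(Traveler("X1234567", "watchlisted traveler")))  # refer
```

In a production system of this kind, the lookups would be federated queries through a data virtualization layer and the score would come from trained models rather than fixed weights; the point here is only the shape of the real-time decision.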
>> And so, Dave, tell us a little more about the insurance industry and the challenges you face. >> Yeah, sure. So, you know, the insurance industry has been a bit insulated from a lot of major change and disruption, purely from the fact that it's highly regulated and the barrier to entry is quite high. In fact, if you think about insurance, you have to have capital reserves to protect against those major events like floods, bush fires and so on. But a whole lot of change has come in at a really rapid pace, particularly in the area of customer expectations. Customers are now looking for, and expecting, the same levels of flexibility and convenience that they would experience with more modern new startups; they're expecting that out of the older institutions, like banks and insurance companies like us. So they're definitely expecting the industry to be a lot more adaptable and to better meet their needs. I think the other aspect of it really is in the data, where the data is now creating a much more significant connection between organizations and consumers, especially when you think about the level of devices that are now enabled and the sheer growth of data, which is growing at exponential rates. The impact, then, is that the systems and the technology we used to rely on to handle that kind of growth no longer keep up and aren't able to build for the future, so we need to change that. So what IAG is really doing is transforming the organization to become a lot more efficient, focus more on customers, and really set ourselves up to be agile and adaptive. >> And as part of your Innovation Award, in the specific set of projects, you tied a huge amount of different disparate systems together, and with M&A and everything else you have a lot to do there too. Tell us a little more about how you're able to better respond to customer needs by being able to do that. >> Yeah, no, you're right. So we're nearly a hundred year old company that's grown from lots of merger and acquisition, and just as a result of that, data has been spread out and fragmented across multiple brands and multiple products. And so the number one issue and problem that we were hearing was that it was too hard to get access to data, and it's highly complicated, which is not great from our perspective, really, because we are a data company, right? That's what we do. We collect data about people: what's important to them, what they value, and the environment in which they live, so that we can understand that risk and better manage and protect those people. So what we have been doing is making data more open and accessible, and by that I mean making data more easily available for people to use to make decisions in their day-to-day activity. And to do that, what we've done is built a single data platform across the group that unifies the data into a single source of truth, that we can then build on top of, single views of customers for example, that puts the right information into the hands of the people that need it the most. >> And so why does open source play such a big part in doing that? I know there are a lot of different solutions that could get you there. >> Sure. Well, firstly, I think open source has been key to this, and really it's been key because we've basically started from scratch to build this new next-generation data platform based entirely on open source, using great components like Kafka and Postgres and Airflow, and then fundamentally building on top of Red Hat OpenStack to power all that and give us the flexibility that we need to be able to make things happen much faster. For example, we were just talking to the Pivotal guys earlier this week, and some of the stuff that we're doing there is quite interesting and innovative, even sort of maybe a first in the world, where we've taken the older sort of appliance, the dedicated massively parallel processing unit, and ported that over onto Red Hat OpenStack, which is now giving us a lot more flexibility for scale in a much more efficient way. But you're right, though, that in the past we've come from a more traditional approach to using vendor-based technology, which was good back then, when technology solutions could last for around 10 years or so, and that was fine. But now that we need to move much faster, we've had to rethink that, and so our focus has been on using more commoditized open source technology built by communities, to give us that adaptability and remove the lock-in and entrenchment of technology. So that's really helped us. But I think the last point that's been really critical to us is answering that concern and question about ongoing support and maintenance. In a regulated environment, the regulator is really concerned about anything that could fundamentally impact business operations, and so the question is always about what happens when something goes wrong, who's going to be there to support you. Which is where the value of the partnership we have with Red Hat has really come into its own. What it's done is it's actually given us the best of both worlds. It means that we can leverage and use and take some of the technology that's being developed by great communities in the open source way, but also partner with a trusted partner in Red Hat, who will stand behind that community and provide that support when we need it the most. So that's been the real value out of that partnership.
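To make that single-source-of-truth pattern concrete, here is a minimal sketch built from the components David names: Kafka feeding a unified customer table in Postgres. The topic name, table schema, keys and connection details are assumptions for illustration, not IAG's actual design:

```python
# Minimal sketch: consume customer events from Kafka and upsert them into
# a unified Postgres "single view of customer" table. Topic, schema and
# connection details are illustrative assumptions.
import json

from kafka import KafkaConsumer          # pip install kafka-python
import psycopg2                          # pip install psycopg2-binary

conn = psycopg2.connect("dbname=customer360 user=etl host=localhost")
conn.autocommit = True

consumer = KafkaConsumer(
    "customer-events",                   # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# customer_id needs a unique constraint for ON CONFLICT to apply.
UPSERT = """
    INSERT INTO customer_view (customer_id, brand, email, updated_at)
    VALUES (%(customer_id)s, %(brand)s, %(email)s, now())
    ON CONFLICT (customer_id)
    DO UPDATE SET brand = EXCLUDED.brand,
                  email = EXCLUDED.email,
                  updated_at = now();
"""

for message in consumer:
    event = message.value   # e.g. {"customer_id": "...", "brand": "...", "email": "..."}
    with conn.cursor() as cur:
        cur.execute(UPSERT, event)
```

The upsert is what turns a stream of per-brand events into one current record per customer; an orchestrator like Airflow would typically handle the batch backfills around a streaming path like this.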
>> Okay, well, I love the story: how do you move quickly, leverage the power of community, but do it in a safe, secure way. And I love the idea of literally empowering people with machine learning and AI at the moment when they need it. It's just an incredible story, so thank you so much for being here, appreciate it. >> Thank you. [Applause] You know, again, you see in these the importance of enabling people with data. In the old world, so much data was created with a system in mind, versus data as a separate asset that needs to be available in real time to anyone; that's a theme we hear over and over and over again. And so really looking at open source solutions that allow that flexibility and keep data from getting locked into proprietary silos is a theme that I've heard over and over this past year with many of our customers. So, I love logistics. I'm a geek that way, I come from that background in the past, and I know that running large, complex operations requires flawless execution, and that requires great data. And we have two great examples today around how to engage organizations in new and more effective ways. In the case of Lufthansa Technik, literally IT became the business. So it wasn't enabling the business, it became the business offering, and importantly, went from idea to delivery to customers in a hundred days, and so this theme of speed and the importance of speed, it's a great story you'll hear more about. And then also UPS. I talked a little earlier about how IT used to be kind of the long pole in the tent, the thing that was slow-moving because of the technology, but UPS is showing that IT can actually drive the business, and drive the cadence of business even faster, by demonstrating the power and potential of technology to engage, in this case, hundreds of thousands of people to make decisions in real time in the face of constant change around weather, mechanicals, and all the different things that can happen in a large logistics operation like that.
mechanicals, and all the different things that can happen in a large logistics operation like that. So I'd like to welcome on stage Tobias Mohr from Lufthansa Technik and Nick Castillo from UPS. Tobias, welcome. Thank you for being here. Nick, thank you. >> Thank you, Jim. >> And congratulations on your Innovation Awards. >> Oh, thank you. It's a great honor. >> So Tobias, let's start with you. Can you tell us a little bit more about what AVIATAR is? >> Yeah. AVIATAR is a digital platform offering features like aircraft condition analytics, reliability management, and predictive maintenance, and it helps airlines worldwide digitize and improve their operations. All of the features work and can be used separately, or generate even more value when combined. And finally, we decided to set up AVIATAR as an open platform, which means we invite the whole aviation industry to join the community and develop ideas on our platform. >> And Tobias, one of the things I found really fascinating about this is that you had a mandate to do it in a hundred days, and you ultimately delivered on it. Tell us a little bit about that. I mean, nothing in aviation moves that fast. >> Yeah, that's been a big challenge. At the beginning of our story, the Lufthansa board asked us to develop something like a digital twin of an aircraft within just a hundred days, and to deliver something of value within 100 days means you cannot spend much time producing specifications on paper, et cetera. So for us it was pretty clear that we should go for an agile approach and immediately start developing ideas. We put the best experts we knew in one room and let them start to work, and on day 2, I think, we already had the first scribbles for the UI; on day 5, we wrote the first lines of code. We were able to do that because it was a major advantage for us to already have the technologies in place, based on open source, and especially Red Hat solutions, because we did not have to waste any time setting up the infrastructure. And since we wanted to get feedback very fast, we visited an airline from the Lufthansa Group as early as day 30, showed them the first results, and got a lot of feedback, because from the very beginning customer centricity has been an important aspect for us, and changing direction based on customer feedback has become quite normal for us over time. >> Yeah, it's an interesting story: not only engaging the people internally, but being able to engage with a launch customer like that and get feedback along the way. How is it going overall since launch? >> Since the launch last year in April, we've generated much interest in the industry, from airlines as well as from competitors. In the following months, we focused on a few airlines that were open-minded and already advanced in digital activities, and we got a lot of feedback by working with them and were able to improve our product by developing new features. For example, we learned that data integration can become quite complex in this industry, and therefore we developed a new feature called quick boarding, allowing airlines to integrate into the AVIATAR platform within one day using a self-service. Currently we're heading for the next steps beyond predictive maintenance, working on process automation and prescriptive maintenance, because we believe prediction without fulfillment still isn't enough. >> It really is a great example of, even once you're out there, quickly continuing to innovate, change, react. It's great to see.
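The predictive-maintenance models behind a platform like AVIATAR aren't public, but the basic idea, flagging sensor readings that drift away from their recent baseline, can be sketched in a few lines. The signal name, window size, and threshold below are invented for illustration only.

```python
# Toy illustration of predictive-maintenance-style anomaly flagging:
# mark readings more than z standard deviations from the rolling mean
# of the preceding window. Real models are far more sophisticated.
import pandas as pd

def flag_anomalies(readings: pd.Series, window: int = 50, z: float = 3.0) -> pd.Series:
    baseline = readings.rolling(window).mean().shift(1)  # exclude current sample
    spread = readings.rolling(window).std().shift(1)
    return (readings - baseline).abs() > z * spread

# Hypothetical exhaust-gas-temperature samples for one engine.
egt = pd.Series([615, 612, 618, 620, 614, 698, 616, 613])
print(flag_anomalies(egt, window=4, z=2.0))  # flags the 698 spike
```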
So Nick, we all know UPS. I'm still always blown away by the size and scale of the company and the logistics operations that you run. Tell us a little more about the project and what we're doing together. >> Yeah, sure, Jim. You know, first of all, I think I didn't get the sport coat memo; I think I'm the first one up here today with a sport coat. But first, on behalf of the 430,000 UPSers around the world and our world-class, talented team of 5,000 IT professionals, I have to tell you, we're humbled to be one of this year's Red Hat Innovation Award recipients, so we really appreciate that. As a global logistics provider, we deliver about 20 million packages each day, and we've got a portfolio of technologies, both operational and customer-facing, that power what we call the UPS Smart Logistics Network. And I've got to tell you, innovation is in our DNA; technology is at the core of everything we do: from the ever-familiar, first-in-industry mobile platform that a lot of you see when a package gets delivered, which we call the DIAD, and which, believe it or not, we delivered in 1992, to My Choice, a data-driven solution that serves over 40 million of our My Choice customers. >> You know what? This is great. He loves logistics; he's a My Choice customer. >> You could be one too, by the way; there's a free app in the App Store. But it provides unmatched visibility and really controls that last-mile delivery experience. So today we're going to talk about the solution we're recognized for, which is called Site, part of a much greater platform we call Edge, and it's transforming how our package delivery teams operate by providing them real-time insights into our operations. It allows them to make decisions based on data from 32 disparate data sources, and these insights help us optimize our operations. More importantly, they help us improve the delivery experience for our customers, just like you, Jim. On the back end is big data at a very large scale: our systems are crunching billions of events to render those insights on an easy-to-use mobile platform in real time. I've got to tell you, placing that information in our operators' hands makes UPS agile, and being agile, being able to react to changing conditions, as you know, is the name of the game in logistics. Now, we built Edge in our private cloud, where Red Hat technologies play a very important role as part of our overarching cloud strategy and our migration to agile and DevOps. >> It's amazing, the size and scale. So you have this technology vision around engaging people in a more effective way. Those are my words, not yours, but that's how it certainly feels. Tell us a little more about how that enables hundreds of thousands of people to make better decisions every day. >> Yep. You know, we're a people company, and the Edge platform is really the latest in a series of solutions to empower our people and power that Smart Logistics Network. We've been deploying technology, believe it or not, since we founded the company in 1907; we'll be a hundred and eleven years old this August. It's just a phenomenal story. Prior to Edge, and specifically Site, our people pulled information from a number of disparate systems and reports, then had to manually look across those various data sources, and frankly, it was inefficient, prone to inaccuracy, and not really real-time at all.
Now, Edge consumes data, as I mentioned earlier, from 32 disparate systems. It allows our operators to make decisions on staffing, equipment, and the flow of packages through the buildings in real time. The ability to give our people on the ground the most up-to-date data allows them to make informed decisions, and that's incredibly empowering, because not only are they influencing their local operations, they're influencing the entire global network. It's truly extraordinary. >> And so why open source, and OpenShift in particular, as part of that solution? >> Yeah, as I mentioned, Red Hat and Red Hat technology, specifically OpenShift, are really core to our cloud strategy and to our DevOps strategy. The tools and environments we've partnered with Red Hat to put in place truly are foundational, and they've fundamentally changed the way we develop and deploy our systems. I heard Jose talk earlier: we had complex solutions that used to take 12 to 18 months to develop and deliver to market. Today we deliver those same solutions, at the same level of complexity, in months and even weeks. Now, OpenShift enables us to containerize our workloads, which run in our private cloud during normal operating periods. But as we scale our business during our holiday peak season, a very short window of about five weeks during the year (last year, as a matter of fact, we delivered seven hundred and sixty-two million packages in that small window), our transactions and our systems spike dramatically. We think that having OpenShift will allow us, in those peak periods, to seamlessly move workloads to the public cloud so we can take advantage of burst capacity economically when needed. And I have to tell you, having this flexibility is key, because ultimately it's going to allow us to react quickly to customer demands when needed, and dial back capacity when we don't need it.
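That kind of elastic peak handling is usually codified as autoscaling policy. As a hedged sketch (not UPS's actual configuration: the deployment name, namespace, and limits are invented), here is how a team might create a HorizontalPodAutoscaler on OpenShift or Kubernetes with the official Python client:

```python
# Hypothetical sketch: widen the replica range of a service ahead of a
# peak window using the autoscaling/v1 API. All names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="edge-insights", namespace="logistics"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="edge-insights"),
        min_replicas=5,
        max_replicas=200,                      # peak-season ceiling
        target_cpu_utilization_percentage=70,  # scale out past 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="logistics", body=hpa)
```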
>> I have to say, it's a really great story of UPS and Red Hat working together. It really is a great story. It's just amazing, again, the size and scope, but both stories here are a lot about speed, speed, speed: getting to market quickly, being able to try things. There are great lessons here for all of us about the importance of being able to operate at a fundamentally different clock speed. So thank you both for being here. Very much appreciated, and congratulations. >> Thank you. [Applause] [Music] >> All right. While it's great to hear from our Innovation Award winners, it should be no surprise that they're leading and experimenting in some really interesting areas, at scale. I hope you got a chance to learn something from these interviews. You'll have an opportunity to learn more about them, and you'll also have an opportunity to vote on the Innovator of the Year. You can do that on the Red Hat Summit mobile app or on the Red Hat Innovation Awards homepage, where you can learn even more about their stories, and you'll have a chance to vote. I'll be back tomorrow to announce the summit winner. Next, I'd like to spend a few minutes talking about how Red Hat is working to catalyze our customers' efforts. Marco Bill-Peter, our senior vice president of Customer Experience and Engagement, and John Alessio, our vice president of Global Services, will both describe how we are configuring our own organization to effectively engage with our customers and use open source to help drive their success. With that, I'd like to welcome Marco on stage. [Music] >> Good morning. Good morning. Thank you, Jim. So I want to spend a few minutes talking about how we are configured, how we are configured towards your success, how we enable internally to work towards your success, and how we engage. You know, Paul yesterday talked about the open source culture and our open source development model. There are a lot of attributes to it, like transparency, meritocracy, and collaboration. Those are the keys of our culture; they made Red Hat what it is today and what it will be in the future. But we've also added our passion for customer success to that, and that's the configuration from a cultural perspective. Let me tell you a little about what it means. As the name says, my organization is Customer Experience and Engagement. In the past we talked a lot about support, and support is an important part of Red Hat, but we are configured probably very uniquely in the industry: we put support together with product security, we add documentation, and we add quality engineering, all in one organization. You might think, wow, why are they doing that? We're even running the IT systems for the product teams. Why are we doing that? Well, you can imagine: we want to experience what you experience, and I'll give you a few examples of what comes out of this configuration. We invest more and more in testing the integrations and use cases you are actually applying, so the support team, which experiences a lot of what you do, can directly change our test structure; that makes a lot of sense. We are investing more and more in testing outside the boundaries, not exactly how things are specified by product management or engineering, but how things really run in the environments you operate. We run complex setups internally: taking OpenShift, putting it on OpenStack, using software-defined storage underneath, managing it with CloudForms and with Insights. We do that because we want to see how it works for ourselves. We are reshaping documentation to help you better: instead of just documenting features and knobs, we're documenting how you can achieve the things you want to achieve. Now, a big part of the configuration is the voice of the customer, listening to what you say. I've been at Red Hat a few years, and one of my passions has always been really hearing from customers how they do things. I travel constantly around the world and meet with customers because I want to know what is really going on. We use channels like support, we use what salespeople hear in their interactions with customers, we do surveys, and we talk with our people to really hear what you do. Something else we do, which maybe not many people know about and which is also very unique in the industry: we have a webpage called "You Asked. We Acted." where we show very transparently: you told us this is an area for improvement, and here is what we did. And it's not just in support; it's across the company: build us a better web store, build us this. We're very transparent about the improvements we want to make with you. Now, if you want to be part of that process, today, go to the feedback zone one floor down and talk to my team. I might be there as well; hit me up. We want to hear the feedback. So that's how the organization is configured. Let me go to another part, which is innovation, innovation every day; that, in my opinion, is the enable section. We've got to constantly innovate ourselves: how do we work with you, and how do we actually provide better value?
How do we provide faster responses in support? This is what I would call our commitment to innovation, the enabling that Jim talked about, and I'll give you a few examples that I'm really happy about, because they show the open source culture at Red Hat. Here's a good example: if you have a few thousand engineers and you empower them, you set the business framework as, hey, this is an area where we've got to do something, and you get a lot of good ideas, a lot of ideas, and then you shape them around the ones that really bring value. A few years ago, based on a lot of feedback, we said we've got to get more and more proactive with you, our customers, so I shaped my team around how we can be more proactive. It started very simply, with knowledge base articles and getting-started guides. Then we launched a tool called Labs; you've probably seen it if you're on the technical side: really small applications that let you validate, is this configured correctly, is that configured correctly. That was the start. Out of that, the ideas took different turns, and one of those turns became Red Hat Insights, which we launched a few years ago. Did you see the demo yesterday in Paul's keynote? They showed how something was broken in one of the data centers, how the fix was applied, and what changed. That's how innovation really came from the ground up, from the support side, and turned into a cornerstone of our strategy. And we're keeping it married to the day-to-day work: you don't want to separate them; you want the data coming from support to keep feeding it, because that's the power we saw yesterday in the demo. Now, innovation doesn't stop when you set the challenge. We did Labs, we did Insights, and we just launched Solution Engine, another thing that came out of that challenge: how do we break complex issues down so it's easier for you to find a solution quicker? That's one example, but we're also experimenting with AI. Insights uses AI, as you probably heard yesterday, and we also use it internally to drive faster resolution. In one case, with an AI bot, we got to 25% faster resolution on the challenges you raise. The beauty for you, obviously, is that it's much faster; 10% of all our support cases today are supported and assisted by an AI. I'll give you another example, just to show the innovation that comes out when you configure and enable the team correctly. Knowledge base articles: we create thousands and thousands every year, and the feedback I get is, well, they're good, but they're in English. As you can tell, my English is perfect, so it's no issue for me, but many of you would rather read them in, say, Japanese. So we did machine translation, because there are too many articles to translate manually. A funny example: two weeks ago I tried it on something from English to German, and the German looked really bad; I went back, and the English was bad too, so it really does translate one to one what the original says. But it's really cool; this is innovation you can apply, and the team worked on it and is really proud of it.
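The internals of the support bot Marco mentions aren't public, but the general idea of AI-assisted triage, classifying incoming case text so it can be routed or matched to known solutions, can be illustrated in a few lines. The training cases and labels below are invented; a production system would train on a large case history.

```python
# Toy illustration of AI-assisted support-case triage. Data is invented;
# the real system described above is not public and is far more involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cases = [
    "pods stuck in CrashLoopBackOff after upgrade",
    "yum update fails with dependency errors",
    "cannot mount gluster volume on rhel 7",
    "openshift router returning 503 for all routes",
]
labels = ["openshift", "rhel", "storage", "openshift"]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(cases, labels)

# A new case gets routed to a queue (or matched to knowledge base articles).
print(triage.predict(["deployment pods crashloop after cluster upgrade"]))
```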
Now, the real innovation there is not these tools. The real innovation is that you can shape things so that innovation emerges, that you empower the people. That's the configure and the enable. And what I think is also important: don't reinvent the plumbing. Don't start from scratch. Use systems like containers on OpenShift to build the innovation in a smaller way without reinventing the plumbing; you save a lot of issues on security and a lot of reinventing of the wheel. Focus on the innovation itself. That's what we do as well. If you want to hear more details, again, go to the second floor. Now let's talk about the engage that Jim mentioned before. What I translate engage into is actually engaging you, the customer, towards your success. What does commitment to success really mean? Reflect on how a traditional IT company shows up: the salesperson and solution architect work with you, consulting implements the solution, it gets handed over to support, and trust me, in a very traditional company the support person has no clue what was actually sold early on. That's what happens, right? This is where I think Red Hat is better: we're not so siloed, and we don't expose our internal organization that much. Today we engage in a way where it doesn't matter which team it comes from; we have a better flow than that. You don't deserve to see how the sausage is made, but we can never forget what your business objective was early on. How is Red Hat different in this? We are very strong, in my opinion (you might disagree), in virtual account teams: really putting you in the middle, and having a solution architect work directly with support or consulting involved, driving it together. You can also help us by embracing that model; if there are other partners or system integrators, put yourself in the middle, be around. That's how we make sure we don't lose sight of the original business problem. Trust me: reducing hierarchy, or getting rid of hierarchy and bureaucracy, goes a long way. So this is how we're configured, this is how we engage, and this is how we are committed to your success. With that, I'm going to introduce John Alessio, who will talk more about some of the innovation done with customers. Thank you. [Music] >> Good morning. I'm John Alessio, the vice president of Global Services, and I'm delighted to be with you here today. I'd like to talk to you about what we've been doing in the services organization since the last summit. At the core of everything we did, very similar to what Marco talked about, our number one priority is driving customer success with Red Hat technology. As you see here on the screen, we have a number of different offerings and capabilities, all the way from training and certification, to Open Innovation Labs, to consulting, and we pair those capabilities with what you just heard from Marco in the support, or CEE, organization. That's the journey you all go through, from discovering what your business challenge is, through designing those solutions, to deploying them with Red Hat. I'd like to highlight a few things we've been up to over the last year. If I start with the training and certification team, they've been very busy updating and enhancing our curriculum. If you haven't stopped by the booth, there's a preview of a new capability around our learning community, which is a new way of learning that really drives enablement in the community.
Seventy percent of what you need to know, you learn from your peers, so it's a very key part of our learning strategy. In fact, we take customer satisfaction with our training and certification business very seriously. We survey all of our students coming out of training, and 93% of our students tell us they're better prepared because of Red Hat training and certification after they've completed the course. We've updated the courses, and we've trained well over a hundred and fifty thousand people over the last two years, so it's a very, very key part of our strategy, and combined with Innovation Labs and the consulting operation, it really drives that overall journey. Now, we've been equally busy enhancing the system of enablement and support for our business partners; another very key initiative is building out the ecosystem. We've enhanced OPEN, our Online Partner Enablement Network, and added new capability; in fact, much of the training and enablement we do for our internal consultants is delivered through OPEN. What I'm really impressed with, and thankful to our partners for, is how they are consuming and leveraging this material. We train and enable for sales, for pre-sales, and for delivery, and we're up over 70% year on year in partners enabled on Red Hat technology. Let's give our business partners a round of applause. Now, one of our offerings, Red Hat Open Innovation Labs, I'd like to talk about in more depth and take you through a case study. Open Innovation Labs was created two years ago, and it's there to help you on your journey in adopting open source technology. It's an immersive experience where your team works side by side with Red Hatters to propel your journey forward. We've been very busy since the summit in Boston: as you'll see coming up on the screen, we've completed dozens of engagements leveraging our methods, tools, and processes for Open Innovation Labs, and we've worked with large and small accounts. If you remember summit last year, we had a European customer, easier AG, a startup, on stage; we worked with them at the very beginning of their business to create capabilities in a very short four-week engagement. Over the last year we've also worked with very large customers, such as Optum and Delta Airlines here in North America, as well as Motability Operations in Europe. One account I want to spend a little more time on is Heritage Bank. Heritage Bank is a community-owned bank in Toowoomba, Australia. Their challenge was not just creating new, innovative technology; it was also cultural transformation: how to get people to work together across the silos within their organization. We worked with them at all levels of the organization to create a new capability, and the first engagement went so well that they asked us to come in for a second. So what I'd like to do now is run a video with Peter Lock, the chief executive officer of Heritage Bank, so he can take you through their experience. >> Heritage Bank is one of the country's oldest financial institutions. We have to be smarter, we have to be more innovative, we have to be more agile. We had to change, and we had to find people to help us make that change. The Red Hat lab is the only one that truly helps drive that change with a business problem. The change within the team is very visible: from the start to now, we've gone from being separated to very single-goal-minded.
Seeing people I'd only ever seen before in their cubicles together in one room made me smile. Programmers are now understanding how the whole process fits together. The productivity of IT will change, and that is good for our business; that's really the value we were looking for. The Red Hat Innovation Labs were a really great experience for us. I'm not interested in running an organization; I'm interested in making a great organization. To say I was pleasantly surprised by it is an understatement: I was delighted. >> I love the quote "I was delighted." It makes my heart warm every time I see that video. You know, since we were at summit, for those of you who were with us in Boston, some of you went on our hardhat tours: we've opened three physical facilities where we can conduct Red Hat Open Innovation Labs engagements. Singapore, London, and Boston all opened within the last fiscal year, and our site in Boston is paired with our world-class executive briefing center, so if you haven't been there, please do check it out. I'd like to now talk about a very special engagement we recently completed with UNICEF, the United Nations Children's Fund. The purpose of this engagement was to help UNICEF create an open source platform that marries big data with social good. The idea is that UNICEF needs to be better prepared to respond to emergency situations, and as you can imagine, emergency situations are by nature unpredictable: you can't really plan for them; they can happen anytime, anywhere. So we worked with them on a project we called school mapping, and the idea was to provide more insight so that when emergency situations arise, UNICEF can do a much better job of helping the children in the region. We leveraged our Red Hat Open Innovation Labs methods, tools, and processes, just as we did at Heritage Bank and the other accounts I mentioned, but we also leveraged Red Hat software technologies: OpenShift Container Platform and Ansible automation. We helped the client adopt a more agile development approach, so they could release much more frequently and continue to update the platform over time; we created a continuous integration / continuous deployment pipeline; and we worked on containerizing the application. With that, we've been able to provide a platform that will allow them to grow and better respond to these emergency situations. Let's watch a short video on UNICEF. >> The mission of UNICEF Innovation is to apply technology to the world's most pressing problems facing children. Data is changing the landscape of what we do at UNICEF. It means we can figure out what's happening now, on the ground, who it's happening to, and actually respond in a much more real-time manner than we used to be able to. We love working with open source communities because of their commitment to doing good for the world. We're actually building, with Red Hat, a sandbox where universities or other researchers or data scientists can connect and help us with our work. If you want to use data for social good, there are so many groups out there that really need your help, and there are so many ways to get involved. [Music] >> So let's give a very, very warm Red Hat Summit welcome to Erica Kochi, co-founder of UNICEF Innovation. Well, Erica, first of all, welcome to Red Hat Summit. >> Thanks for having me here. >> It's our pleasure, and thank you for joining us.
So, Erica, I've just talked a bit about what we've been up to in Red Hat services over the last year, and a bit about our Open Innovation Labs, and we did this project, the school mapping project, together, our two teams. I thought the audience might find it interesting, from your point of view, why the approach we use in Innovation Labs was such a good fit for the school mapping project. >> Yeah, it was a great fit for two reasons. The first is values. In everything we do at UNICEF Innovation, we use open source technology, and that's for a couple of reasons: because we can take it from one place and very easily move it to other countries around the world (we work in 190 countries, so it's really important for us to be able to scale things), and also because it makes sense: we can get more communities involved, and not try to do everything by ourselves, but look much more openly to the open source communities out there to help us with our work. We can't do it alone. The second thing is methodology. The labs really take this agile approach of prototyping things, trying things, failing, and trying again, and that's really necessary when you're developing something new, like mapping every school in the world. >> Yeah, very challenging work. Think about it: 190 countries. Wow. So the open source platform really works well, and the rapid prototyping was really a good fit. I think the audience might find it interesting: how will this application and this platform help children in Latin America? >> A lot of countries in Latin America, and many countries throughout the world that UNICEF works in, are either coming out of decades of conflict or are subject to natural disasters, and they don't have great infrastructure. So it's really important for us to know where schools are, where communities are, where help is needed, and what's connected and what's not. Using an overlay of various sources of data, from poverty mapping to satellite imagery to other sources, we can really figure out what's happening, where resources are and where they aren't, so we can plan better to respond to emergencies and invest in the areas that need that investment.
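As a hedged sketch of that overlay idea: join point locations of schools against district-level poverty polygons, so each school carries the socioeconomic context around it. The file names and columns (including a "connected" flag per school) are hypothetical stand-ins for the project's real data sources.

```python
# Hypothetical sketch of layering school locations over poverty mapping.
import geopandas as gpd

schools = gpd.read_file("schools.geojson")        # one point per school
poverty = gpd.read_file("poverty_zones.geojson")  # one polygon per district

# Attach each school to the district polygon that contains it.
enriched = gpd.sjoin(schools, poverty[["geometry", "poverty_rate"]],
                     how="left", predicate="within")

# Unconnected schools in high-poverty districts go first in the queue.
priority = enriched[(~enriched["connected"]) & (enriched["poverty_rate"] > 0.4)]
print(priority[["school_id", "poverty_rate"]])
```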
>> Excellent, excellent. It's quite powerful what we were able to do in a relatively short eight- or nine-week engagement between our two teams. Now, many of your colleagues in the audience are using open source today or looking to expand their use of open source, and I thought you might have some recommendations on how they go through that journey, given your experience. >> Yeah. For us, it was very much based on two questions: what is this going to cost (we have limited resources), and how is this going to spread as quickly as possible? We asked ourselves those two questions about 10 years ago, and what we realized is that if we are going to recommend technologies that governments are going to use, those technologies really need to be open source. Governments need to have control over them, and they need to be working with communities, not developing everything themselves. >> Excellent, excellent. I got really inspired by what we were doing in this project. Every customer project is interesting to me, but this one pulls at your heartstrings a bit, given what the real impact could be. And so I know some of our colleagues here in the audience may want to get involved. How can they get involved? >> Well, there are many ways to get involved, with UNICEF or other groups out there. You can search for our work on GitHub, and there are tasks you can do right now; if you're looking for a longer or bigger engagement, you can check out our website, unicefstories.org, look at the areas you might be interested in, and contact us. We're always open to collaboration. >> Excellent. Well, Erica, thank you for being with us here today, thank you for the great project we worked on together, and have a great summit. Give her a round of applause. All right, I hope that's been helpful in giving you a bit of an update on what we've been focused on in Global Services. The message I'll leave with you is that our top priority is customer success. As you heard through the stories from UNICEF, Heritage Bank, and others, we can help you innovate where you are today. I hope you have a great summit, and I'll call out Jim Whitehurst. >> Thank you, John, and thank you, Erica. That really is an inspiring story. We have so many great examples of how individuals and organizations are stepping up to transform in the face of digital disruption. I'd like to spend my last few minutes on one real-world example that brings a lot of this together, truly with life-saving impact. >> How many times do you get to solve a problem that is going to allow a clinician to save a life? I think the challenge all physicians are dealing with is data overload. I probably look at over 100,000 images in a day, and that's just going to get worse. What if it were possible for a computer program to look at these images with them and automatically flag the ones that might deserve closer attention? ChRIS, on the surface, seems pretty simple, but underneath, ChRIS has a lot going on. In the past year, I've seen ChRIS form a community in a space usually dominated by proprietary software. I think ChRIS can change medicine as we know it today. [Music] >> All right, with that, I'd like to invite on stage Dr. Ellen Grant from Boston Children's Hospital. Dr. Grant, welcome. Thank you for being here. So, Dr. Grant, tell me, who is ChRIS? >> ChRIS does a lot of work for us, and I think ChRIS is making me, or definitely has the potential to make me, a better doctor. ChRIS helps us take data from our archives in the hospital and port it to fast back ends like the Mass Open Cloud for rapid data processing, then provides the results back to me in any format, on a desktop, an iPad, or an iPhone. It basically brings high-end data analysis right to me at the bedside, and that's a barrier I struggled for years to break down. That's where we started with ChRIS: breaking the barrier between research, which occurs on a timeline of days to weeks to months, and clinical practice, which occurs on a timeline of seconds to minutes.
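The archive-to-cloud-to-bedside loop Dr. Grant describes can be sketched schematically. The endpoints, paths, and payloads below are invented placeholders, not ChRIS's actual API (the real ChRIS project is open source and worth reading directly).

```python
# Schematic sketch of a bedside-to-cloud analysis loop. All endpoints and
# payloads are hypothetical, not ChRIS's real interface.
import time

import requests

PACS_URL = "https://pacs.example.org"        # hypothetical archive gateway
COMPUTE_URL = "https://compute.example.org"  # hypothetical cloud back end

# 1. Fetch the imaging study from the hospital archive.
study = requests.get(f"{PACS_URL}/studies/12345", timeout=30).content

# 2. Submit it to an analysis pipeline on the compute back end.
job = requests.post(f"{COMPUTE_URL}/jobs",
                    files={"study": study},
                    data={"pipeline": "brain-segmentation"},
                    timeout=30).json()

# 3. Poll until the result is ready, then retrieve it for the clinician.
while True:
    status = requests.get(f"{COMPUTE_URL}/jobs/{job['id']}", timeout=30).json()
    if status["state"] == "finished":
        break
    time.sleep(5)

result = requests.get(f"{COMPUTE_URL}/jobs/{job['id']}/result", timeout=30)
print("analysis ready:", len(result.content), "bytes")
```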
>> One of the things I found really fascinating about this story, and at Red Hat, in case you can't tell, we're really passionate about user-driven innovation, is that this is an example of user-driven innovation not at a technology company but in medicine. Can you tell us a little about the genesis of ChRIS and how it got started? >> Yeah. ChRIS got started when I was running a clinical division and was very frustrated at not having the latest image analysis tools at my fingertips while in clinical practice. I would have to go over to the research side, where I could write code and do the data analysis, but when I was always over in clinical, I kept forgetting how to do those things. I wanted all those innovations at my fingertips without having to remember all the computer science, because I'm a physician, not a computer scientist. So I wanted a platform that gave me easy access to that back end without having to remember all the details, and that's what ChRIS does: it allows me to go into the PACS, grab a dataset, send it to a compute back end for analysis, and get the result back, without having to worry about where it ran or how it got there. That's all handled inside the platform. >> And why not just go to a vendor and ask them to write a piece of software to do that? >> We thought about that. We do a lot of technical innovation, and we always work with the experts: if I'm building, say, an optical device, I work with optical engineers; for an MR system, I work with MR engineers. So we wanted to work with the people who really know, so to speak, the plumbing of the software industry. We ended up working with the Mass Open Cloud for the platform and the distributed systems, and with Red Hat as the infrastructure that's starting to support ChRIS. That's been an incredible journey for us, because medical software has not typically been a community process, and working with Dan from Red Hat, we learned a lot about how to participate in an open community. I think our team has grown a lot as a result of that collaboration. >> And I know we've talked in the past about how getting this data locked into a proprietary system that you may not be able to get it out of is a real issue. Can you talk about the importance of open, and how that's worked in the process? >> Yeah. For the medical community, and I find this resonates with other physicians as well, it's medical data: we want to continue to own it, and we feel very awkward about giving it to industry. We would rather have our data sitting in an open cloud, like the Mass Open Cloud, where we can have a data consortium overseeing the data governance.
That way, we're not giving our data away to somebody else; we have a platform where we still keep control of our own data. I think that's going to be the future, because we're running out of space in the hospital. We generate so much data, and it's just going to get worse, as I was mentioning: all the systems run faster, and we keep getting new devices, so the amount of data we have to filter through is increasing astronomically. We need resources to store and compute on such large databases. >> So, thinking about where this could go: this feels like a classic open source project. It started really small, with an originally modest set of goals, and it's just continued to grow and grow. It's a lot like Linux: if you'd asked Linus Torvalds in 1995 where Linux would be, you probably wouldn't have guessed where it is now. So if you dream with me a little bit, where do you think this could possibly go in the next five years, ten years? >> What I hope it will do is allow us to break down the silos within the hospital, because to do the best job at what we physicians do, not only do we have to talk and collaborate as individuals, we have to take the data each community develops and bring it together. In other words, I need to be able to bring in information from vital monitors, from MR scans, from optical devices, from genetic tests, and from the electronic health record, and analyze all of that data combined. Ideally, this would be a platform that breaks down those information barriers in a hospital, and also allows us to collaborate across multiple institutions, because for many disorders you only see a few cases in each hospital, so we really have to work as teams in the medical community to combine our data. I'm also hoping, and we have even had discussions with people in the developing world about this, that it can serve places that have systems to generate data, say an MR system, but don't have the resources to analyze it. This would be a portal for them to participate in this growing data analysis world without having the infrastructure locally: a portal into our back end, where we provide the infrastructure to do the data analysis. >> It really is truly amazing to see how it's continued to grow and expand. It's a phenomenal story. Thank you so much for being here. Appreciate it. >> Thank you. [Applause] >> I really do love that story. It's a great example of user-driven innovation in a different industry than technology, and of recognizing that a clinician's need for real-time information is very different from a researcher's need on projects that can last weeks or months. Rather than trying to get an industry to pivot and change, it's a great opportunity to use a user-driven approach to directly meet those needs. So, we still have a long way to go: we have two more days of the summit, and as I said yesterday, we're not here to give you all the answers; we're here to convene the conversation. I hope you'll have an opportunity today and tomorrow to meet some new people and share some ideas. We're really, really excited about what we can all do when we work together. I hope you found today valuable. We still have a lot more happening on the main stage this afternoon, so please join us back for the general session; it's a really amazing lineup. You'll hear from the Women in Open Source Award winners, and you'll hear more about our CO.LAB program,
which is really cool: it's getting middle school girls interested in open source and coding, and you'll have an opportunity to meet some of the people involved in that. You'll also hear from the Open Source Stories speakers, and as part of that, you'll see a demo done by a technologist who happens to be 11 years old. Really cool; you don't want to miss it. So I look forward to seeing you this afternoon. Thank you. [Applause]
Erica Kochi & Mike Walker | Red Hat Summit 2018
>> Live from San Francisco, it's theCube, covering Red Hat Summit 2018, brought to you by Red Hat. >> Okay, welcome back, everyone. We're live here in San Francisco, California, at Moscone West, for theCube's exclusive coverage of Red Hat Summit 2018. I'm John Furrier, the host of theCube, with my co-host and analyst this week, John Troyer, co-founder of TechReckoning, an advisory and community development firm. Our next two guests are Erica Kochi, co-founder of UNICEF Innovation at the United Nations Children's Fund, and Mike Walker, director of Open Innovation Labs at Red Hat. Welcome to theCube, thanks for coming and joining us. >> Thanks. >> I love this story. So Erica, take a minute to talk about what you're working on at UNICEF. You're doing a lot of great stuff, you've got the relationship with Red Hat Innovation Labs, and you're doing some pretty amazing things. Take a minute to explain what you're doing at UNICEF: some of the projects, and what we're going to talk about here with the school mapping and all the greatness. >> Sure. At UNICEF Innovation, essentially what we do is take technology and apply it to the problems facing children around the world, and we do that in a variety of ways. I think the thing we're probably most known for is our work in mobile technology, connecting frontline health workers and young people to governments, and letting them have a say in what's happening in the halls of government. We have a program called U-Report, which has five million young people from all over the world talking directly to their government representatives. >> They need that now more than ever. >> We certainly do. >> Yeah. So, open source: obviously a big shared vision with Red Hat. Talk about the shared mission; where's the connection? Open source has been great for society; we've seen the benefits all around the world. How is this translating for you? >> Yeah. So I've been at Red Hat for a while, and obviously we're the world's largest enterprise open source software company, and as a consultant I've been able to see Red Hat open source software used for many different purposes in every vertical you can think of. But this one was really unique, because we found a natural partnership between UNICEF Innovation's vision of using open source and open principles for maximum impact for good and what we do. When I learned about innovation at UNICEF, really by chance (I just ran into a colleague at a meeting in New York, and she gave me a few words about it), I said, this is incredible, because we can leverage all of what we've learned at Red Hat, our knowledge of open source, to impact people and culture, not just for technical reasons, and partner with UNICEF for maximum social impact for the children who need it most. >> And you've got Red Hat, a technology company with a lot of smart people, and with open source there's been democratization in the DNA of the company, and now we're out in the open, with everyone online. This is a democratization piece. Talk about the things you're doing with Red Hat. What specifically are you celebrating together here? >> So we had a great collaboration with Red Hat, with their labs program, which took on our challenge of using big data to really understand what's happening on the ground, especially in schools, in countries that are either coming out of emergencies or have limited access to large parts of the country.
So we layered satellite imagery, information on poverty, and other sets of data, and you can really get a clear picture of where we should be allocating resources and how we should be planning for emergencies. >> And this collaboration just finished up a couple of days ago, right? It's really been great. What's some of the impact? Give an example of some of the use cases: saving time, money, what are some of the impact things you see with this project? >> A lot of countries right now are thinking about how they can connect all of their schools, make sure all of their schools are online, and give children access to information that's really essential for thriving in the world of today and tomorrow. If you don't know where your schools are, whether they're connected or not, and what else is happening socioeconomically in those areas, it's really hard to figure out what to do and where to start. So we're really just at the beginning of the process of trying to connect every school in the world, and at the moment we're laying the groundwork to understand where we're at and where we need to go. >> It's a level of insight you're providing: once you connect the schools, people can know what to do and how to align with what's happening. It's interesting: I was just in Puerto Rico a couple of weeks ago, and the young kids there have self-formed their own blockchain network between the schools and are teaching themselves how to program, because they recognize that, to get out of the mess they're challenged with post-hurricane, they want to participate in the new economy. So for someone not knowing how they could help, you're kind of providing a window into that dynamic. Is that kind of the use case? Is that how it's working? >> Participation and contribution, absolutely. Participation is key for young people; they need to really learn and acquire the skills they're going to need to become successful, productive adults, and school is one of the entry points to do that. So that's really important. >> And everyone loves that, too. >> Yeah, I'm kind of curious about the structure of the project. Today in the keynote, Jim Whitehurst started us off by saying, well, we can't plan everything; here's a framework for how to approach problems when we really don't know what the outcome is, or even what we're going to hit. So can you talk a little bit about the structure of the process? Did you start with a blank piece of paper, or how did you figure out the pathway to the ultimate outcome here? >> I can take that first. That's a great question, because at labs we experiment with ways to get fast feedback: in a very short amount of time, usually one to three months, and with a very limited amount of funds, how can we make maximum impact using open technologies and open practices? The project was already in progress, like most IT projects are; there had already been some research, and we had data scientists to work with. One of the first things we did was really talk about our concerns and fears about how we might work together, using an exercise called "how might we": we came together and said, how might we solve this problem or that problem, and just got it all out on the table.
One of the aspects that I think worked really well was dedicating a small team in a residency-style engagement, where we worked off premises: Red Hatters left their office, UNICEF folks left their office, and we came together in a coworking space in New York City that was fairly convenient, and we all focused on a tough problem. We decided really early on that, in order to make sure this product would actually be usable, and in the hands of end users in the field across the world, we needed to get face to face. So we made a trip to Latin America to work with a UNICEF field office and get fast feedback on prototypes, and that helped us adjust what we ended up shipping as the product at the end of the two-month cycle. >> Erica, how was the outcome for you and your team? >> It's great. I think one of the things that really aligns Red Hat and UNICEF is not just a commitment to open source and the values around that, but also this agile methodology. To really move a product or a program forward, you need to step away from the daily part of life, away from the email and the connection to the laptop and the phone, and I think we were able to do that. I also think you need to ground-truth things, and that trip to the field, to really understand the context and the problems people are facing, is completely critical to success. >> That's like agile programming: you've got to get out on the front lines. Now let me ask about the data; I'm really intrigued. You've got multiple data sources coming in, I love the satellite thing, you're changing lives, and you're saving lives too, with the real-time efforts here. What's the data science behind it, what's the tech? Is it ingesting data, is it third-party data? How does it work? Can you share some of the mechanics of the data science piece? >> Yeah, there's probably a lot we could talk about; I could talk about data all day, I love data. But here are some of the things I think were fundamentally exciting about this project and about what UNICEF Innovation has done. Take Facebook, for example: they have a whole lot of data, but that's one company, one lens on the world. It's quite broad, and we get a lot of information, but it's one company. What UNICEF Innovation has done is find ways to partner with private and public companies, and private and public data sources, in a way that maintains the security and integrity of that data, so it's not exposing proprietary information, and they've been able to create a community that's willing to share information to solve a really tough challenge for social good. So we actually had a really wide variety of data at our disposal, and our job was to create a sandbox that allows data scientists both to proactively plan for things that might happen and to reactively plan when events occur, even when we don't know what the event might be. I like to think back to Jim Whitehurst's speech at last year's summit, where he said planning is dead: we've got to try, learn, and modify. That's exactly what we aimed for: a platform that hasn't been planned around any one event or action, but provides the flexibility for data scientists to try, experiment, pull different data together, and learn from it. We integrated geospatial data and maps so results can be shared quickly, and then modified based on what we learn, so we can more quickly achieve the greatest impact. >> That's awesome. >> Yeah. For example, take epidemics: so many different types of data are needed to really understand what's happening in an epidemic. Take Zika. You have temperature, because mosquitoes only breed at certain temperatures. You have poverty, which really indicates standing water, where mosquitoes can breed. You have socioeconomic factors: does the house have mosquito screens or not? And then you have the social: what are people talking about, what are they concerned about? A really interesting picture emerges when you can start to layer all of these kinds of data, and that really helps us see where we should be focusing.
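As a toy illustration of that layering, here is one way the kinds of signals Erica lists (temperature suitability, poverty, housing, social chatter) could be combined into a single per-region score. The numbers and weights are invented purely to show the mechanics; this is not an epidemiological model.

```python
# Hypothetical composite of several data layers into one risk score.
import pandas as pd

regions = pd.DataFrame({
    "region":        ["A", "B", "C"],
    "temp_suit":     [0.9, 0.4, 0.7],   # mosquito breeding suitability, 0-1
    "poverty_rate":  [0.6, 0.2, 0.5],   # proxy for standing water, screens
    "no_screens":    [0.7, 0.1, 0.4],   # share of homes without screens
    "social_signal": [0.8, 0.1, 0.3],   # normalized volume of related chatter
})

weights = {"temp_suit": 0.35, "poverty_rate": 0.25,
           "no_screens": 0.20, "social_signal": 0.20}

regions["risk"] = sum(regions[col] * w for col, w in weights.items())
print(regions.sort_values("risk", ascending=False)[["region", "risk"]])
```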
>> It's great discovery: using the data to drive where to look and where to focus efforts. >> Exactly. >> And with a global footprint, right? In previous decades, maybe this would have run on some proprietary GIS appliance, or, I'm not even sure, you'd ship around discs. But, not to be too product-oriented, it's built on OpenShift, and we've seen a whole lot of that this week. With that global footprint, you could take it live on any cloud. I assume that's a piece of it: global accessibility for the resulting application? >> Absolutely, and we want to take what we've done in one scenario and apply it to many others, in many other locations. Being open source is key for this, because otherwise we wouldn't be able to replicate it in other locations just as easily, hand it to local folks and have them adapt it or take it further, or have other people work on it, whether that's academics, other companies, or us. >> Nice. I love the structure, how it's agile. Erica, I want to ask you about this, because we're seeing a big trend with open source, which is obviously well on its way to becoming, in fact already is, the standard way of doing software: mission-driven technology activities aren't just nonprofits anymore. You're starting to see collaboration. The JOBS Act that Obama put in place really set the table for a new kind of funding, so you're seeing a lot more younger people coming in and saying, hey, I can build this on the cloud, and even when the grant ends, the code gets to live on. So you're seeing a new flywheel around mission-driven nonprofits and for-profits, a new kind of entrepreneurship culture. Can you share insight into how this is developing? You see a lot of it, and you have a lot of thoughts on this. >> I think that as technology companies become so much more influential in our lives (they're not just showing you the news anymore; they're moving into every aspect of our lives, whether it's into our homes or even inside our bodies), they're occupying a much more influential role in an individual's life, and with that comes a tremendous amount of responsibility. While it's not enough to say you should do good because it's the right thing to do, I think employees also really demand it. That shift will occur because employees realize they want to be doing good in the world if they're going to be influencing so many people's lives. >> It's really a new citizenship model for the younger generations. Early millennials want to work at a company that's not just about the profit motive.
You look at Facebook as a classic example. You know, the word weaponizing content has been a bad thing, but as we've been talking about on theCUBE, there's actually a reverse of that, a polar opposite, which is that you can weaponize content for good, meaning that all the same principles that do bad things can be used for good things. So this is where we're starting to see a lot more people saying, hey, let's do more of the good and punish the bad, so new kinds of rules are developing in society. I find it fascinating, and I'm just curious, is this known within the social entrepreneurship culture, or what's your view on how to do more, how to do better? >> I'm doing a lot of work on what AI is gonna mean for children in the world, and you know, there are so many opportunities, we've been talking about some of them, but there are also a lot of risks, right? What does it mean when your child's best friend is a robot? What does that change about us as human beings? And so I think you have to look at both sides, and you have to be very conscious about designing the technology that you want to see in the world, that's gonna make the world a good place to live in. And I think that there definitely is an awakening, and that's going on. >> And there's a lot here; this is a first-generation set of problems that social entrepreneurship brings to a just society. I mean, who sets the policy, which side of the road the cars drive on? There's these new issues that are evolving that have never been seen before, cyberbullying, all kinds of things happening. So congratulations on all the success. What's the forecast for Red Hat innovation? Is more of this gonna continue, are you doubling down on it, what are the things you guys have going on? >> Yeah, so Labs is growing quite largely. We are now live in North America, EMEA, and APAC, with plans to extend to Latin America in the future, and we're growing quite quickly in terms of our ability to execute. The Labs team is relatively small, a small number of specialists, but we are all of Red Hat, so the way we operate is, based on what we're trying to achieve together, we will look at all of Red Hat, and sometimes even outside of Red Hat, to figure out who we can bring to the table to help solve that problem. And so it allows me to work with our engineering, with our business units, even with our marketing. We brought marketing in to the first meeting, not simply because we're creating a marketing event, but because we realized we need to advertise internally and externally what we build in order to gain adoption. It's part of building a community, and what I have found is that because Labs has an objective that goes beyond simply a technological objective, we're aiming to change ways of working and to change culture, it's really easy to build a lot of interest and adoption among all Red Hatters to bring them together to solve a tough problem. >> A really interesting facet about Labs: I know you do these pop-up labs, and I think this is where you don't necessarily make people come to you, you can come to them. But like you said, it's important to get outside your office and your day-to-day for these focused projects. Talk a little bit about your approach there. >> Yeah, so we've learned a lot. Labs is almost exactly two years old, I think we launched in April of 2016 at OpenStack Summit, and one thing we learned is the world is a big place and we can't necessarily have a physical lab location everywhere.
So we do have first-class facilities in Boston, Singapore, and London, but I would say the large majority of the work efforts we've done to date have been in what we call pop-up labs, and what that allows us to do is create that immersion and focus on a tough challenge by getting people out of the office, but also provide the ability to go home at the end of the day and have dinner at your home, which a lot of people enjoy. And from the Red Hat perspective, we've got a lot of folks used to travel, so we can make that happen, meet in the middle, and it's been a good hybrid approach that we end up doing more and more. >> Great stuff. Here actually is my final question, then, to take from Jim Whitehurst's keynote today: how is blockchain changing this open-for-good economics? >> That's absolutely right, and Erica, you might want to weigh in as well, but I love blockchain. First of all, I love math and I love the science behind it, but I love the fact that it was developed in the open, it was debated in the open, it's radically transparent, you can see all of the transactions of anyone in the chain, and it's being used in ways that no one ever dreamed of. I mean, it was meant for a universal currency, but think about this: we might be able to use it as a token system so that we can actually ensure that humanitarian efforts that are done are actually recognized, for people that may not otherwise have funds, right? Someone with very little money can still use it. >> So perhaps making sure the money gets put to use. >> Absolutely, and at the endpoints we have accountability. You know, we're using it to exchange electronic health records securely and privately with the people that need them, and only the people that need them. So I don't know where blockchain will be in five years, but I am optimistic. I think the mathematics and the fundamentals of blockchain are sound, and I think more than anything it's the community that will drive new applications of blockchain and really define and answer that question for you. >> Well, I know we'll be in New York next week with blockchain for Consensus; there's a lot of events going on. We've seen wealthy entrepreneurs donating Bitcoin and Ethereum to some really great projects, and a lot of young people love the blockchain and crypto, so who knows. >> On the Labs side we're definitely looking into it, and we have a couple of experiments around the world that range from trying to do some smart contracts in in-country environments to taking donations in cryptocurrencies. >> I think that there are a lot of exciting applications for it in this do-good space. I also think that there's a tremendous amount of hype, and you really have to ask yourself the key questions of: does this need a central trusted authority, or is there one that already exists that is great? And do we need to record every transaction? If you can answer those two questions, then you're maybe going somewhere. >> Well, great point. The other thing I would add, and I agree a hundred percent, is that blockchain and crypto, or token economics certainly, not the ICO scams, is an efficiency heat-seeking missile. It targets inefficiencies, and that's where I see a lot of the action going on, and, you know, efforts for good are highly inefficient. >> Yeah. >> So hey, we love blockchain, as you can tell; we can talk about it all day long, smart contracts, token economics. Thanks for coming on, and congratulations on your project.
>> Thank you. >> Good stuff. More Cube coverage here, day two of three days of live coverage in San Francisco at Red Hat Summit 2018. We're back after this short break. Stay with us.
Deepak Bharadwaj, ServiceNow | ServiceNow Knowledge18
>> Announcer: Live from Las Vegas it's theCUBE, covering ServiceNow Knowledge 2018. Brought to you by ServiceNow. >> Welcome back to theCUBE's live coverage of ServiceNow Knowledge18. I'm your host Rebecca Knight, along with my co-host Dave Vellante. We have Deepak Bharadwaj joining us. He is the General Manager of the HR Business Unit at ServiceNow. Thanks so much for coming on the show, Deepak. >> My pleasure, glad to be here. >> Good to see you again. >> Likewise. >> So we know that ServiceNow is expanding beyond IT, and HR is a huge business opportunity. Describe for our viewers how you view your role, and how you see HR in the modern organization. >> Yeah, that's a great question, so what we are trying to do, really, is help our customers' HR organizations provide their employees with what I call the Google Maps for their employee journey. So if you think about Google Maps, and what it has done in terms of the transformation of the travel journey, it provides you proactively with the guidance that you need as you make your way. And so if you think about the employee journey, it could be long in an organization, it could be short, but they all have these moments that matter, whether they are personal, whether they are professional. So when you think about personal moments, that could be the birth of a baby, I changed my address, I got married, things like that. It could be professional. If I'm a manager, I want to promote someone. If I'm a new hire, I'm being onboarded. So how do we help guide these employees through each of these moments that matter in that journey? And why that's important is because that's when employees need their organization's support the most, and so, if you don't get that right, then it starts to have an impact on everything from productivity and engagement, and eventually that starts to impact customer satisfaction, right? So if you really think about happy employees equals happy customers, you can really bring it back to things like employment brand, productivity, engagement, and really where the rubber meets the road and where things could fall apart is during these moments that matter. So what we do is we help HR departments manage that, provide the proactive guidance to these employees, provide high touch help when they need it because not everything can be automated, right? You might order a Starbucks on your app, but sometimes you just want to go and walk up and talk to the barista. And so we want to make sure that we can provide flexibility to our customers in being able to manage how they interact, how employees interact with these HR departments, and make them feel like they've got the peace of mind, get the emotion and the stress out of these moments that matter, and get them back into what they are doing best, which is their day-to-day job. >> You said that companies are investing in, you were talking about investing in employee, in customer success, but that's really about investing in employee success because happier employees lead to happier customers. >> Deepak: Absolutely. >> They're happier to come to work. >> Deepak: Yep. >> Do companies get it? Do companies get that? >> I think they do. They get it at a philosophical level, it makes sense. I think where companies struggle is in trying to figure out how to make that linkage happen. And the reality is there's no silver bullet. It's not a case of, you fix this one thing over here, and that's going to make an impact.
And so our approach is, while there may be many other things that you need to address, right, what we focus on really is making sure that we give this employee that guidance, that help, when they need it the most, because we believe that that's where things could fall apart very easily. But, on the other hand, if you actually take care of them during those moments that matter, that represents a great opportunity for companies to differentiate themselves and create what we call competitive differentiation, right? In fact, the topic of my keynote this morning was how employee experience creates competitive differentiation. And that's what we are here to enable. >> You guys talk a lot about the HR onboarding experience. You got to get a desk. You got to get a badge. You got to sign up on this portal, that portal, and it's just a slow and somewhat painful, not really productive period in an employee's life. When I think, and you and I talked about this at headquarters, when I think about how I interact with Netflix, and Fred Luddy talks about this all the time, bringing that consumer experience to the enterprise. I don't talk to Netflix's sales department or marketing department or customer service department. I just interact with Netflix. I'd like to interact with HR the same way. I believe that's what you're trying to do. Is that a reality, can that happen in our lifetimes? Is it happening today? >> Absolutely, why not, right? We've got the technology, for sure. It is a very well-known pain point. Everybody knows this pain exists. I think where we are in terms of maturity of the market for these types of solutions is trying to figure out, well, who owns this problem. So this is a very distributed problem. It's across the enterprise. And anything across the enterprise, we at ServiceNow do very well. But a lot of times, it also means that we have to go and make the case, or help our champions make the case, with many departments. So, in this case, you need to get IT on board, and facilities on board. Obviously, HR has to be on board. And there's a number of departments that have to come together. And so we still have to figure out who owns this problem, who owns the budget, how are we planning to roll this out, can we do this in a phased manner. And that's where we are today in terms of its maturity, but at this point, we launched the product last year, right? We had customers that were creating bespoke solutions before that. We productized it, we launched Enterprise Onboarding and Transitions last year at Knowledge, in fact. And we've seen, we're starting to see, the early customers starting to implement, based again on the foundation of case and knowledge management. You know, start there, get your unstructured interactions more structured, and then eventually start to automate the things that are going to make that difference, especially when they start to cut across these multiple departments.
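As a rough sketch of what that cross-department fan-out can look like, here is a toy model of a single onboarding case spawning tasks for IT, facilities, and HR. To be clear, this is not ServiceNow's API; every class, field, and task name below is invented for illustration.

```python
# Hypothetical sketch of a cross-department onboarding workflow: one
# case fans out into per-department tasks and is only complete when
# every department closes its task. Not ServiceNow's actual API.
from dataclasses import dataclass, field

@dataclass
class OnboardingCase:
    employee: str
    tasks: list = field(default_factory=list)

    def open_task(self, department: str, description: str) -> dict:
        task = {"department": department, "description": description, "done": False}
        self.tasks.append(task)
        return task

    def is_complete(self) -> bool:
        return all(t["done"] for t in self.tasks)

def start_onboarding(employee: str) -> OnboardingCase:
    # A single entry point opens work for each department, so the new
    # hire (and HR) never has to chase IT or facilities separately.
    case = OnboardingCase(employee)
    case.open_task("IT", "Provision laptop and accounts")
    case.open_task("Facilities", "Assign desk and badge")
    case.open_task("HR", "Collect signed policy documents")
    return case

case = start_onboarding("new.hire@example.com")
case.tasks[0]["done"] = True   # IT finishes its piece
print(case.is_complete())      # False until every department closes its task
```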
>> I know we ask you this all the time, but for our viewers who aren't as familiar with what you guys are doing in HR, if I just brought in a Workday or a SuccessFactors, or I'm a PeopleSoft customer, why do I need ServiceNow? What do you guys do? We talked to John Donahoe about how you guys are a platform of platforms, but explain that, please. >> Sure, absolutely, and maybe I'll go back to the Google Maps metaphor. The way I think about this is, in my mind, you can think about Workday as a highway system. You have to drive on it. Yes, it's got signage, and you need to know what exits to take, right? So, to me, Workday has a good user interface, if you will. But a lot of times what employees are looking for is, where do I go? Where do I begin? What's the policy? What's the process? And so that's where the Google Maps equivalent comes in. And these two go hand in hand. And they're extremely complementary. And you just cannot imagine going out there without a maps application these days. And in fact, where I feel that things have truly transformed is that this is not just for when I don't know the way to get somewhere. You're using this for every trip now. When I go home every day after work, I'm using Google Maps. Whether I know it or not, it turns on and it tells me, oh, you're headed home, and it's going to take you 35 minutes to get home. And I didn't ask it anything. But I'm using Google Maps every day for a route that is well-traveled, because I know that if there is a traffic backup, it's going to let me know. >> Dave: Police ahead. (laughing) Or whatever. >> Yeah, and so I think that's where we are different from systems that are extremely important for, you know, managing our core data, core business processes, talent management, workforce management. I mean, there are systems that do that very effectively, but we are really trying to provide that guidance, especially when what you're trying to get done involves multiple departments, and a number of times, multiple systems, even within HR. >> So when you're talking to customers, what are they telling you about their biggest pain points? And then, if you have any sort of overarching advice for these HR practitioners, what is it? >> That's a good question. So, we engage with customers typically three different ways. They're all related, but typically our engagement starts off either because we are talking to someone that runs shared services, and what they're trying to do is bring order to how employees are interacting with HR. And typically they will go through some sort of organizational change. They'll set up a shared services organization, which basically means that becomes a single entry point for employees to go to, and, in that case, really, the pain point is too many unstructured interactions, and they may have no technology or they may have technology that is inadequate. And we bring a method to that madness, if you will. We help them structure those interactions and help them provide the right type of support to these employees. The other way we engage with customers is they're going through a full-blown HR transformation, and they've decided that technology is going to be a big piece of their transformation. And as they are looking to move everything to the cloud, for example, we start to talk about how the interaction aspect of employees still needs to be managed. And you cannot ignore that. You cannot just move your systems to the cloud and then just hope that employees will figure this out themselves, right? Because, again, it's not about the user interface, it's about the entire end-to-end experience. So that's the other pain point that we help solve for them: in the context of a cloud-based application or set of applications, how do you make sure that they know what they need to do.
And then the third piece is, it's usually a CHRO-type conversation, where they are really starting to make this association between happy customers and happy employees, and so they have several strategic initiatives that are at a C Suite level, trying to find out, okay, what does that really mean, and they're trying to drive great employee experiences. And so they're working top down. As part of that, they may end up with a shared services setup to manage that. They may end up moving systems to the cloud. But it's a different angle, and they are really thinking about the holistic end-to-end experience for that employee, and what they're going to feel, how does that impact employment brand, so kind of higher-order benefits that they're trying to accomplish. But ultimately, we make the HR department much more effective and efficient, and we make it very, very easy for the employee. That's what we end up doing. >> You guys completed some research with chief human resource officers. >> Deepak: Yeah, the CHRO Point of View Study. >> 500, I think, in the study? >> Deepak: Yeah, absolutely. >> Tell us about the study, the findings, what'd you learn? >> Yeah, so this was a study that was done recently. 500 CHROs and HR leaders that we studied. I think the number one thing that popped out for me was that the CHROs are thinking of their role as not someone that is managing talent management processes and people data and things like that. That's obviously very important and that's been the focus. But as those disciplines mature, and the technologies that manage them mature, what's happening is that they're focusing on how to create these great experiences. How to leverage digital technologies and create what we call consumerized experiences, especially during these moments that matter. So when they are thinking about their employee population, they are looking at where do these breakdowns happen? This is where, you know, things are likely to snap, quite literally, right? Employees can get angry, frustrated, overwhelmed, stressed out. This is very, you know, intrinsic, it's from the gut. And so, that's where your employment brand starts to take a dip, and that ends up on Glassdoor. That will end up with those employees speaking with a friend, and that starts to directly impact employment brand. So they're starting to focus on these moments that matter. And then I think what they're trying to do is also develop digital proficiency. One of the things that came out of the study is how CHROs can be the change agent when it comes to digital transformation, so that this just doesn't have to come from IT, doesn't have to come from a different line of business. HR can manage and guide their own destiny. Obviously, IT is going to be involved. But how can HR be more and more in the driver's seat, become more digitally proficient? And we see that in our customer base. We've got a number of customers where HR deployed ServiceNow first, and really set the bar for the other departments to follow. But ultimately, we absolutely believe that every department should be on the same platform, because that's where you get the economies of scope, if you will, in terms of solutions to these problems. >> What can you tell us about your business? How are you guys doing? Couple years now since you've launched this product. How's it going? >> Well, it couldn't be better.
As John mentioned on our earnings call last quarter, it was last month actually, we had six million dollar plus ACV deals, just for HR, right? And that's just in one quarter, so that starts to show you how the business is really picking up. We have hundreds of customers now using us for HR. 80% of our customer base is live. In fact, we had a customer in my keynote, they did a global rollout, and they took 14 weeks to complete that global rollout. So the time to value is extremely fast, and that's one of the things that really makes it, you know, a solution that our buyers are attracted to. But, you know, the business is doing very well. Lots of interest from organizations of all sizes, really. You know, you look at thousand-person organizations; we are selling to hundred-thousand-person organizations. We're selling globally, in all geographies. We are selling to all verticals. And, you know, it's just great to see the business take off. >> Rebecca: Great, well, Deepak, thanks so much for coming on theCUBE. >> Thank you so much, and love being here. And thank you for having me. >> Dave: Awesome seeing you again, thanks. >> Rebecca: We can't wait to see you again next year. >> Likewise, thank you. >> I'm Rebecca Knight for Dave Vellante. We will have more just after this. (upbeat music)
Stefan Renner, Veeam & Darren Williams, Cisco | Cisco Live EU 2018
>> Announcer: From Barcelona, Spain, it's theCUBE covering Cisco Live 2018. Brought to you by Cisco, Veeam and theCUBE's ecosystem partners. >> Here in Barcelona, Spain, it's theCUBE's exclusive coverage of Cisco Live 2018 in Europe. I'm John Furrier, co-host of theCUBE, with my partner in crime this week, Stu Miniman, Senior Analyst at Wikibon, also co-host of many events across the world in terms of networking, storage, Cloud, you name it. Stu is on the desk with me. Stu, thanks. Nice seeing you. Stefan Renner, Technical Director of Global Alliances at Veeam Software, is with us, along with Darren Williams, @MrHyperFlex, that's his Twitter handle, go check him out, from HyperFlex at Cisco. Guys, welcome to theCUBE. >> Thank you. >> Also love the Twitter handle. >> Darren: I live the brand. >> You live the brand. I mean, that's got some longevity to it, it's evergreen. So congratulations on that. You guys are together with Cisco and Veeam, what's the story? What's going on in Europe with Cisco and Veeam? >> I would say there is a lot of stuff going on between Cisco and Veeam, especially around the Hyperflex story, which obviously is the topic of this session, right? So having integration with Hyperflex, having a good go-to-market, having a good relationship between the two companies. We just joked about how often we've been in front of cameras talking about this exact same topic. So that shows that the relationship between the two of us is really moving forward and in good shape. >> I think we're in good shape in terms of, you think about not just my product, Hyperflex, but you look at what Veeam can do for the rest of Cisco's data center products, and be that backup, safer hands around what we need in terms of that data protection layer. But also then, what we can add in terms of that target, to be the server of choice for backups, so you get the benefits of the speed and performance, and more importantly, you get quicker restores. Because that's the important bit, you need to be able to do the quick restore. >> Yeah, we usually talk about availability, right? We don't talk about backups or recovery, even if recovery is maybe the most important part of availability; still, we talk more about availability than maybe anything else. The good thing about Cisco is that they actually can deliver what we need in terms of performance, in terms of capacity, in terms of compute resources. So yeah, that's a real benefit. >> It's such an interesting time. I mean, we look back at history, go back 10 years ago, maybe, or more: backup and recovery, that's like, "Oh, we forgot to talk about that in our RFP." Kind of bolted on, kind of retrofitted in. But now we've seen it come to the main center. But more importantly, with AI and Cloud, and all the action happening with DevOps on premises, you hear CIOs and CXOs and developers saying, "We're data driven." >> Yeah. >> Okay, so if you're data driven, you have to be data protection driven too. So those things go hand in hand. So the question for you guys is, how does a data-driven organization, whether it's in the data center, all the way up to the business units or the business processes, get data protection built in? How do they design in, from day one, a data protection system up and down the stack? >> Yeah, so maybe I'll start to answer that question. I think, when I'm going to customers, and I fully agree with what you just said, most customers 10 years ago were focusing on getting their platforms and systems up and running. Data protection used to be an isolated project, right?
Nowadays, when I go to customers, I try to convince them to include data protection in every project they do in the data center, because at the end, data protection is one of the core elements. >> So designing it in early, at the front end? >> I'd say whenever you go about getting a new Hyperflex system, or whenever you talk about replacing your existing environment, whatever you do, right, just look into data protection, look into your availability story. Because right now, and you mentioned that, it's about data services, right? We don't really talk about restoring a VM, we don't restore the single file. It's about the customer wanting data availability in terms of service availability. And that includes more than just the VM, it includes more than just the single thing, right? >> Yeah. So they need to include data protection, and the design of that, in the whole project, from the beginning. >> And your point? >> Yeah, we look at it from a similar angle, in terms of where you've got changes happening in the way people are looking at how they want to design their applications, where they want their data to live. And that's the whole messaging around 3.0: it's that multi-Cloud-ready platform. Being able to think about an application and go, "Do I want to design it in the public cloud and house it privately, or vice versa? Do I want to house the data of the application in a private location and the actual application in public?" Having that be transparent to a user in terms of the way they design and position it. But also, as we look at other applications, not all people on this journey are going to go, "We're going to put everything in the Cloud." They're going to look at maybe having a little bit in the Cloud, and a little bit of the traditional apps we need to manage and protect. And it's all about that 3.0: we delivered the pre-multi-Cloud offering around Hyperconvergence, and we've now brought the multi-Cloud element. It's giving you the choice of where you want to position things, where you want to house things, how you want to design things, and keeping it nice and simple for customers, with the agility and performance. >> Darren, some really interesting points that you just had there. When I think back to a few years ago, Hyperconverged was pretty strong in North America. But it was project based; it was like, let's take VDI, some virtualized environment. It wasn't a Cloud discussion. >> Darren: Correct. >> Take us inside what you're seeing in Europe here, because today Hyperconverged is a lot about Cloud, that kind of hybrid or multi-Cloud environment, so what are you hearing from your customers? >> Absolutely, and I think if you look at what's happened in terms of Hyperconvergence up to this point, it's the initial building block of this multi-Cloud. And we're seeing more and more customers now; I think the latest IDC survey found that 87% of all customers have a multi-Cloud strategy. And we're seeing now more of the ability to think of Hyperconvergence as that multi-Cloud strategy, and to have that simplicity that people have had in terms of the initial thought around a simple application, how they can collapse the layers; they can now utilize that experience in the multi-Cloud experience. And we're seeing more and more of that. We've now got 2500 users around the world on Hyperflex, about 700-800 of them in EMEA, and the majority of those are utilizing it as a private Cloud experience.
They're getting the benefits of what they've had in the Cloud, and getting away from the sovereignty issues and the shadow IT issues that they all face. They can now bring it back into their own data center. They can start small. They can spin out applications very quickly. They're getting the benefit of that Cloud message, but locally now. >> And I think that perfectly aligns with the Veeam story, because as you know we are also focusing on the Cloud. We recently made some changes and also did some acquisitions around the Cloud, so we're also moving forward in the Cloud story and the hybrid cloud area. And that's more or less what Cisco's multi-Cloud story is also about, right? And I think one thing we should also mention here, coming back a bit to how to implement and how to design such solutions, having more of a broad view on all the projects: I think one important thing for customers is the CVDs Cisco has, right? And we do have a CVD available between Veeam and Cisco on the data protection layer. So we try to make it really easy for customers and for partners to design, implement, and actually make the right decisions for those projects. >> Stefan, at VeeamON, of course, there were a lot of partners and a lot of talk about the multi-Cloud. Of course Veeam has a long history with VMware, but why don't you talk about Microsoft? I believe there's some things you've been doing lately with Hyper-V and the like, what's the update? >> Yeah, so obviously with Hyperflex there is Hyper-V coming, right? That's one of the bigger things coming to Hyperflex. Now for us, when we started to talk with Cisco, Cisco actually told us that Hyper-V is next, in 3.0. We said that's fine for us, because as I said, we have been dealing with Hyper-V like we have with VMware for a couple of years. So there is no big difference in terms of features and what we can do with Hyper-V. On the Microsoft side, obviously, it's around Azure Stack, which also is a big story with Cisco and Veeam, because there is an Azure Stack solution, and so we try to get Azure Stack fully integrated in the Veeam portfolio. And it's about effort, right? As we just talked about, making this Cloud journey even easier for the customer, making sure we have data protection forever, and making sure we can actually use our Cloud solutions to provide the full experience in the cloud. >> So a question our European audience is asking, I was just looking at some Twitter tweets, getting in some feedback: "Ask the GDPR question." Which is basically code words for the complication around data protection; as we say, you get bitten in the butt if you don't prepare. And this is one of those things where, I mean literally, there's so much data out there, people can't understand their own tables. I mean, if you have accounts, how do I know a user uses a certain name in this one, and I've got a certain name in this database? It's just a nightmare to even understand what data you have, never mind taking someone out of a database. >> Yeah. >> So, the challenges are massive. >> Yep. >> This is coming down, and it really highlights the bigger trend: what do I do with the data, what is my protection, what's my recovery, how do I engage in real time with the GDPR issue? Talk about the GDPR issue, and then what it really is going to mean for customers going forward. >> Well, I think if you think about GDPR, people have got the understanding that it's just an EMEA thing; it's not. It's a worldwide thing. Any data that relates to a European citizen, anywhere in the world, is covered under the GDPR.
So you've got to think about the multinationals we work with: they have to have GDPR in their thoughts, even if they're not based in EMEA. They may house data based around a European citizen. So it's a massive thing. Now, not one person or one organization can fix GDPR. We're all part of a bigger framework. So if you look at the Hyperflex offering: having self-encrypting drives, having good data protection and replication of the data so it's protected. That protects the actual content of a record, but it doesn't solve everything around GDPR. There's no one organization that can do that. It's about having that framework: if you make the right decisions around the architecture and the data protection, you'll get there in terms of the protection. >> Well, I mean, I'm just going to rant here and say whoever came up with GDPR doesn't know anything about databases, okay. >> Darren: Yeah. >> I mean, I get the concept, but just think about how hard it is to deal with unstructured data, and structured data in and of itself, within a company. Never mind inside a company, what's happening externally; it is a technical nightmare. And so, yeah, just hand waving, "Hey, someone came to your website." Well, did they come in anonymously, did they log in, which identity did they log in on? I mean, it's a nightmare. This is a huge problem. What do customers do? >> I think if you talk about GDPR, it's first of all not about a single solution, right? It's not an issue of just one company, or one vendor, one solution. It goes across different databases, different applications, different software. So, as you said, with database solutions you need to delete maybe a single table entry, which is almost impossible right now. Especially if that's in a backup, right? How are you going to do that? I think between Cisco and us, as he mentioned, one important part of GDPR is data protection itself. So the customers need to make sure they can actually promise, and can show to the government, that they have proper data protection in place, so they can showcase: what does my DR plan look like? How do I recover? What is my RPO? So we can already solve those issues.
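On the point raised above, deleting a single person's entry when it also lives in a backup, one approach that gets discussed in the industry is crypto-shredding: encrypt each data subject's records under their own key, keep the keys outside the backups, and "delete" by destroying the key. Neither vendor describes this here, so treat the sketch below as a hypothetical illustration only.

```python
# Illustrative crypto-shredding sketch for the "delete one person from
# an immutable backup" problem. A commonly discussed approach, not a
# Cisco or Veeam feature; all names here are invented.
from cryptography.fernet import Fernet

class ShreddableStore:
    def __init__(self):
        self.keys = {}    # subject_id -> key, kept OUTSIDE the backups
        self.backup = {}  # subject_id -> ciphertext, immutable, widely copied

    def write(self, subject_id: str, record: bytes):
        key = self.keys.setdefault(subject_id, Fernet.generate_key())
        self.backup[subject_id] = Fernet(key).encrypt(record)

    def read(self, subject_id: str) -> bytes:
        return Fernet(self.keys[subject_id]).decrypt(self.backup[subject_id])

    def erase(self, subject_id: str):
        # Destroying the key renders every backed-up copy unreadable,
        # without having to rewrite the backups themselves.
        del self.keys[subject_id]

store = ShreddableStore()
store.write("user-42", b"name=Jane; email=jane@example.com")
store.erase("user-42")
# store.read("user-42") now raises KeyError: the ciphertext survives in
# the backup, but without the key it is effectively deleted.
```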
>> It changes the game, because it turns you from an insurance policy into something proactive; in order to do data protection you actually have to know what the data is. So it kind of creates an opportunity to say, hey, we're going to start thinking about kind of a new e-discovery model. >> If you look at 3.0, the multi-Cloud platform, we were discussing how Hyperconvergence started very small, in certain apps. But when you actually then expand that out into the multi-Cloud, security is a major pillar. And you've got to have the security elements, and Cisco has some great security offerings in the data center and outside of the data center. They all form part of that GDPR message. But it's been baked into multi-Cloud 3.0 as a key component to allow customers that confidence. >> It's going to be a Hyperconvergence of databases. So this is coming. >> Darren: Yeah. >> So this is going to force it. I think the compliance is going to be more a shot across the bow, if you will. I don't know how hardcore they're going to be about enforcing it. >> It's going to be interesting, the first one. Because at the moment I think a lot of customers are thinking, "Well, we'll wait till we see how big the fines are, and then we'll decide." >> They're going to create shell corporations in the Cayman Islands. (laughter) >> Alright, so we've talked a little bit about some of the headwinds we're facing in IT. Talk about the tailwinds. A lot of things in Hyperflex 3.0, you've got 700-800 customers; what's going to drive adoption, get that into thousands of customers here in 2018? >> So I think it's the simplicity message. Customers want ease of use of technology. They want to get away from what they've had before, where they've had tough times standing up applications, where they've had to invest time around different skill sets for the infrastructure, be it networking, be it storage, be it compute, with three teams leaning against each other, and change windows. So the simplicity message of Hyperflex is you can have a three-node cluster up and running in 34 minutes, including the network. We're the only ones that incorporate the network into the solution, and we do it for good reason: because then we can get predictability in performance, and we can grow the solution very, very easily. And that's the whole point of what they're doing: they want to be able to start small and add more nodes when required, around what applications they're going to deploy. Our tagline is "any application, anywhere" now, in either a private location or into that multi-Cloud location. It gives customers choice, and I think as we start seeing more and more customers (700 in just under two years is a phenomenal amount in EMEA, and 2500 worldwide) we've had some great traction. And it's just going to get faster and faster. >> Yeah, I think a lot of customers are obviously talking about moving to the Cloud completely, or at least the majority of the data. So for the customers where the data center stays, and I talked with some customers today, they told me: "For us right now, we can't focus anymore on the data center itself. We have much more difficult and more important topics to talk about and to cover in our IT business than the basic data center itself." That includes compute, that includes digitalization. So it's great to hear you can actually set up a Hyperflex system, no matter if that's Hyper-V or VMware or whatever, in less than an hour, right? And if I tell you now that adding Veeam on top, to provide the availability for the Hyperflex environment, also takes less than an hour, then if you know how to configure it you can be done in a couple of hours, and you have more or less the whole data center set up. >> You bring up a really good point. What are customers concerned about? I have to worry about my application portfolio, I have my security issue, my whole Cloud strategy piece. So, if the infrastructure piece is just invisible and I don't have to touch it, tweak it, and do that, I'm going to have time to actually grow my business. >> The more integrated it is, the easier it is to set up, and to maintain and troubleshoot, by the way; that's also an important thing, right? What if it doesn't work? If there is a consistent layer, a consistent way to get all this information together to get the troubleshooting done, the better it is for our customers. Because again, they don't want to care anymore about what's happening in the back end.
Customers get fed up if when they have an issue they have to go and roll the logs up into Tac, and then go and FTP them. They get away from that, they don't need to do that in Insight. And it's all about, we're talking about the deployment of technology, well one of the fist benefits of Insight is Hyperflex. We can roll out sites without even visiting them. You just do a Cloud deployment, and a Cloud management, and it's job done. >> And this is the whole point we were kind of getting at earlier, connect back to the compliance issue, these agile like things are happening; it's throwing off data too. So now you got to organize the data, you can't protect what you don't understand. >> Correct. >> I mean that is ultimately the bottom line for what's happening here. >> Yeah, you can't protect what you don't understand, I think that's a good conclusion of the whole thing. And I think for us >> By the way when you guys use that tagline I want royalties. But it's true. (laughter) We'll get back to you on that. No, but this is a big problem. Protection is inherently assuming you know the data is. >> Stefan: Yeah. >> Darren: Yeah. >> There it is. >> That's for sure the case, and one thing we worked on and, you know, we announced it a couple of months ago, was the Veeam Ability Orchestrator, which is another layer on top of it. So he just talked about how they can deploy within the site, multiple sites of Hyperflex very easily. And for us it's about, you know, getting the customer an easy solution with all the successful recovery and failovers in areas across the data centers with the Availability Orchestrator. >> Data is the competitive advantage, data is messy if you don't control it and reign it in, of course theCUBE is doing their part and bringing the data to you guys here in theCUBE with Veeam and Cisco partnership. I'm John Furrier, Stu Miniman breaking it down here at Cisco Live in Europe 2018. Live coverage with theCUBE. Be back with more after this short break. (techno music)
Craig Nunes, Datrium & Sazzala Reddy, Datrium | AWS re:Invent
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017. Presented by AWS, Intel, and our ecosystem of partners. (soft electronic music) >> Back on theCUBE, we are live here in Las Vegas at re:Invent, AWS putting on a show for about 45,000 of its closest friends. You might hear some of the cheering behind us. It's happy hour here, a lot of happy folks having a good time. John Walls along with Justin Warren, and we're now joined by a couple of fellows from Datrium. We have Sazzala Reddy, who is the co-founder of Datrium, and Craig Nunes, who is VP of marketing. Gentlemen, thanks for being with us here on theCUBE. We appreciate the time. >> Thanks for having us here. >> First off, let's talk about Datrium, for those who are watching who might not be familiar with your particular offering. If you would, Sazzala, give us a little thumbnail of what you guys are doing. >> Yeah, so we're kind of a new breed, unifying compute, primary storage, and backup, all built into the same product, so that it becomes convenient for the end user and they don't have to manage multiple pieces of infrastructure. It's a unified way of managing it. It's a new way of doing convergence, the next evolution of hyperconvergence. And now, at this particular AWS event, we are here to announce that our backup extends beyond the data center, to having it as a service running in Amazon. That's our new offering today, as of this event. >> Yeah, so a little bit more about the announcement, then, because this is, again, why you're here, in terms of becoming even more enjoined with AWS, that offering. I mean, if you would, Craig, run through that a little bit, and the prominence of that announcement, why you think this is a significant moment for you all. >> So we have seen, first of all, a huge attraction with our customers to bring the backup or data protection function into their Tier 1 environment. One individual can do it all, manage it all. At the same time, we talk to a lot of folks who've got an AWS strategy, and they might even have some developers doing stuff with AWS, but they haven't, broadly, been able to take full advantage of backup and DR in the cloud, because when they do the math, the numbers just haven't been there in terms of the economics of that. We felt like we could do something about that with some innovative technology that Sazzala and his guys put together, around what we call Global Cloud De-duplication. Might want to talk a little bit about that. >> Yeah, so before Datrium I was at a company called Data Domain, you've probably heard of it. Being there, we were one of the pioneers in doing global dedup, so we learned a few things. And the other thing I learned about being in that company was that many people ignore backup. Backup seems to be one of those things like, if you have a car, you don't think of the insurance you pay for it, but it's an important part. Your family jewels are there; you've got to take care of them too. And if you look at the backup administrators, their life is not very happy, because everybody ignores them. But we didn't want to do that. >> We were just talking about that in our last segment, too, weren't we, buddy? Nobody wants to be the backup guy, right? >> Actually, we want to solve that problem.
>> Exactly, exactly, so we want to solve that problem very nicely which is why we have converged backup into our product because it's not another thing just to the side, it is your main family jewels so that's what we've tried to do. But to make it really work well you must have the fundamentals of storage and that's a little bit of a inf-- Like, you know, the details but details do matter in how we do this. So dedupe is a old thing, but still a lot of people don't have it. If you don't have dedupe and compression, all the other data reduction features, backup, you really can't do backup. And then how do you extend it beyond the data center. So if you're gonna do like tape, you do fulls and every weeek in incrementals. If you do the same thing to the cloud, you know the expense of that story, it's a thing. So it's not really practical anymore. What we wanted to do was bring two things to the cloud. One is that we know that AWS is expensive and secondly, AWS is hard to use. It's like Lego pieces, right? If you're a developer, you can put it together, but if I want to just use it, consume it, how do you bring that to the market? So we did two things. One is that we extended the global dedupe all the way to the cloud so everything ends up there. It's all globally deduped. We got like five to like 15X dedupe over there. If you have multiple sites going into a offering, it all gets deduped. Very convenient. Over the wire transfer is very, very convenient, very, very cheap. And also the other thing we have done is that we made it as a SAS offering. See the world is moving towards a SAS offering so in the near future you'll see some of our new announcements, which I can't talk about it right now, it's still secret, but that's what the conception model is gonna be. It's a SAS model. There is developers who want Lego, right, for Amazon, but there is a lot of other people who want to run a business, not just build pieces so for that we want to build as a SAS offerings. Convenient to use, it runs in the cloud, don't have to manage it, don't have to run all these things in your data center. So this backup offering is our first entry, a backup as a service. It's very unique. It's all this global dedupe and it's a service, nothing to do. You don't have to upgrade it. You don't have to manage it. You just have to use it, consume it as a product. >> The other thing that I was gonna say is when we've introduced this to our customers, pretty much everyone has said, "Yeah, we have a strategy "to incorporate public cloud in what we're doing," but almost to an individual none of them had done it yet. I mean, certain people in their company may have accounts, but for a lot of these guys it was their first ever engagement with AWS and so for them, they understood our product. They just wanted that experience to just kind of extend to AWS and not have to figure out how much EC-2, how much Dynamo DB, how much S3 bucket size, whatever. They didn't want any of that. Just help me do what I need to do on your platform leveraging public cloud. >> Yeah, they want to run a business, not manage Lego pieces. >> John: They don't care how the watch works, just what time is it? >> Exactly. >> Yeah, yeah, right. >> And we are very good at what we do. >> So you've taken compute and storage and you've convert, you put all of that together, and you've added backup and you're basically making it a one-stop shop for people to do something so as you say, I want to tell the time, just give me a watch. 
You've added this remote backup capability all in there. It's like, what's left for me to do? Do I just buy some of your stuff and say, "Okay, I'm done"? There isn't anything else. >> Actually, we have customers who haven't talked to us for six months. We call them back: "Are you okay?" They're like, "Nothing to do. I forgot about it because it just works and it runs." That's what you get with us. >> John: Don't tell my boss. >> Yeah, sure, there is that. I mean, I think there are other things to do. >> John: But still, yeah. >> Other pieces to develop which don't work as well. So they were managing those. >> And by the way, the strategic stuff that's on their plate that they've never been able to get to in the past, maybe because they're managing LUNs on the storage side or dealing with backup stuff that would give them headaches: that is out, and they can focus on things that accelerate the business, drive revenue, top line, and make IT a hero again. >> Yeah, making something that's simple and easy to use like that takes a lot of engineering, and in a lot of ways people underestimate how much work goes into making something easy to use. Now, you've been working on this for a little while, and people might be familiar with hyperconvergence, but you guys are doing things in a slightly different way, which is clearly a much better way of doing things. So could you maybe explain a little bit more about how that global dedupe works in conjunction with the stuff that's onsite, which makes it a really good fit to go and expand out into the cloud? >> Sure, so firstly we believe in the philosophy of not one click but zero clicks. One click is too hard. You've got to read the manual to know what the one click is. So that's where our design thinking has come from. If we can eliminate that click, that's even better. Why give a choice to the customer, because it means we have not thought about it? That's kind of the design philosophy of our company. For the first three years, we didn't ship our product, because we spent the time to build the fundamentals of the product. We can't build this later on. Like global dedupe: it's harder to build later on. It's just not possible. So global dedupe is this concept that if something is there already, you can avoid sending it there. You negotiate from Site A to Site B, or whatever it is. It's a multi-cloud world. Wherever it is, you can negotiate and say, "Do you have this?" Yes or no. If you don't have it, then they send you the copy and keep it there, so you tend to have this massive reduction of data. And remember, it's not just that global dedupe is going to save you cost. Ultimately, backup is about recovery. You also need a sufficient amount of tools and workflows to be able to recover what you want efficiently, and ultimately, backup is only useful if you can recover it. If you don't check it, and you ever have a problem at the time of recovery, you're going to lose your job. So we also do the other thing: okay, we saved your costs, but we also check it regularly to make sure that the backup data is recoverable when you need to recover it. That's also an important aspect of it. So global dedupe is like blockchain. Think of it like blockchain. How do you know, for example, if you have a piece of data here and you send it somewhere else, how do you know that it all went there? Somebody said so, but how do you verify that? Fundamentally, as an architecture, our global dedupe is like blockchain a little bit.
We know that they sent all these pieces over there. We can verify at a high level: yes, this is the signature of the data, it's all there, and say, "Okay, they're good." So now you can send the data anywhere you want, and you can be sure that the data you send is what you're supposed to send. >> And Justin, you mentioned the difference between what we're doing and hyperconverged. If you think of hyperconverged, it has brought compute, storage, and network all into the box. Our approach is different. It's more like the modern hyperscalers, in that we split compute and active data from durable capacity. >> I like to think of it as taking all the great advantages that you got from hyperconverged but getting rid of some of the limitations: we can scale compute and storage independently of each other, but we still get all the great benefits of an integrated platform. >> Yeah, and the interesting proof point is when we did the cloud-native port, not a lick of code was changed in the underlying file system that users don't ever see. That just kind of shows you that that kind of approach works in a world that's got to embrace public cloud as part of your IT strategy. >> Well, before we say goodbye, I just want to get your take on the show in general, knowing that you both probably have some history with what AWS has been up to in the past. But this is not the same as in the past; at least, that's what we're hearing from people. What's your take on what you're seeing here, what you're feeling here? >> Fair enough. So I'm a computer science kind of guy, so I kind of enjoy the show because it's all familiar stuff a little bit. What they've done is an amazing job. It's an amazing business, to be honest, how they've built all these pieces, and they've executed pretty well. Their service model is pretty good. I mean, sometimes things don't work as well, the pieces, but they're willing to spend the time to work with you, which to me is pretty awesome. They're willing to have the service level agreement, to call you, and they're willing to forgive you. They're willing to do all these things for you. This is why people like Amazon: because of the service model. So they have a lot of building blocks. I'm talking to people, I'm going to some of the sessions, and what I found is that there are two kinds of people. There are the developers. They love some of the things here because it's a building block. I mean, Lego. Who doesn't want to play Legos? >> You love your Legos, don't you, Sazzala? Yeah, you do. >> But I think a lot of companies don't have the time or luxury to spend on this. They want simpler, higher-level constructs: SaaS applications, for example. You can build it in Amazon, so the SaaS people who are building the product can build it using Lego pieces, but the higher-level businesses want to use the SaaS model. They want to use a simpler model. So that's the difference between VMware and Amazon. I think there are a lot of developers here. At VMware it was mostly, I think, the IT folks, because it's about operating the business, right? So I think it's interesting to see, as the future unfolds, where that shift goes. Is everybody going to be a developer? I don't think so. It's very complicated. As I've used some of the Amazon APIs, they're actually not trivial. You have to think about what happens if it fails, what happens if this dies. I mean, they're thinking about all these things. It's a pretty complicated model.
>> Craig: Well, it's a good formula, though, and it's working for them, obviously. >> Yeah, totally. >> It's quite the show here. We just had Black Friday, Cyber Monday. You look around here, this is like AWS Tuesday, Wednesday, Thursday. I mean, everyone is here, everyone is kind of shopping the new tech that's integrated with their favorite public cloud. It's a huge mixer of technology, and AWS, after all, probably learned a lot from the e-commerce side, the storefront, and they have kind of worked that into their show and their partnership, bringing in companies like Datrium to really leverage their infrastructure as a service. It's awesome. It's great for us. >> Well, it's been a great show, and thanks, we appreciate the time here. Good luck with the Legos. >> Sazzala: Thank you. >> No, no, no, all right. Back with more live coverage. We are in Las Vegas. We'll continue; we're almost coming down the home stretch of our live coverage here on the Cube. Back with a little bit more in just a moment. (electronic music)
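Sazzala's sketch of the dedupe negotiation ("Do you have this?") and the blockchain-like signature check maps onto a simple content-addressed replication loop. The Python below is an illustrative sketch only, not Datrium's implementation; the `have_block`, `put_block`, and `manifest_digest` calls on the remote site are hypothetical endpoint names invented for the example:

```python
import hashlib

def block_id(block: bytes) -> str:
    # Content address: identical blocks hash to the same ID at every site.
    return hashlib.sha256(block).hexdigest()

def replicate(blocks, remote):
    """Send only the blocks the remote site doesn't already have."""
    ids, sent = [], 0
    for block in blocks:
        bid = block_id(block)
        ids.append(bid)
        if not remote.have_block(bid):    # the "Do you have this?" negotiation
            remote.put_block(bid, block)  # transfer happens only on a miss
            sent += 1
    # Blockchain-style verification: a digest over the ordered block IDs
    # proves the full set landed intact, without re-sending any data.
    expected = hashlib.sha256("".join(ids).encode()).hexdigest()
    if remote.manifest_digest(ids) != expected:
        raise RuntimeError("replica incomplete or corrupt")
    return sent
```

When most blocks already exist at the destination, `sent` stays small, which is where an over-the-wire reduction on the order of the 5-15X ratio Sazzala quotes would come from.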
Sheila FitzPatrick, NetApp & Paul Stringfellow, Gardner Systems | NetApp Insight Berlin 2017
>> Announcer: Live from Berlin, Germany, it's theCUBE, covering NetApp Insight 2017. Brought to you by NetApp. (upbeat music) >> Welcome back to theCUBE's live coverage of NetApp Insight 2017, here in Berlin, Germany. I'm your host, Rebecca Knight, along with my co-host, Peter Burris. We are joined by Sheila FitzPatrick, she is the Chief Privacy Officer of NetApp, and Paul Stringfellow, who is a Technical Director at Gardner Systems. Sheila, Paul, thanks so much for joining us. >> Thank you. >> Thank you for inviting us. >> So, I want to talk about data privacy. The General Data Protection Regulation, the EU's forthcoming law, GDPR, is going to take effect in May of next year. It represents a huge, fundamental change in the way that companies use data. Can you just set the scene for our viewers and explain what these changes mean? >> Sure, happy to. As you said, GDPR is the newest regulation. It will replace the current EU directive and goes into effect May 25th of 2018. It has some fundamental changes that are massively different from any other data privacy laws you've ever seen. First and foremost, it is a legal, compliance, and business issue as opposed to a technology issue. It's also the first extraterritorial regulation, meaning it will apply to any organization anywhere in the world, regardless of whether or not they have a presence in Europe. But if they provide goods and services to an EU resident, or they have a website that EU residents would go to to enter data, they are going to have to comply with GDPR, and that is a massive change for companies. Not to mention the sanctions: the sanctions can be equal to 20 million euros or 4% of a company's annual global turnover. Pretty phenomenal sanctions. There are a lot of fundamental changes, but those are probably the biggest right there. >> What are some of the biggest challenges that companies are... I mean, you talked about the threat of sanctions and the massive implications. What do companies need to do to prepare? >> To really prepare: as I'm talking to customers, unfortunately a lot of companies are just thinking about security. They're thinking, well, as long as we have encryption, as long as we have tokenization, as long as we're locking down that data, we're going to be okay. I'm saying, no. It first and foremost starts with building that legal compliance program. What does your data privacy program look like? What personal data are you collecting? Why are you collecting it? Do you have the legal right to collect it? Part of GDPR requires unambiguous, explicit, freely given consent. Companies can no longer force or imply consent. A lot of times when you go on to websites, the terms and conditions are so impossible to understand that people just tick the box (laughs). Well, under GDPR, that will no longer be valid, because it has to be very transparent, very easily understandable, very readable. And people have to know what organizations are doing with their data. And it puts ownership and more control of data back into the hands of the data subject, as opposed to the organizations that are collecting data. So those are some of the fundamental changes. For the cloud environment, for instance, for a lot of big hyperscalers, GDPR now puts obligations on data processors, which is very different from the current regulation. So that's going to be a fundamental change of business for a lot of organizations. >> Now, is it just customers, or is it customers and employees as well?
It's customers, employees, suppliers; it's any personal data that an organization collects, regardless of the relationship. >> So what does it mean? Does it mean that I'm renting your data? Does it mean that I, 'cause you now own it; it's not me owning it. >> I own it, that's right. >> What are some of the implications of how folks are going to monetize some of these resources? >> So what it actually means is, as an organization that's collecting data, you have to have a legal and valid business reason for needing that data. Part of GDPR requires what's called data minimization. You should only be collecting the minimal amount of data you need in order to provide the service you're going to provide, or manage the relationship you're going to manage. And you are never, as an organization, the owner of that data; you're the data steward. I am giving you permission to use my data for a very specific reason. You can't take liberties with that data. You can't do what I call scope creep, which is, once you have the data, "Oh, I can do whatever I want with that data." No, you can't. Unless I have consented to it, you cannot use that data. And so that is going to be a major change for organizations to deal with, and it doesn't matter if it's your employee data, your customer data, your partner data, your alternative worker data, your supplier data. Whoever's data you have, you'd better be transparent about that data. >> Sheila, you haven't once mentioned technology. Paul, what does this mean from a technology perspective? >> I suppose it's my job to mention technology? As Sheila will tell you, GDPR should not be driven by IT, because it's not an IT problem; it's absolutely a legal and compliance issue. However, I think there's a technology problem in there. So for lots of the things that Sheila is talking about, in terms of understanding your data, in terms of being able to find data, being able to remove data when you no longer need to use it, that's absolutely a technology problem. And, maybe something you won't hear said very often, I'm a real fan of GDPR. I think it's long overdue; it's probably because Sheila's been beating me round the head for the last 12 months >> I have. >> about it. But I think it's one of those things that's long overdue for all of us within enterprises, within business, who hold and look after data. Because what we've done, traditionally, is we've just collected tons and tons of data, and we bought storage 'cause storage could be relatively cheap, we're moving things to the cloud. And we've got absolutely no control, no management, no understanding of what the data is, where it is, who has access to it. Does anybody even access it? I'm paying for it; does anybody even use it? And I think, for me, even if GDPR weren't a regulatory thing that we had to do, it's a set of really good practices that, as organizations, we should be looking to follow anyway. And technology plays a small part in that: it will enable organizations to understand the data better, it will enable those organizations to find information as and when they need it. When somebody makes a subject access request, how are you going to find that data without appropriate technology? And I think, first and foremost, it's something that is forcing organizations to look at the way they culturally look after data within their business. This is no longer about, "Let me just keep things forever and I won't worry about it."
This is a cultural shift that says data is actually an asset in your business. And as Sheila mentioned before, and something I'll pinch in future, the data is not mine; I'm just the custodian of that data while you allow me to be so. So I should treat it like anything else I'm looking after on your behalf. So I think it's those kinds of fundamental shifts that will drive technology adoption, no doubt, to allow you to do that. But actually, it's much more of a cultural shift in the way that we think of data and the way that we manage data in our businesses. >> Well, you're talking about it as this regulation that is long overdue, and it will cause this cultural shift. So what will be different in the way that companies do business and the way that they treat their customer data and their customers' privacy? And their employees' privacy, too, as you pointed out? >> Well, part of the difference is going to be that need for transparency. So companies are going to have to be very upfront about what they're doing with the data, as Paul said. Why are they collecting that data? And they need to think differently about the need for data. Instead of collecting massive amounts of data that you really don't need, they need to take a step back and say, "This is the type of relationship I'm trying to manage," whether it's an employment relationship, whether it's a customer relationship, whether it's a partner relationship. What is the minimum amount of information I need in order to manage that relationship? So if I have an employee, for instance, I don't need to know what my employee does on their day off. Maybe that's a nice thing to know, because I think, well, maybe we can offer them a membership to a gym because they like to work out. That's not a must-have; that's a nice-to-have. And GDPR is going to force must-haves. In order to manage the employment relationship, I have to be able to pay you, I have to be able to give you a job, I have to be able to provide benefits, I have to be able to provide performance evaluations and other requirements. But if it's not legally required, I don't need that data. And so it's going to change the way companies think about developing programs, policies, even technology. As they start to think about how they're developing new technology: what data do they need to make this technology work? And technology has actually driven the need for more privacy laws. If you think about IoT, artificial intelligence, cloud... >> Mobile. >> Absolutely. Great technology, but from a privacy perspective, privacy was never a part of the planning process. >> In fact, in many respects it was the exact opposite. There were a whole bunch of business models. I mean, if you think about it, in the technology industry there are two fundamental business models. There's the ad-based business model, which is, "Give us all your data and we'll figure out a way to monetize it." >> Absolutely. >> And there's a transaction-based business model, which says, "We'll provide you a service and you pay us, and we promise to do something, and only something, with your data." >> Absolutely. >> It's the difference between the way Google and Facebook work, and, say, Apple and Microsoft work. So how is this going to impact these business models, in terms of thinking about engaging customers, at least where GDPR is the governing model? >> Well, it is going to force a fundamental change in their business model.
So the companies that you mentioned, whose entire business model is based on the collection and aggregation of data, and in some cases the selling of personal data... >> Some might say screwing you. >> Some might definitely say that; especially if you're a privacy attorney, you might say that. They offer fabulous services, and people willingly give up their privacy. That's part of the problem: they're ticking the box to say, "I want to use Facebook, I want to use Twitter, I want to use LinkedIn, because these are great technologies." But it's the scope creep. It's what you're doing behind the scenes; I don't know how you're using my data. So transparency is going to become more and more critical in the business model, and that's going to be, as Paul said, a cultural shift for companies whose entire business model is based on personal data. They're struggling, because they're the companies that, no matter what they do, are going to have to change. They can't just make a simple change to their policy or procedure; they have to change their entire business model to meet the GDPR obligations. >> And I think, like Sheila says there, and obviously GDPR is very much around, kind of, private data, the conversation we're having with our customers has a much wider scope than that. It is all of the data that you own. And it's important, I think; organizations need to stop being fast and loose with the information that they hold. Because not only is there private information about people, me and you, that we don't want leaked to somebody who might look to exploit it for some other reason, but that might be business-confidential information, that might be price lists, it might be your customer list. And at the moment, I think in lots of organizations we have a culture where people from top to bottom in an organization don't necessarily understand that. So they might be doing something where, well, we had a case in the UK recently where some records, security arrangements for Heathrow Airport, were found on a bus. Somebody copied them to a USB stick, no encryption, and probably didn't think it was okay to leave it in the back of a taxi, but certainly thought it was okay to take that information home. And you look at that and think: what other business asset that that organization held would they have treated with such disdain? Almost to say, "I just don't care, this is just ones and zeroes, why would I care about it?" It's that shift that I think we're starting to see. And I think it's a shift that organizations should have taken a long time ago. We talk to customers, and you hear it all the time: data is the new gold, data is the new precious material of your choice. >> Which it really isn't. It really isn't, and here's why I say that, because this is the important thing, and it leads to the next question I was going to ask you. Every asset that's ever been conceived follows the basic laws of economic scarcity. Take gold: you can apply it to one purpose. You can make connectors for a chip, or you can use it as a basis for making jewelry, or some other purpose. But data is fungible in so many ways. You can connect it, and in many respects, as we talked about a little bit earlier, the act of making it private is, in many respects, the act of turning it into an asset.
So one of the things I want to ask you about, if you think about it, is that there will still be a lot of net new ways to capture data that's associated with a product or service in a relationship. So we're not saying that GDPR is going to restrict the role that data plays; it's just going to make it more specific. We're still going to see more IoT, we're still going to see more mobile services, as long as the data that's being collected is in service to the relationship or the product that's being offered. >> Yeah, you're absolutely right. I mean, one of the things that I always say is that GDPR's intent is not to stop organizations from collecting data. Data is your greatest asset; you need data to manage any kind of relationship. But you're absolutely right: what it's going to do is force transparency. So instead of doing things behind the scenes, where nobody has any idea what you're doing with my data, companies are going to have to be extremely transparent about it and think about how it's being used. You talked about data monetization: healthcare data today is ten times more valuable than financial data. It is the data that all hackers want. And the reason is that you can take even aggregate and statistical information, through, say, clinical trials, information that you think there's no way to tie back to a person, and by adding just little elements to it, you have now turned that data into greater value, and you can now connect it back to a person. So data that you think does not have value: the more we add to it, and the more, sort of, profiling we do, the more valuable that data is going to become. >> But it's even more than that, right? Because not only are you connecting it back to a person, you're connecting it back to a human being. Whereas financial data is highly stylized: it's defined, it's like this transaction definition, and there's nothing necessarily real about it other than that's the convention we use to, for example, do accounting. But healthcare data is real. It ties back to: what am I doing, what drugs am I taking, why am I taking them, when am I visiting somebody? This is real, real data that provides deep visibility into the human being: who they are, what they face, and any number of other issues. >> Well, if you think about GDPR, too, they expanded the definition of personal data under GDPR. It now includes data like biometric and genetic information, which is heavily used in the healthcare industry. It also includes location data, IP information, unique identifiers. So a lot of companies say, "Well, we don't collect personal data, but we have the unique identifiers." Well, if you can go through any kind of process to tie that back to a person, that's now personal data. So GDPR is actually the first entry into the digital age, as opposed to the old-fashioned processing, where you can now take different aspects of data and combine them to identify a human being, as you say. >> So, I've got one more question. This is something of a paradox. Sorry for jumping in, but I'm fascinated by this subject. Something of a paradox, because the act of making data private, at least to the corporation, is an act of creating an asset. And because the rules of GDPR are so much more specific and well thought through than most rules regarding data, does it mean that companies that follow GDPR are likely, in the long run, to be better at understanding, taking advantage of, and utilizing their data assets? That's the paradox. Most people say, "I need all the data."
Well, GDPR says, "Maybe you need to be more specific "about how you handle your data assets." What do you think, is this going to create advantages for certain kinds of companies? >> I think it absolutely is going to create advantages in two ways. One, I see organizations that comply with GDPR as having a competitive advantage. Because, number one it goes down to trust. If I'm going to do business with Company A or Company B, I'm going to do business with the company that actually takes my personal data seriously. But, looking' at it from your point of view, absolutely. As companies become more savvy when it comes to data privacy compliance, not just GDPR, but data privacy laws around the world, they're also going to see more of that value in the data, be more transparent about it. But, that's also going to allow them to use the data for other purposes, because they're going to get very creative in how having your data is actually going to benefit you as an individual. SO they're going to have better ways of saying, "But, by having your data I can offer you these services." >> GDPR may be a catalyst for increased data maturity. >> Absolutely. >> Well, I wanna ask you about the cultural shift. We've been talking so much about it from the corporate standpoint, but will it actually force a cultural shift from the customer standpoint, too? I mean, this idea of forcing transparency and having the customer understand why do you need this from me, what do you want? I mean, famously, Europeans are more private than Americans. >> Oh much so. As you've said, "Just click accept, okay, fine, "tell me what I need to know, "or how can I use this website?" >> Well, the thing is that, it's not necessarily from a consumer point of view, but I do think it's from a personal point of view from everybody. SO whether you work inside an organization that keeps data, that's starting to understand just how valuable that data might be. And just to pick up on something, that just to pop at something you were saying before, I think one of the other areas where this has business benefit is that that better and increased management and maturity, actually I think is actually a great way, that better maturity around how we look after our data, has huge impact. Because, it has huge impact in the cost of storing' it, if we want to use Cloud services why am I putting things there that nobody looks at? And then, looking at maintaining this kind of cultural shift that says, "If I'm going to have data in my organization, "I'm no longer going to have it on a USB stick "and leave it in the back of a cab "when it's got security information "of a global major airport on it. "I'm going to think about that "because I'm now starting to understand." And this big drive about, people starting to understand how the information that people keep about you has a potential bigger impact, and it has a potential bigger impact if that data, yeah, we've seen data breach, after data breach after data breach. You can't look at the news any day of the week without some other data breach and that's partly because, a bit like health and safety legislation, GDPR's there because you can't trust all those organizations to be mature enough with the way that we look after our data to do these things. 
So legislation and regulation come along and say, "Well, actually, this stuff's really important to me and you as individuals, so stop being fast and loose with it, stop leaving it in the back of taxis, stop letting it leak out of your organization because nobody cares." And that's driving a two-way thing here. It's partly that we're having to think more about that because, actually, we're not trusting the organizations who are looking after our data. But, as Sheila said, if you become an organization that has a reputation for being good with the way you lock down your data and look after data, that will give you a competitive edge. Alongside that, I'm being much more mature, much more controlled and efficient with how I look after my data. That's got a big impact on how I deliver technology, certainly within a company. Which is why I'm enthusiastic about GDPR: I think it's forcing lots and lots of long-overdue shifts in the way that we, as people, look after data, architect technology, and start to think about the kinds of solutions and the kinds of things that we do in the way that we deliver IT into business and enterprise across the globe. >> I think one of the things, too, and Paul brought it up, is he mentioned security several times. And, as Paul knows, one of my pet peeves is when companies say, "We have world-class security, therefore we're compliant with GDPR." And I go, "Really? So you're basically locking down data you're not legally allowed to have? That's what you're telling me." >> Like you said earlier, it's not just about having encryption everywhere. >> Exactly, and it's funny how many companies say, "Well, we're compliant with GDPR because we encrypt the data." And I go, "Well, if you're not legally allowed to have that data, that's not going to help you at all." And, unfortunately, I think that's what a lot of companies think: as long as we're looking at the security side of the house, we're good. And they're missing the whole boat on GDPR. >> It's got to be secure. >> It's got to be secure. >> But-- >> You've got to legally have it first. >> Exactly. The chicken and the egg. >> But what's always an issue with security around data, and the stuff that Sheila talked about covers quite a lot of it, is that one of the risks you have is you can have all the great security in the world, but if the right person with the right access to the right data has all the things that they should have, that doesn't mean they can't steal that data, lose that data, or do something with that data that they shouldn't be doing, just because we've got it secured. So we need to have policies and procedures in place that allow us to manage that better, a culture that understands the risk of doing those kinds of things, and maybe, alongside that, technologies that identify unusual use of data are important within that. >> Well, Paul, Sheila, thank you so much for coming on the show. It's been a fascinating conversation. >> Thank you very much, appreciate it. >> Yeah, thanks for having us on, appreciate it. >> I'm Rebecca Knight, for Peter Burris. We will have more from NetApp Insight here in Berlin in just a little bit. (upbeat music)
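Sheila's warning that "anonymous" aggregate data becomes identifying once you add little elements to it is, in practice, a quasi-identifier join. The toy Python below illustrates the idea; every record in it is invented for the example and not drawn from any real dataset:

```python
# Neither dataset alone names the patient, but joining on the
# quasi-identifiers (zip code, birth year, sex) re-identifies them.
clinical = [  # "de-identified" trial data (invented)
    {"zip": "94107", "birth_year": 1971, "sex": "F", "condition": "diabetes"},
]
public = [    # public registry that does carry names (invented)
    {"zip": "94107", "birth_year": 1971, "sex": "F", "name": "Jane Doe"},
]

keys = ("zip", "birth_year", "sex")
index = {tuple(p[k] for k in keys): p["name"] for p in public}
for row in clinical:
    name = index.get(tuple(row[k] for k in keys))
    if name:
        # The "anonymous" record now points at a named human being.
        print(f"{name} -> {row['condition']}")
```

This is the mechanics behind GDPR treating data as personal whenever any process can tie it back to a person, unique identifiers included.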
Bill Schmarzo, Dell EMC | DataWorks Summit 2017
>> Voiceover: Live from San Jose, in the heart of Silicon Valley, it's The Cube, covering DataWorks Summit 2017. Brought to you by Hortonworks. >> Hey, welcome back to The Cube. We are live on day one of the DataWorks Summit in the heart of Silicon Valley. I'm Lisa Martin with my co-host, Peter Burris. Not only is this day one of the DataWorks Summit, this is the day after the Golden State Warriors won the NBA Championship. Please welcome our next guest, the CTO of Dell EMC, Bill Schmarzo. A Cube alumnus, clearly sporting the pride. >> Did they win? I don't even remember. I just was-- >> Are we breaking news? (laughter) Bill, it's great to have you back on The Cube. >> The Division III All-American from-- >> Coe College. >> 1947? >> Oh, yeah, yeah, about then. They still had the peach baskets. You make a basket, you have to climb up this ladder and pull it out. >> They're going rogue on me. >> It really slowed the game down a lot. (laughter) >> All right, so, before we started they were analyzing the game; it was actually really interesting. But to kick things off, Bill: as the volume and the variety and the velocity of data are changing, organizations know there's a tremendous amount of transformational value in this data. How is Dell EMC helping enterprises extract and maximize that as the economic value of data changes? >> So, the thing that we find is most relevant is that most of our customers don't give a hoot about the three V's of big data, especially on the business side. We like to jokingly say they care about the four M's of big data: make me more money. So when you think about digital transformation, and how it might take an organization from where they are today to sort of embed digital capabilities around data and analytics, it's really about, "How do I make more money?" What processes can I eliminate or reduce? How do I improve my ability to market and reach customers? How do I, ya know, all the things that are designed to drive value from a value perspective. Let's go back to, ya know, Tom Peters kind of thinking, right? Or, I guess, Michael Porter, right? His value creation processes. So we find that when we have a conversation around the business and what the business is trying to accomplish, that provides the framework around which to have this digital transformation conversation. >> So, well, Bill, it's interesting. The volume, velocity, variety, the three V's, really say something about the value of the infrastructure. You have to have infrastructure in place where you can get more volume, it can move faster, and you can handle more variety. But, fundamentally, it is still a statement about the underlying value of the infrastructure and the tooling associated with the data. >> True, but one of the things that changes is not all data is of equal value. >> Peter: Absolutely. >> Right? So, what data, what technologies? Do I need to have Spark? Well, I don't know, what are you trying to do, right? Do I need to have Kafka or Ioda, right? Do I need to have these things? Well, if I don't know what I'm trying to do, then I don't have a way to value the data, and I don't have a way to figure out and prioritize my investment in infrastructure. >> But that's what I want to come to. So, increasingly, what business executives, at least the ones we're talking to all the time, are saying is: make me more money. >> Right. >> But it really is: what is the value of my data? And how do I start pricing data, and how do I start thinking about investing so that today's data can be valuable tomorrow?
Or, for the data that's not going to be valuable tomorrow, I can find some other way to not spend money on it, etc. >> Right. >> That's different from the variety, velocity, volume statement, which is all about the infrastructure >> Amen. >> and what an IT guy might be worried about. So, I've done a lot of work on data value, you've done a lot of work on data value. We've coincided a couple of times. Let's pick up that notion: ya know, digital transformation is all about what you do with your data. So, what are you seeing in your clients as they start thinking this through? >> Well, I think one of the first times it was sort of an "aha" moment for me was when I had a conversation with you about Adam Smith: the difference between value in exchange versus value in use. A lot of people, when they think about monetization, how do I monetize my data, are thinking about value in exchange. What is my data worth to somebody else? Well, most people's data isn't worth anything to anybody else. And the way that you can really drive value is not value in exchange, it's value in use. How am I using that data to make better decisions regarding customer acquisition and customer retention and predictive maintenance and quality of care and all the other oodles of decisions organizations are making? The valuation of that data comes from putting it into use to make better decisions. If I know, then, what decision I'm trying to make, now I have a process not only for deciding what data is most valuable but, as you said earlier, what data is not important or may have liability issues with it, right? Do I keep a data set around that might be valuable, but, if it falls into the wrong hands through cybersecurity sorts of things, do I actually open myself up to all kinds of liabilities? And so organizations are rushing into this EVD conversation not only from a data valuation perspective but also from a risk perspective, 'cause you've got to balance those two aspects. >> But this is not pure... This is not really doing accounting in a traditional accounting sense. We're not doing double-entry bookkeeping with data. What we're really talking about is: understand how your business uses its data. Number one, today, understand how you think you want your business to be able to use data to become a more digital corporation, and understand how you go from point "a" to point "b". >> Correct, yes. And, in fact, the underlying premise behind driving economic value of data... you know, people say data is the new oil. Well, that's a BS statement, because it really misses the point. The point is: imagine if you had a barrel of oil, a single barrel of oil, that could be used across an infinite number of vehicles and it never depleted. That's what data is, right? >> Explain that. You're right, but explain it. >> So what it means is that you can use data across an endless number of use cases. If you go out and get-- >> Peter: At the same time. >> At the same time. You pay for it once, you put it in the data lake once, and then I can use it for customer acquisition and retention and upsell and cross-sell and fraud and all these other use cases, right? So it never wears out. It never depletes. So I can use it. And here's what organizations struggle with: if you look at data from an accounting perspective, accounting tends to value assets based on what you paid for them. >> Peter: And how you can apply them uniquely to a particular activity.
A machine can be applied to this activity, and it's either that activity or that activity. A building can be applied to that activity or that activity. A person's time to that activity or that activity. >> It has a transactional limitation. >> Peter: Exactly, it's an "or." >> Yeah, so what happens now is, instead of looking at it from an accounting perspective, let's look at it from an economics and a data science perspective. That is: what can I do with the data? What can I do as far as using the data to predict what's likely to happen, to prescribe actions, and to uncover new monetization opportunities? On the entire approach of looking at it from an accounting perspective: we just completed that research at the University of San Francisco, where we looked at how you determine the economic value of data. And we realized that using an accounting approach grossly undervalued the data's worth. So, instead of accounting, we started with an economics perspective. The multiplier effect, marginal propensity to consume, all that kind of stuff that we all forgot about once we got out of college really applies here, because now I can use that same data over and over again. And if I apply data science to it, to really try to predict, prescribe, and monetize, all of a sudden the economic value of your data just explodes. >> Precisely, because you're connecting a source of data, which has a particular utilization, to another source of data that has a particular utilization, and you can combine them, create new utilizations that might, in and of themselves, be even more valuable than either of the original cases. >> They genetically mutate. >> That's exactly right. So, think about-- I think it's right. So, congratulations, we agree. Thank you very much. >> Which is rare. >> So, now let's talk about this notion of, as we move forward with data value, how does an organization have to start translating some of these new ways of thinking about the value of data into investments in data, so that you have the data where you want it, when you want it, and in the form that you need it? >> That's the heart of why you do this, right? If I know what the value of my data is, then I can make decisions regarding what data I'm going to try to protect and enhance, and what data I'm going to get rid of and put in cold storage, for example. And so we came up with a methodology for how we tie the value of data back to use cases. Everything we do is use-case based. So if you're trying to increase same-store sales at a Chipotle, one of my favorite places, if you're trying to increase them by 7.1 percent, that's worth about 191 million dollars. And the use cases that support that, like increasing local event marketing, or increasing new product introduction effectiveness, or increasing customer cross-sell or upsell: if you start breaking those use cases down, you can start tying financial value to those use cases. And if I know what data sets, what three, five, seven data sets, are required to help solve that problem, I now have a basis against which I can start attaching value to data. And as I look across a number of use cases, the value of the data starts to increment. It doesn't grow exponentially, but it does increment, right? And it gets more and more-- >> It's non-linear, it's super-linear. >> Yeah, and what's also interesting-- >> Increasing returns. >> From an ROI perspective, what you're going to find is that as you go down these use cases, the financial value of a given use case may not be really high.
But when the denominator of your ROI calculation starts approaching zero, because I'm reusing data at zero cost, I can reuse data at zero cost: when the denominator starts going to zero, ya know what happens to your ROI? It goes to infinity; it explodes. >> Last question, Bill. You mentioned the University of San Francisco, and you've been there a while, teaching business students how to embrace analytics. One of the things that was talked about this morning in the keynote was Hortonworks' dedication to the open-source community from the beginning. And they kind of talked about how, with kids in college these days, they have access to this open-source software that's free. I'd just love to get, kind of as the last word, your take on what you're seeing in university life today, where these business students are understanding more about analytics. Do you see them as kind of helping to build the next generation of data scientists, since that's really the next leg of the digital transformation? >> So, the premise we have in our class is that we probably can't turn business people into data scientists. In fact, we don't think that's valuable. What we want to do is teach them how to think like a data scientist. What happens is, if we can get the business stakeholders to understand what's possible with data and analytics, and then you couple them with a data scientist who knows how to do it, we see exponential impact. We just did a client project around customer attrition. The industry benchmark in customer attrition, as it was published, and I won't name the company, was a 24 percent identification rate. We had 59 percent. We 2X'd the number. Not because our data scientists are smarter, or our tools are smarter, but because our approach was to leverage and teach the business people how to think like a data scientist, and they were able to identify variables and metrics they wanted to test. And when our data scientists tested them, they said, "Oh my gosh, that's a very highly predictive variable." >> And trust what they said. >> And trust what they said, right. So, how do you build trust? On the data science side, you fail. You test, you fail, you test, you fail; you're never going to get to 100 percent accuracy. But have you failed enough times that you feel comfortable and confident that the model is good enough? >> Well, what a great spirit of innovation that you're helping to bring there. Your keynote, we should mention, is tomorrow. >> That's right. >> So, if you're watching the livestream or you're in person, you can see Bill's keynote. Bill Schmarzo, CTO of Dell EMC, thank you for joining Peter and me. Great to have you on the show. A show where you can talk about the Warriors and Chipotle in one show; I've never seen it done, this is groundbreaking. Fantastic. >> Psycho Donuts too. >> And Psycho Donuts, and now I'm hungry. (laughter) Thank you for watching this segment. Again, we are live on day one of the DataWorks Summit in San Jose, for Bill Schmarzo and Peter Burris, my co-host. I am Lisa Martin. Stick around, we will be right back. (music)
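Schmarzo's denominator argument can be made concrete with a little arithmetic. All the numbers below are made up for illustration: the data is paid for once, each new use case reuses it at a small marginal cost, and cumulative ROI climbs as that one-time cost is amortized:

```python
# Toy model of "the denominator approaches zero": pay for the data once,
# then reuse it across use cases at low marginal cost.
data_cost = 1_000_000        # one-time cost to acquire/curate the data (assumed)
per_use_case_cost = 50_000   # marginal cost per new use case (assumed)
use_case_values = [600_000, 500_000, 400_000, 400_000]  # value per use case (assumed)

total_value = total_cost = 0
for i, value in enumerate(use_case_values, start=1):
    total_value += value
    # The data itself is only paid for with the first use case.
    total_cost += per_use_case_cost + (data_cost if i == 1 else 0)
    roi = (total_value - total_cost) / total_cost
    print(f"after use case {i}: ROI = {roi:+.0%}")
# ROI starts negative, crosses zero, and keeps climbing as reuse
# amortizes the one-time data cost across more and more decisions.
```

With these assumed figures, ROI moves from roughly -43% after the first use case to +58% after the fourth, which is the incrementing, super-linear effect described above.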
Craig Bernero, Dell EMC & Pierluca Chiodelli, Dell - Dell EMC World 2017
>> Narrator: Live from Las Vegas, it's the Cube, covering Dell EMC World 2017. Brought to you by Dell EMC. >> Okay, welcome back, everyone. We are here live at Dell EMC World 2017, our eighth year of coverage with the Cube. Formerly EMC World, now Dell EMC World. This is the Cube's coverage. I'm John Furrier; my co-host, Paul Gillin. Our next two guests are Craig Bernero, who is the senior vice president and general manager of midrange and entry storage solutions at Dell EMC, and Pierluca Chiodelli, VP of appliance management at Dell. Guys, welcome to the Cube. Great to see you guys. >> Likewise. >> Thank you. >> Give us the update. We're hearing a ton of stories, 'cause the top story is obviously the combination: merger, acquisition, whichever way you want to frame who acquired whom. But all good, good stories. I mean, some speed bumps, little bumps along the way, but nothing horrific. Great stories. Synergies was the word we've been hearing. So you've got to have some great growth with the Dell scale. Entry-level touchpoint growth, high end, get more entry level: give us the update. >> Yeah, absolutely. So again, first and foremost, I wanted to call out all our customers and partners that are critical to the success that we've seen. No doubt. And actually, we've committed to better together, which is why you saw two of our launches, both on the Unity line and the SC line, which historically were part of EMC and Dell respectively, prior. And the main point is, a lot of the feedback we got from customers was that they really respected and appreciated our customer-choice-first philosophy, but also understood that there's clear demarcation where each of those technologies plays in its sweet spot-- >> Well, how are you demarcating them right now? >> Absolutely. So traditionally, pre the EMC acquisition, what we actually ended up determining is, when you define the midrange market segment we were looking at, Unity was more at the upper range, the upper level of it, where customers are driving value from a technology aspect with the Unity product set. We are focusing heavily on the all-Flash market segment, too, which is one of the major refreshes we did here. And then Dell storage, which has a very strong server affinity, a direct-attached construct, at the entry through the lower end of the midrange band: there were actually some very clear swim lanes for where each of the respective products plays to its strengths as well. And as a result of that, we've really taken that to heart with our hybrid offering on the SC side to get your economics. Again, effectively 10 cents per gig, as was outlined on Monday, the most affordable hybrid solution on the market. And then you go to the upper, premium level of value and capability with all-Flash, to deal with your performance workloads and other characteristics, too. >> Pierluca, talk about the overlap, because we addressed that; we hit it head on. Turns out, not a lot of overlap. But as you guys come together, and we just had Toshiba on earlier, Flash is obviously a big part of the success. Getting those price points down to the entry and midrange, enabling that kind of performance and cost, is key. But as you look at the product portfolio, where are the areas you guys are doubling down on, and where is some of the overlap taken care of, if any? >> Yeah, so let me tell you, the first thing that is very important, and that we have at the show, is the reaffirmation of the investment in the two products.
So we had a panel yesterday, a panel with 120 customers, divided 50/50 between the heritage Dell and the EMC customers, and the amazing thing there was that Flash adoption is very strong, but they also want it to be economical, so hybrid is very strong. So this really fits our two products. Because if you remember, Compellent was created as the best storage for data progression. And we doubled down on Unity, so that we now have a completely full line of Unity products today. Then on the other side, on the SC line, we reaffirmed the completion of the family with the new 5020, which provides more performance, more capacity, much more resilience. And we'll drive our 4020 customers to a very new product. So yes, some people before thought, "Oh, these guys have a lot of overlap." But actually, we have two amazing products that play together in this market. >> And talk about the customer dynamic, 'cause that's interesting, almost the 50/50 split as you mentioned. They're probably like, "Bring on the better product." I'm not hearing any revolts. Right, no one's really revolting. Can you just share the perspective, some of the insight that they're telling you about what they're expecting from you guys? >> So I think it's very fun to be in this position where we are right now, where we have such a good portfolio of products, where customers, companies, people inside of our company start to learn how these products work. Because you sell what you know, right? Or you use what you know. People try to do the same things every day. So we are forced now to look outside of our own part and say, "You know, we have two products. What is the benefit?" And now we spark this discussion with the customers. And we had a tremendous amount of common customers before, right? The customers have a preference, but now an EMC customer says, "Maybe I have a huge case for an all-Flash upgrade with Unity." And the SC customer says, "Oh, maybe now I can run this application on Unity or SC," and opens up to different things. What we say is, and this is the line I use, we are the top one now because we can solve any use case. Right, if you look at our competitors, they try to cover everything with one product, right? >> John: You can mix and match. >> Yes, you can mix and match, and we have a very differentiated part between the two. And we said, SC drives economy, with the fact that we can have dedupe and compression on speedy media, and Unity is optimized for Flash. >> Is there any incompatibility between the two? Do the two platforms work pretty seamlessly together? >> Pierluca: Yes. >> Yeah, so I'm going to expand a little further on that. So one of the things we did highlight as part of the all-Flash offering for Unity, the 350 through the 650, the four new models, customers were surprised, you know. And there were some questions on the level of innovation we're driving. Getting a full platform refresh a year later was a very big surprise for customers. Typically it's two years, 18 months for other vendors in the field, and they're like, "You just launched the product last year, and you already have a refresh." And we did that 'cause we listened to customer requirements, and with all-Flash, the performance is absolutely critical, hence the controller upgrade. We went from a Haswell to a Broadwell design. 
We actually added additional core capability and memory, all with the architecture built to do an online data-in-place upgrade that we'll be driving later in the year, too. And there's the SC 5020 that we announced as a separate, complementary product line, as Pierluca stated. But the third area that hasn't necessarily been amplified, though customers have raved about seeing it in the showroom area, is our Cloud IQ technology, which is actually built off of Cloud Foundry. That's the value of the portfolio of the company and a strategically aligned business. And it actually does not only preemptive and proactive monitoring; taking from Jeff Boudreau's keynote today, that whole definition of autonomous and self-aware storage, because of all the use cases and requirements in midrange, we're driving that into it. And we actually have compatibility between Unity and SC in Cloud IQ, as that one pane of glass. It's not element management, but more to take that value to a whole new level. And we're going to continue to drive that level of innovation beyond, not just through software, but clearly leveraging better-together talent to really solve some key business needs for customers. >> As David Goulden always says on the Cube, it's better to have overlap than holes in a product line. So that's cool that you guys got that addressed, and certainly mix and match, that's the standard operating procedure these days for a lot of guys in IT. They know how to do that. The key is, does it thread together? So, congratulations. The hard question that I want to ask you guys, and what everyone wants to know about: where are the customer wins? Because at the end of the day, you could be number one in whatever old category scoreboard. >> Craig: Sure. >> The scoreboard of customers is what we're looking at. Are you getting more customers? Are they adopting, are they implementing a variety of versions? Give us the updates on the wins, and what the combination of Dell EMC coming together has done for sales and wins. >> Yeah, so there's a public blog I posted for Dell EMC World, and it's about the one-two punch with midrange storage. >> John: What was the title of that blog post? >> It was basically the one-two punch of our midrange storage. And I'll provide you the link in follow-up. >> John: I'll look at it later. >> The reason we preemptively provided that was that the biggest question I would get from customers was, which product are you going to choose? And our point was, both, right? Both products, the power of the portfolio. We don't need to choose one. Our install base on both those technologies is significant. But in that post, I also did quote some of the publicly available IDC data, which showed that in our last quarter, in Q4, when you compare Q3 to Q4, we actually had double-digit quarterly growth for both Unity and SC, our primary leading lines in the portfolio, which effectively allowed us to get back into the midrange market share segment. Now that's for purpose-built. >> That reflects a very positive trend for the Dell EMC midrange storage portfolio. I'm quoting directly from your blog post, One-Two Punch Drives Midrange Storage Momentum. >> Craig: Correct. >> And it's not only the storage, right? I've been with a very big customer of ours, and I was telling an analyst this morning, it's amazing to see the motion of the business that we can do now that we are Dell EMC. So being a private company in one sense allows us to do creative things that we couldn't do before. 
So we can actually position not only one product or two products, but the entire portfolio. And as you see with the server business, given the affinity that some of the storage has with the server, we can drive more and more adoption for our workloads. >> Just quickly, how is your channel reacting to all this? Are they fully on board, do they understand? Are they out there selling both solutions? >> 100%. We put a lot of investment into our channel enablement across the midrange storage products and portfolio as well, 'cause that's the primary motion that we drive. And that allowed us to actually enable them for success, both in education enablement and, clearly, proper incentives in play. It's been very well received. The feedback we've gotten has been overwhelmingly positive. And we've been complementing that more and more with constant refreshers, not only on our technology but also sharing roadmap delivery, so that they can plan ahead as that storage is used. >> I asked Marius Haas and David Goulden the question, and they both had the same answer. It's good to see them on the same page. But I said, you know, where are the wins? And they both commented that where there's EMC storage, they bring more Dell in. Where there's Dell, they bring more EMC storage in. >> Yes, and we see this with customers. The new business motion that we can now propose, like with a very loyal customer from heritage EMC, for example: now we can also offer servers, software-defined on top of all that, and the storage, right? And you can enter from the other side, from the server, and now position a full portfolio of storage. >> Alright, I'm going to ask you a personal question. I'd like to get your reactions. Take your EMC hat off for a second. Put your industry participant, individual hat on. What's the biggest surprise from the combination, from your area of expertise and your jobs, that you've personally observed? Customer adoption, technology that wasn't there, chaos, mayhem, what? >> Yeah, so I'll comment first. I mean, recognizing the real power of global scale, and what I mean by that is the combined set. So from an organization and R&D investment standpoint, being able to have global scale, where you have engineering working literally 24 by 5, right, based effectively on a follow-the-sun model, that's how you're seeing that innovation engine just cranking into high gear. And that was further extended with the power of the supply chain; bringing the innovation together has been, in my opinion, super powerful, right? 'Cause a couple of customers had shared with me their concern that if they go with a startup, it may not stay in business, relative to the supply chain leverage and the level of innovation, breadth, and depth of products that we have. >> Craig, that's a great point. Before we go to Pierluca, I just want to comment on that. We're seeing the same thing in the marketplace. A lot of the startups can't get into the pure storage play because scale requirements are now the new barrier to entry, not necessarily the technology. >> Exactly. >> Not necessarily the technology, so that kind of reaffirms it. That's why the startups are doing a lot of the data protection, white-space stuff. And their valuations, by the way, are skyrocketing. Go ahead, your comment, observation that surprised you or didn't surprise you, took you by storm, what? 
>> I have to say that I'm living a dream at this moment, because it's only a few times in life that you can experience a transformation like this. And in the role that I have right now, I actually have the ability to accelerate this transformation. And that's not a common thing to do in a company that is already established. So this shape, this coming together, gives you more and more opportunity. So I'm very excited to do what I'm doing, and I love it. >> Injection of the scale, and more capabilities. It's like going to the gym: you're pumped up, you're in shape. >> Actually, I started to go to the gym after 20 years. (laughing) >> It's like getting a good meal. You're Italian, you appreciate a good buffet of resources, right? >> That's right. >> Dell's got the gourmet-- >> You know, every day I find something new, some product that I didn't know, something that we did, innovation that we have in the company that we can actually use together. It's very, very exciting. >> And the management teams are pretty solid. They didn't really just come in and decimate EMC. It was truly a combination. Some say that EMC acquired Dell, some say Dell acquired EMC. But the fact that it's even discussed shows a nice balance, in terms of a lot of EMC at the helm: its great sales force, the great commercial business with Dell, very well played, I think. You guys feel the same way? >> I appreciate that, and couldn't agree more. And I think it shows as you look at business results, and even at the employee satisfaction level. We continue to see that at a record high, 'cause there's always that uncertainty, but the interesting piece is people have really been jazzed about the opportunity ahead. >> Alright, we're done complimenting. Let's get to the critical analysis. What's on the roadmap? >> Craig: A lot. >> Tell us what's coming down the pike. I know you do your earnings calls privately, but you guys have been transparent about some of the things. What can you say about what's coming out for customers? What can they expect from you guys in storage? >> I'll let Pierluca take that; he runs the product management team. He drives that every day. >> So I cannot say much about the things that are coming. >> Share it all, come on. Just spill it out. Come on. You and your dream, come on, sell it. >> We have only 20 minutes, so, really, as I said, we announced the 5020, right, and we added the 7020 in August. We are planning to finish the lineup of the new family of SC for sure. We announced the ability to tier to the cloud; we're going to expand that. Also, we announced a full new family of all-Flash Unity. So we're going down that trajectory to offer more and more. And we are going to be very bold in also offering upgrades from the old gen to the new gen, non-disruptive upgrades, and online upgrades as well. So it's a very, very beefy roadmap that we show our customers in the NDA sessions. I have to say the feedback is tremendous, and to your point at the beginning, what is the ecosystem? How do you integrate the things? You're going to see more and more, for example, the UI, the experience for the customer, being the same. So the experience from the UI perspective-- >> Paul: Simplicity. >> Yes, simplicity. >> Paul: Simplicity is the new norm. >> Cloud IQ is key, but also going between products that have the same kind of philosophy. >> Hey, I always say it's a great business model: make things super fast, really easy to use, and really intuitive. You can't go wrong with that triple threat right there. 
So that's like what you guys are doing. >> Yes. >> Absolutely. >> Guys, thanks so much for coming on the Cube and sharing the insights and updates. Congratulations on the one-two punch and the momentum and the success. That's the scoreboard we look at on the Cube: are customers adopting it? Sharing all the data here inside the Cube, live in Las Vegas with Dell EMC World 2017. Stay with us for more coverage after this short break.
Seneca Louck, Dow Chemical | ServiceNow Knowledge17
(upbeat music) >> Commentator: Live, from Orlando, Florida, it's theCUBE, covering ServiceNow Knowledge17, brought to you by ServiceNow. >> Hi everybody, welcome back to Knowledge17. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm with my co-host Jeff Frick at our fifth Knowledge. We go out to the events. We extract the signal from the noise. Seneca Louck is here, he's the Business Process Lead at Dow Chemical. A relatively new ServiceNow customer. Welcome to theCUBE, thanks for coming on. >> Thank you guys. >> Thanks for having me. >> So you said this is your second Knowledge. >> Seneca: It is. >> And, well how do you like Orlando? >> I like it, I like it. It's at the Venetian, >> Sunny? >> for next year, and I'm a Vegas guy, so I'll be happy to get back there, but Orlando's nice. >> Dave: Where's home for you? >> Originally from New Jersey. Worked in Philadelphia for 15 years and relocated to Midland, Michigan, where Dow Chemical's headquartered. >> Dave: Fantastic, ah it's great, great country, Michigan. >> Absolutely. >> So, take us through your role, start there. What do you do, >> Sure. >> at Dow Chemical? >> So, I'm a Business Process Lead for Enterprise Service Management. We could go down the ITSM route, or we could go down the BSM route, and we said, "Why pick one?" Enterprise Systems Management used to be the name. We actually elevated it up to Enterprise Service Management, with the IT operability focus on the end. >> Okay, and you said you went live with ServiceNow in June last year? >> June 11th last year, we started with Incident, Problem, Config. We did Change Management, sorry, a month later. And then we did Service Request catalog, rolled out for the whole rest of the year. >> How long did it take you from, sort of, when you said, "All right, we're doing this. Start the project," to actually get, you know, MVP out? >> The cake. >> Yeah, the cake. (laughs) >> To get to the cake. >> And MVP's a really important thing. Minimum Viable Product. It was a hard lesson for us to learn. Quickly we realized that we were not going to be able to do everything we wanted to do in a first shot. So, we did focus very heavily on MVP. ServiceNow was good enough to make sure that they bred that into us, the importance of that. And so, we started in October with workshops. We spent probably the first four or five months before we wrote one single line of code or configured one thing in ServiceNow. You know, a lot of that work was As-Is Process: document it, understand it, uplift it, figure out what we want that To-Be Process to look like, and then figure out how the tool's going to deliver against that. >> Did you do some of that, I mean, much of that came as part of the business case, and then you just refined it, is that right? >> The business case was really more on the value side. We didn't get into the specifics around process. We had a high-level idea of what we wanted to do strategically. Right? >> Yeah. >> Our guiding principles were really industry best practice; we like to think we're special, but really, the industry should know. Out of the box, ServiceNow, whenever possible. And to be honest, out-of-the-box ServiceNow should reflect industry best practice fairly well. And so that was kind of the coming-in position for us. We deviated only when absolutely necessary, and we really tried to stick to vanilla. >> So you minimized custom mods? >> Seneca: We really tried to do that, yes. There were times where we had to deviate, of course. 
But we really wanted to look to see if ServiceNow had an answer, and if we could tweak what was already there, then great. There were only a handful of opportunities where we had to build something net new. >> And was that related to your ERP, or when did you have to build those custom mods? >> So, in places where we had a concept that was to bring legacy capability from a previous system. We knew we weren't going to cut and run from the old to the new. We had to kind of pull in some of the capabilities of that platform. So, the way you guys do category, sub-category, we did through classification. And so we had to customize a couple of tables to bring classifications over to bridge that gap. >> I see, okay, and then, so then you go live. Now was it a CMDB, a single CMDB across the organization? >> So, we have HP technology, where we had a large investment. We wanted to keep that for discovery purposes, and it enabled us to build one big tunnel between our CMDB and ServiceNow, so it made the integration go very easily. So, we really did two key integrations, a CMDB integration and an LDAP one to get our people data. Once that was done, we were on our feet, we were stood up, and we were ready to start delivering processes. >> And the Service Catalog? >> Service Catalog was an interesting one, because we had it spread out in a bunch of places. We had web forms, where somebody had customized a small little web form that was actually making calls into our ticketing system to create service requests. We also had Request Center, which was brought in to try and solve that world of Service Request Management, but it only did it for service requests. And we realized ServiceNow was going to do it end-to-end. >> Seneca, when you're thinking about your investments, I like to look at 'em as: you get investments to run the business, some to grow the business, and some to transform the business. And you're really sort of an IT-transform expert. How do you allocate that? Are those mutually exclusive? Do they sort of blend into each other, and how much of your investment is transformation, and what does that all mean? >> Yeah, so it's tough, because you've got guys that are on the run side, and I actually spent the large majority of my career on the run side. So, I know what it feels like to be accountable for everything in production, regardless of how it got there. And so, I kind of oscillate back and forth. Right? If the hair's on fire and these guys are going to be dead by the time the project transforms next year's capability, there's no point in us waiting. We can't wait. So, we're bouncing in and out of transformation, and dealing with making sure operability can happen effectively and efficiently, and that these guys are around next year, alive and well, so that we can deliver that transformational capability. 
We knew the capabilities that you see throughout this room are incredible. We want to get to them. But we've got to get onto the platform first. And so, we really did hone in on trying to find: what is the minimum product that we need to get people moved over to the platform, and we'll increment from there. So, it was a little bit of a learning for us. It was a little bit of a culture change. And we kind of found that sweet spot between Agile and Waterfall, which I think we called Wagile, or (laughs). Yeah, Wagile, I think, >> Well, right. >> is the name. >> I mean, your implementation coincided with the sort of DevOps craze, and Agile, but there's >> That's right, that's right. >> a place for Waterfall, right? >> There is, there is. >> Sometimes, you need that perfection. Other times, you need to break stuff and iterate. >> Absolutely. >> But so, that's interesting. You said you came up with sort of a hybrid. Sometimes, hybrids are scary. So, how did you sort of come to that point, and how's it workin' for you? >> Yeah, so what we did is we front-ended a lot of the requirements. We spent, like I said, several months just sitting and doing requirements. And then, we transitioned into two-week sprints. And we pulled out of the backlog the requirements that we had captured in those previous months. So, that was kind of how we blended the two together. We're more a Waterfall shop, but we were delivering a system of record. And so, with systems of record, we strongly believe that Agile can be dangerous. It's not necessarily the place to start. And so, we started with Waterfall, and we kind of ended with Agile. >> All right, okay, and so, what so far have been the sort of business impacts? Can you share that with us? >> Yeah absolutely, so first things first, we're getting consistency throughout our processes. So, many times, geographical differences, or even differences within a geography at a sub-activity level, meant people were doing things differently. So, the first thing we had to do was standardize process. That gives us the ability to measure, across the world, how that process is being executed. Whereas before, we couldn't do that one-for-one, we couldn't compare these things one-for-one. And so, now we have that vision, now we have that visibility, and we were a Performance Analytics customer from day one, so we started capturing data to baseline, to benchmark, from go-live until today, and we've got incredible data to go back to and do the continuous service improvement. >> And how much of the consistency and process was forced in your pre-deployment activities, where you kind of find, all right, we've got to sit down and actually document this to put it into the system, versus now that you've got this tool in place, you see the opportunity to continue to go after new processes? >> It varied, depending upon the area. So Change Management was actually not a bad process from a global perspective. On the flip side, we actually implemented some case management capability for our business functions. Their processes were extremely deviated across geographies, across activities. And so it depends, but the bottom line is that before we talk about implementing on this platform, we've got to talk standardization. The good news is that incident, problem, change wasn't as much work. On the Business Process side, it was a lot more. >> How are you predominantly measured? Is it getting stuff done? Are there other sorts of KPIs that you focus on? Is there one that you try to optimize? 
>> So, these days, we're actually operating in a little bit of a dangerous place, because we're going through so much merger and acquisition activity that our success is: can we integrate a company in less than a year while we go on to do the biggest chemical merger in the history of the world? So, typically, we would be looking at metrics and KPIs down at the process level. Right now, we're looking at: can I actually bring these companies together? So it's integration. >> And not kill each other. >> And not kill each other. (laughs) That's right. That's not to say we're not doing the other as well, but I think we have to start with, can we get the big activities done, so that we can figure out how to do the process improvement. >> Dave: Right. How about the show for you here? What's it been like? What are you learning? >> Yeah, so. >> Are you sharing? >> DxContinuum, I think, is going to be the theme. I'm going to leave here thinking, wow, these guys did the right thing with that purchase. So, you know, the artificial intelligence, the machine learning, the data lakes: we're going to be able to take all this data that we have and pump it out to you guys. And you're going to turn around and tell us an interesting story. You're going to tell me the questions that I would never even think to ask, because you're going to be able to see into that data in ways that we never even dreamed possible. So, that's the big one for me. I've heard some rumors of some other things coming, but I shouldn't know about those, and so I'm not going to say anything at this point. But right now, it's about the machine learning, the artificial intelligence. >> So, what else, I mean, 'cause a company the size of Dow must be doing some interesting things with Big Data and Hadoop and AI. How does what you're doing with ServiceNow relate to those sorts of other activities? Is there sort of a data platform strategy? >> It's an interesting question. It's something that we're actually struggling with a little bit, to figure out what that strategy is going to be. I don't think the larger organization expected so many opportunities to use analytics and machine learning against data sets that otherwise were, well, operations stuff for the most part, right? We're starting to get into the business side a little bit, but really, we were focused on running the business from an operations perspective. And so, all of a sudden, now we're getting attention that we wouldn't have had otherwise, from the big players, you know, the SAP Business Warehouse, Business Intelligence guys. They've got 120 people delivering their reporting service. I've got a guy half-time that's helping me with my PA reports, and we've got to figure out a way to either join our strategies together or at least meet in the middle, because there's data that we probably want to share with each other. >> Do you have a Chief Data Officer on staff? >> We do not, that I'm aware of, actually. But I think it's a very powerful role. In our SAP world, they kind of act as that de facto person within our organization. They're not very interested in what we're doing yet, but we are starting to get their attention. >> It's interesting, 'cause we talk a lot about how IoT now will bridge, you know, kind of the IT and the Ops folks. And it sounds like you're having that experience, really specifically built around some of the processes that you're delivering in ServiceNow, to bring those two worlds together. 
Yeah, so while I mentioned machine learning and artificial intelligence, that's actually right there, second on my list. The thing I came here last year and raised my hand and said I need the most is the ability to bring massive amounts of data onto this platform. Raw performance data, network data, server data, utilization data, end-user data. I want to be able to bring it into this platform so that I can use it to correlate events and incidents and problems. And so, the things that you guys are doing for IoT, to bring massive data sets in, are actually going to solve my problem, but I don't think it was necessarily what you were trying to solve. But I'm very happy for that. >> So, by the way, we're independent media, so we're (laughs) like third-party guys. >> Understood, understood. >> It's these guys, ServiceNow. So, we just sort of unpack, analyze. What if you had to do it again? What would you do differently? Obviously you would have, and you did, embrace the MVP. Other things? >> So, we took a very dangerous route in that we didn't have a team built. We didn't have a competency built. We took a system integrator, and we went off and we went hog wild and we implemented it quickly, while we built the team, while we built the governance, while we built the competency center. If I could do it again, I'd have that team ready, staffed, you know, well-trained up front, so that we could learn as we went a little bit more, and be a little more autonomous and self-sufficient. >> Were you one of the 100 customers that John Donahoe met with in 45 days? >> I was not, actually. >> And if you weren't, then what would you tell him, in terms of the piece where he said, "What can we do better?" What would you say? >> Yeah. >> So, the question came up yesterday around releases. You know, should we do more, should we do less. I mean, we're actually struggling a little bit to keep up with the two releases per year. So, the biggest thing that I see is not making it a wholesale upgrade. If I could take parts and pieces from the new capabilities that are coming, without having to go through the full upgrade cycle, you know, I think that would be huge for me. So that we don't have to spend a couple of months; we're hoping to get that down to one month. But this is our first one in production. So, we're going to spend three months getting this upgrade right. We're hoping to get it down to, you know, a couple of weeks to a month. But if I can take pieces and parts of the capability that's being delivered, and not have to take it wholesale, that would be the thing. >> Yeah, so that's interesting, because multi-instance is nice. You don't have to go on the SaaS player's schedule. But you want to keep current, you know, for a lot of reasons, maybe with certain parts of the upgrade. Yeah, okay, that doesn't sound trivial. (laughs) >> Yeah, it's not. >> Although I know they're thinking about it, so it's come up; I've heard a couple of people at least mention that it's something that they have to think about. They may not actually go that direction. But the fact that they're thinking about it tells me that they're exploring other avenues to deliver capability. >> Dave: What's in the future for you guys? Where do you want to take this thing? >> Yeah, so our next big thing's going to be Event Management. So, we've got 45 different tools that are doing monitoring, from purchased tools to somebody's script sitting on the mainframe that sends us an event when some exception happens. 
And so we've built, you know, with a custom IT process automation tool, our Event Management framework. And it's integrated with ServiceNow. But at the heart of it, there's some old technology, decade-old technology, that was my first entry into IT process automation. And so, as the person who built it, I'm going to be the one that ultimately unplugs it and hands it over to ServiceNow. So, for us, that's the next step for what we're going to do. >> Awesome, well listen, Seneca, thanks very much for coming to theCUBE. It's great to have you. Loved the knowledge. >> Thanks for having us. >> Dave: Rapid fire, you know, perfect for theCUBE, so thank you. >> Great, wonderful. >> Thank you, guys. >> Thanks for coming on. >> I appreciate it. >> All right, pleasure. >> All right, keep it right there, buddy. We'll be back with our next guest. This is theCUBE, we're live from Knowledge17 in Orlando. We'll be right back. (upbeat music)