Haseeb Budhani, Rafay & Adnan Khan, MoneyGram | KubeCon + CloudNativeCon Europe 2022
>>theCUBE presents KubeCon + CloudNativeCon Europe 2022, brought to you by the Cloud Native Computing Foundation. >>Welcome to theCUBE's coverage of KubeCon 2022 EU. I'm here with my cohost, Paul Gillin. >>Pleasure to work with you, Keith. >>Nice to work with you, Paul. And we have our first two guests. theCUBE is hot; I'm telling you, we are having interviews before the show floor has even opened. We gotta start with the customer first: I have with me enterprise architect Adnan Khan. Welcome to the show. >>Thank you so much. >>Cube time, Cube time. First time, and now you're a Cube alumni. >>Yep. <laugh> >>And Haseeb Budhani, CEO of Rafay, welcome back. Nice to talk to you again today. So we're talking all things Kubernetes, and we're super excited to talk to MoneyGram about their journey to Kubernetes. First question I have for Adnan: talk to us about what your pre-Kubernetes landscape looked like. >>Yeah, certainly, Keith. We had a traditional mix of legacy applications and modern applications. A few years ago we made the decision to move to a microservices architecture, and this was all happening while we were still on prem, on your traditional VMs. We started with 20, 30 microservices, but with the microservices pattern you quickly expand to hundreds of microservices, and we got to the stage where managing them without an orchestration platform, just as traditional VMs, was getting really challenging, especially from a day-two operations perspective. You can manage 10, 15 microservices, but when you start having 50 and more, all those concerns around high availability and operational performance pile up. So we started looking at some open source projects, like Spring Cloud; we are predominantly a Java shop, so we looked at the Spring Cloud projects. They give you a number of capabilities for doing some of that management, but what we realized, again, was that managing those components without a platform was really challenging. That's what led us to Kubernetes: along with our journey to cloud, it was the platform that could help us with a lot of those management and operational concerns. >>So as you talk about some of those challenges pre-Kubernetes, what were some of the operational issues that you folks experienced? >>Yeah. Certain things like autoscaling, that's number one, right? It's a fundamental concept of cloud native. How do you autoscale VMs? You can put in some methods and scripts, but it was really hard to do automatically. Kubernetes with HPA gives you that out of the box; provided you set the right policies, you can have autoscaling that scales up and scales back. We were doing that manually before. At MoneyGram, around the holiday season or Mother's Day, people are sending more money, and our ops team would go in and basically manually scale VMs. We'd go from four instances to maybe eight instances, but that entailed outages, and planning around doing that manually and then scaling back was a lot of administration overhead. So we wanted something that could help us do that automatically, in an efficient, unintrusive way.
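A minimal sketch, using the official Kubernetes Python client, of the kind of HPA policy Adnan is describing. The `payments` namespace, Deployment name and replica/CPU thresholds are hypothetical placeholders, not MoneyGram's actual configuration; the point is simply that the scale-out the ops team used to do by hand becomes a declared policy.

```python
# Declare an autoscaling/v1 HorizontalPodAutoscaler for a hypothetical "payments" Deployment.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="payments-hpa", namespace="payments"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="payments"
        ),
        min_replicas=4,                        # normal load
        max_replicas=16,                       # holiday / Mother's Day peaks
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="payments", body=hpa
)
```

With a policy like this in place, the controller handles the scale-up and scale-back that previously required planned manual changes and outages.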
So that was one of the things. Monitoring, management, operations, just visibility into how those applications were doing and what the status of your workloads was, that was also a challenge. >>So Haseeb, I gotta ask the question: if someone came to me with that problem, I'd just say, you know what, just go to the cloud. How does your group help solve some of these challenges? What do you guys do? >>Yeah, what do we do? So here's my perspective on how the market is playing out. I see a bifurcation happening in the Kubernetes space. There's the Kubernetes runtime: Amazon has EKS, Azure has AKS, and there are enough of these available. They're managed services, and they're actually really good, frankly. In fact, for retail customers, if you're on Amazon, why would you spin up your own? Just use EKS, it's awesome. But then there's an operational layer that is needed to run Kubernetes. My perspective is that 50,000 enterprises are adopting Kubernetes over the next five to 10 years, and they're all gonna go through the same exact journey, and they're all gonna end up potentially making the same mistake, which is they're gonna assume that Kubernetes is easy. <laugh> They're gonna say, well, this is not hard, I got this up and running on my laptop, this is so easy, no worries, I can get a cluster going. But then, okay, can you consistently spin up these things? Can you scale them consistently? Do you have the right blueprints in place? Do you have the right access management in place? Do you have the right policies in place? Can you deploy applications consistently? Do you have monitoring and visibility into those things? Do your developers have access when they need it? Do you have the right networking layer in place? Do you have the right chargebacks in place? Remember, you have multiple teams, and by the way, nobody has a single cluster, so you gotta do this across multiple clusters, and some of them have multiple clouds, not because they want to be multi-cloud, but because sometimes you buy a company and they happen to be in Azure. How many dashboards do you have now, across all the open source technologies that you have identified to solve these problems? This is where the pain lies. So I think that Kubernetes is fundamentally a solved problem. Our friends at AWS and Azure have solved this problem; it's AKS, EKS, et cetera, GKE for that matter. They're great, and you should use them; don't even think about spinning up kubeadm-based clusters yourself. Don't do it. Use the platforms that exist, and commensurately on premises, OpenShift is pretty awesome, right? If you like it, use it. But then when it comes to the operations layer, that's where today we end up investing in a DevOps team and then an SRE organization that need to become experts in Kubernetes, and that is not tenable. Let's say you have unlimited capital, unlimited budgets: can you hire 20 people to do Kubernetes today? >>If you could find them. >>If you can find them, right. So even if you could, the point is this: five years ago, when your competitors were not doing Kubernetes, it was a competitive advantage to go build a team to do Kubernetes so you could move faster. Today, there's a high chance that your competitors are already buying from a Rafay or somebody like Rafay.
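As an aside on Haseeb's "how many dashboards" point: without an operations layer, multi-cluster visibility tends to turn into hand-rolled glue like the rough sketch below. The cluster context names are whatever is in your kubeconfig (hypothetical here); the script just loops over contexts and pulls the Kubernetes version and node health from each one.

```python
# A do-it-yourself multi-cluster health check: one small example of the glue
# that platform teams end up writing and maintaining without a management plane.
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    api_client = config.new_client_from_config(context=name)
    version = client.VersionApi(api_client).get_code().git_version
    nodes = client.CoreV1Api(api_client).list_node().items
    not_ready = [
        n.metadata.name
        for n in nodes
        if not any(c.type == "Ready" and c.status == "True"
                   for c in (n.status.conditions or []))
    ]
    print(f"{name}: Kubernetes {version}, {len(nodes)} nodes, NotReady: {not_ready or 'none'}")
```

Multiply this by access management, policy, upgrades and chargeback, and across clusters and clouds, and you get the operational burden Haseeb is describing.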
So now it's better to take these really, really sharp engineers and have them work on things that make the company money, rather than writing operations for Kubernetes. This is a commodity now. >>How confident are you that the cloud providers won't get in and do what you do and put you out of business? >>Yeah, I mean, absolutely. In fact, I had a conversation with somebody from HBS this morning, and I was telling them, I don't think you have a choice; you have to do this. Competition is not a bad thing. If we are the only company in a space, it's not a space. The bet we are making is that every enterprise has an on-prem strategy, and everybody's got at least two clouds that they're thinking about. Everybody starts with one cloud, and then they have some other cloud that they're also thinking about. For them to rely only on one cloud's tools to solve for on-prem plus that second cloud they may have, that's a tough thing to do. And at the same time, we as a vendor... I mean, the only real reason why startups survive is because you have technology that is truly differentiated. Otherwise, well, you gotta build something that is materially interesting. We seem to have... sorry, go ahead. >>No, I was gonna ask you, you actually had me thinking about something, Adnan. MoneyGram is a big, well-known company, and here's a startup working in a space with Google, VMware, all the biggest names. What brought you to Rafay to solve this operational challenge? >>Yeah, good question. So when we started out on our Kubernetes journey, we had heard about EKS, and we are an AWS shop, so that was the most natural path. We looked at EKS and used it to create our clusters. But we realized very quickly that, yes, to Haseeb's point, AWS manages the control plane for you and gives you the high availability, so you're not managing those components, which is some really heavy lifting. But what about all the other things, like a centralized dashboard? What about provisioning Kubernetes clusters on multi-cloud? We have other clouds that we use, and also on prem. How do you do some of that? We were also looking at other tools at that time, and I remember coming up with an MVP list of what we needed to have in place for day-one and day-two operations, before we even launched a single application into production. My ops team looked at that list, and literally there were only one or two items they could check off with EKS: they've got the control plane, they've got the cluster provisioned, but what about all those other components? That led us down the path of looking at what's out there in this space, and we realized pretty quickly that there weren't too many options. There were some large providers and capabilities like Anthos, but we felt that was a little too much for what we were trying to do; at that point in time we wanted to scale slowly and minimize our footprint. And Rafay was a nice mix from all those different angles. >>How was the situation affecting your developer experience? >>So that's a really good question also.
So operations was one aspect of it. The other part is application development. MoneyGram, like a lot of organizations, has a plethora of technologies, from Java to .NET to Node.js, what have you. Now, as you start saying, okay, we're going cloud native and we're gonna start deploying to Kubernetes, there's a fair amount of overhead, because the tech stack all of a sudden goes from just being Java or just being .NET to things like Docker, all these container orchestration and deployment concerns, and Kubernetes deployment artifacts. I gotta write all this YAML; as my developers say, YAML hell. <laugh> I gotta learn Dockerfiles, I need to figure out a package manager like Helm, on top of learning all the Kubernetes artifacts. So initially we went with, okay, we can just train our developers. And that was wrong. You can't assume that everyone is gonna learn all these deployment concerns and adopt them; there's a lot of stuff that's outside of their core dev domain, and you're putting all this burden on them. So we could not rely on them to be kubectl experts; that's a fair amount of overhead and learning curve. So Rafay, again from the dashboard perspective, the managed kubectl gives devs that easy access, where they can go and monitor the status of their workloads, and they don't have to figure out configuring all these tools locally just to get it to work. We did some things from a DevOps perspective to streamline and automate that process, but then Rafay also came in and helped us out by providing that dashboard. The developers don't have to worry; they can basically get on through single sign-on and have visibility into the status of their deployments, and they can do troubleshooting and diagnostics all through a single pane of glass, which was a key item. Before Rafay we were doing that at the command line, and again, just getting some of the tools configured was huge; it took us days just to get that. And then there's the learning curve for development teams: now you've got the tools, now you gotta figure out how to use them. >>Haseeb, talk to me about the cloud native infrastructure. When I look at that entire landscape, I'm just overwhelmed by it; as a customer, I look at it and I don't know where to start. I'm sure you folks looked at it and said, wow, there are so many solutions. How do you engage with the ecosystem? You have to be at some level opinionated, but flexible enough to meet every customer's needs. How do you approach that? >>Yeah. It's a really tough problem to solve, because, well, the thing about abstraction layers, we all know how that plays out, right? Abstraction layers are fundamentally never the right answer, because they will never catch up; you're trying to write a layer on top. So then we had to solve the problem of: we can't be an abstraction layer, but at the same time we need to provide some sort of centralization and standardization.
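Circling back to Adnan's point about developers needing workload visibility without becoming kubectl experts, here is a minimal sketch of the kind of status check a developer would otherwise script by hand against each cluster. The `payments` namespace is a hypothetical placeholder; a managed dashboard or managed kubectl wraps this sort of query behind single sign-on.

```python
# What "what's the status of my workload?" looks like when a developer has to
# script it themselves with the Kubernetes Python client instead of a dashboard.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

for dep in apps.list_namespaced_deployment(namespace="payments").items:
    ready = dep.status.ready_replicas or 0
    print(f"Deployment {dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")

for pod in core.list_namespaced_pod(namespace="payments").items:
    print(f"Pod {pod.metadata.name}: {pod.status.phase}")
```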
So we sort of have the following dissonance in our platform, which is actually really important to solving the problem. We think of a stack as four things. There's the Kubernetes layer, the infrastructure layer, and EKS is different from AKS, and that's okay. If we try to bring them all together and make them behave as one, our customers are gonna suffer, because there are features in EKS that I really want, but if you write an abstraction layer, I'm not gonna get them. So, not okay: treat them as individual things, with logic that we curate. Every time EKS, for example, goes from 1.22 to 1.23, we write a new product just so my customer can press a button and upgrade these clusters. Similarly, we do this for AKS, we do this for GKE. It's a really, really hard job, but that's the job; we gotta do it. On top of that, you have these things called add-ons, like my network policy, my access management policy, et cetera. These things are all actually the same. So whether I'm on EKS or AKS, I want the same access for Keith versus Adnan. Those components are the same across clusters; it doesn't matter how many clusters or how many clouds. On top of that, you have applications. And when it comes to the developer, in fact, I do the following demo a lot of times, because people ask the question, people say things like, I wanna run the same Kubernetes distribution everywhere, because this is like Linux. Actually, it's not. So I do a demo where I spin up access to an OpenShift cluster and an EKS cluster and an AKS cluster, and I say, log in, show me which one is which. They're all the same. >>So Adnan, make that real for me. I'm sure after this amount of time, developer groups have come to you with things that are snowflakes, and as an enterprise architect you have to make it work within your framework. How has working with Rafay made that possible? >>Yeah. I think one of the very common concerns is the whole deployment piece. To Haseeb's point, from a deployment perspective it's still using Helm, it's still using some of the same tooling, but Rafay gives us some tools: they have a command line, the RCTL API, that essentially we use. We wanted parity across all our different environments and clusters; it doesn't matter where you're running. So that gives us basically a consistent API for deployment. We've also had challenges with just some of the tooling in general, and we actually worked with Rafay to extend their RCTL API for us, so that we have a better deployment experience for our developers. >>Haseeb, how long does this opportunity exist for you? At some point, do the cloud providers figure this out, or does the open source community figure out how to do what you've done, and this opportunity is gone? >>So I think back to a platform that I think very highly of, which has been around a long time and continues to live: vCenter. I think vCenter is awesome; it's beautiful. VMware did an incredible job. What is its job? Its job is to manage VMs, right? But then it's also access, it's also storage, it's also networking, right?
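A small sketch of Haseeb's point that add-ons such as network policy are the same Kubernetes objects regardless of EKS, AKS or OpenShift: define the policy once and push it to every cluster context. The context and namespace names are hypothetical, and this is illustrative glue, not Rafay's implementation.

```python
# One policy object, applied identically to several clusters via kubeconfig contexts.
from kubernetes import client, config

default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
        policy_types=["Ingress"],               # no ingress rules listed = deny all ingress
    ),
)

for ctx in ["eks-prod", "aks-dr", "openshift-onprem"]:  # hypothetical cluster contexts
    api = client.NetworkingV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_network_policy(namespace="payments", body=default_deny)
    print(f"applied default-deny-ingress to {ctx}")
```

The infrastructure layers differ, but the policy object does not; the hard part is curating and repeating this consistently as clusters multiply, which is the operations-layer job being described.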
All these things got done because, to solve a real problem, you have to think about all the things that come together to help you solve that problem from an operations perspective. My view is that this market needs, essentially, a vCenter, but for Kubernetes. And that is a very broad problem, and it's gonna span clouds; it's not about one cloud. I mean, every cloud should build this, why would they not? It makes sense; Anthos exists, right? Everybody should have one. But the clarity in thinking that the Rafay team seems to have exhibited to date seems to merit an independent company, in my opinion. From a technical perspective, this product is awesome. We seem to have no real competition when it comes to this broad breadth of capabilities. Will it last? We'll see. I keep doing Cube shows, right? So every year you can ask me that question again. >>You make a good point, though. You're up against VMware, you're up against Google; they're both trying to do sort of the same thing you're doing. Why are you succeeding? >>Maybe it's focus. Maybe it's because of the right experience. I think only in hindsight can one tell why a startup was successful, in all honesty. I've been at one or two of these in the past, and there's a lot of luck to this, there's a lot of timing to this. I think the timing for a product like this is perfect. Three, four years ago, nobody would've cared. Honestly, nobody would've cared. This is the right time to have a product like this in the market, because so many enterprises are now thinking about modernization, and because everybody's doing this, it's like the boot storm problem in HCI: everybody's doing it, but there are only so many people in the industry who actually understand this problem, so they can't even hire the people. And the CTOs say, I've gotta go, I don't have the people, I can't fill the seats. And then they look for solutions, and we are that solution that's gonna get embedded. And when you have infrastructure software like this embedded in your solution, we're gonna be around with these companies for some time, assuming, obviously, we don't screw up. We're gonna have strong partners for the long term. >>Well, vCenter for Kubernetes, I love to end on that note. Intriguing conversation; we could go on forever on this topic, because there's a lot of work to do. I don't think this will ever be a fully solved problem for the Kubernetes and cloud native space, so I think there's a lot of opportunity. Haseeb, thank you for rejoining theCUBE. Adnan, welcome to becoming a Cube alum. <laugh> >>Awesome, thank you. >>Check out their profiles on theCUBE's website; they're really cool. From Valencia, Spain, I'm Keith Townsend, along with my cohost Paul Gillin, and you're watching theCUBE, the leader in high tech coverage.
Vijoy Pandey, Cisco
>> From around the globe, it's theCUBE, presenting Future Cloud. One event, a world of opportunities. Brought to you by Cisco. >> We're here with Vijoy Pandey, VP of emerging tech and incubations at Cisco. Vijoy, good to see you. Welcome. >> Good to see you as well. Thank you, Dave, and pleasure to be here. >> So in 2020 we kind of had to redefine the notion of agility when it came to digital business; organizations had to rethink their concept of agility and business resilience. What are you seeing in terms of how companies are thinking about their operations in this sort of new abnormal context? >> Yeah, I think that's a great question. What we're seeing is that pretty much the application is the center of the universe. And if you think about it, the application is actually driving brand recognition, the brand experience and the brand value. So the example I like to give is: think about a banking app, pre COVID, that did everything that you would expect it to do. But if you wanted to withdraw cash from your bank, you would actually have to go to the ATM, punch in some numbers, then look at your screen and go through a process, and then finally withdraw cash. Think about what that would do in a post-pandemic era where people are trying to go contactless. And so in a situation like this, the digitization efforts that all of these companies are going through, and the modernization and the automation, are what is driving brand recognition, brand trust and brand experience. >> Yeah. So, I was going to ask you, when I heard you say that I was going to say, well, but hasn't it always been about the application? But it's different now, isn't it? So I wondered if you'd talk more about how the application experience is changing as a result of this new digital mandate, and how organizations should think about optimizing those experiences in this new world. >> Absolutely. And I think, yes, it's always been about the application, but it's becoming the center of the universe right now, because all interactions with customers and consumers and even businesses are happening through that application. So if the application is unreliable, or if the application is not available, not trusted, insecure, there's a problem. There's a problem with the brand, with the company and the trust that consumers and customers have with that company. So if you think about an application developer, the weight he or she is carrying on their shoulders is tremendous, because you're thinking about rolling out features quickly to be competitive. That's the only way to be competitive in this world. You need to think about availability and resiliency, like you pointed out, and experience. You need to think about security and trust: am I as a customer, a consumer, willing to put my data in their application? So velocity, availability, security and trust, and all of that depends on the developer. So the experience, the security, the trust and the feature velocity are what is driving the brand experience now. >> Those are two tensions, let's say agility and trust. You know, zero trust used to be a buzzword, now it's a mandate. But are those two vectors counterposed? Can they be merged into one and not affect each other? Does the question make sense? Security usually handcuffs my speed. But how do you address that? >> Yeah, that's a great question. I think if you think about it, today that's the way things are.
And if you think about this developer, all they want to do is run fast, because they want to build those features out, and they want to pick and choose the APIs and services that matter to them and build out their app. And they want the complexities of infrastructure and security and trust to be handled by somebody else. It's not that they don't care about it, but they want that abstraction, so that it's handled by somebody else. And typically, within an organization, we've seen in the past that there's friction between NetOps, SecOps, ITOps and the cloud platform teams on one side and the developer on the other. And these frictions and these meetings and toil actually take a toll on the developer, and that's why companies and apps and developers are not as agile as they would like to be. But it doesn't have to be that way. So if there was something that would allow a developer to pick and choose, discover the APIs that they would like to use, connect those APIs in a very simple manner, and then be able to scale them out and be able to secure them, and in fact not just secure them during run time, when it's deployed, but right off the bat, when they fire up that IDE and start developing the application, wouldn't that be nice? And as you do that, there's a smooth transition between that discovery, connectivity, ease of consumption and security on one hand, and the IT Ops, NetOps, SecOps teams and CSOs on the other, to ensure they're not doing something that their organization won't allow them to do, in a very seamless manner. >> I want to come back and talk about security, but I want to add another complexity before we do that. For a lot of organizations, the public cloud became a staple of keeping the lights on during the pandemic, but it brings new complexities: differences in terms of latency, security (which I want to come back to), deployment models, etc. So what are some of the specific networking challenges that you've seen with cloud native architectures, and how are you addressing those? >> Yeah. In fact, if you think about cloud, to me that is a different way of saying a distributed system. And if you think about a distributed system, what is at the center of the distributed system is the network. So my favorite commentary is that the network is the run time for all distributed systems and modern applications. And that is true, because if you think about where things are today, like you said, there are cloud assets that a developer might use. In the banking example that I gave earlier, if you want to build a contactless app, so that a customer gets verified on the app, walks over to the ATM and withdraws cash without touching that ATM, in that kind of an example you are touching the mobile APIs, let's say iOS APIs; you're touching cloud APIs, where the backend might sit; you're touching on-prem APIs, maybe an Oracle database or even a mainframe, where transactional data exists; you're touching branch APIs, where the ATM actually exists, and there needs to be consistency when you withdraw cash, and you're carrying all of this. And in fact, there might be customer data sitting in Salesforce somewhere. So it's cloud APIs, it's on prem, it's branch, it's SaaS, it's mobile, and you need to bring all of these things together. And over time you'll see more and more of these APIs coming from various SaaS providers. So it's not just cloud providers but SaaS providers that the developer has to use. And so this complexity is very, very real.
And this complexity is across the wide open internet. So the application is built across this wide open internet, and so the problems of discoverability, the problems of being able to simply connect these APIs and manage the data flow across these APIs, the problems of consistency of policy and consumption, because all of these APIs have their own nuances in what they mean, what the arguments mean and what the API actually does: how do you make it consistent and easy for the developer? That is the networking problem. And that is the problem of building out this network, making traffic engineering easy, making policy easy, making scale-out and scale-down easy; all of those are networking problems. And so we are solving those problems at Cisco. >> Yeah, the internet is the new private network, but it's not so private. So I want to come back to security. You know, I often say that the security model of building a moat, you dig the moat, you get the hardened castle, that's just outdated now. The queen has left her castle, I always say, and it's dangerous out there. And the point is, and you touched on this, it's a huge decentralized system with distributed apps and data, and that notion of perimeter security is just no longer valid. So I wonder if you could talk more about how you're thinking about this problem; you definitely addressed some of that in your earlier comments, but what are you specifically doing to address this, and how do you see it evolving? >> Yeah, that's a very important point. If you think about, again, the wide open internet being the run time for all modern applications, what is perimeter security in this new world? To me, it boils down to securing an API, because, going with that running example of the contactless cash withdrawal feature for a bank, the API, wherever it sits (on-prem, branch, SaaS, cloud, iOS, Android, doesn't matter), is your new security perimeter, and the data object that it's trying to access is also the new security perimeter. So if you can secure API-to-API communication and API-to-data-object communication, you should be good. So that is the new frontier. But guess what: software is buggy. Everybody's software, I'm not saying Cisco software, everybody's software is buggy. Software is buggy, humans are not reliable, and things mature, things change, things evolve over time. So there needs to be defense in depth: you need to secure at the API layer and the data object layer, but you also need to secure at every layer below it, so that you have good defense in depth if any layer in between is not working out properly. So for us, that means ensuring API-to-API communication not just during runtime, when the app has been deployed and is running, but during deployment and also during the development life cycle. As soon as the developer launches an IDE, they should be able to figure out: is this API secure to use, is it reputable, is it compliant with my organization's needs because it is hosted, let's say, from Germany, and my organization wants APIs to be used only if they are hosted out of Germany? Compliance needs, security needs and reputation: is it available all the time, is it secure? And being able to provide that feedback all the time, between the security teams and the developer teams, in a very seamless, real-time manner. Because, again, that's something that we're trying to solve through some of the services that we're building inside of Cisco.
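A toy sketch of the development-time check Vijoy describes, where an API catalog entry is validated against organization policy (hosting region, reputation, mTLS) before a developer wires it in. The catalog structure and policy values are invented for illustration and are not a Cisco API.

```python
# Validate a third-party API catalog entry against org policy at development time.
from dataclasses import dataclass

@dataclass
class ApiCatalogEntry:
    name: str
    hosting_region: str
    reputation_score: float   # 0.0 to 1.0, e.g. learned from runtime observations
    mtls_enforced: bool

ORG_POLICY = {"allowed_regions": {"DE"}, "min_reputation": 0.8, "require_mtls": True}

def violations(entry: ApiCatalogEntry, policy=ORG_POLICY) -> list[str]:
    """Return a list of policy violations; an empty list means the API may be consumed."""
    problems = []
    if entry.hosting_region not in policy["allowed_regions"]:
        problems.append(f"hosted in {entry.hosting_region}, policy allows {policy['allowed_regions']}")
    if entry.reputation_score < policy["min_reputation"]:
        problems.append(f"reputation {entry.reputation_score} below {policy['min_reputation']}")
    if policy["require_mtls"] and not entry.mtls_enforced:
        problems.append("mTLS not enforced")
    return problems

print(violations(ApiCatalogEntry("kyc-verify", "DE", 0.93, True)))    # [] -> okay to use
print(violations(ApiCatalogEntry("geo-lookup", "US", 0.70, False)))   # three violations
```

The shift-left idea is that this feedback shows up in the IDE and the CI pipeline, not only at runtime.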
I mean, that layered approach you're talking about is critical, because every layer has, you know, some vulnerability, and so you've got to protect that with some depth. In terms of thinking about security, how should we think about where Cisco's primary value add is? I mean, you guys have a great security business, and it's a growing business. Is it your intention to add value across the entire value chain? Obviously you can't do everything, so you've got to partner. But how should we think about Cisco's role, thinking longer term, over the next decade? >> Yeah. So we do come in with good strength from the run time side of it. If you think about the security aspects that we have in play today, there's a significant set of assets that we have around user security, with Duo and passwordless. We have significant assets in runtime security; I mean, the entire portfolio that Cisco brings to the table is runtime security, the SecureX aspects around posture and policy that we bring to the table. And as you see Cisco evolve over time, you will see us shifting left. I know it's an overused term, but that is where security is moving towards, and so that is where API security and data security are moving towards. So, learning what we have during run time, because run time is where you learn what's available and where you can apply all of the ML and AI models to figure out what works and what doesn't, then taking those learnings, taking those catalogs, taking that reputation database and moving it into the deployment and development life cycle, and making sure that's part of that entire dev-to-deploy-to-runtime chain, is what you will see Cisco do over time. >> That's fantastic, phenomenal perspectives. Thanks for coming on theCUBE. Great to have you, and I look forward to having you again. >> Absolutely. Thank you. Pleasure to be here. >> This is Dave Vellante for theCUBE. Thank you for watching.
Eric Herzog & Sam Werner, IBM | CUBEconversation
(upbeat music) >> Hello everyone, and welcome to this Cube Conversation. My name is Dave Vellante, and you know, containers used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability and other features are now more fundamental and important, and containers are the linchpin of hybrid cloud, cross-cloud and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and costs of doing so? And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage Division, and Sam Werner, the vice president of offering management and the business line executive for IBM Storage. Guys, great to see you again; I wish we were face to face, but thanks for coming on theCUBE. >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became the mainstream. Containers are going through exactly that right now: brought in by the DevOps people, the software teams, and now it's becoming, again, persistent and real, with clients that want to deploy a million of them. Just the way they historically have deployed a million virtual machines, now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions that you need once you take it out of the test, play-with stage to the real production phase really change the ball game: the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world. >> So Sam, how'd we get here? I mean, containers have been around forever; you look inside Linux, right? But then they did, as Eric said, go mainstream. It started out kind of small and experimental; as I said, they were ephemeral, you didn't really need to persist them. But it's changed very quickly. Maybe you could talk to that evolution and how we got here. >> Look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster, and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers, and especially now, the past year even further accelerated this need to engage with customers in new ways. So it's about being agile, and containers promise or provide a lot of the capabilities you need to be agile.
What enterprises are discovering is that a lot of these initiatives are starting within the business lines, and they're building these applications, making these architectural decisions and building DevOps environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them, and they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment, to do things like DevOps, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make agile infrastructure to support these new container worlds? >> Got it. So Eric, you guys made an announcement to directly address these issues; it's kind of a fire hose of innovation. Maybe you could take us through it, and then we can unpack it a little bit. >> Sure. So what we did is, on April 27th, we announced IBM Spectrum Fusion. This is a fully container-native, software-defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global, scalable file system that can span edge, core and cloud seamlessly, with a single copy of the data. So no more data silos and no more 12 copies of the data, which of course drive up CapEx and OpEx. Spectrum Fusion reduces that and makes it easier to manage; it cuts the cost from a CapEx perspective and cuts the cost from an OpEx perspective. By being fully container native, it's ready to go for the container-centric world and can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom. Things like the single global namespace, single accessibility; we have local caching, so with edge, core and cloud, regardless of where the data is, you think the data's right with you, even if it physically is not. That allows people to work on it. We have file locking and other technologies to ensure that the data is always good. And then, of course, we've imbued it with the HA, disaster recovery, and backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, has made them container native and brought them together into a single piece of software. And we'll provide that both as a software-defined storage technology early in 2022, and our first pass will be as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, it'll come with a rack even, it'll come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift and Red Hat's virtualization technology, all in one simple package, all ease of use, and a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system-level technology. >> So maybe you can help us understand the architecture and maybe the prevailing ways in which people approach container storage. What does the stack look like, and how have you guys approached it? >> Yeah, that's a great question. Really, there are three layers that we look at when we talk about container-native storage.
It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way and makes that data available where it's needed. So that's the core of it, and the quality of your storage services above that depends on the quality of the foundation you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission-critical applications now into a container and hybrid cloud world; how do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site or four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world; the thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA, DR, data protection, data governance and data discovery. You talked about how making multiple copies of data creates complexity; it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR, right? If I have to delete data about a person, how do I delete it everywhere? So there are a lot of these different challenges; these are the storage services. So we talk about a storage services layer: layer one is the data foundation, layer two is the storage services, and then there needs to be a connection into the application runtime. There has to be application awareness to do things like high availability and application-consistent backup and recovery. So then you have to create that connection, and in our case we're focused on OpenShift. When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three, the application services? >> And so this is your three-layer cake. And then, as far as the policies that I want to inject, you've got an API out and entries in, and I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime. We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center and gives you a hybrid cloud, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Management, also known as RHACM, to create multi-cluster management of your Kubernetes environment with that consistent experience. Again, one common set of APIs. >> So the appliance comes first, is that right? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers, you know, they want that; maybe you could explain that strategy. >> Yeah, so first let me take it back a second and look at our existing portfolio. Our award-winning products are both software-defined and system-based. So, for example, Spectrum Virtualize comes on our FlashSystem.
Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software both on an array and as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager if you will, they won't sell that to you as standalone software-defined storage, and of course many of them don't offer software-defined storage in any way, shape or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3, and we'll have a software-defined configuration, which will be available at the very beginning of 2022. We wanted to get out to this market based on feedback from our clients and feedback from our business partners, and by doing a container-native HCI technology, we're way ahead. We're going to where the puck is; we're throwing the ball ahead of the wide receiver. If you're a soccer fan, we're making sure that the midfielder gets it to the forward ahead of time so you can kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers are where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal; guess what, we work fine with that. We work fine with virtual, as we have tight integration with both Hyper-V and VMware. So some customers will still do that, and containers are the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments and container environments, in a way that is all based on the end users' applications, workloads and use cases. What goes where? IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization; we're leading with OpenShift and containers. We're the first fully container-native, ground-up, OpenShift-based hyperconverged offering of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that: you've got the sort of Switzerland DNA, and it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing. Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage, for example EMC Isilon NFS. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, Fusion will support NetApp filers as well, Fusion will support IBM Cloud Object Storage, both as software-defined storage and as an array technology, and Amazon S3 object stores and any other object storage vendor who's compliant with S3. All of those can be part of the global namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy.
The normal way to do that is to make a duplicate copy: you had a copy in the object store, and you make another copy to bring it into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. But just as we do with our FlashSystem products and our Spectrum Virtualize and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion, Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores as well as other S3-compliant object stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments, and we try to eliminate the hassle: particularly with the discovery capability, you've got AI or analytics software connecting through the API into the discovery technology, and you don't have to traverse and try to find things, because the discovery will create real-time metadata cataloging and indexing, not just of our storage but of the other storage I mentioned, which is the competition. So talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the Global Fortune 1500, for sure. So we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but then, if you're not doing multiple copies, how do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question, and I'll build a little bit off of what Eric said. Look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage, and we can actually allow you to bring in data sets from where they are. They could have originated in object storage; we'll cache them into the file system. They can be on any block storage; they can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of the file system, so it naturally fits into your application stack. Spectrum Scale, uniquely, is a globally parallel file system. There are not very many of them in the world, and there are none that can achieve what Spectrum Scale can do. We have customers running with exabytes of data, and the performance improves with scale. So you can deploy Spectrum Scale on-prem, build out an environment of it, consuming whatever storage you have. Then you can go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally you would have to make a copy of data into the other environment and then deal with that copy later. Let's say you were doing a cloud bursting use case; let's look at that as an example to make this real. You're running an application on-prem, and you want to spin up more compute in the cloud for your AI. Normally you'd have to make a copy of the data, and then you'd run your AI.
Then you have to figure out what to do with that copy: do you copy it back, do you sync them, do you delete it? What do you do? With Spectrum Scale, you just automatically cache in whatever you need, it runs there, and when you're done you spin it down. Your copy is still on-prem, no data is lost; we can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge: a lot of, say, video surveillance data pouring in, or looking at the manufacturing floor for defects. You can run AI right at the edge, make it available in the cloud and make that data available in your data center. Again, one file system going across all of it, and that's something unique in our data foundation built on Spectrum Scale. >> So there's some metadata magic in there as well, and intelligence based on location; okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or does it cut across the board? >> Sure, so first let's talk about the industries. We see certain industries going to containers quicker than other industries. First is financial services; we see it happening there. Manufacturing: Sam already talked about AI-based manufacturing platforms, and we actually have a couple of clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see the public sector, of course, and healthcare, and in healthcare don't just think care delivery; at IBM that includes the research side, so the genomic companies, the biotech companies and the drug companies are all included in that. And then, of course, retail, both on-prem and off-prem. So those are the industries. Then, from an application and workload perspective, AI, analytics and big data applications or workloads are the key things that Spectrum Fusion helps with, because of its high-performance file system. And those applications are tending to spread across core, edge and cloud; they're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine too. A perfect example: we have a giant global auto manufacturer. They've got factories all over, and if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article actually, cost about a billion dollars to build. A billion. So they've got their own IT, and now it's connected to their core data center as well. So that's a perfect example of the enterprise edge, where Spectrum Fusion would be an ideal solution, whether they did it as software-defined only or as the full appliance. Of course, when you've got a billion-dollar factory, that's just to build it, let alone produce the autos or whatever you're producing; silicon, for example, those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So for those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it the edge? Like you mentioned manufacturing; I could see the edge potentially being the driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out and offered advantages over bare metal, okay? Now containerization has come out, and containerization is offering advantages over virtualization.
The good thing at IBM is we know we can support all three. And we know, again, that in the global Fortune 2000 and 1500 they're probably going to run all three, based on the application workload or use case. Our storage is really good at bare metal, very good at virtualization environments, and now with Spectrum Fusion being container native, outstanding for container-based environments. So we see that these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those workload types. That's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian—you don't fight the wave, you ride the wave. And the wave is containers, and we're riding that wave. >> If you don't ride the wave you become driftwood, as Pat Gelsinger would say. >> And that is true—another native Californian. >> So okay, Sam, I sort of hinted at this upfront in my little narrative, but the way we see this, you've got on-prem, hybrid, public clouds, cross-cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, that hides the underlying complexity of the infrastructure so it becomes kind of an implementation detail. Eric talked about skating to where the puck is going, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers? It's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want, and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible. So it's about building infrastructure that's automated, which requires consistent APIs. And it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do, as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid. You can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware—you don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these DevOps environments. And that's what we're doing with Spectrum Fusion. We're taking an extremely unique, one-of-a-kind storage foundation with Spectrum Scale that gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise-class container applications. >> So what's the bottom line business impact? I mean, how does this change?
I mean, Sam, I think you articulated very well that this is all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how you take all of this information you have and use it to build new insights, to give your company an advantage. An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and be able to build it into the fabric of your business operations, so that all decisions you're making in your company, all services you deliver to your customers, are built on that data foundation and information. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome—sorry, I know you asked for business results—is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is you will continue to gain market share in your environment and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody was forced into digital transformation last year. It was rushed into and compressed, and now they get some time to do it right. And so modernizing apps, containers, DevOps, developer-led sort of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom summary. Actually, we haven't talked about the 3200 yet—maybe you could give us a little insight on that before we close. >> Sure, so in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course, AI and big data analytic workloads are extremely sensitive to bandwidth and data transfer rate—that's what they need to deliver their applications properly. It comes with Spectrum Scale built in, so you get the advantage of Spectrum Scale. We've talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. So it's ideal, with its highly parallel file system. It's used all over in high performance computing and supercomputing, in drug research, in healthcare, in finance—probably about 80% of the world's largest banks use Spectrum Scale already for AI and big data analytics. So the new 3200 is an all-flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you could also add the 3200 to it if you want, because of the capability of our global namespace and our single file system across edge, core and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric—we've got to go. What's the bumper sticker?
>> Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)
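Sam's cloud-bursting example earlier in the segment — cache only the data the application actually touches, and leave the authoritative copy where it lives — is easier to picture with a small sketch. This is a hypothetical illustration of the pattern, not how Spectrum Scale actually implements its caching; the object-store client and key names below are made up for the example.

```python
from typing import Dict

class FakeObjectStore:
    """Stand-in for any object store client (S3, IBM COS, ...); purely hypothetical."""
    def __init__(self, objects: Dict[str, bytes]):
        self._objects = objects

    def get(self, key: str) -> bytes:
        # A real client would fetch this over the network.
        return self._objects[key]

class CacheOnReadTier:
    """Toy 'cache only what's needed' layer: the object store stays the source of
    truth, and only the objects the application actually reads are pulled local."""
    def __init__(self, backend: FakeObjectStore):
        self._backend = backend
        self._cache: Dict[str, bytes] = {}   # in a real system: local flash, not RAM

    def read(self, key: str) -> bytes:
        if key not in self._cache:           # first touch: fetch once, keep it warm
            self._cache[key] = self._backend.get(key)
        return self._cache[key]

    def spin_down(self) -> None:
        # Drop the cached working set; the authoritative copy upstream is untouched,
        # so there is no second copy to reconcile, sync, or delete.
        self._cache.clear()

store = FakeObjectStore({"training/batch-001.bin": b"\x00" * 1024})
tier = CacheOnReadTier(store)
tier.read("training/batch-001.bin")   # cached on first read only
tier.spin_down()                       # burst is over; nothing to copy back
```

The point of the sketch is the last call: when the burst is over, the cached working set is simply discarded, which is why the "what do I do with the copy afterwards" question never comes up.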
Brian Gracely, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual
>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hello, welcome back to theCUBE's coverage of KubeCon 2021 CloudNativeCon Europe Virtual, I'm John Furrier your host, preview with Brian Gracely from Red Hat Senior Director Product Strategy Cloud Business Unit Brian Gracely great to see you. Former CUBE host CUBE alumni, big time strategist at Red Hat, great to see you, always great. And also the founder of Cloudcast which is an amazing podcast on cloud, part of the cloud (indistinct), great to see you Brian. Hope's all well. >> Great to see you too, you know for years, theCUBE was always sort of the ESPN of tech, I feel like, you know ESPN has become nothing but highlights. This is where all the good conversation is. It's theCUBE has become sort of the the clubhouse of tech, if you will. I know that's that's an area you're focused on, so yeah I'm excited to be back on and good to talk to you. >> It's funny you know, with all the events going away loved going out extracting the signal from the noise, you know, game day kind of vibe. CUBE Virtual has really expanded, so it's been so much more fun because we can get more people easy to dial in. So we're going to keep that feature post COVID. You're going to hear more about theCUBE Virtual hybrid events are going to be a big part of it, which is great because as you know and we've talked about communities and ecosystems are huge advantage right now it's been a big part of the Red Hat story. Now part of IBM bringing that mojo to the table the role of ecosystems with hybrid cloud is so critical. Can you share your thoughts on this? Because I know you study it, you have podcasts you've had one for many years, you understand that democratization and this new direct to audience kind of concept. Share your thoughts on this new ecosystem. >> Yeah, I think so, you know, we're sort of putting this in the context of what we all sort of familiarly call KubeCon but you know, if we think about it, it started as KubeCon it was sort of about this one technology but it's always been CloudNativeCon and we've sort of downplayed the cloud native part of it. But even if we think about it now, you know Kubernetes to a certain extent has kind of, you know there's this feeling around the community that, that piece of the puzzle is kind of boring. You know, it's 21 releases in, and there's lots of different offerings that you can get access to. There's still, you know, a lot of innovation but the rest of the ecosystem has just exploded. So it's, you know, there are ecosystem partners and companies that are working on edge and miniaturization. You know, we're seeing things like Kubernetes now getting into outer space and it's in the space station. We're seeing, you know, Linux get on Mars. But we're also seeing, you know, stuff on the other side of the spectrum. We're sort of seeing, you know awesome people doing database work and streaming and AI and ML on top of Kubernetes. So, you know, the ecosystem is doing what you'd expect it to do once one part of it gets stable. The innovation sort of builds on top of it. And, you know, even though we're virtual, we're still seeing just tons and tons of contributions, different companies different people stepping up and leading. So it's been really cool to watch the last few years. >> Yes, interesting point about the CloudNativeCon. 
That's an interesting insight, and I totally agree with you. And I think it's worth double clicking on. Let me just ask you, because when you look at like, say Kubernetes, okay, it's enabled a lot. Okay, it's been called the dial tone of Cloud native. I think Pat Gelsinger of VMware used that term. We call it the kind of the interoperability layer it enables more large scale deployments. So you're seeing a lot more Kubernetes enablement on clusters. Which is causing more hybrid cloud which means more Cloud native. So it actually is creating a network effect in and of itself with more Cloud native components and it's changing the development cycle. So the question I want to ask you is one how does a customer deal with that? Because people are saying, I like hybrid. I agree, Multicloud is coming around the corner. And of course, Multicloud is just a subsystem of resource underneath hybrid. How do I connect it all? Now I have multiple vendors, I have multiple clusters. I'm cross-cloud, I'm connecting multiple clouds multiple services, Kubernetes clusters, some get stood up some gets to down, it's very dynamic. >> Yeah, it's very dynamic. It's actually, you know, just coincidentally, you know, our lead architect, a guy named Clayton Coleman, who was one of the Kubernetes founders, is going to give a talk on sort of Kubernetes is this hybrid control plane. So we're already starting to see the tentacles come out of it. So you know how we do cross cloud networking how we do cross cloud provisioning of services. So like, how do I go discover what's in other clouds? You know and I think like you said, it took people a few years to figure out, like how do I use this new thing, this Kubernetes thing. How do I harness it. And, but the demand has since become "I have to do multi-cloud." And that means, you know, hey our company acquires companies, so you know, we don't necessarily know where that next company we acquire is going to run. Are they going to run on AWS? Are they going to, you know, run on Azure I've got to be able to run in multiple places. You know, we're seeing banking industries say, "hey, look cloud's now a viable target for you to put your applications, but you have to treat multiple clouds as if they're your backup domains." And so we're, you know, we're seeing both, you know the way business operates whether it's acquisitions or new things driving it. We're seeing regulations driving hybrid and multi-cloud and, even you know, even if the stalwart were to you know, set for a long time, well the world's only going to be public cloud and sort of you know, legacy data centers even those folks are now coming around to "I've got to bring hybrid to, to these places." So it's been more than just technology. It's been, you know, industries pushing it regulations pushing it, a lot of stuff. So, but like I said, we're going to be talking about kind of our future, our vision on that, our future on that. And, you know Red Hat everything we end up doing is a community activity. So we expect a lot of people will get on board with it >> You know, for all the old timers out there they can relate to this. But I remember in the 80's the OSI Open Systems Interconnect, and I was chatting with Paul Cormier about this because we were kind of grew up through that generation. That disrupted network protocols that were proprietary and that opened the door for massive, massive growth massive innovation around just getting that interoperability with TCP/IP, and then everything else happened. 
So Kubernetes does that, that's a phenomenal impact. So Cloud native to me is at that stage where it's totally next-gen and it's happening really fast. And a lot of people getting caught off guard, Brian. So you know, I got to to ask you as a product strategist, what's your, how would you give them the navigation of where that North star is? If I'm a customer, okay, I got to figure out where I got to navigate now. I know it's super volatile, changing super fast. What's your advice? >> I think it's a couple of pieces, you know we're seeing more and more that, you know, the technology decisions don't get driven out of sort of central IT as much anymore right? We sort of talk all the time that every business opportunity, every business project has a technology component to it. And I think what we're seeing is the companies that tend to be successful with it have built up the muscle, built up the skill set to say, okay, when this line of business says, I need to do something new and innovative I've got the capabilities to sort of stand behind that. They're not out trying to learn it new they're not chasing it. So that's a big piece of it, is letting the business drive your technology decisions as opposed to what happened for a long time which was we built out technology, we hope they would come. You know, the other piece of it is I think because we're seeing so much push from different directions. So we're seeing, you know people put technology out at the edge. We're able to do some, you know unique scalable things, you know in the cloud and so forth That, you know more and more companies are having to say, "hey, look, I'm not, I'm not in the pharmaceutical business. I'm not in the automotive business, I'm in software." And so, you know the companies that realize that faster, and then, you know once they sort of come to those realizations they realize, that's my new normal, those are the ones that are investing in software skills. And they're not afraid to say, look, you know even if my existing staff is, you know, 30 years of sort of history, I'm not afraid to bring in some folks that that'll break a few eggs and, you know, and use them as a lighthouse within their organization to retrain and sort of reset, you know, what's possible. So it's the business doesn't move. That's the the thing that drives all of them. And it's, if you embrace it, we see a lot of success. It's the ones that, that push back on it really hard. And, you know the market tends to sort of push back on them as well. >> Well we're previewing KubeCon CloudNativeCon. We'll amplify that it's CloudNativeCon as well. You guys bought StackRox, okay, so interesting company, not an open source company they have soon to be, I'm assuring, but Advanced Cluster Security, ACS, as it's known it's really been a key part of Red Hat. Can you give us the strategy behind that deal? What does that product, how does it fit in that's a lot of people are really talking about this acquisition. >> Yeah so here's the way we looked at it, is we've learned a couple of things over the last say five years that we've been really head down in Kubernetes, right? One is, we've always embedded a lot of security capabilities in the platform. So OpenShift being our core Kubernetes platform. And then what's happened over time is customers have said to us, "that's great, you've made the platform very secure" but the reality is, you know, our software supply chain. So the way that we build applications that, you know we need to secure that better. 
We need to deal with these more dynamic environments. And then once the applications are deployed, they interact with various types of networks—I need to better secure those environments too. So we realized that we needed to expand our functionality beyond the core platform of OpenShift. And then the second thing that we've learned over the last number of years is, to be successful in this space, it's really hard to take technology that wasn't designed for containers or for Kubernetes and kind of retrofit it back into that. So when we were looking at potential acquisition targets, we really narrowed it down to companies whose fundamental technologies were Kubernetes-centric, rather than having had to modify something to get to Kubernetes, and StackRox was really the leader in that space. They have been the leader in enterprise Kubernetes security. And the great thing about them was, not only did they have this Kubernetes expertise, but on top of that, probably half of their customers were already OpenShift customers, and about three-quarters of their customers were using native Kubernetes services in other clouds. So when we went and talked to them and said, "Hey, we believe in Kubernetes, we believe in multi-cloud, we believe in open source," they said, "Yeah, those are all foundational things for us." And to your point about maybe not being an open source company, they actually had a number of sort of ancillary projects that were open source, so they weren't unfamiliar with it. And now that the acquisition's closed, we will do what we do with every piece of Red Hat technology: we'll make sure that within a reasonable period of time it's made open source. And so, you know, it's good for the community, and it allows them to keep focusing on their innovation. >> Yeah, you've got to get that code out there, cool. Brian, I'm hearing about Platform Plus—what is that about? Take us through that. >> Yeah, so one of the things our customers have come to us with over time is, like I've been saying kind of throughout this discussion, right? Kubernetes is foundational, but it's become pretty stable. The things that people are solving for now are, like you highlighted, lots and lots of clusters, and they're all over the place. That was something our advanced cluster management capabilities were able to solve for people. And once you start getting into lots of places, you've got to be able to secure things everywhere you go. So OpenShift Platform Plus for us really allows us to bundle together sort of the complete set of the portfolio—the platform, security, management—and it allows our customers to buy the foundational pieces that are going to help them do multi and hybrid cloud. When we bundle that, we can save them probably 25% in terms of product acquisition, and then obviously the integration work we do saves a ton on the operational side. So it's a new way for us to not only bundle the platform and the technologies, but it gets customers in a mindset that says, "Hey, we've moved past sort of single environments to hybrid and multi-cloud environments." >> Awesome, well, thanks for the update on that, appreciate it. One of the things going into KubeCon that we're watching closely is this Cloud native developer action.
Certainly end users want to get that in a separate section with you but the end user contribution, which is like exploding. But on the developer side there's a real trend towards adding stronger consistency programmability support for more use cases okay. Where it's becoming more of a data platform as a requirement. >> Brian: Right. >> So how, so that's a trend so I'm kind of thinking, there's no disagreement on that. >> Brian: No, absolutely. >> What does that mean? Like I'm a customer, that sounds good. How do I make that happen? 'Cause that's the critical discussion right now in the DevOps, DevSecOps day, two operations. What you want to call it. This is the number one concern for developers and that solution architect, consistency, programmability more use cases with data as a platform. >> Yeah, I think, you know the way I kind of frame this up was you know, for any for any organization, the last thing you want to to do is sort of keep investing in lots of platforms, right? So platforms are great on their surface but once you're having to manage five and six and, you know 10 or however many you're managing, the economies of scale go away. And so what's been really interesting to watch with Kubernetes is, you know when we first got started everything was Cloud native application but that really was sort of, you know shorthand for stateless applications. We quickly saw a move to, you know, people that said, "Hey I can modernize something, you know, a Stateful application and we add that into Kubernetes, right? The community added the ability to do Stateful applications and that got people a certain amount of the way. And they sort of started saying, okay maybe Kubernetes can help me peel off some things of an existing platform. So I can peel off, you know Java workloads or I can peel off, what's been this explosion is the data community, if you will. So, you know, the TensorFlows the PItorches, you know, the Apache community with things like Couchbase and Kafka, TensorFlow, all these things that, you know maybe in the past didn't necessarily, had their own sort of underlying system are now defaulting to Kubernetes. And what we see because of that is, you know people now can say, okay, these data workloads these AI and ML workloads are so important to my business, right? Like I can directly point to cost savings. I can point to, you know, driving innovation and because Kubernetes is now their default sort of way of running, you know we're seeing just sort of what used to be, you know small islands of clusters become these enormous footprints whether they're in the cloud or in their data center. And that's almost become, you know, the most prevalent most widely used use case. And again, it makes total sense. It's exactly the trends that we've seen in our industry, even before Kubernetes. And now people are saying, okay, I can consolidate a lot of stuff on Kubernetes. I can get away from all those silos. So, you know, that's been a huge thing over the last probably year plus. And the cool thing is we've also seen, you know the hardware vendors. So whether it's Intel or Nvidia, especially around GPUs, really getting on board and trying to make that simpler. So it's not just the software ecosystem. It's also the hardware ecosystem, really getting on board. >> Awesome, Brian let me get your thoughts on the cloud versus the power dynamics between the cloud players and the open source software vendors. 
So what's the Red Hat relationship with the cloud players with the hybrid architecture, 'cause you want to set up the modern day developer environment, we get that right. And it's hybrid, what's the relationship with the cloud players? >> You know, I think so we we've always had two philosophies that haven't really changed. One is, we believe in open source and open licensing. So you haven't seen us look at the cloud as, a competitive threat, right? We didn't want to make our business, and the way we compete in business, you know change our philosophy in software. So we've always sort of maintained open licenses permissive licenses, but the second piece is you know, we've looked at the cloud providers as very much partners. And mostly because our customers look at them as partners. So, you know, if Delta Airlines or Deutsche Bank or somebody says, "hey that cloud provider is going to be our partner and we want you to be part of that journey, we need to be partners with that cloud as well." And you've seen that sort of manifest itself in terms of, you know, we haven't gone and set up new SaaS offerings that are Red Hat offerings. We've actually taken a different approach than a lot of the open source companies. And we've said we're going to embed our capabilities, especially, you know OpenShift into AWS, into Azure into IBM cloud working with Google cloud. So we'd look at them very much as a partner. I think it aligns to how Red Hat's done things in the past. And you know, we think, you know even though it maybe easy to sort of see a way of monetizing things you know, changing licensing, we've always found that, you've got to allow the ecosystem to compete. You've got to allow customers to go where they want to go. And we try and be there in the most consumable way possible. So that's worked out really well for us. >> So I got to bring up the end user participation component. That's a big theme here at KubeCon going into it and around the event is, and we've seen this trend happen. I mean, Envoy, Lyft the laying examples are out there. But they're more end-use enterprises coming in. So the enterprise class I call classic enterprise end user participation is at an all time high in opensource. You guys have the biggest portfolio of enterprises in the business. What's the trend that you're seeing because it used to be limited to the hyperscalers the Lyfts and the Facebooks and the big guys. Now you have, you know enterprises coming in the business model is working, can you just share your thoughts on CloudNativeCons participation for end users? >> Yeah, I think we're definitely seeing a blurring of lines between what used to be the Silicon Valley companies were the ones that would create innovation. So like you mentioned Lyft, or, you know LinkedIn doing Kafka or Twitter doing you know, whatever. But as we've seen more and more especially enterprises look at themselves as software companies right. So, you know if you talk about, you know, Ford or Volkswagen they think of themselves as a software company, almost more than they think about themselves as a car company, right. They're a sort of mobile transportation company you know, something like that. And so they look at themselves as I've got to I've got to have software as an expertise. I've got to compete for the best talent, no matter where that talent is, right? So it doesn't have to be in Detroit or in Germany or wherever I can go get that anywhere. 
And I think what they really, they look for us to do is you know, they've got great technology chops but they don't always understand kind of the the nuances and the dynamics of open-source right. They're used to having their own proprietary internal stuff. And so a lot of times they'll come to us, not you know, "Hey how do we work with the project?" But you know like here's new technology. But they'll come to us and they'll say "how do we be good, good stewards in this community? How do we make sure that we can set up our own internal open source office and have that group, work with communities?" And so the dynamics have really changed. I think a lot of them have, you know they've looked at Silicon Valley for years and now they're modeling it, but it's, you know, for us it's great because now we're talking the same language, you know we're able to share sort of experiences we're able to share best practices. So it is really, really interesting in terms of, you know, how far that whole sort of software is eating the world thing is materialized in sort of every industry. >> Yeah and it's the workloads of expanding Cloud native everywhere edge is blowing up big time. Brian, final question for you before we break. >> You bet. >> Thanks for coming on and always great to chat with you. It's always riffing and getting the data out too. What's your expectation for KubeCon CloudNativeCon this year? What are you expecting to see? What highlights do you expect will come out of CloudNativeCon KubeCon this year? >> Yeah, I think, you know like I said, I think it's going to be much more on the Cloud native side, you know we're seeing a ton of new communities come out. I think that's going to be the big headline is the number of new communities that are, you know have sort of built up a following. So whether it's Crossplane or whether it's, you know get-ops or whether it's, you know expanding around the work that's going on in operators we're going to see a whole bunch of projects around, you know, developer sort of frameworks and developer experience and so forth. So I think the big thing we're going to see is sort of this next stage of, you know a thousand flowers are blooming and we're going to see probably a half dozen or so new communities come out of this one really strong and you know the trends around those are going to accelerate. So I think that'll probably be the biggest takeaway. And then I think just the fact that the community is going to come out stronger after the pandemic than maybe it did before, because we're learning you know, new ways to work remotely, and that, that brings in a ton of new companies and contributors. So I think those two big things will be the headlines. And, you know, the state of the community is strong as they, as they like to say >> Yeah, love the ecosystem, I think the values are going to be network effect, ecosystems, integration standards evolving very quickly out in the open. Great to see Brian Gracely Senior Director Product Strategy at Red Hat for the cloud business unit, also podcasts are over a million episode downloads for the cloud cast podcast, thecloudcast.net. What's it Brian, what's the stats now. >> Yeah, I think we've, we've done over 500 shows. We're you know, about a million and a half listeners a year. So it's, you know again, it's great to have community followings and, you know, and meet people from around the world. 
So, you know, so many of these things intersect—it's a real pleasure to work with everybody. >> You've created a culture, well done. We've all been there, done that—great job. >> Thank you. >> Check out the Cloudcast, of course, and Red Hat's got the great OpenShift mojo going into KubeCon. Brian, thanks for coming on. >> Thanks, John. >> Okay, so this is CUBE coverage of KubeCon + CloudNativeCon Europe 2021 Virtual. I'm John Furrier with theCUBE Virtual. Thanks for watching. (upbeat music)
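To make the StackRox/Advanced Cluster Security discussion from earlier in this conversation a bit more concrete: Kubernetes-native security tooling works off the same API objects the cluster already exposes. The toy scan below flags privileged containers and containers with no resource limits across all namespaces. It assumes the official `kubernetes` Python client and a reachable kubeconfig, and it is only a rough illustration of the kind of policy check a product like ACS automates continuously — not its actual engine or API.

```python
from kubernetes import client, config

def naive_posture_scan():
    """Flag pods that run privileged containers or omit resource limits."""
    config.load_kube_config()   # assumes a local kubeconfig; use load_incluster_config() inside a pod
    core = client.CoreV1Api()
    findings = []
    for pod in core.list_pod_for_all_namespaces().items:
        for c in pod.spec.containers:
            sc = c.security_context
            if sc and sc.privileged:
                findings.append((pod.metadata.namespace, pod.metadata.name,
                                 c.name, "privileged container"))
            if not (c.resources and c.resources.limits):
                findings.append((pod.metadata.namespace, pod.metadata.name,
                                 c.name, "no resource limits"))
    return findings

if __name__ == "__main__":
    for ns, pod, container, issue in naive_posture_scan():
        print(f"{ns}/{pod}/{container}: {issue}")
```

The design point Brian makes — that retrofitted tools struggle here — shows up even in this toy: everything the check needs is already in the Kubernetes API, so a Kubernetes-centric tool can evaluate policy without a separate agent model bolted on.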
Paula D'Amico, Webster Bank | Io Tahoe | Enterprise Data Automation
>> Narrator: From around the Globe, it's theCube with digital coverage of Enterprise Data Automation, and event series brought to you by Io-Tahoe. >> Everybody, we're back. And this is Dave Vellante, and we're covering the whole notion of Automated Data in the Enterprise. And I'm really excited to have Paula D'Amico here. Senior Vice President of Enterprise Data Architecture at Webster Bank. Paula, good to see you. Thanks for coming on. >> Hi, nice to see you, too. >> Let's start with Webster bank. You guys are kind of a regional I think New York, New England, believe it's headquartered out of Connecticut. But tell us a little bit about the bank. >> Webster bank is regional Boston, Connecticut, and New York. Very focused on in Westchester and Fairfield County. They are a really highly rated regional bank for this area. They hold quite a few awards for the area for being supportive for the community, and are really moving forward technology wise, they really want to be a data driven bank, and they want to move into a more robust group. >> We got a lot to talk about. So data driven is an interesting topic and your role as Data Architecture, is really Senior Vice President Data Architecture. So you got a big responsibility as it relates to kind of transitioning to this digital data driven bank but tell us a little bit about your role in your Organization. >> Currently, today, we have a small group that is just working toward moving into a more futuristic, more data driven data warehousing. That's our first item. And then the other item is to drive new revenue by anticipating what customers do, when they go to the bank or when they log in to their account, to be able to give them the best offer. And the only way to do that is you have timely, accurate, complete data on the customer and what's really a great value on offer something to offer that, or a new product, or to help them continue to grow their savings, or do and grow their investments. >> Okay, and I really want to get into that. But before we do, and I know you're, sort of partway through your journey, you got a lot to do. But I want to ask you about Covid, how you guys handling that? You had the government coming down and small business loans and PPP, and huge volume of business and sort of data was at the heart of that. How did you manage through that? >> We were extremely successful, because we have a big, dedicated team that understands where their data is and was able to switch much faster than a larger bank, to be able to offer the PPP Long's out to our customers within lightning speed. And part of that was is we adapted to Salesforce very for we've had Salesforce in house for over 15 years. Pretty much that was the driving vehicle to get our PPP loans in, and then developing logic quickly, but it was a 24 seven development role and get the data moving on helping our customers fill out the forms. And a lot of that was manual, but it was a large community effort. >> Think about that too. The volume was probably much higher than the volume of loans to small businesses that you're used to granting and then also the initial guidelines were very opaque. You really didn't know what the rules were, but you were expected to enforce them. And then finally, you got more clarity. So you had to essentially code that logic into the system in real time. >> I wasn't directly involved, but part of my data movement team was, and we had to change the logic overnight. 
So it was on a Friday night it was released, we pushed our first set of loans through, and then the logic changed from coming from the government, it changed and we had to redevelop our data movement pieces again, and we design them and send them back through. So it was definitely kind of scary, but we were completely successful. We hit a very high peak. Again, I don't know the exact number but it was in the thousands of loans, from little loans to very large loans and not one customer who applied did not get what they needed for, that was the right process and filled out the right amount. >> Well, that is an amazing story and really great support for the region, your Connecticut, the Boston area. So that's fantastic. I want to get into the rest of your story now. Let's start with some of the business drivers in banking. I mean, obviously online. A lot of people have sort of joked that many of the older people, who kind of shunned online banking would love to go into the branch and see their friendly teller had no choice, during this pandemic, to go to online. So that's obviously a big trend you mentioned, the data driven data warehouse, I want to understand that, but what at the top level, what are some of the key business drivers that are catalyzing your desire for change? >> The ability to give a customer, what they need at the time when they need it. And what I mean by that is that we have customer interactions in multiple ways. And I want to be able for the customer to walk into a bank or online and see the same format, and being able to have the same feel the same love, and also to be able to offer them the next best offer for them. But they're if they want looking for a new mortgage or looking to refinance, or whatever it is that they have that data, we have the data and that they feel comfortable using it. And that's an untethered banker. Attitude is, whatever my banker is holding and whatever the person is holding in their phone, that is the same and it's comfortable. So they don't feel that they've walked into the bank and they have to do fill out different paperwork compared to filling out paperwork on just doing it on their phone. >> You actually do want the experience to be better. And it is in many cases. Now you weren't able to do this with your existing I guess mainframe based Enterprise Data Warehouses. Is that right? Maybe talk about that a little bit? >> Yeah, we were definitely able to do it with what we have today the technology we're using. But one of the issues is that it's not timely. And you need a timely process to be able to get the customers to understand what's happening. You need a timely process so we can enhance our risk management. We can apply for fraud issues and things like that. >> Yeah, so you're trying to get more real time. The traditional EDW. It's sort of a science project. There's a few experts that know how to get it. You can so line up, the demand is tremendous. And then oftentimes by the time you get the answer, it's outdated. So you're trying to address that problem. So part of it is really the cycle time the end to end cycle time that you're progressing. And then there's, if I understand it residual benefits that are pretty substantial from a revenue opportunity, other offers that you can make to the right customer, that you maybe know, through your data, is that right? >> Exactly. It's drive new customers to new opportunities. It's enhanced the risk, and it's to optimize the banking process, and then obviously, to create new business. 
And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app. And by doing creating more to New York times near real time data, or the data warehouse team that's giving the lines of business the ability to work on the next best offer for that customer as well. >> But Paula, we're inundated with data sources these days. Are there other data sources that maybe had access to before, but perhaps the backlog of ingesting and cleaning in cataloging and analyzing maybe the backlog was so great that you couldn't perhaps tap some of those data sources. Do you see the potential to increase the data sources and hence the quality of the data or is that sort of premature? >> Oh, no. Exactly. Right. So right now, we ingest a lot of flat files and from our mainframe type of front end system, that we've had for quite a few years. But now that we're moving to the cloud and off-prem and on-prem, moving off-prem, into like an S3 Bucket, where that data we can process that data and get that data faster by using real time tools to move that data into a place where, like snowflake could utilize that data, or we can give it out to our market. Right now we're about we do work in batch mode still. So we're doing 24 hours. >> Okay. So when I think about the data pipeline, and the people involved, maybe you could talk a little bit about the organization. You've got, I don't know, if you have data scientists or statisticians, I'm sure you do. You got data architects, data engineers, quality engineers, developers, etc. And oftentimes, practitioners like yourself, will stress about, hey, the data is in silos. The data quality is not where we want it to be. We have to manually categorize the data. These are all sort of common data pipeline problems, if you will. Sometimes we use the term data Ops, which is sort of a play on DevOps applied to the data pipeline. Can you just sort of describe your situation in that context? >> Yeah, so we have a very large data ops team. And everyone that who is working on the data part of Webster's Bank, has been there 13 to 14 years. So they get the data, they understand it, they understand the lines of business. So it's right now. We could the we have data quality issues, just like everybody else does. But we have places in them where that gets cleansed. And we're moving toward and there was very much siloed data. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs, we should give them and that's what we're working towards now is giving them more self service, giving them the ability to access the data in a more robust way. And it's a single source of truth. So they're not pulling the data down into their own, like Tableau dashboards, and then pushing the data back out. So they're going to more not, I don't want to say, a central repository, but a more of a robust repository, that's controlled across multiple avenues, where multiple lines of business can access that data. Is that help? >> Got it, Yes. And I think that one of the key things that I'm taking away from your last comment, is the cultural aspects of this by having the data scientists in the line of business, the lines of business will feel ownership of that data as opposed to pointing fingers criticizing the data quality. They really own that that problem, as opposed to saying, well, it's Paula's problem. 
>> Well, my problem is I have data engineers, data architects, database administrators, and traditional data reporting people. And some of the customers I have in the lines of business just want to subscribe to a report—they don't want to go out and do any data science work—and we still have to provide that. So we still want to provide them some kind of regimen where they wake up in the morning, open up their email, and there's the report they subscribe to, which is great, and it works out really well. And one of the reasons why we purchased Io-Tahoe was that I would have the ability to give the lines of business the ability to do search within the data, and it will read the data flows and data redundancy and things like that, and help me clean up the data. And also, to give it to the data analysts—say they've just been asked for a certain report. It used to be, okay, four weeks: we're going to go and look at the data, and then we'll come back and tell you what we can do. But now with Io-Tahoe, they're able to look at the data, and in one or two days they'll be able to go back and say, yes, we have the data, this is where it is, this is where we found it. These are the data flows that we found also, which is what I call the birth of a column: it's where the column was created, where it went to live as a teenager, (laughs) and then where it went to die, where we archive it. Yeah, it's this cycle of life for a column, and Io-Tahoe helps us do that. And data lineage is done all the time—it just takes a very long time, and that's why we're using something that has AI and machine learning in it. It's accurate, and it does it the same way over and over again. If an analyst leaves, you're able to utilize something like Io-Tahoe to do that work for you. Does that help? >> Yeah, got it. So a couple of things there. In researching Io-Tahoe, it seems like one of the strengths of their platform is the ability to visualize data, the data structure, and actually dig into it, but also see it. And that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI or machine intelligence into the data pipeline—that's really how you're attacking automation. And you're saying it's repeatable, and then that helps the data quality, and you have this virtuous cycle. Maybe you could sort of affirm that and add some color, perhaps. >> Exactly. So, let's say that I have seven core lines of business that are asking me questions, and one of the questions they'll ask me is, we want to know if this customer is okay to contact. And there are different avenues: you can go online and say do not contact me, you can go to the bank and say, I don't want email, but I'll take texts, and I want no phone calls—all that information. So seven different lines of business asked me that question in different ways. One said, "No okay to contact," the other one says, "Customer 123," all these. Each project before I got there used to be siloed, so it would be 100 hours for one analyst to do that analytical work, and then another analyst would do another 100 hours on the other project. Well, now I can do that all at once, and I can do those types of searches and say, yes, we already have that documentation.
Here it is, and this is where you can find where the customer has said, "No, I don't want to be contacted by email," or "I've subscribed to get emails from you." >> Got it. Okay. And then I want to go back to the cloud a little bit. So you mentioned S3 buckets, so you're moving to the Amazon cloud—at least, I'm sure you're going to have a hybrid situation there. You mentioned Snowflake. What was sort of the decision to move to the cloud? Obviously, Snowflake is cloud only—there's not an on-prem version there. So what precipitated that? >> Alright, so I've been in the data and IT information field for the last 35 years. I started in the US Air Force and have moved on since then. And my experience with Bob Graham was with Snowflake, working with GE Capital. That's where I met up with the team from Io-Tahoe as well. So it's proven. There are a couple of things: one is Informatica, which is known worldwide for moving data. They have two products, the on-prem and the off-prem. I've used both, they're both great, and it's very stable—I'm comfortable with it, and other people are very comfortable with it. So we picked that as our batch data movement. We're moving toward probably HVR—it's not a final decision yet—but we're moving to HVR for real-time data, which is change data capture; it moves the data into the cloud. So you're envisioning this: you're in S3, and you have all the data that you could possibly want—JSON, everything—sitting in S3, to be able to move it through into Snowflake. And Snowflake has proven stability, and you only need to learn and train your team on one thing. AWS is completely stable at this point too. So all these avenues, if you think about it: this is your data lake, which I would consider your S3, even though it's not a traditional data lake like Hadoop that you can touch. And then into Snowflake, and then from Snowflake into sandboxes, so your lines of business and your data scientists just dive right in. That makes a big win. And then using Io-Tahoe with the data automation, and also their search engine, I have the ability to give the data scientists and data analysts a way that they don't need to talk to IT to get accurate, complete information from the structure. >> Yeah, so talking about Snowflake and getting up to speed quickly—I know from talking to customers you can get from zero to Snowflake very fast—and then it sounds like Io-Tahoe is sort of the automation cloud for your data pipeline within the cloud. Is that the right way to think about it? >> I think so. Right now I have Io-Tahoe attached to my on-prem, and I want to attach it to my off-prem eventually. So I'm using Io-Tahoe data automation right now to bring in the data and to start analyzing the data flows, to make sure that I'm not missing anything and that I'm not bringing over redundant data. The data warehouse that I'm working off of is on-prem. It's an Oracle database, and it's 15 years old, so it has extra data in it—things that we don't need anymore—and Io-Tahoe is helping me shake out that extra data that does not need to be moved into my S3. So it's saving me money when I'm moving from on-prem to off-prem. >> And so that was a challenge prior, because you couldn't get the lines of business to agree on what to delete—or what was the issue there?
>> Oh, it was more than that. Each line of business had their own structure within the warehouse, and then they were copying data between each other, duplicating the data and using that. So there could be three tables that have the same data in them, but each is used by a different line of business. Using Io-Tahoe, we have identified over seven terabytes in the last two months of data that is just repetitive—the same exact data just sitting in a different schema. And that's not easy to find if you only understand the one schema that's reporting for your line of business. >> More bad news for the storage companies out there. (both laugh) >> It's cheap. That's what we were telling people. >> And it's true, but you still would rather not waste it—you'd like to apply it to drive more revenue. And so, I guess, let's close on where you see this thing going. Again, I know you're sort of partway through the journey, but maybe you could describe where you see the phases going and really what you want to get out of this, down the road, mid-term, longer term. What's your vision for your data driven organization? >> I want the bankers to be able to walk around with an iPad in their hand, be able to access data for that customer really fast, and be able to give them the best deal that they can get. I want Webster to be right there on top, being able to add new customers, and to be able to serve our existing customers who have had bank accounts since they were 12 years old and are now multi-whatever. I want them to be able to have the best experience with our bankers. >> That's awesome. That's really what I want as a banking customer. I want my bank to know who I am, anticipate my needs, and create a great experience for me, and then let me go on with my life. Great story. Love your experience, your background and your knowledge. I can't thank you enough for coming on theCube. >> Thank you very much. And you guys have a great day. >> All right, take care. And thank you for watching everybody. Keep right there. We'll take a short break and be right back. (gentle music)
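Paula's seven terabytes of repeated data — the same table living under several line-of-business schemas — can be approximated with a very naive scan: fingerprint each table's contents and group matching fingerprints. The sketch below is a simplified illustration of that idea, not Io-Tahoe's actual algorithm; the schemas, table names, and in-memory SQLite database are hypothetical, and a real tool would sample and compare at the column level rather than hashing whole tables.

```python
import hashlib
import sqlite3
from collections import defaultdict

def table_fingerprint(conn, schema, table):
    """Hash a table's ordered contents; identical hashes suggest duplicated data."""
    cur = conn.execute(f'SELECT * FROM "{schema}"."{table}" ORDER BY 1')
    digest = hashlib.sha256()
    for row in cur:
        digest.update(repr(row).encode("utf-8"))
    return digest.hexdigest()

def find_duplicate_tables(conn, tables):
    """tables: iterable of (schema, table) pairs to compare across lines of business."""
    groups = defaultdict(list)
    for schema, table in tables:
        groups[table_fingerprint(conn, schema, table)].append((schema, table))
    return [grp for grp in groups.values() if len(grp) > 1]

# Hypothetical demo: two line-of-business schemas holding the same customer table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    ATTACH ':memory:' AS retail;
    ATTACH ':memory:' AS mortgage;
    CREATE TABLE retail.customers (id INTEGER, ok_to_contact TEXT);
    CREATE TABLE mortgage.customers (id INTEGER, ok_to_contact TEXT);
    INSERT INTO retail.customers VALUES (123, 'email-only');
    INSERT INTO mortgage.customers VALUES (123, 'email-only');
""")
print(find_duplicate_tables(conn, [("retail", "customers"), ("mortgage", "customers")]))
```

Run against the toy data, the function reports the retail and mortgage copies of `customers` as one duplicate group — the kind of finding that, repeated across a 15-year-old warehouse, adds up to terabytes.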
SUMMARY :
to you by Io-Tahoe. And I'm really excited to of a regional I think and they want to move it relates to kind of transitioning And the only way to do But I want to ask you about Covid, and get the data moving And then finally, you got more clarity. and filled out the right amount. and really great support for the region, and being able to have the experience to be better. to be able to get the customers that know how to get it. and it's to optimize the banking process, and analyzing maybe the backlog was and get that data faster and the people involved, And everyone that who is working is the cultural aspects of this the ability to do search within the data. and you have this virtual cycle. and one of the questions And then I want to go back in the S3 to be able to move it Is that the right way to think about it? and to start analyzing the data flows and duplicating the data and using that. More bad news for the That's what we were telling people. and really what you want and to be able to serve And so that follow. And you guys have a great day. And thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Paula D'Amico | PERSON | 0.99+ |
Paula | PERSON | 0.99+ |
Connecticut | LOCATION | 0.99+ |
Westchester | LOCATION | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
24 hours | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
13 | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
100 hours | QUANTITY | 0.99+ |
Bob Graham | PERSON | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Webster Bank | ORGANIZATION | 0.99+ |
GE Capital | ORGANIZATION | 0.99+ |
first item | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
two products | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
three tables | QUANTITY | 0.99+ |
Each line | QUANTITY | 0.99+ |
first set | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
DevOps | TITLE | 0.99+ |
Webster bank | ORGANIZATION | 0.99+ |
14 years | QUANTITY | 0.99+ |
over 15 years | QUANTITY | 0.99+ |
seven cars | QUANTITY | 0.98+ |
each project | QUANTITY | 0.98+ |
Friday night | DATE | 0.98+ |
Enterprise Data Automation | ORGANIZATION | 0.98+ |
New England | LOCATION | 0.98+ |
Io-Tahoe | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Webster's Bank | ORGANIZATION | 0.98+ |
one schema | QUANTITY | 0.97+ |
Fairfield County | LOCATION | 0.97+ |
One | QUANTITY | 0.97+ |
one customer | QUANTITY | 0.97+ |
over seven terabytes | QUANTITY | 0.97+ |
Salesforce | ORGANIZATION | 0.96+ |
both | QUANTITY | 0.95+ |
single source | QUANTITY | 0.93+ |
one thing | QUANTITY | 0.93+ |
US Air Force | ORGANIZATION | 0.93+ |
Webster | ORGANIZATION | 0.92+ |
S3 | COMMERCIAL_ITEM | 0.92+ |
Enterprise Data Architecture | ORGANIZATION | 0.91+ |
Io Tahoe | PERSON | 0.91+ |
Oracle | ORGANIZATION | 0.9+ |
15 years old | QUANTITY | 0.9+ |
Io-Tahoe | PERSON | 0.89+ |
12 years old | QUANTITY | 0.88+ |
Tableau | TITLE | 0.87+ |
four weeks | QUANTITY | 0.86+ |
S3 Buckets | COMMERCIAL_ITEM | 0.84+ |
Covid | PERSON | 0.81+ |
Data Architecture | ORGANIZATION | 0.79+ |
JSON | TITLE | 0.79+ |
Senior Vice President | PERSON | 0.78+ |
24 seven development role | QUANTITY | 0.77+ |
last 35 years | DATE | 0.77+ |
both laughs | QUANTITY | 0.75+ |
Io-Tahoe | TITLE | 0.73+ |
each | QUANTITY | 0.72+ |
loans | QUANTITY | 0.71+ |
zero | QUANTITY | 0.71+ |
Paula D'Amico, Webster Bank
>> Narrator: From around the Globe, it's theCube with digital coverage of Enterprise Data Automation, an event series brought to you by Io-Tahoe. >> Everybody, we're back. And this is Dave Vellante, and we're covering the whole notion of Automated Data in the Enterprise. And I'm really excited to have Paula D'Amico here. Senior Vice President of Enterprise Data Architecture at Webster Bank. Paula, good to see you. Thanks for coming on. >> Hi, nice to see you, too. >> Let's start with Webster Bank. You guys are kind of a regional bank, I think New York, New England, and I believe it's headquartered out of Connecticut. But tell us a little bit about the bank. >> Webster Bank is a regional bank serving Boston, Connecticut, and New York. Very focused on Westchester and Fairfield County. They are a really highly rated regional bank for this area. They hold quite a few awards for the area for being supportive of the community, and are really moving forward technology wise. They really want to be a data driven bank, and they want to move into a more robust group. >> We got a lot to talk about. So data driven is an interesting topic, and your role is really Senior Vice President of Enterprise Data Architecture. So you've got a big responsibility as it relates to kind of transitioning to this digital, data driven bank, but tell us a little bit about your role in your organization. >> Currently, today, we have a small group that is just working toward moving into a more futuristic, more data driven data warehouse. That's our first item. And then the other item is to drive new revenue by anticipating what customers do when they go to the bank or when they log in to their account, to be able to give them the best offer. And the only way to do that is if you have timely, accurate, complete data on the customer, and know what's really of great value to offer them, whether that's a new product or something to help them continue to grow their savings or grow their investments. >> Okay, and I really want to get into that. But before we do, and I know you're sort of partway through your journey, you got a lot to do. But I want to ask you about Covid, how are you guys handling that? You had the government coming down with small business loans and PPP, a huge volume of business, and sort of data was at the heart of that. How did you manage through that? >> We were extremely successful, because we have a big, dedicated team that understands where their data is and was able to switch much faster than a larger bank, to be able to offer the PPP loans out to our customers with lightning speed. And part of that is we adapted Salesforce very quickly, as we've had Salesforce in house for over 15 years. Pretty much that was the driving vehicle to get our PPP loans in, and then developing logic quickly, but it was a 24/7 development role to get the data moving and help our customers fill out the forms. And a lot of that was manual, but it was a large community effort. >> Think about that too. The volume was probably much higher than the volume of loans to small businesses that you're used to granting, and then also the initial guidelines were very opaque. You really didn't know what the rules were, but you were expected to enforce them. And then finally, you got more clarity. So you had to essentially code that logic into the system in real time. >> I wasn't directly involved, but part of my data movement team was, and we had to change the logic overnight.
So it was released on a Friday night; we pushed our first set of loans through, and then the logic coming from the government changed, and we had to redevelop our data movement pieces again, and we designed them and sent them back through. So it was definitely kind of scary, but we were completely successful. We hit a very high peak. Again, I don't know the exact number, but it was in the thousands of loans, from little loans to very large loans, and not one customer who applied and followed the right process and filled out the right amount failed to get what they needed. >> Well, that is an amazing story and really great support for the region, the Connecticut and Boston area. So that's fantastic. I want to get into the rest of your story now. Let's start with some of the business drivers in banking. I mean, obviously online. A lot of people have sort of joked that many of the older people who kind of shunned online banking and would love to go into the branch and see their friendly teller had no choice, during this pandemic, but to go online. So that's obviously a big trend. You mentioned the data driven data warehouse, I want to understand that, but at the top level, what are some of the key business drivers that are catalyzing your desire for change? >> The ability to give a customer what they need at the time when they need it. And what I mean by that is that we have customer interactions in multiple ways. And I want the customer to be able to walk into a bank or go online and see the same format, and be able to have the same feel, the same love, and also to be able to offer them the next best offer for them. Whether they're looking for a new mortgage or looking to refinance, or whatever it is, they have that data, we have the data, and they feel comfortable using it. And that's the untethered banker attitude: whatever my banker is holding and whatever the person is holding in their phone, that is the same, and it's comfortable. So they don't feel that they've walked into the bank and have to fill out different paperwork compared to just doing it on their phone. >> You actually do want the experience to be better. And it is in many cases. Now you weren't able to do this with your existing, I guess mainframe based, Enterprise Data Warehouses. Is that right? Maybe talk about that a little bit? >> Yeah, we were definitely able to do it with what we have today, the technology we're using, but one of the issues is that it's not timely. And you need a timely process to be able to get the customers to understand what's happening. You need a timely process so we can enhance our risk management. We can apply it to fraud issues and things like that. >> Yeah, so you're trying to get more real time. The traditional EDW, it's sort of a science project. There's a few experts that know how to get it, so you line up, the demand is tremendous. And then oftentimes by the time you get the answer, it's outdated. So you're trying to address that problem. So part of it is really the cycle time, the end to end cycle time, that you're compressing. And then there's, if I understand it, residual benefits that are pretty substantial from a revenue opportunity, other offers that you can make to the right customer, that you maybe know through your data. Is that right? >> Exactly. It drives new customers to new opportunities, it enhances risk management, it optimizes the banking process, and then obviously, it creates new business.
And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app. And by creating near real time data, the data warehouse team is giving the lines of business the ability to work on the next best offer for that customer as well. >> But Paula, we're inundated with data sources these days. Are there other data sources that maybe you had access to before, but perhaps the backlog of ingesting and cleaning and cataloging and analyzing, maybe that backlog was so great that you couldn't perhaps tap some of those data sources. Do you see the potential to increase the data sources and hence the quality of the data, or is that sort of premature? >> Oh, no. Exactly. Right. So right now, we ingest a lot of flat files from our mainframe type of front end system that we've had for quite a few years. But now we're moving to the cloud, moving off-prem, into like an S3 bucket, where we can process that data and get that data faster by using real time tools to move that data into a place where, like, Snowflake could utilize that data, or we can give it out to our market. Right now we still work in batch mode. So we're doing 24 hours. >> Okay. So when I think about the data pipeline, and the people involved, maybe you could talk a little bit about the organization. You've got, I don't know if you have data scientists or statisticians, I'm sure you do. You got data architects, data engineers, quality engineers, developers, etc. And oftentimes, practitioners like yourself will stress about, hey, the data is in silos. The data quality is not where we want it to be. We have to manually categorize the data. These are all sort of common data pipeline problems, if you will. Sometimes we use the term DataOps, which is sort of a play on DevOps applied to the data pipeline. Can you just sort of describe your situation in that context? >> Yeah, so we have a very large data ops team. And everyone who is working on the data part of Webster's Bank has been there 13 to 14 years. So they get the data, they understand it, they understand the lines of business. We have data quality issues, just like everybody else does, but we have places where that gets cleansed. And we're moving forward, but there was very much siloed data. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs. What we're working towards now is giving them more self service, giving them the ability to access the data in a more robust way. And it's a single source of truth. So they're not pulling the data down into their own, like, Tableau dashboards, and then pushing the data back out. So they're going to, I don't want to say a central repository, but more of a robust repository that's controlled across multiple avenues, where multiple lines of business can access that data. Does that help? >> Got it, yes. And I think that one of the key things that I'm taking away from your last comment is the cultural aspects of this. By having the data scientists in the line of business, the lines of business will feel ownership of that data as opposed to pointing fingers, criticizing the data quality. They really own that problem, as opposed to saying, well, it's Paula's problem.
>> Well, my problem is I have data engineers, data architects, database administrators, traditional data reporting people. And some customers that I have, business customers in the lines of business, just want to subscribe to a report; they don't want to go out and do any data science work. And we still have to provide that. So we still want to provide them some kind of regimen where they wake up in the morning, they open up their email, and there's the report that they subscribe to, which is great, and it works out really well. And one of the reasons why we purchased Io-Tahoe was that I would have the ability to give the lines of business the ability to do search within the data. And it will read the data flows and data redundancy and things like that, and help me clean up the data. And also, to give it to the data analysts who say, all right, they just asked me for this certain report. And it used to take, okay, four weeks: we're going to go look at the data and then we'll come back and tell you what we can do. But now with Io-Tahoe, they're able to look at the data, and then in one or two days they'll be able to go back and say, yes, we have the data, this is where it is. This is where we found it. These are the data flows that we found also, which is what I call the birth of a column. It's where the column was created, and where it went to live as a teenager. (laughs) And then it went to die, where we archive it. And, yeah, it's this cycle of life for a column. And Io-Tahoe helps us do that. And data lineage is done all the time. It just takes a very long time, and that's why we're using something that has AI and machine learning in it. It's accurate, it does it the same way over and over again. If an analyst leaves, you're able to utilize something like Io-Tahoe to be able to do that work for you. Does that help? >> Yeah, so got it. So a couple things there. In researching Io-Tahoe, it seems like one of the strengths of their platform is the ability to visualize data, the data structure, and actually dig into it, but also see it. And that speeds things up and gives everybody additional confidence. And then the other piece is essentially infusing AI or machine intelligence into the data pipeline, which is really how you're attacking automation. And you're saying it's repeatable, and then that helps the data quality, and you have this virtuous cycle. Maybe you could sort of affirm that and add some color, perhaps. >> Exactly. So let's say that I have seven lines of business that are asking me questions, and one of the questions they'll ask me is, we want to know if this customer is okay to contact. And there's different avenues: you can go online and say do not contact me, you can go to the bank and say, I don't want email, but I'll take texts, and I want no phone calls. All that information. So, seven different lines of business asked me that question in different ways. One said, "No, okay to contact," the other one says, "Customer 123," all of these. Each project before I got there used to be siloed. So one customer would be 100 hours for them to do that analytical work, and then another analyst would do another 100 hours on the other project. Well, now I can do that all at once. And I can do those types of searches and say, yes, we already have that documentation.
Here it is, and this is where you can find where the customer has said, "No, I don't want to get access from you by email," or "I've subscribed to get emails from you." >> Got it. Okay. And then I want to go back to the cloud a little bit. So you mentioned S3 buckets. So you're moving to the Amazon cloud, at least; I'm sure you're going to get a hybrid situation there. You mentioned Snowflake. What was sort of the decision to move to the cloud? Obviously, Snowflake is cloud only. There's not an on-prem version there. So what precipitated that? >> Alright, so I've been in the data and IT information field for the last 35 years. I started in the US Air Force, and have moved on since then. And my experience with Bob Graham was with Snowflake, working with GE Capital. And that's where I met up with the team from Io-Tahoe as well. And so it's proven. So there's a couple of things. One is Informatica, which is worldwide known to move data. They have two products: they have the on-prem and the off-prem. I've used the on-prem and off-prem, they're both great. And it's very stable, and I'm comfortable with it. Other people are very comfortable with it. So we picked that as our batch data movement. We're moving toward probably HVR. It's not a total decision yet, but we're moving to HVR for real time data, which is change data capture, and it moves the data into the cloud. And then, so we're envisioning this right now, in which you're in S3 and you have all the data that you could possibly want. And that's JSON, all of that; everything is sitting in the S3 to be able to move it through into Snowflake. And Snowflake has proven to have stability. You only need to learn and train your team with one thing. AWS is completely stable at this point too. So all these avenues, if you think about it, go through from this, your data lake, which I would consider your S3, even though it's not a traditional data lake like you can touch, like a Progressive or Hadoop. And then into Snowflake, and then from Snowflake into sandboxes, and so your lines of business and your data scientists just dive right in. That makes a big win. And then using Io-Tahoe with the data automation, and also their search engine, I have the ability to give the data scientists and data analysts a way where they don't need to talk to IT to get accurate information, or completely accurate information, from the structure. And we'll be right back. >> Yeah, so talking about Snowflake and getting up to speed quickly. I know from talking to customers you can get from zero to Snowflake very fast, and then it sounds like Io-Tahoe is sort of the automation cloud for your data pipeline within the cloud. Is that the right way to think about it? >> I think so. Right now I have Io-Tahoe attached to my on-prem. And I want to attach it to my off-prem eventually. So I'm using Io-Tahoe data automation right now to bring in the data, and to start analyzing the data flows to make sure that I'm not missing anything, and that I'm not bringing over redundant data. The data warehouse that I'm working off of is on-prem. It's an Oracle database, and it's 15 years old. So it has extra data in it. It has things that we don't need anymore, and Io-Tahoe's helping me shake out that extra data that does not need to be moved into my S3. So it's saving me money when I'm moving from on-prem to off-prem. >> And so that was a challenge prior, because you couldn't get the lines of business to agree what to delete, or what was the issue there?
>> Oh, it was more than that. Each line of business had their own structure within the warehouse. And then they were copying data between each other, and duplicating the data and using that. So there could possibly be three tables that have the same data in them, but it's used for different lines of business. Using Io-Tahoe, we have identified over seven terabytes in the last two months of data that is just repetitive. It's the same exact data, just sitting in a different schema. And that's not easy to find if you only understand one schema, the one that's reporting for that line of business. >> More bad news for the storage companies out there. (both laugh) So far. >> It's cheap. That's what we were telling people. >> And it's true, but you still would rather not waste it, you'd like to apply it to drive more revenue. And so, I guess, let's close on where you see this thing going. Again, I know you're sort of partway through the journey. Maybe you could sort of describe where you see the phases going and really what you want to get out of this thing down the road, mid-term, longer term. What's your vision for your data driven organization? >> I want the bankers to be able to walk around with an iPad in their hand, and be able to access data for that customer really fast and be able to give them the best deal that they can get. I want Webster to be right there on top, being able to add new customers, and to be able to serve our existing customers who have had bank accounts since they were 12 years old and are now multi-whatever. I want them to be able to have the best experience with our bankers. >> That's awesome. That's really what I want as a banking customer. I want my bank to know who I am, anticipate my needs, and create a great experience for me. And then let me go on with my life. And so that will follow. Great story. Love your experience, your background and your knowledge. I can't thank you enough for coming on theCube. >> Now, thank you very much. And you guys have a great day. >> All right, take care. And thank you for watching everybody. Keep right there. We'll take a short break and be right back. (gentle music)
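The flow Paula describes, JSON files landing in an S3 data lake and then being loaded into Snowflake where the lines of business and data scientists can dive in, can be sketched in a few lines of Python. The sketch below is a minimal illustration only, not Webster Bank's actual implementation: the account, credentials, warehouse, stage, and table names are all placeholders, and it assumes an external stage has already been defined over the S3 landing bucket.

```python
# Minimal sketch: load JSON files staged in S3 into Snowflake.
# Every name and credential below is a placeholder, not a real Webster Bank object.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",    # assumed Snowflake account identifier
    user="example_user",
    password="example_password",
    warehouse="LOAD_WH",
    database="DATA_LAKE",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Assumes @RAW.S3_LANDING is an external stage pointing at the S3 bucket,
    # and RAW.CUSTOMER_EVENTS has a single VARIANT column to hold each JSON document.
    cur.execute("""
        COPY INTO RAW.CUSTOMER_EVENTS
        FROM @RAW.S3_LANDING/customer_events/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'CONTINUE'
    """)
    for row in cur.fetchall():
        print(row)  # one load-status row per staged file
finally:
    conn.close()
```

In the setup described in the interview, Informatica would land the batch files and HVR the change data capture feed into that same S3 landing area, and a scheduled load along these lines would keep the Snowflake side current for the line-of-business sandboxes.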
SUMMARY :
to you by Io-Tahoe. And I'm really excited to of a regional I think and they want to move it relates to kind of transitioning And the only way to do But I want to ask you about Covid, and get the data moving And then finally, you got more clarity. and filled out the right amount. and really great support for the region, and being able to have the experience to be better. to be able to get the customers that know how to get it. and it's to optimize the banking process, and analyzing maybe the backlog was and get that data faster and the people involved, And everyone that who is working is the cultural aspects of this the ability to do search within the data. and you have this virtual cycle. and one of the questions And then I want to go back in the S3 to be able to move it Is that the right way to think about it? and to start analyzing the data flows and duplicating the data and using that. More bad news for the That's what we were telling people. and really what you want and to be able to serve And so that follow. And you guys have a great day. And thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Paula D'Amico | PERSON | 0.99+ |
Paula | PERSON | 0.99+ |
Connecticut | LOCATION | 0.99+ |
Westchester | LOCATION | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
24 hours | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
13 | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
100 hours | QUANTITY | 0.99+ |
Bob Graham | PERSON | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Webster Bank | ORGANIZATION | 0.99+ |
GE Capital | ORGANIZATION | 0.99+ |
first item | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
two products | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
three tables | QUANTITY | 0.99+ |
Each line | QUANTITY | 0.99+ |
first set | QUANTITY | 0.99+ |
two days | QUANTITY | 0.99+ |
DevOps | TITLE | 0.99+ |
Webster bank | ORGANIZATION | 0.99+ |
14 years | QUANTITY | 0.99+ |
over 15 years | QUANTITY | 0.99+ |
seven cars | QUANTITY | 0.98+ |
each project | QUANTITY | 0.98+ |
Friday night | DATE | 0.98+ |
New England | LOCATION | 0.98+ |
Io-Tahoe | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Webster's Bank | ORGANIZATION | 0.98+ |
one schema | QUANTITY | 0.97+ |
Fairfield County | LOCATION | 0.97+ |
One | QUANTITY | 0.97+ |
one customer | QUANTITY | 0.97+ |
over seven terabytes | QUANTITY | 0.97+ |
Salesforce | ORGANIZATION | 0.96+ |
both | QUANTITY | 0.95+ |
single source | QUANTITY | 0.93+ |
one thing | QUANTITY | 0.93+ |
US Air Force | ORGANIZATION | 0.93+ |
Webster | ORGANIZATION | 0.92+ |
S3 | COMMERCIAL_ITEM | 0.92+ |
Enterprise Data Architecture | ORGANIZATION | 0.91+ |
Oracle | ORGANIZATION | 0.9+ |
15 years old | QUANTITY | 0.9+ |
Io-Tahoe | PERSON | 0.89+ |
12 years old | QUANTITY | 0.88+ |
Tableau | TITLE | 0.87+ |
four weeks | QUANTITY | 0.86+ |
S3 Buckets | COMMERCIAL_ITEM | 0.84+ |
Covid | PERSON | 0.81+ |
Data Architecture | ORGANIZATION | 0.79+ |
JSON | TITLE | 0.79+ |
Senior Vice President | PERSON | 0.78+ |
24 seven development role | QUANTITY | 0.77+ |
last 35 years | DATE | 0.77+ |
both laughs | QUANTITY | 0.75+ |
Io-Tahoe | TITLE | 0.73+ |
each | QUANTITY | 0.72+ |
loans | QUANTITY | 0.71+ |
zero | QUANTITY | 0.71+ |
Amazon cloud | ORGANIZATION | 0.65+ |
last two months | DATE | 0.65+ |
Eric Herzog, IBM | Cisco Live EU Barcelona 2020
>> Announcer: Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back to Barcelona, everybody, we're here at Cisco Live, and you're watching theCUBE, the leader in live tech coverage. We go to the events and extract the signal from the noise. This is day one, really, we started day zero yesterday. Eric Herzog is here, he's the CMO and Vice President of Storage Channels. Probably been on theCUBE more than anybody, with the possible exception of Pat Gelsinger, but you might surpass him this week, Eric. Great to see you. >> Great to see you guys, love being on theCUBE, and really appreciate the coverage you do of the entire industry. >> This is a big show for you guys. I was coming down the escalator, I saw up next Eric Herzog, so I sat down and caught the beginning of your presentation yesterday. You were talking about multicloud, which we're going to get into, you talked about cybersecurity, well let's sort of recap what you told the audience there and really let's dig in. >> Sure, well, first thing is, IBM is a strong partner of Cisco, I mean they're a strong partner of ours both ways. We do all kinds of joint activities with them on the storage side, but in other divisions as well. The security guys do stuff with Cisco, the services guys do a ton of stuff with Cisco. So Cisco's one of our valued partners, which is why we're here at the show, and obviously, as you guys know, with a lot of the coverage you do to the storage industry, that is considered one of the big storage shows, you know, in the industry, and has been a very strong show for IBM Storage and what we do. >> Yeah, and I feel like, you know, it brings together storage folks, whether it's data protection, or primary storage, and sort of is a collection point, because Cisco is a very partner-friendly organization. So talk a little bit about how you go to market, how you guys see the multicloud world, and what each of you brings to the table. >> Well, so we see it in a couple of different facts. So first of all, the day of public cloud only or on-prem only is long gone. There are a few companies that use public cloud only, but yeah, when you're talking mid-size enterprise, and certainly into let's say the global 2500, that just doesn't work. So certain workloads reside well in the cloud, and certain workloads reside well on-prem, and there's certain that can back and forth, right, developed in a cloud but then move it back on, for example, highly transactional workload, once you get going on that, you're not going to run that on any cloud provider, but that doesn't mean you can't develop the app, test the app, out in the cloud and then bring it back on. So we also see that the days of a cloud provider for big enterprise and again up to the 2500 of the global fortunes, that's not true either, because just as with other infrastructure and other technologies, they often have multiple vendors, and in fact, you know, what I've seen from talking to CIOs is, if they have three cloud providers, that's low. Many of 'em talk about five or six, whether that be for legal reasons, whether that be for security reasons, or of course the easy one, which is, we need to get a good price, and if we just use one vendor, we're not going to get a good price. 
And cloud is mature, cloud's not new anymore, the cloud is pretty old, it's basically, sort of, version three of the internet, (laughs) and so, you know, I think some of the procurement guys are a little savvy about why would you only use Amazon or only use Azure or only use Google or only use IBM Cloud. Why not use a couple to keep them, you know, which is kind of normal when procurement gets involved, and say, cloud is not new anymore, so that means procurement gets involved. >> Well, and it's kind of, comes down to the workload. You got certain clouds that are better, you have Microsoft if you want collaboration, you have Amazon if you want infrastructure for devs, on-prem if you want, you know, family jewels. So I got a question for you. So if you look at, you know, it's early 2020, entering a new decade, if you look at the last decade, some of the big themes. You had the consumerization of IT, you had, you know, Web 2.0, you obviously had the big data meme, which came and went and now it's got an AI. And of course you had cloud. So those are the things that brought us here over the last 10 years of innovation. How do you see the next 10 years? What are going to be those innovation drivers? >> Well I think one of the big innovations from a cloud perspective is like, truly deploying cloud. Not playing with the cloud, but really deploying the cloud. Obviously when I say cloud, I would include private cloud utilization. Basically, when you think on-prem in my world, on-prem is really a private cloud talking to a public cloud. That's how you get a multicloud, or, if you will, a hybrid cloud. Some people still think when you talk hybrid, like literally, bare metal servers talking to the cloud, and that just isn't true, because when you look at certainly the global 2500, I can't think any of them what isn't essentially running a private cloud inside their own walls, and then, whether they're going out or not, most do, but the few that don't, they mimic a public cloud inside because of the value they see in moving workloads around, easy deployment, and scale up and scale down, whether that be storage or servers or whatever the infrastructure is, let alone the app. So I think what you're going to see now is a recognization that it's not just private cloud, it's not just public cloud, things are going to go back and forth, and basically, it's going to be a true hybrid cloud world, and I also think with the cloud maturity, this idea of a multicloud, 'cause some people think multicloud is basically private cloud talking to public cloud, and I see multicloud as not just that, but literally, I'm a big company, I'm going to use eight or nine cloud providers to keep everybody honest, or, as you just said, Dave, and put it out, certain clouds are better for certain workloads, so just as certain storage or certain servers are better when it's on-prem, that doesn't surprise us, certain cloud vendors specialize in the apps. >> Right, so Eric, we know IBM and Cisco have had a very successful partnership with the VersaStack. If you talk about in your data center, in IBM Storage, Cisco networking in servers. When I hear both IBM and Cisco talking about the message for hybrid and multicloud, they talk the software solutions you have, the management in various pieces and integration that Cisco's doing. Help me understand where VersaStack fits into that broader message that you were just talking about. 
>> So we have VersaStack solutions built around primarily our FlashSystems, which use our Spectrum Virtualize software. Spectrum Virtualize not only supports IBM arrays, but over 500 other arrays that are not ours. But we also have a version of Spectrum Virtualize that will work with AWS and IBM Cloud and sits in a virtual machine at the cloud providers. So whether it be test and dev, whether it be migration, whether it be business continuity and disaster recovery, or whether it be what I'll call logical cloud air gapping, we can do that for ourselves, when it's not a VersaStack, out to the cloud and back. And then we also have solutions in the VersaStack world that are built around our Spectrum Scale product for big data and AI. So Spectrum Scale goes out and back to the cloud, Spectrum Virtualize does as well, and those are embedded on the arrays that come in a VersaStack solution. >> I want to bring it back to cloud a little bit. We were talking about workloads and sort of what Furrier calls horses for courses. IBM has a public cloud, and I would put forth that your wheelhouse, IBM's wheelhouse for cloud workload, is the hybrid mission-critical work that's being done on-prem today in the large IBM customer base, and to the extent that some of that work's going to move into the cloud, the logical place to put that is the IBM Cloud. Here's why. You could argue speeds and feeds and features and function all day long. The migration costs of moving data and workloads from wherever, on-prem into a cloud or from on-prem into another platform, are onerous. Any CIO will tell you that. So to the extent that you can minimize those migration costs, the business case for, in IBM's case, staying within that blue blanket is going to be overwhelmingly positive relative to having to migrate. That's my premise. So I wonder if you could comment on that, and talk about, you know, what's happening in that hybrid world specifically with your cloud? >> Well, yeah, the key thing from our perspective is we are basically running block data or file data, and we just see ourselves sitting in IBM Cloud. So when you've got a FlashSystem product or you've got our Elastic Storage System 3000, when you're talking to the IBM Cloud, you think you're talking to another one of our boxes sitting on-prem. So what we do is make that transition completely seamless, and moving data back and forth is seamless, and that's because we take a version of our software and stick it in a virtual machine running at the cloud provider, in this case IBM Cloud. So the movement of data back and forth, whether it be with our FlashSystem product or even our DS8000, which can do the same thing, is very easy for an IBM customer to move to an IBM Cloud. That said, just to make sure that we're covering, and in the year of multicloud, remember the IBM Cloud division just released the Multicloud Manager, you know, second half of last year, recognizing that while they want people to focus on the IBM Cloud, they're being realistic that they're going to have multiple cloud vendors. So we've followed that mantra too, and made sure that we've followed what they're doing. As they were going to multicloud, we made sure we were supporting other clouds besides them. But from IBM to IBM Cloud it's easy to do, it's easy to traverse, and basically, our software sits on the other side, and it basically is as if we're talking to an array on prem, but we're really not, we're out in the cloud. We make it seamless.
>> So testing my premise, I mean again, my argument is that the complexity of that migration is going to determine in part what cloud you should go to. If it's a simple migration, and it's better, and the customer decides okay it's better off on AWS, you as a storage supplier don't care. >> That is true. >> It's agnostic to you. IBM, as a supplier of multicloud management doesn't care. I'm sure you'd rather have it run on the IBM Cloud, but if the customer says, "No, we're going to run it "over here on Azure", you say, "Great. "We're going to help you manage that experience across clouds". >> Absolutely. So, as an IBM shareholder, we wanted to go to IBM Cloud. As a realist, with what CIOs say, which is I'm probably going to use multiple clouds, we want to make sure whatever cloud they pick, hopefully IBM first, but they're going to have a secondary cloud, we want to make sure we capture that footprint regardless, and that's what we've done. As I've said for years and years, a partial PO is better than no PO. So if they use our storage and go to a competitor of IBM Cloud, while I don't like that as a shareholder, it's still good for IBM, 'cause we're still getting money from the storage division, even though we're not working with IBM Cloud. So we make it as flexible as possible for the customer, The Multicloud Manager is about customer choice, which is leading with IBM Cloud, but if they want to use a, and again, I think it's a realization at IBM Corporate that no one's going to use just one cloud provider, and so we want to make sure we empower that. Leading with IBM Cloud first, always leading with IBM Cloud first, but we want to get all of their business, and that means, other areas, for example, the Red Hat team. Red Hat works with every cloud, right? And they don't really necessarily lead with IBM Cloud, but they work with IBM Cloud all right, but guess what, IBM gets the revenue no matter what. So I don't see it's like the old traditional component guy with an OEM deal, but it kind of sort of is. 'Cause we can make money no matter what, and that's good for the IBM Corporation, but we do always lead with IBM Cloud first but we work with everybody. >> Right, so Eric, we'd agree with your point that data is not just going to live one place. One area that there's huge opportunity that I'd love to get your comment here on is edge. So we talked about, you know, the data center, we talked about public cloud. Cisco's talking a lot about their edge strategy, and one of our questions is how will they enable their partners and help grow that ecosystem? So love to hear your thoughts on edge, and any synergies between what Cisco's doing and IBM in that standpoint. >> So the thing from an edge perspective for us, is built around our new Elastic Storage System 3000, which we announced in Q4. And while it's ideal for the typical big data and AI workloads, runs Spectrum Scale, we have many a customers with Scale that are exabytes in production, so we can go big, but we also go small. It's a compact 2U all-flash array, up to 400 terabytes, that can easily be deployed at a remote location, an oil well, right, or I should say, a platform, oil platform, could be deployed obviously if you think about what's going on in the building space or I should say the skyscraper space, they're all computerized now. 
So you'd have that as an edge processing box, whether that be for the heating systems, the security systems, we can do that at the edge, but because of Spectrum Scale you could also send it back to whatever their core is, whether that be their core data center or whether they're working with a cloud provider. So for us, the ideal solution for us, is built around the Elastic Storage System 3000. Self-contained, two rack U, all-flash, but with Spectrum Scale on it, versus what we normally sell with our all-flash arrays, which tends to be our Spectrum Virtualize for block. This is file-based, can do the analytics at the edge, and then move the data to whatever target they want. So the source would be the ESS 3000 at the edge box, doing processing at the edge, such as an oil platform or in, I don't know what really you call it, but, you know, the guys that own all the buildings, right, who have all this stuff computerized. So that's at the edge, and then wherever their core data center is, or their cloud partner they can go that way. So it's an ideal solution because you can go back and forth to the cloud or back to their core data center, but do it with a super-compact, very high performance analytics engine that can sit at the edge. >> You know, I want to talk a little bit about business. I remember seven years ago, we covered, theCUBE, the z13 announcement, and I was talking to a practitioner at a very large bank, and I said, "You going to buy this thing?", this is the z13, you know, a couple of generations ago. He says, "Yeah, absolutely, I'll buy it sight unseen". I said, "Really, sight unseen?" He goes, "Yeah, no question. "By going to the upgrade, I'm able to drive "more transactions through my system "in a certain amount of time. "That's dropping revenue right to my bottom line. "It's a no-brainer for me." So fast forward to the z15 announcement in September in my breaking analysis, I said, "Look, IBM's going to have a great Q4 in systems", and the thing you did in storage is you synchronized, I don't know if it was by design or what, you synchronized the DS8000, new 8000 announcement with the z15, and I predicted at the time you're going to see an uptick in both the systems business, which we saw, huge, 63%, and the storage business grew I think three points as well. So I wonder if you can talk about that. Was that again by design, was it a little bit of luck involved, and you know, give us an update. >> So that was by design. When the z14 came out, which is right when I first come over from EMC, one of the things I said to my guys is, "Let's see, we have "the number one storage platform on the mainframe "in revenue, according to the analysts that check revenue. "When they launch a box, why are we not launching with them?" So for example, we were in that original press release on the z14, and then they ran a series of roadshows all over the world, probably 60. I said, "Well don't you guys do the roadshows?", and my team said, "No, we didn't do that on z12 and 13". I said, "Well were are now, because we're the number one "mainframe storage company". Why would we not go out there, get 20 minutes to speak, the bulk of it would be on the Zs. So A, we did that of course with this launch, but we also made sure that on day one launch, we were part of the launch and truly integrated. Why IBM hadn't been doing for a while is kind of beyond me, especially with our market position. 
So it helped us with a great quarter, helped us in the field, now by the way, we did talk about other areas that grew publicly, so there were other areas, particularly all-flash. Now we do have an all-flash 8900 of course, and the high-end tape grew as well, but our overall all-flash, both at the high end, mid range and entry, all grew. So all-flash for us was a home run. Yeah, I would argue that, you know, on the Z side, it was grand slam home run, but it was a home run even for the entry flash, which did very, very well as well. So, you know, we're hitting the right wheelhouse on flash, we led with the DS8900 attached to the Z, but some of that also pulls through, you get the magic fairy dust stuff, well they have an all-flash array on the Z, 'cause last time we didn't have an all, we had all-flash or hybrids, before that was hybrid and hard drive. This time we just said, "Forget that hybrid stuff. "We're going all-flash." So this helps, if you will, the magic fairy dust across the entire portfolio, because of our power with the mainframe, and you know, even in fact the quarter before, our entry products, we announced six nines of availability on an array that could be as low cost as $US16,000 for RAID 5 all-flash array, and most guys don't offer six nines of availability at the system level, let alone we have 100% availability guaranteed. We do charge extra for that, but most people won't even offer that on entry product, we do. So that's helped overall, and then the Z was a great launch for us. >> Now you guys, you obviously can't give guidance, you have to be very careful about that, but I, as I say, predicted in September that you'd have a good quarter in systems and storage both. I'm on the record now I'm going to say that you're going to continue to see growth, particularly in the storage side, I would say systems as well. So I would look for that. The other thing I want to point out is, you guys, you sell a lot of storage, you sell a lot of storage that sometimes the analysts don't track. When you sell into cloud, for example, IBM Storage Cloud, I don't think you get credit for that, or maybe the services, the global services division. So there's a big chunk of revenue that you don't get credited for, that I just want to highlight. Is that accurate? >> Yeah, so think about it, IBM is a very diverse company, all kinds of acquisitions, tons of different divisions, which we document publicly, and, you know, we do it differently than if it was Zoggan Store. So if I were Zoggan Store, a standalone storage company, I'd get all credit for supporting services, there's all kinds of things I'd get credit for, but because of IBM's history of how the company grew and how company acquired, stuff that is storage that Ed Walsh, or GM, does own, it's somewhat dispersed, and so we don't always get credit on it publicly, but the number we do in storage is substantially larger than what we report, 'cause all we really report is our storage systems business. Even our storage software, which one of the analysts that does numbers has us as the number two storage software company, when we do our public stuff, we don't take credit for that. 
Now, luckily that analyst publishes a report on the numbers side, and we are shown to be the number two storage software company in the world, but when we do our financial reporting, that, because just the history of IBM, is spread out over other parts of the company, even though our guys do the work on the sales side, the marketing side, the development side, all under Ed Walsh, but you know, part of that's just the history of the company, and all the acquisitions over years and years, remember it's a 100-year-old company. So, you know, just we don't always get all the credit, but we do own it internally, and our teams take and manage most of what is storage in the minds of storage analysts like you guys, you know what storage is, most of that is us. >> I wanted to point that out because a lot of times, practitioners will look at the data, and they'll say, oh wow, the sales person of the competitor will come in and say, "Look at this, we're number one!" But you really got to dig in, ask the questions, and obviously make the decisions for yourself. Eric, great to see you. We're going to see you later on this week as well we're going to dig into cyber. Thanks so much for coming back. >> Great, well thank you, you guys do a great job and theCUBE is literally the best at getting IT information out, particularly all the shows you do all over the world, you guys are top notch. >> Thank you. All right, and thank you for watching everybody, we'll be back with our next guest right after this break. We're here at Cisco Live in Barcelona, Dave Vellante, Stu Miniman, John Furrier. We'll be right back.
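The edge pattern Eric describes, analytics running next to the data on a compact box with only the results moving back to the core data center or a cloud target, can be pictured with a small sketch. The code below is a generic illustration under assumed names (a local sensor directory and a destination bucket); it is not how Spectrum Scale or the ESS 3000 actually moves data, just the shape of the process-at-the-edge workflow.

```python
# Generic sketch of "process at the edge, ship only summaries to the core".
# Directory and bucket names are assumptions for illustration; this is not
# Spectrum Scale's own data movement, which happens inside the filesystem.
import json
import statistics
from pathlib import Path

import boto3

EDGE_DATA_DIR = Path("/data/sensors")      # assumed local capture directory at the edge
TARGET_BUCKET = "example-core-results"     # assumed bucket at the core or cloud target

def summarize(readings):
    """Reduce a list of {'value': ...} readings to a few aggregate numbers."""
    values = [r["value"] for r in readings]
    return {
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }

def process_and_ship():
    s3 = boto3.client("s3")
    for raw_file in EDGE_DATA_DIR.glob("*.json"):
        readings = json.loads(raw_file.read_text())
        summary = summarize(readings)
        key = f"edge-summaries/{raw_file.stem}.json"
        s3.put_object(Bucket=TARGET_BUCKET, Key=key, Body=json.dumps(summary))

if __name__ == "__main__":
    process_and_ship()
```

The point of the pattern is bandwidth and locality: the raw platform or building data stays and gets analyzed at the edge, and only the much smaller results traverse the link back to the core data center or the cloud partner.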
SUMMARY :
covering Cisco Live 2020, brought to you by Cisco but you might surpass him this week, Eric. and really appreciate the coverage you do and caught the beginning of your presentation yesterday. and obviously, as you guys know, Yeah, and I feel like, you know, and in fact, you know, what I've seen from talking So if you look at, you know, it's early 2020, and that just isn't true, because when you look at that broader message that you were just talking about. So Spectrum Scale goes out and back to the cloud, So to the extent that you can minimize the Multicloud Manager, you know, second half of last year, is going to determine in part what cloud you should go to. "We're going to help you manage that experience across clouds". and that's good for the IBM Corporation, So we talked about, you know, the data center, the security systems, we can do that at the edge, and the thing you did in storage is you synchronized, and you know, even in fact the quarter before, I'm on the record now I'm going to say in the minds of storage analysts like you guys, We're going to see you later on this week as well particularly all the shows you do all over the world, All right, and thank you for watching everybody,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Eric | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
September | DATE | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
eight | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
20 minutes | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
$US16,000 | QUANTITY | 0.99+ |
63% | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Zoggan Store | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Barcelona | LOCATION | 0.99+ |
VersaStack | TITLE | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
z14 | COMMERCIAL_ITEM | 0.99+ |
Dave | PERSON | 0.99+ |
z15 | COMMERCIAL_ITEM | 0.99+ |
ORGANIZATION | 0.99+ | |
this week | DATE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
GM | ORGANIZATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.98+ |
six | QUANTITY | 0.98+ |
DS8000 | COMMERCIAL_ITEM | 0.98+ |
early 2020 | DATE | 0.98+ |
both ways | QUANTITY | 0.98+ |
seven years ago | DATE | 0.98+ |
ESS 3000 | COMMERCIAL_ITEM | 0.98+ |
both | QUANTITY | 0.98+ |
IBM Corporation | ORGANIZATION | 0.98+ |
100-year-old | QUANTITY | 0.98+ |
six nines | QUANTITY | 0.97+ |
z13 | COMMERCIAL_ITEM | 0.97+ |
multicloud | ORGANIZATION | 0.97+ |
DS8900 | COMMERCIAL_ITEM | 0.97+ |
lps | QUANTITY | 0.97+ |
z12 | COMMERCIAL_ITEM | 0.96+ |
Chris Wright, Red Hat | Red Hat Summit 2018
>> Narrator: Live from San Francisco. It's theCUBE! Covering RedHat Summit 2018. Brought to you by Red Hat. >> Alright welcome back, this is theCUBE's exclusive coverage of Red Hat 2018. I'm John Furrier, the co host of theCUBE with John Troyer, co-founder of TechReckoning Advisory Firm. Next guest is Chris Wright, Vice President and CTO Chief of Technology of his Red Hat. Great to see you again, thanks for joining us today. >> Yeah, great to be here. >> Day one of three days of CUBE coverage, you got, yesterday had sessions over there in Moscone South, yet in classic Red Hat fashion, good vibes, things are rocking. Red Hat's got a spring to their step, making some good calls technically. >> Chris: That's right. >> Kubernetes' one notable, Core OS Acquisition, really interesting range, this gives, I mean I think people are now connecting the dots from the tech side, but also now on the business side, saying "Okay we can see now some, a wider market opportunity for Red Hat". Not just doing it's business with Linux, software, you're talking about a changing modern software architecture, for application developers. I mean, this is a beautiful thing, I mean. >> Chris: It's not just apps but it's the operator, you know, operation side as well, so we've been at it for a long time. We've been doing something that's really similar for quite some time, which is building a platform for applications, independent from the underlying infrastructure, in the Linux days I was X86 hardware, you know, you get this HeteroGenius hardware underneath, and you get a consistent standardized application run time environment on top of Linux. Kubernetes is helping us do that at a distributive level. And it's taken some time for the industry to kind of understand what's going on, and we've been talking about hybrid cloud for years and, you really see it real and happening and it's in action and for us that distributed layer round Kubernetes which just lights up how do you manage distributed applications across complex infrastructure, makes it really real. >> Yeah it's also timing's everything too right? I mean, good timing, that helps, the evolution of the business, you always have these moments and these big waves where you can kind of see clunking going on, people banging against each other and you know, the glue layers developing, and then all of a sudden snaps into place, and then it just scales, right? So you're starting to see that, we've seen this in other ways, TCPIP, Linux itself, and you guys are certainly making that comparison, being Red Hat, but what happens next is usually an amazing growth phase. Again, small little, and move the ball down the field, and then boom, it opens up. As a CTO, you have to look at that 20 mile stair now, what's next? What's that wave coming that you're looking at in the team that you have on Red Hat's side and across your partners? What's the wave next? >> Well there's a lot of activity going on that's beyond what we're building today. And so much of it, first of all, is happening in Open Source. So that itself is awesome. Like we're totally tuned into these environments, it's core to who we are, it's our DNA to be involved in these Open Source communities, and you look across all of the different projects and things like machine learning and blockchain, which are really kind of native Open Source developments, become really relevant in ways that we can change how we build functionality and build business, and build business value in the future. 
So, those are the things that we look at, what's emerging out of the Open Source communities, what's going to help continue to accelerate developers' ability to quickly build applications? Operations team's ability to really give that broad scale, policy level view of what's going on inside your infrastructure to support those applications, and all the data that we're gathering and needing to sift through and build value from inside the applications, that's very much where we're going. >> Well I think we had a really good example of machine learning used in an everyday enterprise application this morning, they kicked off the keynote, talking about optimizing the schedule and what sessions were in what rooms, you know, using an AI tool right? >> Chris: That's right. >> And so, that's reality as you look at, is that going to be the new reality as you're looking into the future of building in these kind of machine learning opportunities into everyday business applications that, you know, in the yesteryear would've been just some, I don't know, visual basic, or whatever, depending on how far back you look, right? You know, is that really going to be a reality in the enterprise? It seems so. >> It is, absolutely. And so what we're trying to do is build the right platforms, and build the right tools, and then interfaces to those platforms and tools to make it easier and easier for developers to build, you know, what we've been calling "Intelligent Apps", or applications that take advantage of the data, and the insights associated with that data, right in the application. So, the scheduling optimization that you saw this morning in the keynote is a great example of that. Starting with basic rules engine, and augmenting that with machine learning intelligence is one example, and we'll see more and more of that as the sophisticated tools that are coming out of Open Source communities building machine learning platforms, start to specialize and make it easier and easier to do specific machine learning tasks within an application. So you don't have to be a data scientist and an app developer all in one, you know, that's, there's different roles and different responsibilities, and how do we build, develop, life cycle managed models is one question, and how do we take advantage of those models and applications is another question, and we're really looking at that from a Red Hat perspective. >> John F: And the enterprises are always challenged, they always (mumbles), Cloud Native speaks to both now, right? So you got hybrid cloud and now multi-cloud on the horizon, set perfectly up with Open Shift's kind of position in that, kind of the linchpin, but you got, they're still two different worlds. You got the cloud-native born in the cloud, and that's pretty much a restart-up these days, and then you've got legacy apps with container, so the question is, that people are asking is, okay, I get the cloud-native, I see the benefits, I know what the investment is, let's do it upfront, benefits are horizontally scalable, asynchronous, et cetera et cetera, but I got legacy. I want to do micro-servicing, I want to do server-less, do I re-engineer that or just containers, what's the technical view and recommendation from Red Hat when you say, when the CIO says or enterprise says, "Hey I want to go cloud native for over here and new staff, but I got all this old staff, what do I do?". Do I invest more region, or just containerize it, what's the play? >> I think you got to ask kind of always why? 
Why you're doing something. So, we hear a lot, "Can I containerize it?", often the answer is yes. A different question might be, "What's the value?", and so, a containerized application, whether it's an older application that's stateful or whether it's a newer cloud-native application (mumbles), horizontally scalable, and all the great things, there's value potentially in just the automation around the APIs that allow you to lifecycle manage the application. So if the application itself is still continuing to change, we have some great examples with some of our customers, like KeyBank, doing what we call the "fast-moving monolith". So it's still a traditional application, but it's containerized and then you build a CI/CD model around it, and you have automation on how you deliver and deploy to production. There's value there, there's also value in your existing system, and maybe building some different services around the legacy system to give you access, API access, to data in that system. So different ways to approach that problem, I don't think there's a one size fits all. >> So Chris, some of this is also a cultural and a process shift. I was impressed this morning, we've already talked with two Red Hat customers, Macquarie and Amadeus, and you know Macquarie was talking about, "Oh yeah we moved 40 applications in a year, you know, onto Open Shift", and it turns out they had already started to be containerized and dockerized and, oh yeah yeah you know, that is standard operating procedure, for that set of companies. There's a long tail of folks who are still dealing with the rest of the stuff we've had to deal, the stack we've had to deal with for years. How is Red Hat, how are you looking at this kind of cultural shift? It's nice that it's real, right? It's not like we're talking about microservices, or some sort of future, you know, Jetsons sort of thing, that's going to save us all, it's here today and they're doing it. You know, how are you helping companies get there? >> So we have a practice that we put in place that we call the "Open Innovation Lab". And it's very much an immersive practice to help our customers first get experience building one of these cloud native applications. So we start with a business problem, what are you trying to solve? We take that through a workshop, which is a multi-week workshop, really to build on top of a platform like Open Shift, real code that's really useful for that business, and those engineers that go through that process can then go back to their company and be kind of the change agent for how do we build the internal cultural shift and the appreciation for Agile development methodologies across our organization, starting with something practical, tangible and real. That's one great example of how we can help, and I think part of it is just helping customers understand it isn't just technology, I'm a technologist so there's part of me that feels pain to say that, but the practical reality is there's whole organizational shifts, there's mindset and cultural changes that need to happen inside the organization to take advantage of the technology that we put in place to build that optimization. >> John F: And roles are changing too, I see the system admin kind of administrative things getting automated away, moving toward more of an operating role. I heard some things last week at KubeCon in Copenhagen, Denmark, and I want to share some quotes and I want to get your reaction. >> Alright. 
>> This is the hallway, I won't attribute the names but, these were quotes, I need, quote, "I need to get away from VPNs and firewalls. I need user and application layer security with unphishable access, otherwise I'm never safe". Second quote, "Don't confuse lift and shift with running a cloud-native global platform. Lot of actors in this system already running seamlessly. Versus say a VMware running environment wherein vCenter running in a data center is an example of a lift and shift". So the comments are one, for (mumbles) cloud, you need to have some sort of security model, and then two, you know we did digital transformation before with VMs, that was a different world, but the new world's not a lift and shift, it's a re-architecture to a cloud-native global platform. Your reaction to those two things, and what that means to customers as they think about what they're going to look like, as they build that bridge to the future. >> The security piece is critical, so every CIO that we're talking to, it's top of mind, nobody wants to be on the front page of The Wall Street Journal for the wrong reasons. And so understanding, as you build a microservices software architected application, the components themselves are exposed as services, and those services have APIs that become potentially part of the attack surface. Thinking of it in terms of VPNs and firewalls is the kind of traditional way that we manage security at the edge. Hardened at the edge, soft in the middle isn't an acceptable way to build a security policy around applications that are internally exposing parts of their APIs to other parts of the application. So, looking at it from an application use case perspective, which portions of the application need to be able to talk to one another, and it's part of why something like Istio is so exciting, because it builds right into the platform the notion of mutual authentication between services. So that you know you're talking to a service that you're allowed to talk to. Encryption associated with that, so that you get another level of security for data in motion, and all of that is not looking at what is the VPN or what is the VLAN tag, or what is the encapsulation ID, and thinking layer two, layer three security, it's really application layer, and thinking in terms of that policy, which pieces of the application have to talk to each other, and nobody else can talk to that service unless it's, you know, understood that that's an important part of how the application works. So I think, really agree, and you could even say DevSecOps to me is something that I've come around to. Initially I thought it was a bogus term and now I see the value in considering security at every step of build, test and delivery of an application. Lift and shift, totally different topic. What does it mean to lift and shift? And I think there's still, some people want to say there's no value in lift and shift, and I don't fully agree, I think there's still value in moving, and modernizing the platform without changing the application, but ultimately the real value does come in re-architecting, and so there's that balance. What can you optimize by moving? And where does that free up resources to invest in that real next generation application re-architecting? >> So Chris, you've talked about machine learning, right? Huge amounts of data, you've just talked about security, we've talked about multi-cloud, to me that says we might have an issue in the future with the data layer. 
How are people thinking about the data layer, where it lives, on prem, in the cloud, think about GDPR compliance, you know, all that sort of good stuff. You know, how are you and Red Hat, how are you asking people to think about that? >> So, data management is a big question. We build storage tooling, we understand how to put the bytes on disk, and persist, and maintain the storage, it's a different question what are the data services, and what is the data governance, or policy around placement, and I think it's a really interesting part of the ecosystem today. We've been working with some research partners in the Massachusetts Open Cloud and Boston University on a project called "Cloud Dataverse", and it has a whole policy question around data. 'Cause there, scientists want to share data sets, but you have to control and understand who you're sharing your data sets with. So, it's definitely a space that we are interested in, and understand that there's a lot of work to be done there, and GDPR just kind of shines a light right on it and says policy and governance around where data is placed is actually fundamental and important, and I think it's an important part, because you've seen some of the data issues recently in the news, and you know, we got to get a handle on where data goes, and ultimately, I'd love to see a place where I'm in control of how my data is shared with the rest of the world. >> John F: Yeah, certainly the trend. So a final question for you, Open Source, absolute greatness going on, more and more good things are happening in projects, and bigger than ever before, I mean machine learning's a great example, seeing not just code snippets but code bases being, you know, TensorFlow jumps out at me (mumbles), what are you doing here this year that's new and different from an Open Source standpoint, but also from a Red Hat standpoint, that's notable that people should pay attention to? >> Well, one of the things that we're focused on is that platform layer, how do we enable a machine learning workload to run well on our platform? So it starts actually at the very bottom of the stack, hardware enablement. You got to get GPUs functional, you got to get them accessible to virtual machine based applications, and container based applications, so that's kind of table stakes. Accelerate a machine learning workload to make it usable, and valuable, to an enterprise by reducing the training and inference times for a machine learning model. Some of the next questions are how do we embed that technology in our own products? So you saw Access Insights this morning, talking about how we take machine learning, look at all of the data that we're gathering from the systems that our customers are deploying, and then derive insights from those and then feed those back to our customers so they can optimize the infrastructure that they're building and running and maintaining, and then, you know, the next step is that intelligent application. How do we get that machine learning capability into the hands of the developer, and pair the data scientist with the developers so you build these intelligent applications, taking advantage of all the data that you're gathering as an enterprise, and turning that into value as part of your application development cycle. 
So those are the areas that we're focused on for machine learning, and you know, some of that is partnering, you know, talking through how do we connect some of these services from Open Shift to the cloud service providers that are building some of these great machine learning tools, so. >> Any new updates on (mumbles) the success of Red Hat just in the past two years? You see the growth, that correlates, that was your (mumbles) Open Shift, and a good calls there, positioned perfectly, analysts, financial analysts are really giving you guys a lot of props on Wall Street, about the potential revenue growth opportunities on the business side, what's it like now at Red Hat? I mean, do you look back and say, "Hey, it was only like three years ago we did this", and I mean, the vibes are good, I mean share some inside commentary on what's happening inside Red Hat. >> It's really exciting. I mean, we've been working on these things for a long time. And, the simplest example I have is the combination of tools like the JBoss Middleware Suite and Linux, well they could run well together and we have a lot of customers that combine those, but when you take it to the next step, and you build containerized services and you distribute those broadly, you got a container platform, you got middleware components, you know, even providing functionality as services, you see how it all comes together and that's just so exciting internally. And at the same time we're growing. And a big part of-- >> John F: Customers are using it. >> Customers are using it, so putting things into production is critical. It's not just exciting technology but it's in production. The other piece is we're growing, and as we grow, we have to maintain the core of who we are. There's some humility that's involved, there's some really core Open Source principles that are involved, and making sure that as we continue to grow, we don't lose sight of who we are, really important thing for our internal culture, so. >> John F: Great community driven, and great job. Chris, thanks for coming on theCUBE, appreciate it. Chris Wright, CTO of Red Hat, sharing his insights here on theCUBE. Of course, bringing you all a live action as always here in San Francisco in Moscone West, for Red Hat Summit 2018, we'll be right back. (electronic music) (intense music)
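The "intelligent application" idea Chris describes, a basic rules engine augmented with a machine learning model, as in the keynote's session-scheduling example, can be sketched in a few lines. This is only a minimal illustration under assumed inputs, not Red Hat's implementation; the room and session fields and the predict_attendance function are hypothetical placeholders.

```python
# Hypothetical sketch of a rules engine augmented with an ML score:
# hard constraints filter the options, a trained model ranks the rest.
from typing import Callable, Dict, List

def place_session(session: Dict, rooms: List[Dict],
                  predict_attendance: Callable[[Dict, Dict], float]) -> Dict:
    # Rules step: non-negotiable constraints, expressed as plain checks.
    feasible = [r for r in rooms
                if r["capacity"] >= session["min_seats"]
                and (r["has_av"] or not session["needs_av"])]
    if not feasible:
        raise ValueError("no room satisfies the hard constraints")
    # ML step: predict_attendance stands in for any trained regressor
    # that scores a (session, room) pairing from historical data.
    return max(feasible, key=lambda room: predict_attendance(session, room))
```

The point of the split is that the hard business rules stay auditable while the learned model only decides among options the rules already allow.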
Bina Hallman & Steven Eliuk, IBM | IBM Think 2018
>> Announcer: Live, from Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with Peter Burris. Our wall-to-wall coverage, this is day two. Everything AI, Blockchain, cognitive, quantum computing, smart ledger, storage, data. Bina Hallman is here, she's the Vice President of Offering Management for Storage and Software Defined. Welcome back to theCUBE, Bina. >> Bina: Thanks for having me back. >> Steven Eliuk is here. He's the Vice President of Deep Learning in the Global Chief Data Office at IBM. >> Thank you sir. >> Dave: Welcome to the Cube, Steve. Thanks, you guys, for coming on. >> Pleasure to be here. >> That was a great introduction, Dave. >> Thank you, appreciate that. Yeah, so this has been quite an event, consolidating all of your events, bringing your customers together. 30,000, 40,000, too many people to count. >> Very large event, yes. >> Standing room only at all the sessions. It's been unbelievable, your thoughts? >> It's been fantastic. Lots of participation, lots of sessions. We brought, as you said, all of our conferences together and it's a great event. >> So, Steve, tell us more about your role. We were talking off the camera, we've had Inderpal Bhandari on here before, Chief Data Officer at IBM. You're in that office, but you've got other roles around Deep Learning, so explain that. >> Absolutely. >> Sort of multi-tool star here. >> For sure, so, roles and responsibility at IBM and the Chief Data Office, kind of two pillars. We focus in the Deep Learning group on foundation platform components. So, how to accelerate the infrastructure and platform behind the scenes, to accelerate the ideation-to-product phase. We want data scientists to be very effective, and for us to ensure our projects move very, very quickly. That said, I mentioned projects, so on the applied side, we have a number of internal use cases across IBM. And it's not just a handful, it's in the order of hundreds, and those applied use cases are part of the cognitive plan, per se, and each one of those is part of the transformation of IBM into a cognitive enterprise. >> Okay, now, we were talking to Ed Walsh this morning, Bina, about how you collaborate with colleagues in the storage business. We know you guys have been growing, >> Bina: That's right. >> It's the fourth straight quarter, and that doesn't even count some of the stuff that you guys ship on the cloud in storage, >> That's right, that's right. >> Dave: So talk about the collaboration across the company. >> Yeah, we've had some tremendous collaboration, you know, the broader IBM and bringing all of that together, and that's one of the things that, you know, we're talking about here today with Steve and team is really as they built out their cognitive architecture to be able to then leverage some of our capabilities and the strengths that we bring to the table as part of that overall architecture. And it's been a great story, yeah. >> So what would you add to that, Steve? >> Yeah, absolutely refreshing. You know I've built up supercomputers in the past, and, specifically for deep learning, and coming on board at IBM about a year ago, seeing the elastic storage solution, or server. >> Bina: Yeah, elastic storage server, yep. >> It handles a number of different aspects of my pipeline, very uniquely, so for starters, I don't want to worry about rolling out new infrastructure all the time. 
I want to be able to grow my team, to grow my projects, and that's what's nice about ESS is it's extensible, I'm able to roll out more projects, more people, multi-tenancy et cetera, and it supports us effectively. Especially, you know, it has very unique attributes like the read-only performance feed, and random access of data, is very unique to the offering. >> Okay, so, if you're a customer of Bina's, right? >> I am, 100%. >> What do you need for infrastructure for Deep Learning, AI, what is it, you mentioned some attributes before, but, take it down a little bit. >> Well, the reality is, there's many different aspects and if anything kind of breaks down, then the data science experience breaks down. So, we want to make sure that everything from the interconnect of the pipelines is effective, that, you heard Jensen earlier today from Nvidia, we've got to make sure that we have compute devices that, you know, are effective for the computation that we're rolling out on them. But that said, if those GPUs are starved by data, that we don't have the data available which we're drawing from ESS, then we're not making effective use of those GPUs. It means we have to roll out more of them, et cetera, et cetera. And more importantly, the time for experimentation is elongated, so that whole idea-to-product timeline that I talked about is elongated. If anything breaks down, so, we've got to make sure that the storage doesn't break down, and that's why this is awesome for us. >> So let me, um, especially from a deep learning standpoint, let me throw kind of a little bit of history at you, and tell me if you think, let me hear your thoughts. So, years ago, the data was put as close to the application as possible, about 10, 15 years ago, we started breaking the data from the application, the storage from the application, and now we're moving the algorithm down as close to the data as possible. >> Steve: Yeah. >> At what point in time do we stop calling this storage, and start acknowledging that we're talking about a fabric that's actually quite different, because we put a lot more processing power as close to the data as possible. We're not just storing. We're really doing truly, deeply distributed computing. What do you think? >> There's a number of different areas where that's coming from. Everything from switches, to storage, to memory that's doing computing very close to where the data actually resides. Still, I think that, you know, this is, you can look all the way back to the Google File System. Moving computation to where the data is, as close as possible, so you don't have to transfer that data. I think that as time goes on, we're going to get closer and closer to that, but still, we're limited by the capacity of very fast storage. NVMe, very interesting technology, still limited. You know, how much memory do we have on the GPUs? 16 gigs, 24 is interesting, 48 is interesting, the models that I want to train are in the 100s of gigabytes. >> Peter: But you can still parallelize that. >> You can parallelize it, but there's not really anything that's true model parallelism out there right now. There's some hacks and things that people are doing, but. I think we're getting there, it's still some time, but moving it closer and closer means we don't have to spend the power, the latency, et cetera, to move the data. 
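Steve's point about GPUs being starved by data usually comes down to the input pipeline. As one hedged illustration only, the sketch below shows a generic PyTorch-style loader that stages batches in background workers so the device stays busy; the dataset, model, and parameter values are placeholders, not IBM's or the ESS stack's actual configuration.

```python
# Minimal sketch: keep the GPU fed by reading and staging batches in
# parallel worker processes while the device is computing.
import torch
from torch.utils.data import DataLoader, Dataset

class PlaceholderDataset(Dataset):
    def __init__(self, num_items):
        self.num_items = num_items
    def __len__(self):
        return self.num_items
    def __getitem__(self, idx):
        # Real code would read and decode a sample from shared storage here.
        return torch.randn(3, 224, 224), 0

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
loader = DataLoader(PlaceholderDataset(100_000), batch_size=64,
                    num_workers=8,      # parallel host-side readers
                    pin_memory=True,    # page-locked buffers for fast copies
                    prefetch_factor=4)  # batches queued ahead per worker

model = torch.nn.Conv2d(3, 16, kernel_size=3).to(device)  # stand-in model
for images, labels in loader:
    images = images.to(device, non_blocking=True)  # overlaps copy with compute
    outputs = model(images)
    # ... loss, backward pass, and optimizer step elided ...
```

The design point is the same one Steve makes: if the loader side of this loop cannot produce batches faster than the device consumes them, adding GPUs only adds idle hardware.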
>> So, does that mean that the rate of increase of data and the size of the objects we're going to be looking at, is still going to exceed the rate of our ability to bring algorithms and storage, or algorithms and data together? What do you think? >> I think it's getting closer, but I can always just look at the bigger problem. I'm dealing with 30 terabytes of data for one of the problems that I'm solving. I would like to be using 60 terabytes of data. If I could, if I could do it in the same amount of time, and I wasn't having to transfer it. With that said, if you gave me 60, I'd say, "I really wanted 120." So, it doesn't stop. >> David: (laughing) You're one of those kind of guys. >> I'm definitely one of those guys. I'm curious, what would it look like? Because what I see right now is it would be advantageous, and I would like to do it, but I ran 40,000 experiments with 30 terabytes of data. It would be four times the amount of transfer if I had to run that many experiments of 120. >> Bina, what do you think? What is the fundamental, especially from a software defined side, what does the fundamental value proposition of storage become, as we start pushing more of the intelligence close to the data? >> Yeah, but you know the storage layer fundamentally is software defined, you still need that setup, protocols, and the file system, the NFS, right? And, so, some of that still becomes relevant, even as you kind of separate some of the physical storage or flash from the actual compute. I think there's still a relevance when you talk about software defined storage there, yeah. >> So you don't expect that there's going to be any particular architectural change? I mean, NVMe is going to have a real impact. >> NVMe will have a real impact, and there will be this notion of composable systems and we will see some level of advancement there, of course, and that's around the corner, actually, right? So I do see it progressing from that perspective. >> So what's underneath it all, what actually, what products? >> Yeah, let me share a little bit about the product. So, what Steve and team are using is our elastic storage server. So, I talked about software defined storage. As you know, we have a very complete set of software defined storage offerings, and within that, our strategy has always been allow the clients to consume the capabilities the way they want. A software only on their own hardware, or as a service, or as an integrated solution. And so what Steve and team are using is an integrated solution with our spectrum scale software, along with our flash and power nine server power systems. And on the software side from spectrum scale, this is a very rich offering that we've had in our portfolio. Highly scalable file system, it's one of the solutions that powers a lot of our supercomputers. A project that we are still in the process and have delivered on around Whirl, our national labs. So same file system combined with a set of servers and flash system, right? Highly scalable, erasure coding, high availability as well as throughput, right? 40 gigabytes per second, so that's the solution, that's the storage and system underneath what Steve and team are leveraging. >> Steve, you talk about, "you want more," what else is on Bina's to-do-list from your standpoint? >> Specifically targeted at storage, or? >> Dave: Yeah, what do you want from the products? 
Well, I think long-stretch goals are multi-tenancy and the wide array of dimensions that, especially in the Chief Data Office, that we're dealing with. We have so many different business units, so many of those enterprise problems, in the order of hundreds, so how do you effectively use that storage medium while driving so many different users? I think it's still hard, I think we're doing it a hell of a lot better than we ever have, but it's still, it's an open research area. How do you do that? And especially, there's unique attributes towards deep learning, like, most of the data is read only to a certain degree. When data changes there's some consistency checks that could be done, but really, for my experiment that's running right now, it doesn't really matter that it's changed. So there's a lot of nuances specific to deep learning that I would like exploited if I could, and that's some of the interactions that we're working on to kind of alleviate those pains. >> I was at a CDO conference in Boston last October, and Inderpal Bhandari was there and he presented this enterprise data architecture, and there were probably about three or four hundred CDOs, chief data officers, in the room, to sort of explain that. Can you sort of summarize what that is, and how it relates to sort of what you do on a day to day basis, and how customers are using it? >> Yeah, for sure, so the architecture is kind of like the backbone and rules that kind of govern how we work with the data, right? So, the realities are, there's no sort of blueprint out there. What works at Google, or works at Microsoft, what works at Amazon, that's very unique to what they're doing. Now, IBM has a very unique offering as well. We have so many, we're a composition of many, many different businesses put together. And now, with the Chief Data Office that's come to light across many organizations, like you said, at the conference, three to 400 people, the requirements are different across the organizations. So, bringing the data together is kind of one of the big attributes of it, decreasing the number of silos, making a monolithic kind of reliable, accessible entity that various business units can trust, and that it's governed behind the scenes to make sure that it's adhering to everyone's policies, that their own specific business unit has deemed to be their policy. We have to adhere to that, or the data won't come. And the beauty of the data is, we've moved into this cognitive era, data is valuable but only if we can link it. If the data is there, but there's no linkages there, what do I do with it? I can't really draw new insights. I can't draw, all those hundreds of enterprise use cases, I can't build new value in them, because I don't have any more data. It's all about linking the data, and then looking for alternative data sources, or additional data sources, and bringing that data together, and then looking at the new insights that come from it. So, in a nutshell, we're doing that internally at IBM to help our transformation. But at the same time creating a blueprint that we're making accessible to CDOs around the world, and our enterprise customers around the world, so they can follow us on this new adventure. New adventure being, you know, two years old, but. >> Yeah, sure, but it seems like, if you're going to apply AI, you've got to have your data house in order to do that. So this sounds like a logical first step, is that right? >> Absolutely, 100%. 
And, the realities are, there's a lot of people that are kicking the tires and trying to figure out the right way to do that, and it's a big investment. Drawing out large sums of money to kind of build this hypothetical better area for data, you need to have a reference design, and once you have that you can actually approach the C-level suite and say, "Hey, this is what we've seen, this is the potential, "and we have an architecture now, "and they've already gone down all the hard paths, "so now we don't have to go down as many hard paths." So, it's incredibly empowering for them to have that reference design and learning from our mistakes. >> Already proven internally now, bringing it to our enterprise alliance. >> Well, and so we heard Jenny this morning talk about incumbent disruptors, so I'm kind of curious as to what, any learnings you have there? It's early days, I realize that, but when you think about, the discussions, are banks going to lose control of the payment systems? Are retail stores going to go away? Is owning and driving your own vehicle going to be the exception, not the norm? Et cetera, et cetera, et cetera, you know, big questions, how far can we take machine intelligence? Have you seen your clients begin to apply this in their businesses, incumbents, we saw three examples today, good examples, I thought. I don't think it's widespread yet, but what are you guys seeing? What are you learning, and how are you applying that to clients? >> Yeah, so, I mean certainly for us, from these new AI workloads, we have a number of clients and a number of different types of solutions. Whether it's in genomics, or it's AI deep learning in analyzing financial data, you know, a variety of different types of use cases where we do see clients leveraging the capabilities, like spectrum scale, ESS, and other flash system solutions, to address some of those problems. We're seeing it now. Autonomous driving as well, right, to analyze data. >> How about a little road map, to end this segment? Where do you want to take this initiative? What should we be looking for as observers from the outside looking in? >> Well, I think drawing from the endeavors that we have within the CDO, what we want to do is take some of those ideas and look at some of the derivative products that we can take out of there, and how do we kind of move those in to products? Because we want to make it as simple as possible for the enterprise customer. Because although, you see these big scale companies, and all the wonderful things that they're doing, what we've had the feedback from, which is similar to our own experiences, is that those use cases aren't directly applicable for most of the enterprise customers. Some of them are, right, some of the stuff in vision and brand targeting and speech recognition and all that type of stuff are, but at the same time the majority and the 90% area are not. So we have to be able to bring down sorry, just the echoes, very distracting. >> It gets loud here sometimes, big party going on. >> Exactly, so, we have to be able to bring that technology to them in a simpler form so they can make it more accessible to their internal data scientists, and get better outcomes for themselves. And we find that they're on a wide spectrum. Some of them are quite advanced. It doesn't mean just because you have a big name you're quite advanced, some of the smaller players have a smaller name, but quite advanced, right? 
So, there's a wide array, so we want to make that accessible to these various enterprises. So I think that's what you can expect, you know, the reference architecture for the cognitive enterprise data architecture, and you can expect to see some of the products from those internal use cases come out to some of our offerings, like, maybe IGC or information analyzer, things like that, or maybe the Watson studio, things like that. You'll see it trickle out there. >> Okay, alright Bina, we'll give you the final word. You guys, business is good, four straight quarters of growth, you've got some tailwinds, currency is actually a tailwind for a change. Customers seem to be happy here, final word. >> Yeah, no, we've got great momentum, and I think 2018 we've got a great set of roadmap items, and new capabilities coming out, so, we feel like we've got a real strong set of future for our IBM storage here. >> Great, well, Bina, Steve, thanks for coming on theCUBE. We appreciate your time. >> Thank you. >> Nice meeting you. >> Alright, keep it right there everybody. We'll be back with our next guest right after this. This is day two, IBM Think 2018. You're watching theCUBE. (techno jingle)
Nithin Eapen, Arcadia Crypto Ventures | Polycon 2018
>> Announcer: Live from Nassau in the Bahamas, it's the Cube. Covering Polycon '18. Brought to you by Polymath. >> Welcome back, everyone. This is the Cube's exclusive coverage. We're live in the Bahamas, here for day two of our wall to wall coverage of Polycon '18. It's a security token conference, securitizing, you know, token economics, cryptography, cryptocurrency. All this is in play. Token economics powering the world. New investors are here. I'm John Furrier, Dave Vellante. Our next guest is Nithin Eapen Who's the Chief Investment Officer for Arcadia Crypto Ventures. Welcome to the Cube. >> Thank you very much gentlemen. >> Thanks for joining us. >> Thanks for coming out. >> Excited to have you on for a couple reasons. One, we've been talking since day one, lot of hallway conversations. Small, intimate conference, so we've had a chance to talk. Folks haven't heard that yet, so let's kind of get some of the key things we discussed. You are very bullish and long on cryptocurrency and Blockchain. You guys are doing a variety of deals. You're also advising companies and you guys are rolling your sleeves up. So kind of interesting dynamics. So take a minute to explain what you guys are doing, your model. >> Okay. >> And we're going to try to get some of your partners on later. You have a great team. >> Yep. >> Experienced pros in investing. And you got wales, you got pros. So you got a nice balance. >> Yes we do. >> So take a minute to explain Arcadia, your approach and philosophy. >> Okay. Okay. So Arcadia Crypto Ventures primarily we are a private fund. We invest other money. We believe in the whole crypto space. We believe this market is expanding and it is growing and it's going to be the biggest thing that ever happened. It's going to be this fusion of internet and PC and mobile. And everything is going to go batshit, okay. We believe in the whole tokenization world. Everything is going to be tokenized. So as a whole, we believe this space is going to go very big. Okay, so that's one piece and because of that, we invest in the space, the whole space. Not one bitcoin or Ethereum, but everything in the space that makes sense. People who have a use case. Now the second piece of it is we advised great founders. We want to get founders to come out and build these new things because this is the new internet of the new era and people have to come out and build these things. And so many of them are traditional businesses and we have to explain to them why this matters, why you should come to this space and be decentralized and reach the whole world. Because initially, the internet came. The idea of the internet was everybody gets information. Now information did get everywhere. You don't have to worry that the mailman is there to deliver your email anymore. Even if it's a Sunday, your mail will get delivered. So that part was good. But now you have these few companies that's holding all your data. It's okay for most people, but they do censor a lot of people. So that is one point. That censorship. We want a censorship-resistant world where everybody's ideas get out. So that way, we believe that's how this whole internet space itself is going to change because of that. See this is if I explained in one word, this is the greatest sociopolitical economic experimental revolution ever that has happened in humankind. >> In the history of the world. I mean this is important. I'd said that on my opening today. >> Uh-huh. >> Dave and I were riffing and Dave and I have always been studying. 
We've been entre-- We are entrepreneurs. We live in Silicon Valley and Boston and so you're seeing structural change going on. So it's not just make money. >> Nope. >> There's mission-based, younger demographics. So you're starting to see really great stuff. So I want to ask you specifically, 'cause you guys are unique in the sense that you're investing in a lot of things. But startups, pure-play startups? >> Which had only one path before, or two paths. >> Right, yeah. >> Cashflow financing and venture capital. >> Okay. >> So that's a startup model. The growing companies that are transforming their growth business with token economics, those would have long odds. Those are the best deals. >> Okay. Then there's like the third deal. Well we're out of business, throw the Hail Mary, repivot. (laughs) Right, so categorically, you're starting to see the shape of the kinds of swim lanes of deals. >> Okay. >> Okay, pivoting, that Hail Mary. Okay, you can evaluate that pretty much straight up on that. Startups need nurturing, right? >> Yeah. >> So the VC model works really well for startups because the product market fit's going to be developed. You got cloud computing so you can go faster. So you guys are nurturing startups. At the same time, you're also doing growth deals. >> We do. >> Explain the dynamic between those kinds of deals, how you guys approach them. What's the dynamic? What are the key things that you're bringing? Is it just packaging? Is it tech? So on, so forth. >> So with a lot of people, when they are on the advisory side, primarily we look at the founder and the tech. What are they trying to solve? That is key. If it's a turd, you can't package it. No matter how you package it, that's not going to work. >> You can't package dog you-know-what. >> Yeah, exactly, okay. >> So that's one thing that we look at. The founders and their idea. Now their idea, can it be decentralized? Some models are meant to be centralized maybe, so it doesn't work, okay. Like, see it all boils down to-- Let me break it down. We look at it. Okay, do you have an asset? Behind the scenes, is there an asset? Is that asset being transferred among parties? If you have an asset and it's being transferred, is there some central mechanism in between? Because if there is a central mechanism in between, that means you're going to be paying rent to that. Okay, all right. You have these things. Okay, great. Now you have your asset. Do you have that in-between party? But in some of them, let's say you have money in your pocket. You walk, it falls down. Somebody else picks up the money. It's his. It's a bearer asset, okay? So that's where bitcoin solved a very big problem. It was a bearer asset. >> Unless they hack your wallet, then they take your money. >> Right. That happens in real life too, right? Somebody can take money from your wallet. So it can happen in bitcoin. They can hack your wallet. All right. So bitcoin was solving that problem. Now the second piece is a registered asset. And what I mean by registered asset is take your car. You buy your car, you go to the DMV, stand in line, register. There's a record of data at the DMV in their central database. If somebody steals your car, the car is still not his. It's only if they can change the record over there in the DMV. Then it becomes his. Now there, maybe you do want the DMV to be there. Or maybe we can-- But the DMV being there, now you have a problem. They're going to charge you rent and they can decide, oh you know what? 
John, I'm not going to give him a license or a car in the state of California. They can decide, right? So that is where now you decide do you want to go the centralized route or the decentralized route? So we break it down to the asset. >> So there could be a fit for decentralized. I get that. >> Yeah. >> Let me ask you a tactical question, because I know a lot of entrepreneurs out there. They're watching and they'll hear this. A big strategic decision up front is, obviously, token selection. >> So it's pretty clear that security token works really well for funding and whatnot. Then there's a role for security tokens. I mean utility tokens. >> Yes. >> So do people, should they start from a risk management standpoint, a new company. So let's just say we had an existing business. Entrepreneur says, "Hey, you know what? We're doing well. We're doing 10 million dollars in revenue and I want to do tokenize 'cause we're a decentralized business. That's a perfect fit." Do they start a new company or do they just use the security token with their existing stable company? >> I would suggest, usually at that time, that's more of a legal question at that time. I don't know if I'm a lawyer to answer that. I tell them, you have a business. The business model is going well. If you're happy with it, let that be there. Make a new company. If your business model was not doing good, you might as well start from there because you figure out it's not working. But again, at that time, we tried to come up with this question. Are you trying to put the old wine in a new bottle kind of thing? If the wine is old, it ain't going to work. You have to get to that realization. So, here. >> People are being sued. So mainly the legal question is do I want to risk being. >> All right, let me hop in here. I wanted to ask, go back to something you said about censorship. I had this conversation with my kid the other day. I was explaining Google essentially censors your search results based on what they think you're going to click on. >> They do that. >> He's like no and then he thought about it and he's like okay, yeah they kind of do that. Okay, so that's an underpinning of we're going to take back the internet, right? >> Yeah. >> Okay, I just wanted to sort of clarify that. From an investment philosophy standpoint, you're technical, yet you don't exclusively vet or invest in infrastructure protocols and dig deep into what-- You read the white papers, but there are some folks out there hedge funds, et cetera. All they do is just invest in utility tokens. They're trying to invest in stuff that's going to be infrastructure for the next internet. Your philosophy is different. You're saying, we talked about this, we don't really know what's going to win, but we make prudent investments in areas that we think will win. We like to spread it around a little bit. Why that philosophy? May reduce your return, but it also reduces your risk. Maybe you could describe that a little bit. >> Sure. See, in general, picking winners in the long run has been-- It's a proved fact that nobody could pick winners. Like if you take active hedge fund managers. Active hedge fund managers, in the long run, if you take 10 to 20 years, they lag the S and P. So if you had money, if you give it to an active hedge fund manager, and so that you just had to buy the S and P, you will have beaten 93%. >> That's Buffet's advice. Buy an S and P 500. >> Buffet made a bet for a billion dollars or something where, you know. 
So take Warren Buffet for that matter, his fund is lagging too. In reality, all his stock investments are down. He put it in IBM at $200 after eight years, it's at the 143 or something, right? So realistically,-- There's a lot of luck element, okay. You can do all of the analysis and you could still end up buying Enron, Lehman, and Bear Stearns, right? >> Right, yeah. >> And at that time, see they were using some models that they knew 'til then. Most people, investment comes from, you have this background that you know, okay this is what I look at. Cash flow, discounted cash flow. Great. If that is there, price to earnings, I'm going to buy. But then an Amazon came, most of the traditional investors never invested in Amazon. They were like, it's a loss- making company. They never going to survive. But they forgot the fact that companies like that there's this network effect and once the people are there, at any point, Jeff Bezos can just turn off the switch and take off the discount. You're not going to change your shopping from Amazon at that point because this month I lost my 15%. We're so used to it so people missed that. Nowadays they see that, but when it came to Blockchain they're like, oh, no, no, this is a fad. That's what most people said. >> So we talked about discounted cashflow as a classic valuation method. I see guys trying to do DCF on these investments. I mean, we were joking about that. (laughs) How do you-- What's your reaction to that? >> If anybody's saying that if they come to me and I'm like you-- I don't know what Kool-Aid do you drink at that point because what cashflow are they discounting? There's no cashflow. It's not like you're going to get dividends from these tokens. There's no dividends. It's like can you find out how many people are going to use it. What is the network effect? And again, for that, a lot of people are coming with a lot of these matrices or matrix right now. But I think even that, they're trying to retrofit into it. They're like, oh I can use this matrix. But, really we don't know. >> So people tend to want metrics. Dave and I talk about this all the time. When people part with their money, they need to know what they're betting on. So the question is when you look at investments, when you spend cash, when you write checks, what is your valuation technique? Do you look for the l-- How do you play that long game? What's the criteria? Besides like the normal stuff like founders, disruptive, like you got to write the check, let's say. Okay, buying a token. It's got to be worth something in the future, obviously. >> So we look at that space, where invariably they are trying to disrupt. Is there a big market? And even if it's a niche market, okay? So we're doing an error chain token. It's a very niche market. It's just the pilot, the maintenance folks, and the charter people, or the plain charter guys. It's a very small market, but that's good enough. It's very niche. They can have an ecosystem between themselves rather than being incentivized to long game miles and stuff like that, right? It doesn't have to be a very big market. We just look at it, okay. Founder is good, he has an idea, it is a space that can be decentralized and people can come in and they feel that they're part of the ecosystem. See the whole thing with the token economy and a traditional economy like let's say I'm spending money to buy a stock. So I buy stock. As an investor, what do I want? I want maximum returns. The employee, he wants to get maximum pay. 
And the consumer who's buying the product, he wants to get it at the cheapest price. So there's a-- It starts out misaligned, okay? The moment you give 'em the cheapest price, my profits go down. If I increase the employees' salary, my profits go down. So all three of us are totally misaligned. >> If I can, for an important point, do you favor certain asset classes, you know, token, security tokens, or utility tokens, or are you looking for equity? I mean, maybe just ... >> Right now, we've moved away from the whole equity, bonds, or any of those things. We are totally concentrated on the utility or security tokens. We don't mind if it's a security token or utility token. >> And if it's a security token, are you looking for dividends, are you looking for >> At that point it's some kind of dividend. >> So you're not expecting equity as part of that security token? >> No, I'd like to expect equity, but if they are saying okay my token, if people buy and if they pay me $10, and out of that you're going to get $1 back, okay that's fine. We don't mind that as long as it's legal and all those things, we're fine, because it just makes the process easier. Earlier you invest and you didn't know when you could get out of your investment. At this point, it's become so liquid, at any point of time within two or three months, the token is listed, people are either buying and selling. We know. Otherwise, earlier when we used to do venture investments, we would get into a product, and it takes seven to 10 years to get out. And in the meanwhile, they say great stories. Oh we're doing great. Who do I check with that we are doing great? I'm not getting any dividends. Nobody's buying this from me. How do I know? Where am I? I really don't know. I can make these values up on my Excel sheet and say okay, we're valuing this company at a billion. >> So your technique is to say okay, look, the equity plays the long game. You need an exit on liquidity, either M and A or IPO. >> Yes. >> Now you have a new liquidity market, so you play the game differently. I won't say spray and pray, but you have multiple bets going on so you can monitor liquidity opportunity. So that's a new calculation. >> And it's a great calculation, also. Because see we're in the market and now we know at any point of time, we don't have things on our books that are like we don't know what the value is. We know what that price is because the market is there, the exchange is there, what other people are willing to pay for it, so there's no surprise. It's like saying my house is worth a million dollars. Actually it might be worth that to me. It depends on what people are willing to pay me. >> Right exactly. >> If I have to synthesize this, you're taking high frequency trading techniques with classic venture investing, handling tokens from those two perspectives. >> Yes. 
>> With some tokens. >> A lot of the funds, they're doing this arbitrage more. They're trying to do arbitrage. But the problem is they're missing the big picture that way. So, arbitrage works in a very tight market. So S and P, let's say, somebody's doing 5% return on the S and P. The guy with an arbitrage is coming and saying I made five point three, 5.5% or 6%. That's great in the equity world. Now, my returns last year are 10x or 30x or 50x. And somebody comes and tells me I made an extra 0.2%, doesn't really matter to me. I'm like instead of wasting that time doing arbitrage and paying taxes, I might just hold it. >> You believe in the fundamentals. >> You guys are in New York. Obviously, Arcadia Crypto Ventures, that's how they get ahold of you guys. Final question for you to end the segment. As new real pros come in, and let's take New York since you're in New York. The New York crowd comes in, or the Silicon Valley crowd comes in, existing market players, other markets come in here. How important is optics, packaging and compatibility with the sector, meaning I just can't throw my weight around on the hedge fund scene. We do it this way, I got money. Because people here have money. So what's the dynamic of pros coming in, we're seeing institutional folks come in, we're seeing real pros come in. They've never been to Burning Man. So, you know, they get that Burning Man culture exists, but this is not a Burning Man industry. >> Right, right. >> Business doesn't run like Burning Man. Maybe it should, that's a debate we'll have. Your take. >> So the new funds that are coming in, so they have a fear that they have missed out. They are missing the picture that this is just the beginning. So they've seen that this industry has gone from six billion to 500 billion in a year or year and a half. They're like, oh my god, I missed it. >> It's got to be over. >> So I have to write these big checks to get this. We don't write big checks. We write much smaller checks because we believe that if a founder is raising money, he has to raise it through small checks from everybody. That means all those people are really interested in this. And all of them really want the token to go up. Whether it's the investor, the user, or the employee who is working there, because all of their interests are aligned. The moment you give a big check, so let's say you could raise 10 million from 10,000 people or you could raise it from one person. So when the big check is there, let's say I go to raise my money. There's this fund who's missed it and he says here's 10 million dollars. Okay, now I've got me and the fund and my tokens. Nobody else knows about my tokens. My tokens are as good as valueless. Now the fund is looking, okay, I need to exit. Nobody knows about my tokens. The fund is the only guy who has my tokens, he's trying to exit. Obviously the market is going to crash. There's no market. And he's like why did I get into this. So he missed that point that you need people around you. It's not just you alone. See, earlier days when ... >> This is your point about understanding how token economics works. >> Yes. >> So having more people in actually creates a game mechanic for trading. >> Because then you know that you're not the only guy interested in this. And in the earlier venture capital space there was this bunch of a few venture capital firms who wanted to capture that whole thing and tried to sell it to the next guy. Here, what I'm saying is, we all have to come in together. 
We all can come in together at the same price, which is good because the small person, the common man, has a chance to be a VC right now. Earlier you could never be a VC. I could only get Google after the IPO. I could never get it at what KPCB or Sequoia got it at. I had to wait 'til they got through CDA, CDB, which they bought at five cents. I would get it at about $40, maybe. In this case, the big fund has a lot more money than me, but I can have my small 5,000 or 10,000. I can invest in the ICO. >> If you picked the right spot and you were there at the right place, the right time. 'Cause you are seeing guys come in and try to buy up all the tokens early on. >> They're trying to do that. They don't get it, but they will understand. So it is a learning (mumbles). Even they will evolve. They're like, okay, this is not how it works. And you have to make mistakes. >> Sorry, got to ask you one final, final question since you brought it up. More people the better. So we're hearing rumors inside the hallways here that big whales are buying full allocations and then sharing them with all their friends. >> Possible, it is possible. >> We see some of that behavior. Dave calls it steel on steel, you know. Groups, you know. I'm going to take this whole deal down. We see that in venture capital. Used to be syndicates. Now you're seeing Andreessen Horowitz doing whole deals. That kind of creates some alienation, in my opinion, but what's your take on that? I'm a big whale. I'm taking down the whole allocation. >> It's okay. Some of those things are going to happen, okay. It is fine. The only problem is, usually when that happens, the big whale who takes it will realize very quickly-- >> He's got to get more people. >> He needs more people, otherwise he'll only be able to exit to his five buddies who are always taking it from him. Now those guys, they also have to exit at some point. Nobody knows about the product. Might as well just take a small piece. Even the founders, in this case, typically in a token model, founders who've taken 10% or 20% have done better than founders who took 60% of the whole token supply. >> Right. Nithin, great to have you on. Love your business model. Arcadia Crypto Ventures. They've got real pros, they've got a whale, they've got people who know what they're doing, and they're active. They understand the ethos. I think you guys are well-aligned, and you're not trying to come in and say, this is how we did it in New York before. You get the culture. You're aligned and you're making investments. Great perspective. Thanks for sharing. >> Thank you so much. >> This is the Cube, bringing the investor perspective live here in the Bahamas. More exclusive Cube coverage. Token economics, huge opportunity for entrepreneurs and investors to create value and capture it. That's Blockchain, that's crypto, that's token economics. I'm John with Dave Vellante. We'll be back with more coverage after this short break. (futuristic digital music)