Brian Mullen & Arwa Kaddoura, InfluxData | AWS re:Invent 2021
(upbeat music) >> Everybody welcome back to theCUBE, continuous coverage of AWS re:Invent 2021. This is the biggest hybrid event of the year, theCUBE's ninth year covering AWS re:Invent. My name is Dave Vellante. Arwa Kaddoura is here, CUBE alumna, chief revenue officer now of InfluxData, and Brian Mullen, who's the chief marketing officer. Folks, good to see you. >> Thanks for having us. >> Dave: All right, great to see you face to face. >> It's great to meet you in person finally. >> So Brian, tell us about InfluxData. People might not be familiar with the company. >> Sure, yes. InfluxData, we're the company behind a pretty well-known project called InfluxDB. And we're a platform for handling time series data. And what time series data is, really, is any data that's stamped in time in some way. That could be every second, every two minutes, every five minutes, every nanosecond, whatever it might be. And typically that data comes from, you know, sources, and the sources could be things in the physical world like devices and sensors, you know, temperature gauges, batteries. Also things in the virtual world, you know, software that you're building and running in the cloud: containers, microservices, virtual machines. So all of these, whether in the physical world or the virtual world, are generating a lot of time series data, and our platform is designed specifically to handle that. >> Yeah so, lots to unpack here, Arwa. I mean, I've kind of followed you since we met virtually. Kind of followed your career, and I know when you choose to come to a company, you start with the customer. Those are your peeps. >> Arwa: Absolutely. >> So what was it that drew you to InfluxData, what the customers were telling you? >> Yeah, I think what I saw happening from a marketplace perspective is a few paradigm shifts, right? And the first paradigm shift is obviously what the cloud is enabling, right?
So everything that we used to take for granted, when, you know, Andreessen Horowitz said "software is eating the world," right? And then we moved into apps are eating the world. And now you look at the cloud infrastructure that folks like AWS have empowered: they've allowed services like ours, and databases and querying capabilities like InfluxDB, to basically run at a scale that we never would have been able to reach with, you know, a host-it-yourself type of situation. And then the other thing it's enabled, again, if you go back to database history: relational, right? It was humongous; it totally transformed what we could do in terms of transactional systems. Then you moved into the big data era, the Hadoops, the search, right, the Elastics. And now what we're seeing is time series becoming the new paradigm, one that's enabling a whole set of use cases that have never been enabled before, right? So people that are generating these large volumes of data, like Brian talked about, need a platform that can ingest millions of points per second, and then the ability to query that in real time in order to take action and power things like ML and, you know, autonomous-type capabilities. So that's what drew me. >> Okay so, it's the real-timeness, right? It's the use cases. Maybe you could talk a little bit more about those use cases and--- >> Sure, sure. So, yeah, we kind of think about things as both the virtual world, where people are pulling data off of sources that are in infrastructure, software infrastructure. We have a number, like PayPal is a customer of ours, and Apple. They pull time series data from the infrastructure that runs their payments platform. So you can imagine the volume that they're dealing with.
Think about how much data you might have in like a regular relational scenario, now multiply every piece of data times however often you're looking at it, every one second, every 10 minutes, whatever it might be. You're talking about an order of magnitude higher volume of data. And so the tools that people were using were just not really equipped to handle that kind of volume, which is unique to time series. So we have customers like PayPal on kind of the software infrastructure side. We also have quite a bit of activity among customers on the IoT side. So Tesla is a customer; they're pulling telematics and battery data off of the vehicle, pulling that back into their cloud platform. Nest is also our customer. So we're pretty used to seeing, you know, connected thermostats in homes. Think of all the data that's coming from those individual units; it's all time series data, and they're pulling it into their platform using Influx. >> So, that's interesting. So Tesla, take that example: they will maybe persist some of the data, maybe not all of it. It's ephemeral, and they end up putting some of it back to the cloud, probably a small portion percentage-wise, but it's a huge amount of data, right? >> Brian: Yeah. >> So, they might want to track some anomalies, okay, capture every time an animal runs across, you know, and put that back into the cloud. So where do you guys fit in that analysis, and what makes you sort of the best platform for a time series database? >> Yeah, it's interesting you say that, because it is ephemeral and there are really two parts to it. This is one of the reasons that time series is such a challenge to handle with something that's not really designed to handle it.
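As a concrete sketch of what a point of that telemetry looks like on the way in: InfluxDB ingests data in its line protocol, a text format of measurement, tags, fields, and a timestamp. The measurement and tag names below are hypothetical, invented for illustration; only the format itself is InfluxDB's.

```python
# Build an InfluxDB line-protocol point for hypothetical thermostat telemetry.
# Line protocol shape: measurement,tag_set field_set timestamp
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "thermostat",                           # hypothetical measurement name
    {"home_id": "h42", "room": "kitchen"},  # tags: indexed metadata
    {"temp_c": 21.5},                       # fields: the sampled values
    1638316800000000000,                    # nanosecond Unix timestamp
)
print(point)
# thermostat,home_id=h42,room=kitchen temp_c=21.5 1638316800000000000
```

One such line per reading, per device, per interval is how the volumes Brian describes add up so quickly.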
In the moment, in that minute, in the last hour, you really want to see all the data, all of what's happening, and have full context for what's going on and see those fluctuations. But then maybe a day later, a week later, you may not care about that level of fidelity. And so you downsample it; you have kind of a more summarized view of what happened in that moment. So being able to toggle between high fidelity and low fidelity is a super hard problem to solve, and our platform InfluxDB really allows you to do that. >> So-- >> And that is different from relational databases, which are great at ingesting but not great at kicking data out. >> Right. >> And I think what you're pointing to is, in order to optimize these platforms, you have to ingest and get rid of data as quickly as you can. And that is not something that a traditional database can do. >> So, who do you sell to? Who's your ideal customer profile? I mean, pretty diverse. >> Yeah, it tends to focus on builders, right? And builders is now obviously a much wider audience, right? We used to say developers, right: highly technical folks that are building applications. And part of what we love about InfluxData is we're not necessarily trying to only make it for the most sophisticated builders, right? We are trying to enable you to build an application with the minimum amount of code and the greatest amount of integrations, right? So we really empower you to do more with less and get rid of unnecessary code, or, you know, give you that simplicity. Because for us, it's all about speed to market. You want an application, you have an idea of what it is that you're trying to measure or monitor or instrument, right? We give you the tools, we give you the integrations. We allow you to work in the IDE that you prefer; we just launched VS Code integration, for example. And that then allows these technical audiences that are solving really hard problems, right?
With today's technologies, to really take our product to market very quickly. >> So, I want to follow up on that. So I like the term builder. AWS kind of popularized that term, but there are sort of two vectors of that. There are the hardcore developers, but there are also, increasingly, domain experts that are building data products, and then more generalists. And I think you're saying you serve both of those, but you do integrations that maybe make it easier for the latter. And of course, if the former wants to go crazy they can. Is that the right understanding? >> Yes, absolutely. It is about accessibility and meeting developers where they are. For example, you probably still need a solid technical foundation to use a product like ours, but increasingly we're also investing in education, in videos and templates. Again, integrations that make it easier for people to maybe just bring in a visualization layer that they themselves don't have to build. So it is about accessibility, but yes, obviously with builders, their technical foundation is pretty important. But, you know, right now we're at almost 500,000 active instances of InfluxDB out there in the wild. So that to me shows that it's a pretty wide variety of audiences that are using us. >> So, you're obviously part of the AWS ecosystem. Help us understand that partnership; they announced today Serverless for Kinesis. What does that mean to you? Do you complement that, is that competitive? Maybe you can address that. >> Yeah, so we're a long-time partner of AWS. We've been in the partner network for several years now. And we think about it in a couple of ways. First, it's an important go-to-market channel for us with our customers. As you know, AWS is an ecosystem unto itself, and many developers, many of these builders, are building their applications for their own end users on AWS, in that ecosystem.
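Coming back to the fidelity toggle Brian described, keeping full resolution in the recent window and a summarized view beyond it, that is downsampling, and the core idea fits in a few lines. This is an illustrative sketch only; InfluxDB itself does this server-side with tasks and aggregate queries rather than client-side Python.

```python
from statistics import mean

def downsample(points, window_s):
    """Collapse (timestamp_s, value) samples into per-window means."""
    buckets = {}
    for ts, val in points:
        # Align each timestamp down to the start of its window.
        buckets.setdefault(ts - ts % window_s, []).append(val)
    return [(start, mean(vals)) for start, vals in sorted(buckets.items())]

# One reading per second for two minutes...
raw = [(t, float(t % 60)) for t in range(120)]
# ...summarized to one point per minute for long-term retention.
summary = downsample(raw, 60)
print(summary)  # [(0, 29.5), (60, 29.5)]
```

The high-fidelity `raw` series stays queryable while it is fresh; only the compact `summary` needs to be kept long term.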
And so it's important for us to, number one, have an offering that allows them to put Influx on that bill, so we're offered in the marketplace. You can sign up for, purchase, and pay for InfluxDB Cloud via AWS Marketplace. And then, as Arwa mentioned, we have a number of integrations with all the kind of adjacent products and services from Amazon that many of our developers are using. And so when we think about, quote unquote, meeting developers where they are, that's an important part of it. If you're an AWS-focused developer, then we want to give you not only an easy way to pay for and use our product, but also an easy way to integrate it into all the other things that you're using. >> And I think it was 2012, it might've even been '11, on theCUBE, Jerry Chen of Greylock. We were asking him, do you think AWS is going to move up the stack and develop applications? He said, no, I don't think so. I think they're going to enable developers and builders to do that, and then they'll compete with the traditional SaaS vendors. And that's proved to be true, at least thus far. You never say never with AWS. But then recently he wrote a piece called "Castles on the Cloud." And the premise was essentially that the ISVs will build on top of clouds. And that seems to be what you're doing with InfluxDB. Maybe you could tell us a little bit more about that. We call it super clouds. >> Arwa: That's right. >> You know, leveraging the 100 billion dollars a year that the hyperscalers spend to develop an abstraction layer that solves a particular problem. But maybe you could describe what that is from your perspective, InfluxDB. >> Yeah, well, we grew up originally as an open source software company. >> Dave: Yeah, right. >> People downloaded InfluxDB, ran it locally on a laptop, put it up on a server.
And, you know, that's our kind of origin as a company, but increasingly what we recognized is our customers, our developers, were building in and on the cloud. And so it was really important for us to meet them there. And so we think about, first of all, offering a product that is easily consumed in the cloud and really just allows them to essentially hit an endpoint. So with InfluxDB Cloud, they really don't have to worry about any of that kind of deployment and operation of a cluster or anything like that. Really, from a usage perspective, they just pay for three things. The first is data in: how much data are you putting in? Second is query count: how many queries are you making against it? And then third is storage: how much data do you have, and how long are you storing it? And really, it's a pretty simple proposition for the developer to see and understand what their costs are going to be as they grow their workload. >> So it's a managed service, is that right? >> Brian: It is a managed service. >> Okay, and how do you guys price? Is it kind of usage based? >> Totally usage based, yeah: again, the data ingestion, the query count, and the storage that Brian talked about. But to your point, back to what the hyperscalers are doing in terms of creating this global infrastructure that can easily be tapped into, we then extend above that, right? We effectively become a platform-as-a-service builder tool. Many of our customers actually use InfluxData to power their own products, which they then commercialize into a SaaS application. Right, we've got customers that are doing, you know, Kubernetes monitoring or DevOps monitoring solutions, right? That monitor, you know, people's infrastructure or web applications or any of those things. We've got people building us into, you know, Industrial IoT, such as PTC's ThingWorx, right? Where they've developed their own platform >> Dave: Very cool.
>> Completely backed up by our time series database, right? Rather than having to build everything, we become that key ingredient. And then of course the fully managed cloud service means that they can go to market that much quicker. Nobody's procuring servers, nobody's managing, you know, security patches, any of that; it's all fully done for you. And it scales up beautifully, which is the key. And some of our customers also want to scale up or down, right? They know when their peak hours or peak times are; they need something that can handle that load. >> So looking ahead to next year, so anyway, I'm glad AWS decided to do re:Invent live. (Arwa mumbling) >> You know, that's weird, right? We thought in June, at Mobile World Congress, it was going to be the gateway to returning, but who knows? It's like two steps forward, one step back, one step forward, two steps back, but we're at least moving in the right direction. So what about for you guys, InfluxData? Looking ahead for the coming year, Brian, what can we expect? You know, give us a little sharp view of (mumbles) >> Well, kind of keeping in the theme of meeting developers where they are, we want to build out more in the Amazon ecosystem. So more integrations, more kind of ease of use for adjacent products. Another is just availability. We're now actually on three clouds; in addition to AWS, we're on Azure and Google Cloud. But now we're expanding horizontally and showing up so we can meet our customers that are working in Europe, expanding into Asia-Pacific, which we did earlier this year. And so I think we'll continue to expand the platform globally to bring it closer to where our customers are. >> Arwa: Can I. >> All right, go ahead, please. >> And I would say also the hybrid capabilities will probably also be important, right? Some of our customers run certain workloads locally and then other workloads in the cloud.
That ability to have that seamless experience regardless, I think, is another really critical advancement that we're continuing to invest in. So that as far as the customer is concerned, it's just an API endpoint and it doesn't matter where they're deploying. >> So where do they go? Can they download a freebie version? Give us the last word. >> They go to influxdata.com. We do have a free account that anyone can sign up for. It's, again, fully cloud hosted and managed. It's a great place to get started, just learn more about our capabilities, and if you're here at AWS re:Invent, we'd love to see you as well. >> Check it out. All right, guys, thanks for coming on theCUBE. >> Thank you. >> Dave: Great to see you. >> All right, thank you. >> Awesome. >> All right, and thank you for watching. Keep it right there. This is Dave Vellante for theCUBE's coverage of AWS re:Invent 2021. You're watching the leader in high-tech coverage. (upbeat music)
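Before moving on: the three usage dimensions Brian listed, data in, query count, and storage, make a bill like this straightforward to model. The rates below are placeholders invented for illustration; they are not InfluxData's actual pricing.

```python
def monthly_cost_usd(mb_written, query_count, mb_hours_stored,
                     rate_write=0.002,       # hypothetical $/MB written
                     rate_query=0.00001,     # hypothetical $/query
                     rate_storage=0.000001): # hypothetical $/MB-hour stored
    """Estimate a usage-based bill from the three dimensions Brian listed."""
    return (mb_written * rate_write
            + query_count * rate_query
            + mb_hours_stored * rate_storage)

# e.g. 50 GB written, 2M queries, 100 GB held for a 730-hour month
bill = monthly_cost_usd(50_000, 2_000_000, 100_000 * 730)
print(f"${bill:,.2f}")  # $193.00
```

The point is the shape of the model, not the numbers: each dimension scales independently with the workload, which is what makes the cost easy to predict as usage grows.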
Kimberly Leyenaar, Broadcom
(upbeat music) >> Hello everyone, and welcome to this CUBE conversation where we're going to go deep into system performance. We're here with an expert: Kim Leyenaar is the Principal Performance Architect at Broadcom. Kim, great to see you. Thanks so much for coming on. >> Thanks so much too. >> So you have a deep background in performance: performance assessment, benchmarking, modeling. Tell us a little bit about your background, your role. >> Thanks. So I've been a storage performance engineer and architect for about 22 years, and I've been with Broadcom for, I think next month is going to be my 14-year mark. Initially I built and managed their international performance team, but about six years ago I moved back into architecture, and what my role is right now is I generate performance projections for all of our next-generation products. And then I also work on marketing material, and I interface with a lot of the customers, debugging customer issues and looking at how our customers are actually using our storage. >> Great. Now we have a graphic that we want to share. It talks to how storage has evolved over the past decade. So my question is, what changes have you seen in storage, and how has that impacted the way you approach benchmarking? In this graphic we've got sort of the big four items that impact performance: memory, processor, IO pathways, and the storage media itself. But walk us through this data, if you would. >> Sure. So what I put together is a little bit of what we've seen over the past 15 to 20 years. So I've been doing this for about 22 years, and kind of going back and focusing a little bit on the storage: we look back at hard disks, and they ruled for nearly 50 years. Our first hard drive that came out back in the 1950s was only capable of five megabytes in capacity and one and a half I/Os per second. It had almost a full second in terms of seek time.
So we've come a long way since then. But when I first came on, we were looking at Ultra320 SCSI. And one of the biggest memories that I have of that was that my office was located close to our tech support, and I could hear that the first question was always, what's your termination like? So we had some challenges with SCSI, and then we moved on into the SAS and SATA protocols. And we continued to move on. But back in the early 2000s when I came on board, the best drives really could do maybe 400 I/Os per second, maybe 250 megabytes per second, with millisecond response times. And so when I was benchmarking way back then, it was always like, well, IOPS are IOPS. We were always faster than what the drives could do. And that was just how it was: the drives were always the bottleneck in the system. And so things started changing, though, by the early 2000s, mid 2000s. We started seeing different technologies come out. We started seeing virtualization and multi-tenant infrastructures becoming really popular, and then we had cloud computing well on the horizon. And so at this point, we're like, well, wait a minute, we really can't make processors that much faster. And so everybody got excited when (indistinct) came out, but they had two cores per processor and four cores per processor. And so we saw a little time period where the processing capability actually pulled ahead of everybody else, and memory was falling behind. We had good old DDR2-667. It was new at the time, but we only had maybe one or two memory channels per processor. And then in 2007 we saw disk capacity hit one terabyte. And we started seeing a little bit of an imbalance, because these drives were getting massive, but their performance per drive was not really keeping up. So now we see a revolution around 2010. And my co-worker and I at the time, we had these little USB disks, if you recall; we would put them in. They were so fast.
We were joking at the time, "Hey, you know what, wonder if we could make a RAID array out of these little USB disks?" They were just so fast. The idea was actually kind of crazy until we started seeing it actually happen. So in 2010, SSDs started revolutionizing storage. And the first SSDs that we really worked with were these Pliant LS-300s, and they were amazing, because they were so over-provisioned that they had almost the same read and write performance. But to go from a drive that could do maybe 400 I/Os per second to a drive doing 40,000-plus I/Os per second really changed our thought process about how our storage controller could actually try to keep up with the rest of the system. So we started falling behind. That was a big challenge for us. And then in 2014, NVMe came around as well. So now we've got these drives: they're 30 terabytes, they can do one and a half million I/Os per second and over 6,000 megabytes per second. But they were expensive. So people started relegating SSDs more towards tiered storage or cache. And as the prices of these drives came down, they became a lot more mainstream. And then the memory channels started picking up, and they started doubling every few years. And we're looking now at DDR5-4800. And now we're looking at processors that have gone from two to four cores up to 48 with some of the latest processors that are out there. So our ability to consume the computing and the storage resources, it's astounding. It's like that whole saying, 'build it and they will come.' Because I'm always amazed, I'm like, how are we going to possibly utilize all this memory bandwidth? How are we going to utilize all these cores? But we do. And the trick to this is having a balanced infrastructure. It's really critical. Because if you have a performance mismatch between your server and your storage, you really lose a lot of productivity, and it does impact your revenue. >> So that's such a key point.
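To put the drive numbers Kim quotes in perspective, a back-of-the-envelope calculation, using her figures of roughly 400 IOPS for a fast hard drive, 40,000 for an early SSD, and 1.5 million for a modern NVMe drive, shows why controllers suddenly became the bottleneck:

```python
def seconds_for_random_reads(num_ios, drive_iops):
    """Time to service a burst of random I/Os at a drive's sustained IOPS."""
    return num_ios / drive_iops

ios = 1_000_000  # one million 4K random reads
hdd = seconds_for_random_reads(ios, 400)         # fast HDD, per the interview
ssd = seconds_for_random_reads(ios, 40_000)      # early SAS SSD
nvme = seconds_for_random_reads(ios, 1_500_000)  # modern NVMe drive

print(f"HDD: {hdd:.0f} s, SSD: {ssd:.0f} s, NVMe: {nvme:.2f} s")
# HDD: 2500 s, SSD: 25 s, NVMe: 0.67 s
```

A workload that kept a hard drive busy for over forty minutes finishes in under a second on one NVMe drive, which is exactly the shift that forced the controller redesigns discussed next.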
Let's bring that slide up again with the four points. And that last point that you made, Kim, about balance. And so here you have these electronic speeds with memory and IO, and then you've got the spinning disk, this mechanical disk. You mentioned that SSD kind of changed the game, but it used to be, when I looked at benchmarks, the destage bandwidth of the cache out to the spinning disk was always the bottleneck. And you go back to the days of Symmetrix, right? The huge backend disk bandwidth was how they dealt with that. And then the oxymoron of the day was the high-spin-speed disk, the high-performance disk, compared to memory. And so the next chart that we have shows some really amazing performance increases over the years. And so you see these bars on the left-hand side; it looks at historical performance for 4K random IOPS. And on the right-hand side, it's the storage controller performance for sequential bandwidth from 2008 to 2022. That's '22, that yellow line. It's astounding, the increases. I wonder if you could tell us what we're looking at here. When did SSD come in, and how did that affect your thinking? (laughs) >> So I remember back in 2007, we were kind of on the precipice of SSDs. We saw it; the writing was on the wall. We had our first three-gig SAS- and SATA-capable HBAs that had come out. And it was a shock, because we were like, wow, we're going to really quickly become the bottleneck once this becomes more mainstream. And you're so right, though, about people building these massive hard-drive-based backends in order to handle that tiered architecture that we were seeing back in the early 2010s, when the pricing was just so sky high. And I remember looking at our SAS controllers, our very first one, and that was when I first came in, in 2007. We had just launched our first SAS controller. We were so proud of ourselves.
And I started going, how many IOPS can this thing even handle? We couldn't even attach enough drives to figure it out. So what we would do is these little tricks where we would do a 512-byte read, and we would do it on a 4K boundary, so that it was actually reading sequentially from the disk, but we were handling these discrete IOPS. So we were like, oh, we can do around 35,000. Well, that's just not going to cut it anymore. Bandwidth-wise we were doing great. Really, our limitation and our bottleneck on bandwidth was always either the host or the backend. So, our controllers basically had three bottlenecks. The first one is the bottleneck from the host to the controller; that is typically a PCIe connection. And then there's another bottleneck on the controller to the disk, and that's really the number of ports that we have. And then the third one is the disks themselves. So in typical storage, that's what we look at. And we say, well, how do we improve this? So some of these are just kind of evolutionary, such as PCIe generations, and we're going to talk a little bit about that, but some of them are really revolutionary, and those are some of the things that we've been doing over the last five or six years to try to make sure that we are no longer the bottleneck, and we can enable these really, really fast drives. >> So can I ask a question? I'm sorry to interrupt, but on these blue bars here. So these are all spinning disks, I presume; out years they're not. Like, when did flash come in to these blue bars? You said '07 you started looking at it, but on these benchmarks, is it all spinning disk? Is it all flash? How should we interpret that? >> No, no. Initially they were actually all hard drives. And the way that we would identify the max IOPS would be by doing very small sequential reads to these hard drives. We just didn't have SSDs at that point. And then somewhere around 2010 is where we...
it was very early in that chart, we were able to start incorporating SSD technology into our benchmarking. And so what you're looking at here is really the max that our controller is capable of. So we would throw as many drives at it as we could and do what we needed to do in order to just make sure our controller was the bottleneck and see what we could expose. >> So the drive, then, when SSD came in, was no longer the bottleneck. So you guys had to sort of invent and rethink your innovation and your technology, because, I mean, these are astounding increases in performance. On the left-hand side, you've got a 170x increase for the 4K random IOPS, and you've got a 20x increase for the sequential bandwidth. How were you able to achieve that level of performance over time? >> Well, in terms of the sequential bandwidth, really those come naturally with increases in the PCIe or the SAS generation. So we just make sure we stay out of the way and enable that bandwidth. But the IOPS, that's where it got really, really tricky. So we had to start thinking about different things. First of all, we started optimizing all of our pathways, all of our IO management. We increased the processing capabilities on our IO controllers. We added more on-chip memory. We started putting in IO accelerators, these hardware accelerators. We put in SAS port enhancements. We even went and improved our driver to make sure that our driver was as thin as possible, so we can make sure that we enable all the IOPS on systems. But a big thing, a couple of generations ago, was we started introducing something called tri-mode capable controllers, which means that you could attach NVMe, you could attach SAS, or you could attach SATA. So you could have this really amazing deployment of storage infrastructure based around your customized needs and your cost requirements by using one controller. >> Yeah.
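The benchmarking trick Kim mentioned a little earlier, 512-byte reads aligned to 4K boundaries so the head streams sequentially while the controller still processes one command per I/O, comes down to generating offsets like these. A sketch for illustration, not Broadcom's actual test tooling:

```python
def benchmark_offsets(num_ios, stride=4096, io_size=512):
    """Byte offsets for small reads on 4 KiB boundaries: the disk sees a
    near-sequential access pattern, but each read is a separate command
    the controller must process, so the controller becomes the limit."""
    return [(i * stride, io_size) for i in range(num_ios)]

ops = benchmark_offsets(4)
print(ops)  # [(0, 512), (4096, 512), (8192, 512), (12288, 512)]
```

Issuing this pattern against enough hard drives is how a controller's command-processing ceiling (the ~35,000 IOPS Kim recalls) could be measured even before fast SSDs existed.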
So anybody who's ever been to a trade show where they were displaying a glass case with a Winchester disk drive, for example: you see it spinning and its actuator moving, and you think, wow, that's so fast. Well, no. That's like a tortoise; it's slow. It's like a snail compared to the system's speed. So in a way, life was easy back in those days, because when you did a write to a disk, you had plenty of time to do stuff, right? And now that's changed. And so I want to talk about Gen3 versus Gen4, and how all this relates to what's new in Gen4 and the impacts of PCIe. You have a chart here that talks to that, and I wonder if you could elaborate on that, Kim. >> Sure. But first, you said something that kind of hit my funny bone there. I remember I made a visit once, about 15 or 20 years ago, to IBM. And this gentleman actually had one of those old ones in his office, and he referred to them as disk files. And until the day he retired, he never stopped calling them disk files. And it's kind of funny to be a part of that history. >> Yeah. DASD, they used to call it. (both laughing)
So this really is a big deal, really critical for us. But if you take a look here, you can see that in terms of its capabilities, it really is buying us a lot. So most of our NVMe drives right now tend to be x4, which means four lanes of NVMe, and a lot of people will connect them at either x1 or x2, kind of depending on what their storage infrastructure will allow, but the majority of them are x4. So, as you can see right now, we've gone from eight gigatransfers per second to 16 gigatransfers per second. What that means is, for a x4 drive, we're going from one drive being able to do 4,000 to almost 8,000 megabytes per second. And in terms of those 4K IOPS that really elude us, they were really, really tough sometimes to squeeze out of these drives, but now we've gone from 1 million to 2 million. It's just insane, the increase in performance. And there are a lot of other standards that are going to be sitting on top of PCIe, so it's not going away anytime soon. We've got open standards like CXL and things like that, but we also have graphics cards, and all of your host connections are also sitting on PCIe. So it's fantastic. It's backwards compatible, and it really is going to be our future. >> So this is all well and good. And I really believe that a lot of times in our industry, the challenges in the plumbing are underappreciated. But let's make it real for the audience, because we have all these new workloads coming out, AI, heavily data oriented. So I want to get your thoughts on what types of workloads are going to benefit from Gen4 performance increases. In other words, what does it mean for application performance? You shared a chart that lists some of the key workloads, and I wonder if we could go through those. >> Yeah, yeah.
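The lane arithmetic Kim walks through can be sketched in a few lines of Python. This is an illustrative back-of-the-envelope calculation; the 128b/130b line-encoding factor is a PCIe Gen3/Gen4 detail assumed here rather than stated in the conversation:

```python
# Rough PCIe throughput arithmetic for Gen3 vs. Gen4 (illustrative only).
# Gen3 runs at 8 GT/s per lane, Gen4 at 16 GT/s, both with 128b/130b encoding.

def lane_bandwidth_mbps(gigatransfers_per_sec: float) -> float:
    """Approximate usable megabytes/second for one PCIe lane."""
    encoding_efficiency = 128 / 130          # 128b/130b line encoding
    bits_per_sec = gigatransfers_per_sec * 1e9 * encoding_efficiency
    return bits_per_sec / 8 / 1e6            # bits -> bytes -> MB

for name, gt in [("Gen3", 8.0), ("Gen4", 16.0)]:
    x4 = 4 * lane_bandwidth_mbps(gt)
    print(f"PCIe {name} x4: ~{x4:,.0f} MB/s")
# A x4 link roughly doubles from about 3,900 MB/s (Gen3) to about 7,900 MB/s
# (Gen4), matching the "4,000 to almost 8,000 megabytes per second" above.
```

The doubling per generation is the point: bandwidth scales with the transfer rate, so staying "out of the way" of the link is what the controller has to do.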
I have a large list of different workloads that are able to consume large amounts of data, whether it's in small or large chunks. But as you know right now, and as I said earlier, our ability to consume these compute and storage resources is amazing. You build it and we'll use it. And the world's data is expected to grow 61% a year, to 175 zettabytes by the year 2025, according to IDC. So that's just a lot of data to manage. It's a lot of data to have, and it's something that's sitting around, but to be useful, you have to actually be able to access it. And that's kind of where we come in. So who is accessing it? What kind of applications? I spend a lot of time trying to understand that. And recently I attended a virtual conference, SDC, and what I like to do when I attend these conferences is try to figure out what the buzzwords are. What's everybody talking about? Because every year it's a little bit different, but this year it was edge, edge everything. And so I kind of put edge on there first. You can ask anybody what edge computing is and it's going to mean a lot of different things, but basically it's all the computing outside of the cloud that's happening, typically at the edge of the network. So it tends to encompass a lot of real-time processing on that instant data. And the data is usually coming from either users or different sensors. It's that last mile. It's where we kind of put a lot of our content caching. And I uncovered some interesting stuff when I was attending this virtual conference: they say only about 25% of all the usable data actually even reaches the data center. The rest is ephemeral, and it's processed locally and in real time. So the goal of edge computing is to try and reduce the bandwidth costs for these kinds of IoT devices that go over a long distance.
But the reality is the growth of real-time applications that require this kind of local processing is going to drive this technology forward over the coming years. So Dave, your toaster and your dishwasher are probably IoT edge devices in the next year, if they're not already. So edge is a really big one, and it consumes a lot of the data. >> The buzzword du jour now is the metaverse. It's almost like the movie The Matrix is going to come in real time. But the fact is it's all this data, a lot of video. Some of the ones that I would call out here, you mentioned facial recognition, real-time analytics. A lot of the edge is going to be real-time inferencing, applying AI. And these are just massive, massive data sets that you, and of course your customers, are enabling. >> When we first came out with our very first Gen3 product, our marketing team actually asked me, "Hey, how can we show users how they can consume this?" So I decided I was going to learn how to do this, and I set up this massive environment with Hadoop, and at the time they called big data the 3Vs, I don't know if you remember these big 3Vs: the volume, velocity and variety. Well Dave, did you know there are now 10 Vs? So besides those three, we got veracity, we got value, we got variability, validity, vulnerability, volatility, visualization. So I'm thinking we just need to keep adding Vs. >> Yeah. (both laughing) Well, that's interesting. You mentioned that, and that sort of came out of the big data world, the Hadoop world, which was very centralized. You're seeing the cloud is expanding, and data is by its very nature decentralized. And so you've got to have the ability to do analysis in place. A lot of the edge analytics are going to be done in real time. Yes, sure.
Some of it's going to go back to the cloud for detailed modeling, but as I often say, Kim, the next decade ain't going to be like the last. (laughing) I'll give you the last word. I mean, how do you see this sort of evolving? Who's going to be adopting this stuff? Give us a sort of timeframe for this kind of rollout in your world. >> In terms of the timeframe, I mean, really nobody knows, but we feel like Gen5 is coming out next year. It may not be a full rollout, but we're going to start seeing Gen5 devices, and Gen5 infrastructure being built out over the next year, and then followed very, very quickly by Gen6. And what we're seeing is these graphics processors, these GPUs that are coming out as well, are going to be connecting using PCIe interfaces too. So being able to access lots and lots and lots of data locally is going to be a really, really big deal, because worldwide, all of our companies are using business analytics. Data is money. And the companies that can improve their operational efficiency, bolster sales and increase customer satisfaction, those are the companies that are going to win. And those are the companies that are going to be able to effectively store, retrieve and analyze all the data that they're collecting over the years. And that requires an abundance of data. >> Data is money, and it's interesting. It kind of all goes back to when Steve Jobs decided to put flash inside of an iPhone and the industry exploded. Consumer economics kicked in, 5G, now edge AI, a lot of the things you talked about, GPUs, the neural processing unit. It's all going to be coming together in this decade. Very exciting. Kim, thanks so much for sharing this data and your perspectives. I'd love to have you back when you've got some new perspectives, new benchmark data. Let's do that. Okay. >> I look forward to it. Thanks so much. >> You're very welcome.
And thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
Sean Knapp, Ascend.io & Jason Robinson, Steady | AWS Startup Showcase
(upbeat music) >> Hello and welcome to today's session, theCUBE's presentation of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics, Cloud Management Tools, featuring Ascend.io for the data and analytics track. I'm your host, John Furrier with theCUBE. Today, we're proud to be joined by Sean Knapp, CEO and founder of Ascend.io, and Jason Robinson, who's the VP of Data Science and Engineering at Steady. Guys, thanks for coming on, and congratulations, Sean, on the continued success, loved our cube conversation, and Jason, nice to meet you. >> Great to meet you. >> Thanks for having us. >> So, the session today is really looking at automating analytics workloads, with Steady as a customer. Sean, talk about the relationship with the customer Steady. What's the main product, what's the core relationship? >> Yeah, it's a really great question. When we work with a lot of companies like Steady, we're working hand in hand with their data engineering teams to help them onboard onto the Ascend platform, build these really powerful data pipelines fueling their analytics and other workloads, and really helping to ensure that they can be successful at getting more leverage and building faster than ever before. So we tend to partner really closely with each other's teams and really think of them as extensions of each other's own teams. I watch in Slack oftentimes, and our teams just go back and forth, as if we were all part of the same company. >> It's a really exciting time, Jason, great to have you on as a person cutting your teeth into this kind of what I call next gen data as intellectual property. Sean and I chatted on theCUBE conversation previous to this event about how every company is a data company, right? And we've heard that cliche. >> Right. >> But it's true, right? It's getting more powerful with the edge.
You're seeing more diverse data, faster data, small, big, large, medium, all kinds of different aspects and patterns. And it's becoming a workflow kind of intellectual property paradigm for companies, not so much-- >> That's right. >> Just the tech or the database; it's the data itself, data in flight, it's moving around, it's got value. What's your take-- >> Absolutely. >> On this trend? >> Basically, Steady helps our members, and we have a community of members, earn more income. So we want to help them steady their financial lives. And that's all based on data. So we have a web app; you can go to the iOS App Store, you can go to the Google Play Store, you can download the app. And we have a large number of members, 3 million plus, who are actively using this. And we also have a very exciting new product called Income Passport. And this helps 1099 and mixed wage earners verify their income, which is very important for different government benefits. And then third, we help people with emergency cash grants as well as awards. So all of that is built on a bedrock of data; if you're using our apps, it's all data powered. So what you were mentioning earlier, from pipelines that are running in real time to anything that's a kind of small data aggregation, we do everything from small to real-time and large. >> You guys are like a multiple-sided marketplace here: you're a FinTech app, as well as the future of work, and with virtual space-- >> That's right. >> Happening now. This actually encapsulates the critical problems people are trying to solve right now; you've got multiple stakeholders. >> That's right. >> In the data. >> Yes, we absolutely do. So we have our members, but we also, within the company, have product, we have strategy, we have a growth team, we have operations. So data engineering and data science also work with a data analytics organization. So at Steady we're very much a data company.
And we have a data organization led by our chief data officer, and we have data engineering and data science, which are my teams, but also business insights and analytics. So a lot of what we're building on the data engineering side is powering those insights and analytics that the business stakeholders use every day to run the organization. >> Sean, I want to get your thoughts on this, because we heard from Emily Freeman in the keynote, premiering her talk, about how this revolution in DevOps means it's not just one persona anymore, I'm a release engineer, I'm this kind of engineer; you're seeing now all engineering, all developers are developers. You have some specialty, but for the most part, the team makeups are changing. We touched on this in our cube conversation. The journey of data is not just the data people, the data folks. They're developers too. So the confluence of data science, data management, and developing is changing the team and cultural makeup of companies. Could you share your thoughts on this dynamic and how it impacts customers? >> Absolutely. I think we're finding a similar trend to what we saw a number of years ago, when we talked about how software was eating the world and every company was now becoming a software company. And as a result, we saw this proliferation and expansion of what the software roles looked like inside of a company, pulled through this entire new era of DevOps. We're finding that same pattern now emerging around data: not only is every company a software company, every company is a data company, and data really is that fuel, that oil that fuels the business. And in doing so, we're finding that, as Jason describes, it's pervasive across the team. It is no longer just one team that is creating some insights and reports around operational analytics, or maybe a team over here doing data science or machine learning. It is expansive.
And I think the really interesting challenges that start to come with this, too, are that so many data teams are over capacity. We did a recent study that highlighted that 96% of data teams are at or over capacity; only 4% had spare capacity. But as a result, the net is being cast even wider to pull in people from even broader and more adjacent domains to all participate in the data future of their organization. >> Yeah, and I'd love to get your guys' reaction to this conversation with Andy Jassy, who's now the CEO of Amazon. When he was the CEO of AWS last year, I talked with him about how the old guard and new guard are thinking around team formations. Obviously team capacity is growing and challenged when you've got the right formula. So that's one thing, right? But what if you don't have the right formula? What if you're in the skills gap problem, or on the team formation side of it, where maybe the mandate came down: well, we've got to build a data team, even in two years if you're not acquisitive. And this is what Andy and I were talking about: the thinking and the mindset of that mission, and being open to discovering and understanding the changes. Because if you were deciding what your team was two, three years ago, that might have changed a lot. So team capacity, Sean, to your point, if you got it right, that's a challenge in and of itself, but what if you don't have it right? What do you guys think about this? >> Yeah, I think that's exactly right. Basically, trying to gaze into the crystal ball and see what's going to happen in a year, or two years, even six months, is quite difficult. And if you don't have it right, you do spend a lot of time because of the technical debt that you've amassed. And we certainly spent quite a bit of time with technical debt for things we wanted to build. So, deconvolving that, getting those ETLs to a runnable state, getting performance there, that's what we spend a bit of time on.
And yeah, it's really part of the package. >> What do you guys see as the big challenge on teams? The scaling challenge, okay, formation is one thing, Sean, but, okay, getting it right, getting it formed properly and then scaling it, what are the big things you're seeing? >> I think the overarching management theme in general is that the highest performing teams are those where the individual with the context and the idea is able to execute as far and as fast and as efficiently as possible, removing a lot of those encumbrances. Or to put it a slightly different way: if DevOps basically boiled down to, how do we help more people write more software faster and safely, DataOps would be, very similarly, how do we enable more people to do more things with data faster and safely? And to do that, I think the era of these massive multi-year efforts around data is gone, and hopefully in the not too distant future, even these multi-quarter efforts around data are gone, and we get into a much more agile, nimble methodology where smaller initiatives and smaller efforts are possible by more diverse skillsets across the business. And really what we should be doing is leveraging technology and automation to ensure that people are able to be productive and efficient, that we can trust our data, and that systems are automated. And these are problems that technology is good at. And so in many ways, just as in the early days Amazon described getting people out of the muck of DevOps, I think we're going to do the same thing around getting people out of the muck of the data and get them really focused on the higher level aspects. >> Yeah, we're going to get into that complexity, that heavy lifting side, the muck, and the heavy lifting being taken away from the customers. But I want to go back real quick to Jason while we're on this topic.
Jason, I was just curious, how much has your team grown in the recent year, and how much could've or should've it grown? What's the status, and how has Ascend helped you guys? What's the dynamic there? 'Cause that's their value proposition. So, take us through that. >> Absolutely. So, since the beginning of the year, data engineering has doubled. So, we're a lean team; we certainly use the agile mindset and methodologies, but yeah, we've essentially doubled. So a lot of that is there's just so much to do, and the capacity problem is certainly there. So we also spend a lot of time figuring out exactly what the right tooling is. And I was mentioning the technical debt. So there's the big-O notation, if you will, of technical debt. And when you're building new things, you're fixing old things, and then you're trying to maintain everything, that scaling starts to hit hard. So even if we continue to double, I mean, we could easily add more data engineers. And a lot of that is, I mean, you know about the hiring cycles; there's a lot of great talent, but it's difficult to make all of those hires. So, we do spend quite a bit of time thinking about exactly what tools data engineering is using day-to-day. And what I mentioned were technologies on the streaming side all the way to the small batch things. But something that starts as a small batch can grow and grow and grow and take, say, 15 hours; it's possible, I've seen it. And getting that back down, and managing that complexity while not overburdening people who probably don't want to spend all their waking hours building ETLs, maintaining ETLs, putting in monitoring, putting in alerting, that I think is quite a challenge. >> It's so funny, because you mentioned 15 hours; you didn't roll your eyes, but you almost did. But people want it yesterday, they want real time, so there's a lot of demand-- >> Yes.
>> On the minds of the business outcome side of it. So, I've got to ask you, because this comes up a lot with technical debt, and now we're starting to see that come into the data conversation. And so I'm always curious: is there a different kind of technical debt with data? Because again, data is like software, but it's a little bit more elusive in the sense that it's always changing. So what kind of technical debt do you see on the data side that's different than, say, the software side? >> Absolutely, that's a great question. So, a lot of thinking about your data, structuring your data, and how you want to use that data going into a particular project might be different from what happens after stakeholders have new considerations and new products and new items that need to be built. So let's say you have a document store, or you have something that you thought was going to be nice and structured. How that can evolve to support those particular products means that, unless you take the time and go through and say, well, let's architect it perfectly so that we can handle that, you're going to make trade-offs and choices, and essentially that debt builds up. So you start cutting corners, you start changing your normalization. You start essentially taking those implicit schemas, which then tend to build into big things, big implicit schemas. And then of course, with implicit schemas, you're going to have a lot of null values, you're going to have a lot of items to deal with. So, how do you deal with that? And then you also have the opportunity to create keys and values, and, oops, do we take out those keys that were slightly misspelled? So, I could go on for hours, but basically the technical debt certainly is there with data. I see a lot of this as just a spectrum of technical debt, because it's all trade-offs that you made to build a product, and the inefficiencies start to hit you.
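The implicit-schema drift Jason describes, misspelled keys and null-heavy records coming out of a document store, can be illustrated with a tiny sketch. Every field name below is invented for illustration, not taken from Steady's data:

```python
# Hypothetical sketch of "implicit schema" drift in a document store.
# Each record was written by a different app version; the field names
# are made up for illustration.
docs = [
    {"user_id": 1, "income": 1200.0, "source": "gig"},
    {"user_id": 2, "income": 950.0},                    # "source" dropped
    {"user_id": 3, "incme": 800.0, "source": "w2"},     # misspelled key
]

# The union of keys across documents is the implicit schema the
# downstream pipeline quietly inherits.
implicit_schema = sorted({k for d in docs for k in d})
print(implicit_schema)   # ['incme', 'income', 'source', 'user_id']

# Flattening against that schema surfaces the nulls Jason mentions.
rows = [{k: d.get(k) for k in implicit_schema} for d in docs]
missing = {k: sum(r[k] is None for r in rows) for k in implicit_schema}
print(missing)           # {'incme': 2, 'income': 1, 'source': 1, 'user_id': 0}
```

The misspelled `incme` key never gets merged back into `income` unless someone notices, which is exactly the "do we take out those keys that were slightly misspelled?" debt.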
So, the 15 hour ETL I was mentioning: basically you start with something, you were building things for stakeholders, and essentially you have so much complex logic within that. So for the transforms that you're doing, if you're thinking of the bronze, silver, gold kind of framework, going from that bronze to a silver, you may have a massive number of transformations, or just a few, just to lightly dust it. But you could also go to gold with many more transformations, and managing that, managing the complexity, managing what you're spending for servers day after day after day, that's another real challenge of that technical debt stuff. >> That's a great lead into my next question for Sean. This is the disparate system complexity. With technical debt in software, the belief was always, oh yeah, I'll take some technical debt on and work it off once I get visibility into, say, unit economics or some sort of platform or tool feature, and then you work it off as fast as possible. This becomes the art and science of technical debt. Jason, what you're saying is that this can become unwieldy pretty quickly. You've got state, and you've got a lot of different interconnected moving parts. This is a huge issue, Sean; technical debt in the data world is much different architecturally. If you don't get it right, this is a huge, huge issue. Could you illuminate why that is and what you guys are doing to help unify and change some of those conditions? >> Yeah, absolutely. When we think about technical debt, and I'll keep drawing some parallels between DevOps and DataOps, 'cause I think there's a tremendous number of similarities in these worlds, we used to always have the saying that "Your tech debt grows linearly across microservices, but exponentially within services."
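The bronze, silver, gold framing Jason mentions can be sketched as three thin stages. This is a toy in-memory version; real layers would live in a data lake or warehouse, and the cleaning rules here are assumptions made for illustration:

```python
# Toy medallion-style pipeline: bronze (raw) -> silver (cleaned) -> gold
# (aggregated). Stage contents are invented; real stages would be Spark
# jobs over tables, not list comprehensions over dicts.
bronze = [
    {"user": "a", "amount": "10.0"},
    {"user": "a", "amount": "2.5"},
    {"user": "b", "amount": None},    # bad record, caught at the silver layer
]

# Silver: light cleaning -- drop unparseable rows, cast strings to floats.
silver = [
    {"user": r["user"], "amount": float(r["amount"])}
    for r in bronze
    if r["amount"] is not None
]

# Gold: a business-level aggregate (total amount per user).
gold = {}
for r in silver:
    gold[r["user"]] = gold.get(r["user"], 0.0) + r["amount"]

print(gold)  # {'a': 12.5}
```

The "lightly dust it" versus "many more transformations" trade-off Jason describes is just how much logic you put into the silver and gold steps, and each added transform is compute you pay for on every run.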
And so you want that right level of architecture and composability, if you will, of your systems, where you can deploy changes, you can test, you can have high degrees of confidence in the roll-outs. And I think the interesting part on the data side, as Jason highlighted, is that the big-O notation for tech debt in the data ecosystem is still fairly exponential or polynomial in nature, as right now we don't have great decomposition of the components. We have different systems: we have a streaming system, we have databases, we have document stores and so on. But how the whole data pipeline, data engineering part works generally tends to be pretty monolithic in nature. You take your whole data pipeline and you deploy the whole thing, and you basically just cross your fingers. Hopefully it's not 15 hours, but if it is 15 hours, you go to sleep, you wake up the next morning, grab a coffee, and then maybe it worked. And that iteration cycle is really slow. So when we think about how we can improve these things, it's combinations of intelligent systems that do instantaneous schema detection and validation. It's things like automated lineage and dependency tracking, so you know, when you deploy code, what piece of data it affects. It's things like automated testing on individual core parts of your data pipelines, to validate that you're getting the expected output that you need. So it's pulling a lot of these same DevOps-style principles into the data world, which is really designed, going back to how do you help more people build more things faster and safely, for rapid iterations and rapid feedback, so you know if there are breaks in the system much earlier on. >> Well, I think Sean, you're onto something really big there.
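A minimal version of the schema detection and validation idea Sean describes might look like the following. This is a generic hand-rolled check, not Ascend's actual API, and the expected field types are assumptions for illustration:

```python
# Minimal schema check a pipeline step might run before processing a batch.
# A generic sketch, not Ascend's API; the expected types are assumptions.
EXPECTED = {"user_id": int, "amount": float, "ts": str}

def validate(record: dict, expected: dict = EXPECTED) -> list[str]:
    """Return a list of schema violations for one record (empty == valid)."""
    errors = []
    for field, typ in expected.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            errors.append(f"{field}: expected {typ.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors

print(validate({"user_id": 7, "amount": 12.5, "ts": "2021-09-01"}))  # []
print(validate({"user_id": "7", "amount": 12.5}))
# ['user_id: expected int, got str', 'missing field: ts']
```

Run at the head of each pipeline stage, a check like this turns a silent 15-hour failure into an immediate, attributable one, which is the rapid-feedback point Sean is making.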
And I think this is something that's emerging pretty quickly at cloud scale, whatever version we're in, call it 2.0: the systems thinking mindset. 'Cause you mentioned the model that was essentially a silo or subsystem; it was cohesive in its own way, but it was monolithic. Now you have a broken-down set of decomposed data pieces that have to work together. So Jason, this is the big challenge that everyone, well, not everyone is talking about; I think these guys are, and you're using them. What are you unifying? Because this is systems thinking, an operating-systems mindset applied to data, where databases are just pieces of it. It's not just a database problem. What's your thoughts? >> That's absolutely right. And so, Sean touched on composability of ETL, thinking about reusable components, thinking about pieces that all fit together. Because as you're building something as complex as some of these ETLs are, we do think about the platform itself and how that lends to the overarching output. So one thing is being able to actually see the different components of an ETL and blend those in, and use the DRY principle: don't repeat yourself. So you're essentially able to take pieces that one person built; maybe John builds a couple of our connectors coming in, Sean also has a bunch of transforms, and I just want this stuff out, so I can use a lot of what you guys have already built. I think that's key, because a lot of engineering and data engineering is about managing complexity. So taking that complexity and essentially getting it out fast and getting it out error-free is where we're going with all of the data products we're building. >> What are some of the complexities that you guys have that you're dealing with? Can you be specific and share what these guys are doing to solve that problem for you? This is a big problem everyone's having; I'm seeing it all over the place.
>> Absolutely, so I could start at a couple of places. I don't know if you guys are on the three-Vs, four-Vs or five-Vs model, but we have all of those. And if you go to that four- or five-V model, there is the veracity piece, where you have to ask yourself: is it true? Is it accurate, and when? Change happens throughout the pipeline; change can come from webhooks, change can come from users. You have to make sure that you're managing that complexity. And in what we're building, I mentioned that we are paying down a lot of tech debt, but we're also building new products. And one quite challenging ETL that we're building is something going from a document store to an analytical application. So in that document store, we talked about flexible schema: basically, you don't really know exactly what you're going to get day to day, and you need to be able to manage that change through the whole process in a way that the ultimate business users find valuable. So, that's one of the key applications that we're working on right now. And that's one that the team at Ascend and my team are working on hand in hand, going through a lot of those challenges. I also watch the Slack, just as Sean does, and it's a very active discussion board. So it is essentially like they're just partnering together. It's fabulous, but yeah-- >> And you're seeing kind of a value on this too? I mean, in terms of output, what are the business results? >> Yes, absolutely. So yes, the fifth V: value. Getting to that value, there are a few pieces. So there are some data products that we're building within that product, and they're data science and data analytics based products that essentially do things with the data that help the user. There's also the question of exactly the usage and those kinds of metrics that people in ops want to understand, as well as our growth team. So we have internal and external stakeholders for that.
Jason, this is a great use case, a great customer. Sean, you guys are automating. For the folks watching, who are seeing their peer living the dream here, and the data journey, as we say, things are happening. What's the message to customers that you guys want to send? Because you guys are really cutting your teeth into a whole other level of data engineering, data platform. That's really about the systems view and about cloud. What's the pitch, Sean? What should people know about the company? >> Absolutely, yeah. Well, so one, I'd say even before the pitch, I would encourage people to not accept the status quo. And in particular, in data engineering today, the status quo is an incredibly high degree of pain and discomfort. And I think the important part of why Ascend exists and why we're so helpful for our customers is that there is a much more automated future of how we build data products, how we optimize those, and how we can get a larger cohort of builders into the data ecosystem. And that helps us get out of the muck, as we talked about before, and put really advanced technology to work for more people inside of our companies to build these data products, leveraging the latest and greatest technologies to drive increased business value faster. >> Jason, what's your assessment of these guys? As people watching might say, hey, you know what, I'm going to contact them, I need this. How would you talk about Ascend to your peers? >> Absolutely, so I think just thinking about the whole process, it's been a great partnership. We started with a POC; I think Ascend likes to start with three use cases, I think we came out with four, and we went through the ones that we really cared about and really wanted to bring value to the company with. So we have roadmaps for some, as we're paying down technical debt and transitioning; others we can go to directly.
And I think that thinking about, just like you're saying, John, that systems view of everything you're building, where that makes sense, you can actually take a lot of that complexity and encapsulate it in a way that you can essentially manage it all in that platform. So the Ascend platform has the composability piece that we touched on. Not only can you compose it, but you can drill into it. And my team is super talented and is going to drill into it, and basically loves to open up each of those data flows, each of the components therein, and has the control there with the combination of Spark SQL, PySpark, Scala and so on. And I think that the variety of connections is also quite helpful. So thinking about the DRY principle from a systems perspective is extremely useful, because DRY, you often get that in a code review, right? "I think you can be a little bit more DRY here." >> Yeah. >> But you can really do that in the way that you're composing your systems as well. >> That's a great, great point. One quick thing for the folks that are watching that are trying to figure this out, and a lot of architecture is going on, a lot of people are looking at different solutions. What things have you learned that you could give them as a tip, like to avoid maybe some scar tissue, or tips of the trade, where you can say, hey, be careful going this way? What are some of the learnings? Could you give a few pointers to folks out there, if they're kicking tires on the direction? What does the wrong direction look like? What does the right direction look like? >> Absolutely. Thinking it through, I don't know how much time we have; that feels like a few days' conversation as far as ways to go wrong. But absolutely, I think that thinking through exactly where you want to be is the key.
Otherwise it's kind of like when you're writing a ticket in Jira: if you don't have clear success criteria, if you don't know where you're going to go, then you'll end up somewhere, building something, and it might work. But if you think through the exact destination that you want to be at, that will drive a lot of the decisions as you think backwards to where you started. And also, Sean mentioned challenging the status quo. I think that you really have to be ready to challenge the status quo at every step of that journey. So if you start with some particular service that you had and it's legacy, if it's not essentially performing what you need, then it's okay to just take a step back and say, well, maybe that's not the one. So I think that thinking through the system, just like you were saying, John, and also having a visual representation of where you want to go, is critical. So hopefully that encapsulates a lot of it, but yes, the destination is key. >> Yeah, and having an engineering platform that also unifies the multiple components and is agile. >> That's right. >> It gets you out of the muck and the undifferentiated heavy lifting; that's a cloud play. >> Absolutely. >> Sean, wrap it up for us here. What's the bumper sticker for your vision? Share the founding principles of the company. >> Absolutely. For us, I started the company as a recovering CTO. At the last company I founded, we had nearly 60 people on our data team alone and had invested tremendous amounts of effort over the course of eight years. And one of the things that I've learned is that over time, innovation comes just as much from deciding what you're no longer going to do as what you're going to do. And focusing heavily around how you get out of that muck, how you continue to climb up that technology stack, is incredibly important.
And so really we are excited to be a part of it as the industry continues to climb to higher and higher levels. We're building more and more advanced levels of automation, and what we call our data awareness, into the automated engine of the Ascend platform that takes us across the entire data ecosystem, connecting and automating all data movement. And so we have a very exciting vision for this fabric that's emerging over time. >> Awesome. Sean, thank you so much for that insight. Jason, thanks for coming on as a customer of Ascend.io. >> Thank you. >> I appreciate it, gentlemen, thank you. This is the track on automating analytic workloads, here at the AWS Startup Showcase with the hottest companies, featuring Ascend.io. I'm John Furrier, with theCUBE, thanks for watching. (upbeat music)
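As an aside, the composability and DRY ideas that ran through this conversation, small transform pieces built once and reused across pipelines, can be sketched in plain Python. The function names here are illustrative, not Ascend's API:

```python
# A sketch of composable ETL: each transform is a small, reusable function,
# and a pipeline is just an ordered composition of those pieces, so one
# person's connector or transform can be reused instead of rewritten (DRY).
from functools import reduce

def clean_nulls(rows):
    """Drop records missing required fields (one owner builds this once)."""
    return [r for r in rows if r.get("id") is not None]

def add_revenue(rows):
    """Derive a field (another shared, reusable component)."""
    return [{**r, "revenue": r["units"] * r["price"]} for r in rows]

def compose(*steps):
    """Chain transforms into a pipeline without repeating their logic."""
    return lambda rows: reduce(lambda acc, step: step(acc), steps, rows)

pipeline = compose(clean_nulls, add_revenue)
raw = [
    {"id": 1, "units": 3, "price": 2.0},
    {"id": None, "units": 1, "price": 5.0},  # dropped by clean_nulls
]
print(pipeline(raw))
# [{'id': 1, 'units': 3, 'price': 2.0, 'revenue': 6.0}]
```

Because each step is independent, a second pipeline can reuse `clean_nulls` with different downstream transforms, which is the reuse Jason describes across connectors and transforms.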
Skyla Loomis, IBM | AnsibleFest 2020
>> (upbeat music) [Narrator] From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello, welcome back to theCUBE virtual coverage of AnsibleFest 2020 Virtual. We're not face to face this year. I'm John Furrier, your host. We're bringing it together remotely. We're in the Palo Alto Studios with theCUBE and we're going remote for our guests this year. And I hope you can come together online and enjoy the content. Of course, go check out the event site on demand; there's certainly a lot of great content. I've got a great guest, Skyla Loomis, Vice President for the Z Application Platform at IBM, also known as IBM Z, talking mainframe. Skyla, thanks for coming on theCUBE. Appreciate it. >> Thank you for having me. >> So, you know, I've had many conversations about the mainframe being relevant and valuable in the context of cloud and cloud native, because if it's got a workload, you've got containers and all this good stuff; you can still run anything on anything these days by integrating it in with all this great glue layer, for lack of a better word, or oversimplifying it, you know, things going on. So it's really kind of cool. Plus Walter Bentley, in my previous interview, was talking about the success of Ansible and IBM working together on a really killer implementation. So I want to get into that, but before that, let's get into IBM Z. How did you start working with IBM Z? What's your role there? >> Yeah, so I actually just got started with Z about four years ago. I spent most of my career actually on the distributed platform, largely with data and analytics, the analytics area, databases, both on-premise and public cloud. But I always considered myself a friend to Z. So in many of the areas that I'd worked on, I had offerings where we'd enabled them to work with z/OS or Linux on Z.
And then I had this opportunity come up where I was able to take on the role of leading some of our really core runtimes and databases on the Z platform, IMS and z/TPF. And then recently I just expanded my scope to take on CICS and a number of our other offerings related to those, kind of in this whole application platform space. And I was really excited, just because of how important these runtimes and this platform are to the world, really. You know, we power two thirds of the Fortune 100, clients across banking and insurance. And it's, you know, some of the most powerful transaction platforms in the world, doing hundreds of billions of transactions a day. And, you know, just something that's really exciting to be a part of, and everything that it does for us. >> It's funny how distributed systems and distributed computing really enable more longevity of everything. And now with cloud, you've got new capabilities. So it's super exciting. We're seeing that as a big theme at AnsibleFest, this idea of connecting, making things easier. You know, talk about distributed computing: the cloud is one big distributed computer. So everything's kind of playing together. You have a panel discussion at AnsibleFest Virtual. Could you talk about what your topic is and share what was some of the content in there? Content being, content as in your presentation? Not content. (laughs) >> Absolutely. Yeah, so I had the opportunity to co-host a panel with a couple of our clients. So we had Phil Allison from Black Knight and Pat Lane from Allstate, and they were really joining us and talking about their experience now starting to use Ansible to manage z/OS. So we just actually launched some content collections, helping to enable and accelerate clients' use of Ansible to manage z/OS, back in March of this year. And we've just seen tremendous client uptake in this.
And these are a couple of clients who've been working with us and, you know, getting started on the journey of now using Ansible with Z. They both, you know, have it in the enterprise already, working with Ansible on other platforms. And, you know, we got to talk with them about how they're bringing it into Z, what use cases they're looking at, the type of culture change that it drives for their teams as they embark on this journey, and, you know, where they see it going for them in the future. >> You know, this is one of the hot items this year. I know the event is virtual, so it has a lot of content flowing around and sessions, but collections is the top story. A lot of people talking collections, collections, collections, you know, integration and partnering. It hits so many things, but specifically, I like this use case because you're talking about real business value. And I want to ask you specifically, when you were in that use case with Ansible and Z, people are excited, it seems like it's working well. Can you talk about what problems it solves? I mean, what were some of the drivers behind it? What were some of the results? Could you give some insight into, you know, was it a pain point? Was it an enabler? Can you just share why people are getting excited about this? >> Yeah, well, certainly automation on Z is not new; you know, there's decades' worth of automation on the platform, but it's often proprietary, you know, or bundled up, like individual teams or individual people on teams have specific assets, right, that they've built, and it's not shared. And it's certainly not consistent with the rest of the enterprise. And, you know, more and more, we're talking about hybrid cloud. You know, we're seeing that, you know, an application is not isolated to a single platform anymore, right? It really expands.
And so being able to leverage this common open platform to be able to manage Z in the same way that you manage the entire rest of your enterprise, whether that's Linux or Windows or network or storage or anything, right? You know, you can now actually bring this all together into a common automation plane and control plane to be able to manage all of this. It's also really great from a skills perspective. So it enables us to really be able to leverage, you know, Python on the platform and that whole ecosystem of Ansible skills that are out there, and be able to now use that to work with Z. >> So it's essentially a modern abstraction layer of agility and people to work on it. (laughs) >> Yeah. >> You know, it's not the joke, hey, where's that COBOL programmer? I mean, this is a serious skills gap issue, though. This is what we're talking about here. You don't have to kill the old to bring in the new; this is an example of integration, where it's a classic abstraction layer and evolution. Am I getting that right? >> Absolutely. I mean, I think that Ansible's power as an orchestrator is part of why, you know, it's been so successful here, because it's not trying to rip and replace and tell you that you have to rewrite anything that you already have. You know, it is that glue, sort of like you used that term earlier, right? It's that glue that can span, you know, whether you've got REXX, whether you've got JCL, whether you're using z/OSMF, you know, or any other kind of custom automation on the platform; you know, it works with everything, and it can start to provide that transparency into it as well, and move to that, like, infrastructure-as-code type of culture. So you can bring it into source control. You can have visibility to it as part of the Ansible automation platform and Tower and those capabilities. And so it really becomes a part of the whole enterprise and enables you to codify a lot of that knowledge.
That, you know, exists again in pockets or in individuals, and make it much more accessible to anybody new who's coming to the platform. >> That's a great point, great insight. It's worth calling out. I'm going to make a note of that and make a highlight from that insight. That was awesome. I got to ask about this notion of client uptake. You know, when you have z/OS and Ansible kind of come together, what do the clients say? When do they get excited? When do they know what they've got to do? And what are some of the client reactions? Are they, like, waking up one day and saying, "Hey, yeah, I actually put Ansible and z/OS together"? You know, peanut butter and chocolate is (mumbles) >> Honestly >> You know, it was just one of those things where it's not obvious, right? Or is it? >> Actually, I have been surprised myself at how, like, resoundingly positive and immediate the reactions have been. You know, one of our general managers runs a general managers' advisory council with some of our top clients on the platform, and, you know, we meet with them regularly to talk about, you know, the future direction that we're going. And we first brought this idea of Ansible managing Z there. And literally, unanimously, everybody was like, yes, give it to us now. (laughs) It was pretty incredible, you know? And so, you know, we've really just seen amazing uptake. We've had over 5,000 downloads of our core collection on Galaxy. And again, that's just since mid to late March, when we first launched. So we're really seeing tremendous excitement with it. >> You know, I want to talk about some of the new announcements, but you brought that up, so I wanted to kind of tie into it. It is addictive; when you think modernization, people's success is addictive. This is another theme coming out of AnsibleFest this year: the sharing, the new content. You know, coders' content is the theme.
I got to ask you, because you mentioned earlier the business value and how the clients are kind of gravitating towards it. They want it. It is addictive, contagious. In the ivory towers, in the big, you know, front office, the business, it's like, we've got to make everything as a service, right? You know, you hear that, right? And they say, okay, okay, boss, you know, Skyla, just go do it. Okay. Okay. It's so easy. You can just do it tomorrow. But to make everything as a service, you got to have the automation, right? So, you know, you've got to bridge that gap as everything becomes a service, whether it's mainframe. I mean, okay, mainframe is no problem. If you want to talk about observability and microservices and DevOps, eventually everything's going to be a service. You got to have the automation. Could you share your commentary on how you view that? Because again, it's a business objective, everything as a service; then you got to make it technical, then you got to make it work, and so on. So what's your thoughts on that? >> Absolutely. I mean, agility is a huge theme that we've been focusing on. We've been delivering a lot of capabilities around a cloud native development experience for folks working on COBOL, right. Because absolutely, you know, there are a lot of languages coming to the platform. Java is incredibly powerful, and it actually runs better on Z than it runs on any other platform out there. And so, you know, we're seeing a lot of clients, you know, starting to modernize and continue to evolve their applications, because the platform itself is incredibly modern, right? I mean, we come out with new releases; we're leading the industry in a number of areas around resiliency and security, with our, you know, pervasive encryption and a number of other things that we come out with. But, you know, the applications themselves are what, you know, have not always kept pace with the rate of change in the industry.
And so, you know, we're really trying to help enable our clients to make that leap and continue to evolve their applications in an important way, and the automation and the tools that go around it become very important. So, you know, one of the things that we're enabling is the self-service provisioning experience, right. So clients can, you know, from OpenShift, be able to, you know, say, "Hey, give me an IMS and z/OS Connect stack, or a CICS and Db2 stack." And all of that under the covers is going to be powered by Ansible automation. So that really, you know, you can get your system programmers and your talent out of having to do these manual tasks, right, and enable the development community. So they can use things like VS Code and Jenkins and GitLab, and you'll have this automated CI/CD pipeline. And again, Ansible under the covers can be there helping to provision those test environments, you know, move the data along with the application changes through the pipeline, and really just help to support that so that our clients can do what they need to do. >> You guys got the collections in the hub there, the automation hub. I got to ask you, where do you see the future of automating within z/OS going forward? >> Yeah, so I think, you know, one of the areas we'd like to see it go is to head more towards this declarative state, so that you can, you know, have this declarative configuration defined for your Z environment and then have Ansible, really with the idempotency, right, be able to go out and ensure that the environment is always there and meeting those requirements. You know, that's partly a culture change as well, which goes along with it, but that's a key area. And then also, just, you know, along with that, becoming more proactive overall as part of, you know, AIOps, right. That's happening.
And I think Ansible and the automation that we support can become, you know, an integral piece of supporting that more intelligent and proactive operational direction that, you know, we're all going in. >> Awesome, Skyla. Great to talk to you, and so insightful, appreciate it. One final question. I want to ask you a personal question, because I've been doing a lot of interviews around skill gaps and cybersecurity, and there are a lot of job openings and there are a lot of people, and people, with COVID, are working at home. People are looking to get newly skilled-up positions, new opportunities. Again, cybersecurity and spaces and events we did, and for us it's huge, huge openings. But for people watching who are, you know, resetting, getting through this COVID, who want to come out on the other side, there are a lot of online learning tools out there. What skill sets do you think matter? Because you brought up this point about modernization and bringing in new people, and people are a big part of this event, and the role of the people in community. What areas do you think people could really double down on if they wanted to learn a skill, or an area of coding, business policy, integration services, solution architecture? There are a lot of different personas, but what skills can I learn? What's your advice to people out there? >> Yeah, sure. I mean, on the Z platform overall, in skills related to Z, COBOL, right. There's, you know, like two billion lines of COBOL out there in the world, and it's certainly not going away, and there's a huge need for skills. And, you know, if you've got experience from other platforms, I think bringing that in, right, and really being able to kind of then bridge the two things together, right, for the folks that you're working for and the enterprise you're working with. You know, we actually have a bunch of education out there.
We've got the Master the Mainframe program and even a competition that goes on, that's happening now, for folks who are interested in getting started at any stage, whether you're a student or later in your career. But, you know, if you learn, you know, a lot of those platforms, you're going to be able to have a career for life. >> Yeah, and the scale, and the data; there's so much going on. It's super exciting. Thanks for sharing that. Appreciate it. Wanted to get that plug in there. And of course, IBM, if you learn COBOL, you'll have a job forever. I mean, the mainframe's not going away. >> Absolutely. >> Skyla Loomis, Vice President for the Z Application Platform at IBM, thank you so much for coming on theCUBE. Appreciate it. >> Thanks for having me. >> I'm John Furrier, your host of theCUBE, here for AnsibleFest 2020 Virtual. Thanks for watching. (upbeat music)
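As an aside, the declarative, idempotent model Skyla describes, state the desired configuration and let the automation converge the environment toward it, can be sketched in a few lines of Python. This is a toy illustration of the concept, not IBM's or Ansible's implementation; the configuration keys are invented:

```python
# A toy illustration of declarative, idempotent configuration: describe the
# desired state, compute only the changes needed to converge, and observe
# that a second run is a no-op because the environment already complies.
desired = {"regions": 4, "encryption": "enabled"}

def ensure(actual, desired):
    """Return (new_state, changes); idempotent, so no changes when converged."""
    changes = {k: v for k, v in desired.items() if actual.get(k) != v}
    return {**actual, **changes}, changes

state = {"regions": 2}
state, first = ensure(state, desired)
state, second = ensure(state, desired)
print(first)   # {'regions': 4, 'encryption': 'enabled'}
print(second)  # {} (already in the desired state)
```

The second run reporting no changes is exactly the idempotency property: you can run the automation as often as you like, and it only acts when the environment has drifted from the declared configuration.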
ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN
>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud-like experience with the flexibility, speed and security of modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre-sales team here at Mirantis. I've spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications, from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deployed to a container platform. This is fairly common in organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development or non-production environments.
This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development. This can either include Docker at the development layer, or be the more traditional throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which lead to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycle of inefficient resource utilization. And a true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets, but that maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this. Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model.
No matter if it is full stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback for the health of your applications, observability for your clusters, fast context switching between environments, and allows you to choose the best tool for the task at hand, whether it is graphical user interface or command line interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues, and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users and the overall perceived value. Now let's look and see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of the developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, Docker Trusted Registry for our secure container registry, and Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with your teams of developers and operators to design a system that provides a fast, consistent and secure experience for developers, one that works for any application, brownfield or greenfield, monolith or microservice.
Onboarding teams can be simplified with integrations into enterprise authentication services, access to GitHub repositories, Jenkins access and jobs, Universal Control Plane and Docker Trusted Registry teams and organizations, Kubernetes namespaces with access control, and Docker Trusted Registry namespaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll actually be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process, using Docker containers at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository; in this case, we'll be using GitHub. Then Jenkins picks up: it will check out that code from our source code repository, build our Docker containers, test the application, build the image, and then it will take the image and push it to our Docker Trusted Registry. From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application deployed in our real environment. Jenkins will then test the deployed application, and if all tests show it as good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. Our marketing team says we need to change "Containerized NGINX" to something more Mirantis-branded.
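The end-to-end flow just described (build, test, push, scan, sign, promote) can be sketched as a small model. Everything here is illustrative: the function names, the registry hostname `dtr.example.com`, and the dict fields are invented for the sketch; none of them are Jenkins or Docker APIs.

```python
# Hypothetical model of the secure supply chain stages described above:
# build -> scan -> sign -> promote. The scan acts as a gate before
# anything is signed or promoted to the production repository.

def build_image(commit_sha, vulnerabilities=None):
    """Stand-in for `docker build` plus a push to the dev repository."""
    return {
        "tag": f"dtr.example.com/dev/simple-nginx:{commit_sha[:7]}",
        "signed": False,
        "vulnerabilities": list(vulnerabilities or []),
    }

def scan_image(image):
    """Stand-in for the registry's binary-level scan; returns CVE list."""
    return image["vulnerabilities"]

def sign_image(image):
    """Stand-in for signing the image in the trusted registry."""
    image["signed"] = True
    return image

def run_pipeline(commit_sha, vulnerabilities=None):
    """Gate promotion to prod on a clean scan, then sign and promote."""
    image = build_image(commit_sha, vulnerabilities)
    if scan_image(image):
        raise RuntimeError("scan found vulnerabilities; not promoting")
    sign_image(image)
    image["tag"] = image["tag"].replace("/dev/", "/prod/")  # promote
    return image
```

The point of the sketch is the ordering: an image that fails the scan never reaches the signing or promotion steps, which is what makes the later production deploy trustworthy.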
So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. Here's our application. We have our code loaded, and we're going to be able to use Docker Desktop on our local environment, with our Docker Desktop plugin for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here with our code, we'll be able to interact with Docker, make our changes, see them live, and quickly see if our changes actually made the impact that we're expecting in our application. So let's find our updated titles for the application, and let's go ahead and change that to "Mirantis-ized NGINX" instead of "Containerized NGINX". We'll change it in the title and on the front page of the application. So now that we've saved that change to our application, we can actually take a look at our code here in VS Code. And as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of the automatic building of our application. So now we have a Docker image that has everything we need in our application inside of that image. Here, we can actually just right-click on the image tag that we just created and do "run". This will interactively run the container for us. And then once our container is running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can actually verify that our application is working as expected, we can stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here, we're going to go ahead and make a commit message to say that we updated to our Mirantis branding. We will commit that change and then push it to our source code repository. Again, in this case, we're using GitHub as our source code repository.
So here in VS Code, we'll have that pushed to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is going to pick up those changes for our application and check them out from our source code repository. So GitHub notifies Jenkins that there's a change; Jenkins checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing the image into our Docker Trusted Registry, scanning it and signing it there, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed. Here, we can see that our title has been updated on our application, so we can verify that it looks good in development. If we jump back here to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. We're then also going to sign that image, signing off that yes, it has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now, let's take a look at our Docker Trusted Registry, where we can see the namespace for our application and our simple NGINX repository. From here, we'll be able to see information about the application image that we've pushed into the registry, such as the image signature, when it was pushed and by whom, and then we'll also be able to see the scan results for our image.
In this case, we can actually see that there are vulnerabilities in our image, so let's take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. These image layers give us details about where the vulnerabilities were located and what those vulnerabilities actually are. If we click on a vulnerability, we can see specific information about it, with details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly to remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that and update it. One of the ways that we can help secure that as part of the supply chain is to take a look at where we get the base layers of our images. Docker Hub provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to security. The official images on Docker Hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment: it is much easier to consume the base operating system layer images than to build your own and try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull it into our own Docker Trusted Registry using our mirroring feature.
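The layer-attribution idea behind binary-level scanning can be illustrated with a toy lookup. The data shape here is an assumption made for the sketch; a real DTR scan report is structured differently.

```python
def first_layer_with_cve(layers, cve_id):
    """Return the index of the lowest layer carrying cve_id, or None.

    `layers` is ordered base-first, mirroring how a binary-level scan
    attributes each finding to the individual image layer that
    introduced it (the dict shape is hypothetical)."""
    for index, layer in enumerate(layers):
        if cve_id in layer["cves"]:
            return index
    return None
```

If this returns 0, the finding lives in the base layer, which is exactly the remediation case discussed above: pull an updated base image from a trusted source rather than trying to patch it in your own layers.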
Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that they meet our security requirements. And then, based off of the scan result, we promote the image to a public repository, where we can sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker Trusted Registry. In this case, we're taking a look at the Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active; then our image mirroring will pull content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. From here, we can take a look at the promotions, to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry that promotes content to a public repository for internal users to consume, based off of the vulnerabilities that are found, or not found, inside the Docker image. How our actual users consume this content is by taking a look at the official images that we've made available to them. Here again, looking at our Alpine image, we can take a look at the tags that exist, and we can see the content that has been made available.
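A promotion policy keyed off scan results, as described above, might look like this in miniature. The severity ordering and the rule shape are assumptions for illustration, not DTR's actual policy syntax.

```python
# Severities ordered from least to most serious.
SEVERITIES = ["low", "medium", "high", "critical"]

def should_promote(findings, threshold="high"):
    """Promote only if no scan finding reaches the blocking threshold.

    `findings` is a list of dicts like {"severity": "medium"}; any
    finding at or above `threshold` blocks promotion to the public
    (internal-facing) repository."""
    blocked = SEVERITIES[SEVERITIES.index(threshold):]
    return not any(f["severity"] in blocked for f in findings)
```

With the default threshold, low- and medium-severity findings still allow promotion, while high or critical findings hold the image back in staging; tightening the threshold is a one-argument change.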
So we've pulled in all sorts of content from Docker Hub. In this case, we've even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to give developers a quick, opinionated view that focuses on how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application, and when we click on it, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a helm chart and is made available through Lens. Here, we can just click on the description of our application to see more information about the helm chart, so we can publish whatever information may be relevant about our application. And with one click, we can install our helm chart. Here, it will show us the actual details of the helm chart.
So before we install it, we can actually look at those individual components. In this case, we can see this creates an ingress rule, and this tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. In this case, we're actually going to do a quick test here, because we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy. Because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, so the Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository and click install, it will then install the helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our helm chart releases view. From here, we can see our simple NGINX application, and in this case, we'll get details around the actual deployed helm chart. The nice thing is that Lens provides us this capability with helm, to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components that are created inside of Kubernetes.
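The content-trust gate that blocks the Docker Hub deploy can be modeled as a tiny admission check. The hostname and the signature store are hypothetical, and real Docker Content Trust enforcement verifies Notary signature metadata; this only captures the decision logic of "trusted registry, and signed".

```python
TRUSTED_REGISTRY = "dtr.example.com"  # hypothetical DTR hostname

def admit(image_ref, signed_images):
    """Mimic the content-trust policy: only images that come from our
    trusted registry AND carry a valid signature may be deployed.

    `signed_images` stands in for the registry's signature records."""
    registry = image_ref.split("/", 1)[0]
    return registry == TRUSTED_REGISTRY and image_ref in signed_images
```

Under this rule, an unsigned Docker Hub reference is rejected outright, while the same application image pulled from the trusted registry with its signature in place is admitted, which matches the failed-then-successful install shown in the demo.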
There are specific details that can help us access the application, such as that ingress rule we just talked about, but it also gives us the resources, such as the service, the deployment and the ingress, that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience, and offer flexibility around DevOps and operations-controlled processes, through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
Harish Grama, IBM | IBM Think 2020
>> Narrator: From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> We're back. This is theCUBE's coverage of the IBM Think 2020 digital experience. My name is Dave Vellante, and this is wall-to-wall coverage of the multi-day event. Harish Grama is here, he's the general manager of the IBM public cloud. Harish, welcome back to theCUBE. Good to see you. Sorry we're not face to face, but this will do. >> Yeah, thank you. Great to see you again as well, and, you know, it is the times. >> I want to start by asking you, you did a stint at a large bank, and I'd love to talk to you about that, but I want to stay focused. You said last year on theCUBE, you can't do everything in the public cloud; certain things need to remain on-prem. I'm interested in how your experience at the large financial institution, and your experience generally working with, you know, your colleagues in the banking industry, how that shaped your vision of the IBM public cloud. >> Yeah, I think that's a great question. You know, if you think about trying to transform yourself to a public cloud, a lot of people, what they try to do is, you know, they take applications that have been running their enterprise and they try to redo them in their entirety, with microservices, using PaaS-level services, I mean, trying to put it all up in the public cloud. Now, you know, just think about some of these applications that are running your large institution, right? Some of them will have regulatory rules around them, some of them have latency requirements, or low-latency requirements I should say, some of them need to be close to the back end, because that's where the data is. So for all these reasons, you know, you have to think about a holistic cloud picture, of which public cloud is, you know, integral to it, but some of the things will need to remain on-prem, right? So when I build my public cloud out for IBM, I kind of keep those in the back of my mind as I get the team to work on it, to ensure that we have the right
capabilities on the public cloud, and then, where it makes sense, you know, have the right capabilities on the hybrid side as well, working with my colleagues in IBM. >> Well, you know, during the COVID-19 pandemic, we've been talking to a lot of CISOs and CIOs, we had a couple of roundtables with our data partner ETR, and it was interesting: you know, organizations that maybe wouldn't have considered the cloud, certainly as aggressively, maybe they put some test/dev in the cloud, have said, well, we're really reconsidering that. One CIO actually said, you know, I'd love to delete my data center. But, to your point, you can't just delete the data center. First of all, you don't want to necessarily move stuff; second of all, we've got a lot of experience from a consulting standpoint looking at this. If you have to migrate... migrate's like an evil word, especially with mission-critical systems. If you have to freeze code and you can't upgrade, you know, for some number of months, you may be out of compliance, or you're not remaining competitive. So you have to really be circumspect and thoughtful with regard to what you do move. I wonder if you could comment on that. >> No, I completely agree with what you're saying. You know, to your point, right, with COVID, things really changed. I've been speaking with a lot of cloud transformers, I would say, you know, in the various industries, but specifically with banks as well, and the cloud leader for one of the large European banks said to me, this was amazing, because for four years he's been trying to get his CISO organization and risk and compliance, et cetera, to get their heads around moving applications to the cloud, and he said that, you know, one month of COVID, and having everyone locked down at home, has been able to unblock more than what he's tried in the last four years. So that's telling in itself, right? So look, you know, I've been working on public clouds for a good long time now, both from a provider side as well as a consumer
side, and while, you know, you certainly just can't close your data centers that are running your large enterprise overnight, you certainly can take a lot of stuff over there and move it to the public cloud in a meaningful fashion, where you're able to take the pieces that really iterate more rapidly, where you can get the benefit, while keeping your data safe and, you know, being able to connect back into your back-end systems, which will run a lot of your large processes in your enterprise as well. So I think there is a balance to be had here, and people, especially banks I would say, haven't been moving so much to the public cloud, and I think this is the time where they're starting to realize that there is a time and place for a bunch of applications that can safely move, and that gives them the agility and the productivity while everyone's locked at home. I think that's the eye-opener. >> So I'd love to have a frank conversation about why the IBM cloud. I mean, you know, you got the big guys, you know, Amazon, Microsoft and Google; maybe not as large, people put them sort of in a category of hyperscalers, fair enough, and people oftentimes dismiss, you know, the IBM public cloud. However, the point that you just made is critical, and Ginni Rometty was the first to kind of make this point, Arvind's picked up on it, that 80% of the workloads still are on-prem, and it's that hard-to-move stuff that hasn't moved. And that's kind of IBM's wheelhouse. I mean, let's face it, the hard stuff, it's the mission-critical, you're kind of running the banks and the insurance companies and the manufacturers and airlines around the world. So what's the case for the IBM cloud? Why the IBM cloud, and why even move that stuff? Why not just leave it where it is? >> Yeah, so I think there's a couple of answers here, right? One of them is the fact that, when you talk to the hyperscalers... and by the way, I can't stress enough, we're a hyperscaler as well, right? People have taken a look at our cloud from
about two-plus years ago, at which point in time we were not, but we certainly are now, and we can provision VSIs and so on and so forth as best as the best guys can. So I want to just get that out of the way. But to your point, you know, the reason why you would consider the IBM public cloud is, when you talk to the other people, they come at it from a very narrow perspective, right? They think about, you know, using VSIs on x86, using cloud-native PaaS services. Now, you know, I want to stress again that we do all three of those things extremely well, but if you think about how large enterprises work, nothing is as clean as that. As I did say, there are a lot of applications that have been running your institution that you can't just willingly rewrite, and then you have bare metal, you'll have Power systems, whether it's AIX or i, you'll have some Z in there, zLinux in there, and then there's containers, and then there's the VMware stack, and there's containers running on bare metal, containers running on VSIs, containers running on, you know, the VMware stack, as well as the other architectures that I mentioned. So we really meet our customers where they are in their journey, and we give them a wide variety of capabilities and choices and flexibility to run their applications on the public cloud. And that's what we mean by saying our cloud is enterprise-ready, as opposed to the narrow answer of, you'll do everything with VSIs, x86 and PaaS services. >> Yeah, I like that, and I want to circle back on that. Thank you for clarifying that point about hyperscalers. Having said that, I've often said, and I wonder if you could confirm or deny, it's not IBM's strategy to go head-to-head on cost per bit, even though you'll price it very competitively, but your game is to add value in other ways: through your very large software portfolio, through AI, things like blockchain, and differentiated services that you can layer on top. I've often made the point, I think a lot of people don't
understand that, that insulates IBM from a race to the bottom with the, you know, traditional cloud suppliers. I wonder if you could comment. >> Yeah, you know, so I have to stress the point, just because I talk about all our other distinguishing capabilities, that people don't walk away with the impression that we don't do what any of the other large cloud service providers do. You know, to your point, we have AI, we have IoT, we've got a hundred and ninety API-driven, cloud-native PaaS services, where you can write a cloud-native application just like you would build on the other hyperscalers as well, right? So we give nothing away. But for us, the true value proposition here is to give you all of those capabilities in a very secure environment. You know, whether it is the fact that we are the only cloud where we don't have access to your data or your code, because we have a keep-your-own-key mechanism, where we as a cloud service provider have no access to your key; nobody else can say that. So it is those enterprise qualities of service and security that we bring to the table, and the other architectures and the other, you know, constructs around bare metal and containers, et cetera, that distinguish us further, right? >> So these are really important points that you're making, and I know I'm kind of bringing out probably parts of the landscape that IBM generally doesn't want to talk about, but I think it's important, again, to have that frank conversation, because I think a lot of people misunderstand. IBM is in the cloud game, not only in the cloud game, to your point, but it's very competitive, you know, from an infrastructure standpoint. So many companies in the last decade... we saw HP try to get in, they exited very quickly; Joe Tucci, the CEO of EMC, said we will be in the cloud, you know, their buying Mozy, and, you know, exiting that; so Dell right now, you know, won't have a cloud play; VMware tried to get in and now of course is a big partner of yours. So you got in, and
that, to me, is critical, just in terms of positioning for the next decade and beyond. And the other piece of differentiation that I want to drill into is the financial services cloud. So what is that? You obviously have a strong background there; let's dig into that a little bit. >> Yeah, if you look at the way most banks, or actually every bank, uses a public cloud, it is that they build guardrails, right? They build guardrails from where their data center ends to where the public cloud begins. But once you get into the public cloud, then it really depends on the security that the cloud service providers provide, and the CSPs will tell you that they have a lot of secure mechanisms there. But if you ever speak with a bank, you know, they will never put their highly confidential, data-bearing apps with PII on a public cloud, because they don't feel that the security that the cloud service providers provide is good enough for them to, number one, be able to put it there safely, and number two, prove to their regulators that they are in fact compliant. So what we've done is, we worked with Bank of America, and now, you know, a whole bunch of other banks that I'm not allowed to mention by name as yet, where we're building a series of controls, right? These are both controls during your DevSecOps cycle, when you're building your app, and another 400-plus controls in the runtime, that allow you, as the bank, to securely take your apps that have highly confidential data and PII and put them on the public cloud. And we'll give you the right things, whether it's the isolation of the control plane and the data plane, or the data loss prevention mechanisms, the right auditing points, the right logging points, the right monitoring points, the right reporting, data sovereignty. So we have controls built into the cloud that enable you to do all of this. Now, banks will be quick to tell you that the onus of proof is on them alone to the regulators, and we can't claim that for them, and they're absolutely right. But today,
they spend hundreds of millions of dollars collecting all of that and providing that proof to the regulators. You use our cloud, we automate a whole bunch of that. So number one, you're not, as a bank, trying to implement these controls on a public cloud, because that's not your job, that's not your core expertise; and number two, when you actually build these compliance reports, you're spending, you know, millions and millions of dollars trying to put it together so the compliance regulator will say, yes, this is okay. We automate a large part of that for you, and I think that's the key issue we're solving. >> I want to follow up and just make sure I understand it. When I talk to executives in the financial services industry and other industries, they'll say things like, look, it's not that the cloud security is bad, it's just that I can't map the edicts of my organization into it, certainly easily, or even at all, because I'm getting a sort of standard set of capabilities, and it may not fit with what I need. What I'm hearing is that IBM, you know, you guys are enterprise-y, you do the specials. So that's part of it, but you also said, you know, they feel sometimes the cloud security is not good enough, and I want to understand what that is specifically, if IBM is doing something differently. So two things there: one is your willingness, whether it's auditability, transparency, mapping to corporate edicts, and maybe other things that you're doing that make it better. Let's take those together. >> Yeah, absolutely. So one of the things is, as I mentioned, it's the mechanisms like keep-your-own-key, which is fundamental to building some of these compliance safeguards in. But the fundamentally different thing we've done here is, we worked with the Bank of America and we've defined these controls, to use your language, that map to their edicts, right? Which should map to every bank's edicts. Now, you know, there'll be a couple of extra controls here or there, but largely
they're all regulated by the same regulator, so what satisfies one bank, for the most part, satisfies every other bank in the US as well, right? And so, specifically, what we've done is we've built those controls, whether they are preventative controls or compensating controls, in the CI/CD pipeline as well as in the runtime on the cloud, and that gives them a path to automation, to produce the right results and the right reports for their auditors. And that's really what we've helped them do. >> So I know I'm pushing you here a little bit, and I'm going to keep pushing if that's okay; it's a great conversation. When IBM completed the acquisition of Red Hat, you know, the marketing was all about cloud, cloud, cloud, and I came out and said, yeah, okay, fine, but what it's really about is application modernization. That's the near-term opportunity for IBM. You certainly saw that in the last earnings report, where I think you're working with a hundred-plus, you know, clients in terms of their application modernization. So I said that is the way in which this thing becomes accretive, which, by the way, it's already accretive from a cash flow standpoint. But I'm going to press you on the cloud piece. So talk about Red Hat, and why it is a cloud play. >> Yeah, so you know, this is the power of Red Hat and the IBM public cloud, and of course Red Hat works on the other cloud service providers as well. So if you think about modernizing your application, you know, the industry pretty much has standardized around containers, right, as the best way to modernize applications, and those containers are orchestrated by Kubernetes; that's the orchestrator that's basically won the battle. And Red Hat has OpenShift, which is an industry-leading capability, you know, it's a Kubernetes control plane that manages containers. And we, from IBM, we've refactored our content into containers, and we've made it run on OpenShift, and we have a cloud-managed OpenShift service on the
IBM public cloud, as well as on-prem, that really helps bring our content to people who are trying to modernize their applications. Now, think about an application that most people try to modernize. You know, the rough rule of thumb: about 20 to 25 percent of it is application code, and the onus is on the client to go and modernize that, and they've chosen containers and Kubernetes. And the other 75 to 80%, arguably, is middleware that they've got, right? And we've really taken and refactored that middleware into containers managed by OpenShift, and we've done 80% of the work for them. So that's how this whole thing comes together, and you can run that on-prem, you can run it on the IBM public cloud, and I give you a cloud-managed OpenShift service to do that effectively. >> So that's interesting, yeah, that's very interesting. I think there are, you know, probably at least three sort of foundational platforms. One is obviously the Z mainframe; still much of IBM's customer base, you know, is tied to the Z, and it drives all kinds of other software. The second is middleware, to your point, and you're saying you refactored it. And the third really is your choice of hybrid cloud strategy; you kind of made the point, you threw in on-prem. To me, it's that end-to-end that's your opportunity, and your challenge. If you can show people that, look, we've got this cloud-like experience from cloud all the way to on-prem and multi-clouds, that is a winning strategy. It's jump ball right now, nobody really owns that space, and I think IBM's intent is to try to go after that. I think you've called it a trillion-dollar market opportunity, and it's obviously growing. >> Yes, that's exactly right, and the pieces that I've been describing to you, the, you know, the way people modernize their applications, all fit very nicely into that. Now, if you speak with the analysts, they're going towards a whole different category called distributed cloud, which basically means, you know, how do you bring these
capabilities that run on your public cloud do on-prem and do other people's clouds and you know what I hinted at here is that's exactly where we're going with our set of capabilities and that is a technical journey I mean kubernetes is necessary but insufficient condition to have that sort of Nirvana of this distributed massive distributed system bring in edge edge systems as well so this is a you know at least a multi-year maybe even a decade-long journey there's a lot of work to be done there what would you say are their strategic imperatives for IBM cloud over the next several years so I think for us really it is you know building on this notion of the distributed cloud as I talked about it is you know fully building out the FSS cloud most of which we've already done and you know some of these things will never be at end of job because regulations keep changing and you keep adding to it and so you have to keep adding to it as well so a focus on FSS to begin with but then also to other industries as well right because there are other regulated industries here that can benefit from the same kind of automation that we're doing for FSS so we'll certainly do that and we're in a good position because it's not only our technology but it's our services practice it's a premonitory that deals with regulators etc so we have the whole package so we want to continue to build out on that branch into other industry verticals using our industry expertise across the board services product everything and then of course you know if there's one thing I BM has market permission for it is understanding the enterprise and building a secure product so we clearly want to evolve on that as well the IBM is a lot of arrows in its quiver including as we discuss cloud you know you just got to get her done as they say so iris thanks so much for coming to the cute great discussion appreciate your your transparency and and stay well Thank You YouTube thank you so much re welcome and thank you 
for watching everybody this is the cubes coverage of the IBM pink 2020 digital event experience we'll be right back right after this short break [Music]
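The controls described in the interview — preventative checks wired into the CI/CD pipeline that also produce reports an auditor can consume — can be sketched minimally in Python. Everything here (the policy rules, registry names, and record format) is hypothetical illustration, not IBM's actual control set:

```python
# Hypothetical sketch of a preventative control gate in a CI/CD pipeline:
# each image manifest is checked against policy before deploy, and every
# decision is recorded so a report can be handed to auditors.
# Policy rules and record format are illustrative only.

APPROVED_BASE_IMAGES = {"registry.example.com/ubi8", "registry.example.com/ubi9"}

def check_image(manifest: dict) -> dict:
    """Return an audit record for one image manifest."""
    failures = []
    if manifest.get("base_image") not in APPROVED_BASE_IMAGES:
        failures.append("unapproved base image")
    if manifest.get("runs_as_root", True):
        failures.append("container runs as root")
    return {"image": manifest.get("name", "<unknown>"),
            "passed": not failures,
            "failures": failures}

def gate(manifests):
    """Preventative control: block the pipeline if any image fails policy."""
    records = [check_image(m) for m in manifests]
    return all(r["passed"] for r in records), records

ok, report = gate([
    {"name": "payments-api", "base_image": "registry.example.com/ubi8",
     "runs_as_root": False},
    {"name": "legacy-batch", "base_image": "docker.io/library/debian",
     "runs_as_root": True},
])
print(ok)   # False: legacy-batch fails both rules
```

A compensating control would follow the same shape, but run against the live environment on a schedule rather than blocking the pipeline.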
Brandon Traffanstedt, CyberArk | AWS Marketplace 2018
>> From the ARIA Resort in Las Vegas, it's theCUBE. Covering AWS Marketplace. Brought to you by Amazon Web Services. >> Hey, welcome back here everybody, Jeff Frick here with theCUBE. We are at AWS re:Invent 2018 wrapping up day one. We're going to do four days of coverage. We have four sets, three locations. But we're kicking things off here at the AWS Marketplace and Service Catalog event here at the ARIA. We're excited to be joined by our next guest, first time on theCUBE, but he's been working on the security stuff for a long time. He's Brandon Traffanstedt, he's the Global Director of System Engineering for CyberArk. Brandon, great to see you. >> Thank you very much. Glad to be here. >> Absolutely. So to start the conversation, first off let's just give us the quick overview of CyberArk for people who are unfamiliar with the company. >> Definitely. So CyberArk does privileged access security, and that is the vaulting, rotation, and management of incredibly powerful accounts. From traditional ones, the domain admin, to ones that exist in a more ephemeral, or cloud state. Access key, secret key pairs, root access into your console. So our goal is to take those out of the minds of users, out of those spreadsheets, out of hard-coded code stacks. Place them in a secure location, rotate them, and then provide secure access to people as well as non-people too. >> So you really segregate the privileged access as a very different category than just any regular user or kind of admin type of person. >> Absolutely. Though the focus is key. When we look at the general spectrum of accounts in an organization, yes you've got the lower ones that are identity driven. Attackers might use those to get in, but really the creamy, nuggety center are those high value credentials. It's what brings down organizations. It's what we see involved in breaches every single day.
So the focus there on those powerful ones is what gets us the most security posture increase with the least amount of effort. >> You know, it's interesting. 'Cause I always think of security as kind of like insurance. You can't absolutely be 100% positive. You can't spend every nickel you have on security, but you want to have a good ROI. So what you're saying, really, is this is a really good ROI from your security investment, because these are really the crown jewels that you need to protect first. >> Absolutely. And like insurance, we often want to plan for the absolute worst to occur. There have been breaches in the past where yes, there were dollars that were spent on things like remediation, but if you have a huge customer base, even the postage alone to notify folks that you've had a compelling event tends to run up into the seven figures. >> I never even thought of that. It's not a trivial expense. >> Absolutely. >> So, you said you've been doing this for 20 years, so a lot of change. There was no AWS re:Invent 20 years ago. There was not cloud computing as we know it today. So, you know we'll talk about kind of the current state, but I'd love to get more kind of your historical perspective, you know being a security expert, how your challenges have changed with this kind of continual escalation of warfare, kind of strike, counter-strike. I'm thinking of MAD Magazine's Spy vs Spy, right, has continued to escalate over these 20 years. >> Definitely. So, years and years ago organizations were very monolithic, from both the application side as well as their more kind of human-focused infrastructure. Right, we had one or two domain controllers. Typically physical systems. But what happened is, the architecture broke down. So what, 10 years ago virtualization was the big thing, right. Same types of accounts, but more systems. More automation flows. So as we replaced humans with non-humans, what happened was, more human users got over-privileged, right?
They were empowered to get their jobs done. But we had more and more robots that began doing their work. So one of the things that we saw was the breaking down of the application stacks, to the point where we are now, you can spin up thousands of instances in a matter of clicks over a matter of seconds. Move that into a more microservices model, and you now have tens of thousands of nodes that can exist in the blink of an eye. All having the same type of access restrictions, but just being far more distributed. >> Right. And so many more attack surfaces with IOT, and all these things all over the place. And so, much more complex environment. >> Definitely. One of the things about all this beautiful automation and centralization that's occurring, is that now attackers don't have to go through that same type of flow they used to, right. Compromise an end user, escalate privilege on a laptop for instance, move laterally and continue to perform that dance. Now, all it takes is one compromise into your cloud management console for instance. And a lot of times that's game over. Our attacker is also changing a little bit. So I'm proud to say, but I'm a millennial, and the thing about millennials is we tend to be very, some would say lazy, but I would say efficient in how we perform tasks. So for me, performing that lateral movement versus a one stop shop for a public-facing entity, I'm going to choose the one stop shop. >> Very true. So one of the hot topics in today's world is RPA, robotic process automation. We were at Automation Anywhere, we were at the UiPath show this year, it's getting a lot of buzz. Both those companies have raised a ton of money. Hot, hot, hot space. It adds a whole new level of complexity and opportunity on the security side. So how should people be thinking about RPA and security?
>> So when it comes to RPA, one of the things that is simply par for the course, is that in order for robots to do their jobs, to build this automation that folks are looking for, they've got to authenticate to stuff. A lot of times we'll see that authentication happen as kind of an isolated secret that's stored, say, inside of Automation Anywhere for instance. The goal there is, well, we can rotate it, maybe, but now we have to update it here and there and a number of other spots. So one thing that we see as being a very prevalent theme is, well, let's find a centralized and secure source to manage them, and allow the robotic process automation to authenticate securely to that entity, and pull the secrets as they need. Now, we can rotate that as many as what, ten, twelve times a day if we wanted to, without our RPA missing a beat. At CyberArk we have what's called the C3 Alliance, where we brought together a number of RPA vendors. All the ones that you mentioned. As well as other automation platforms, security vendors too. To where you don't have to do the work of integrating. It's already there and it's been built. And we're taking a huge direction from our customer base there, to tell us what's hot, what's new for them. To let that inform those conversations.
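A minimal sketch of the centralized-secrets pattern described here: the robot authenticates to one secure store and pulls credentials at runtime, so the vault can rotate them many times a day without the automation missing a beat. The `SecretStore` API below is a hypothetical stand-in, not CyberArk's actual interface:

```python
# Sketch of the centralized-secrets pattern: the robot fetches its credential
# from one secure store at runtime rather than keeping an isolated copy inside
# the RPA tool. SecretStore is a hypothetical stand-in for a real vault.

class SecretStore:
    """Stand-in for a central vault that rotates secrets on its own schedule."""
    def __init__(self):
        self._secrets = {"billing-db": ("v1", "s3cr3t-1")}

    def rotate(self, name: str, new_value: str):
        version, _ = self._secrets[name]
        self._secrets[name] = ("v%d" % (int(version[1:]) + 1), new_value)

    def fetch(self, name: str) -> str:
        # In a real deployment this call is authenticated (e.g. by the robot's
        # machine identity) and the value is never written to disk.
        return self._secrets[name][1]

store = SecretStore()

def run_robot_task(store: SecretStore) -> str:
    """The robot fetches fresh on every run, so rotation is invisible to it."""
    password = store.fetch("billing-db")
    return "connected with " + password

print(run_robot_task(store))            # connected with s3cr3t-1
store.rotate("billing-db", "s3cr3t-2")  # vault rotates; robot code unchanged
print(run_robot_task(store))            # connected with s3cr3t-2
```

The point of the design is in the last three lines: rotation happens on the vault side, and the robot's code never changes.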
But you encounter risk by giving me the keys to your house. The same is true for those automation platforms. A lot of times we divorce that robot from a human, so we don't do the same level of due diligence to give the robot an identity, to instantiate least privilege. It's one of the things we've seen be a very huge theme in successful customer deployments. As well as automating their security too. >> Well at least they're not going to give away the security when someone calls up and says can you please give me the URL for the company picnic. I can't get in, you got to help me out. Hopefully they didn't train the robots to answer that question and let that social engineering enter. Is there social engineering for RPA? >> There is. When you look at RPA, or even code that exists in public repositories, one of the quickest attacks you can do is go to GitHub, search for your secret of choice. Maybe it's Postgres, maybe it's a vendor name underscore secret. If you sort that code by recent commits, you'll find people's hardcoded secrets that exist inside of public repositories. It's not because our developers are malicious. It's because it wasn't top of mind for them. They didn't have a more compelling solution. So that's one of the quickest attacks, and I think that's social engineering. It could be as easy as compromising, say, one of your AWS administrators who happens to have a privileged key in a text file on his desktop. Same is also true there.
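The repo-search attack works because hardcoded secrets follow predictable shapes. A toy scanner for the pattern mentioned (a vendor name underscore secret) might look like this — real tools, and attackers using code search, use far richer rule sets:

```python
# Toy version of the repo search described above: flag assignments that look
# like hardcoded credentials. Real scanners use far richer patterns; this
# single regex is illustrative only.
import re

SECRET_PATTERN = re.compile(
    r'(?i)\b\w*(?:secret|password|access_key)\w*\s*=\s*[\'"][^\'"]+[\'"]')

def find_hardcoded_secrets(source: str) -> list:
    """Return every assignment in `source` that looks like a literal secret."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

sample = '''
db_host = "pg.internal"
vendor_secret = "hunter2"          # exactly what a code search turns up
password = os.environ["DB_PASS"]   # safer: the value is not in the source
'''
print(find_hardcoded_secrets(sample))   # ['vendor_secret = "hunter2"']
```

Note that the environment-variable line is not flagged: the credential's value never appears in the source, which is the property a central secrets store gives you by default.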
Whether it be post-breach remediation, audit compliance; whatever it may be, they have some indicator of moving forward. A lot of times when developers are building out processes, they may not be the driver from the business, so the goal was, we need to be able to support the community, to provide open source secrets management, and do so very quickly. So there doesn't need to be a project or red tape. AWS Marketplace has helped us provide our open source solution in a beautifully deployed package to as many folks as possible, so that at least they have some secure place to store those secrets without altering the way they do things. If they have to go outside of the Marketplace flows that they're used to, it's extra work. And we never want security to be a constraint to building good, quality automation development practices. >> Right. And how's Amazon been as a partner? There's a lot out there, be careful, they're going to see what you do and copy it and knock you out of business. How have they been to work with as a partner? >> They've been fantastic. Highly supportive, from both the programmatic secrets management perspective, but also in providing best practices for how to deploy our core stack into AWS. How to handle things like auto scaling. As well as providing some APIs to extend our secrets management capability based on customer asks on both sides. >> Alright Brandon, well thank you for taking a few minutes. I'm sure we're both going to be dog tired in a couple of days. >> We can hope so, yeah. >> So we started while we were fresh. So I appreciate you taking a few minutes and stopping by. >> Always a pleasure. Thank you again for the invite. >> All right, he's Brandon, I'm Jeff. You're watching theCUBE. We're at the AWS Marketplace and Service Catalog Experience here at the ARIA. Thanks for watching. See ya next time. (upbeat music)
Alex Ellis, OpenFaaS | DevNet Create 2018
>> Announcer: Live from the Computer History Museum in Mountain View, California. It's theCUBE covering DevNet Create, 2018, brought to you by Cisco. (techy music playing) >> Okay, welcome back, everyone. We're live here in Mountain View, California, in the heart of Silicon Valley for Cisco's DevNet Create. This is their new developer outreach kind of cloud, devops conference, different than DevNet their core, Cisco Networking Developer Conference is kind of an extension, kind of forging new ground. Of course theCUBE's covering, we love devops, we love cloud. I'm John Furrier with Lauren Cooney, my cohost today. Our next guest is Alex Ellis, project founder of OpenFaas, F-A-A-S, function as a service. That's serverless, that's Kubernetes, that's container madness. You name it, that's the cool, important trend, thanks for joining us. >> Yeah, thanks for having me, it's great to be here. >> So, talk about the founding of the project. So, you're the founder of the project-- >> Alex: Yeah. >> And you now work for VmWare, so let's just get this-- >> Yeah. >> On the record, so-- >> Alex: Yeah, I think this is-- >> Take a minute to explain. >> This is important just to set a bit of context now. I started this project from the lens of working with AWS Lambda as a Docker captain. I was writing these Alexa skills and I found that I had to hack in a web editor and click upload, or I had to write a zip file, put dependencies on my laptop, and upload that to the cloud every time I changed it. It just didn't feel right because I was so bought into containers. It's the same everywhere, there's no more, "It works on my machine." >> John: You're going backwards. >> Right? (laughing) So, I put a POC together for Docker Swarm and nobody had done it at that point, and it got really popular. 
I got into the Dockercon Cool Hacks contest and presented to 4,000 people in the closing keynote, and I kind of thought it would just blossom overnight, it would explode, but it didn't happen, and actually, the months... We're going back 14 months now, I grew a community and spent most of my time growing the community and extending the project. Now, that has been really fruitful. It's led to over 11,000 stars on GitHub, 91 individual contributors, and much, much more. It's been a really rich experience, but at the same time-- >> So, rather than going big rocket ship you kind of went, hunkered down and got a kernel of core people together. >> Alex: Yeah. >> Kind of set the DNA, what is the DNA of this project if you had to describe it? >> Yeah, so I think at the heart of it it's serverless functions made simple for Docker and Kubernetes. >> Great, and so how does Amazon play into this? You were using Amazon cloud? >> Yeah, I was using AWS and I was using Lambda, and that flow was not what I was used to in the enterprise. It wasn't what I was used to as a Docker captain. You know, I wanted a finite image that I could scan for vulnerabilities. >> John: Yeah. >> I could check off and promote through an environment. >> John: Yeah. >> Couldn't do it, so that was what OpenFaas aimed to do, was to make those serverless functions easy with Docker as a runtime. >> Well, congratulations, it's a lot of hard work. First, building a community's very difficult, and certainly one that's relevant. Cool and relevant, I would say, is serverless and functions. We'll certainly be seeing that uptake now. Still early on, but people are working on it. So, then now, let's fast-forward to today. You work for VMWare, so-- >> Alex: Yeah. >> How did they get involved, are you shipping the project to VMWare, do they own it? Do you maintain the independence? What's the relationship between VMWare, yourself, and the project, if you can talk about that. >> Yeah, I think that's a great question.
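For context, "serverless functions made simple" concretely means a function is just a handler file that the tooling wraps in a container image — the finite, scannable artifact Alex wanted from Lambda. A rough sketch of a handler in OpenFaas's Python template (details vary by template version; treat this as a sketch):

```python
# handler.py — roughly the shape of a function in OpenFaas's Python template
# (template details vary by version; this is a sketch, not the exact contract).
def handle(req: str) -> str:
    """Entry point: receives the request body, returns the response body."""
    name = req.strip() or "world"
    return "Hello, %s!" % name

# The handler is plain Python, so it can be exercised locally like any function:
print(handle("CUBE"))   # Hello, CUBE!
```

From there, `faas-cli build` and `faas-cli deploy` (or `faas-cli up`) package the handler into a container image you can scan for vulnerabilities and promote through environments like any other Docker artifact.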
So, I got to the point where I had demands on my time around the clock. I couldn't rest, open source project, weekends, nights, the lot. >> John: You need the beer money, too, by the way. >> Right, yeah. >> You need some beer money. >> And I was working at ADP and just doing all of this in my own time, and then had a number of different options that came up and people saying, "Look, how are you going to sustain this, "how are you going to keep doing what you love?" You know, you should be working on it full time. One of the options that came up was from VMWare to work in the Open Source Technology Center. It's relatively new-- >> John: Mm-hmm. >> And the mission of the OSTC is to show VMWare as a good citizen in the community and to contribute back to meaningful projects, right, that relate to their products. >> Yeah, and they have good leadership, too, at VMWare. A lot of people don't know that. We did a couple CUBE interviews with them last year, and there is a group inside VMWare that just does that, not with the tentacles of VMWare and Dell Technologies in there. It's an independent group. >> Alex: Yeah. >> They probably go to some meetings and do some debrief, but for the most part it's kind of decoupled from VMWare, right. >> Yeah, right. So, the mission is not necessarily to make money and to produce products. It's to contribute to open source. Help with inbound so when we need to consume a project in a product, and outbound when we want to make the world a better place. >> So, I'm not going to put words in VMWare's mouth, but I will speculate covering VMWare since theCUBE started. We've been to every VMWorld and everyone knows we've got the good presence there, but if I'm VMWare I'm like, "Hey, you know what, we just "did a deal with Amazon, our enterprise "group is not so cloud savvy." I mean, the enterprise, there are operators, not true cloud native, but they're bridging that gap. The world of cloud native and enterprise is coming together. 
Does this project fit into that spot? Is that kind of where they saw it? Did I get that right or what was their interest other than doing-- >> Alex: Yeah. >> Helping the world out and solving world peace in the open source community. >> Yeah, so the mission of OSTC is slightly different. It's to contribute back to meaningful projects and to have this presence in the community. You know, I think OpenFaas is particularly attractive because it has such a broad community. There's people all around the world that are contributing to it, very active. For VMWare it makes a lot of sense because it runs natively on Kubernetes or Docker Swarm, and it's gained a lot of traction, people are using it. >> John: Mm-hmm. >> I had a call with BT Research before I came out and they said, "We've been using it for seven months. "We absolutely love it, it's transforming "how we're doing our microservices," and so I think that's part of it, as well as already have kind of a lead. Already have a lot of momentum with this project. >> So, are you looking to, you know, I know that the organization that you work for is really focused on driving this outbound, right? >> Alex: Yeah, yeah. >> Is VMWare using this internally as well? >> So, I think there's been a number of people who've shown an interest. You can think, "Right, there's a problem "we could solve with this," and I'm just getting my feet under the table, but really my mission is to make serverless functions simple to build this community-- >> Lauren: Mm-hmm. >> And to have something that people can turn to as an alternative. So, one of the things that I did in the talk yesterday was, "How do you explain OpenFaas to your boss," and one of the points there was to unlock your data. >> Lauren: Mm-hmm. >> And I think we talked about this briefly before, now with controversies recently about data and who owns it, what's happening with it, I think it's even more relevant that-- >> John: Yeah. 
>> You can have full control over the whole stack if you want-- >> John: Yeah. >> Or use a product like Microsoft AKS, their Kubernetes service-- >> Lauren: Mm-hmm. >> Or GKE and actually treat OpenFaas like a very thin layer of automation. >> Lauren: Really, okay. >> Or go full stack and have everything under your control. >> I mean, that's a great conversation to have, too, because obviously you're kind of referring to the Facebook situation. Zuckerberg's testifying it front of Senate yesterday, Congress today, and it's funny because watching him talk to senators in the US, they really don't know how stuff works, and so if you think about what Facebook does... I mean, granted they took some liberties. They're not the perfect citizen, they got slapped. They took it to the woodshed, if you will, but their mission is to use the data, and this is where cloud native's interesting and I think I want to get your reaction to this, you need to use the data, not treat it as a siloed, fenced in data warehouse. That model's old, right-- >> Alex: Yeah. >> It's now horizontal and scalable. Data's got to move and you've got to have data to make other things happen. That's the way these services are working. >> Yeah. >> So, it's really important to have addressability of the data and you know, GDPR takes an attempt at, you know, kind of hand waving that simple argument away. I'm not really a big fan of that, personally, but the role of data's super important. You've got to make it pervasive, so the challenge is how do you manage those controls. Is that an opportunity for functions? What's your reaction to that whole paradigm of data? >> Yeah, so we're talking about anonymous usage data, like Facebook situation or-- >> Just data in general... Oh, no, just data in general, if I'm an application and I have data-- >> Alex: Yeah. >> That I'm generating, same development of service-- >> Alex: Yeah. >> I need, you might want to leverage that data. 
So, I'm going to have to have a mechanism for you to share that data to make your service better-- >> Alex: Yeah. >> Because data makes data, you know-- >> Alex: Yeah. >> The alchemy side of it is interesting, but then there's all... You get trapped in regulation, licensing, it can be destructive. >> Yes, so as an engineer, and as an open source engineer, you find people that have no clue about what an MIT license is to a GPL or why you'd use one or the other. I think there's a lot we can do to educate the wider community and help them to learn the basics of these issues. When I was at university we had a course on ethics and legal issues and licensing, and I heard on the radio earlier on the Uber that they're starting to try and up the level of that again, and I think it really needs to start at a ground level. We need to educate people about these issues so that they're aware of how to handle the data. I mean, if you look at common tools like Docker and VS Code and Atom, popular editors, they collect anonymous usage statistics and you have to opt out. You know, should OpenFaas collect data as well, because it can be super helpful for us to know the right thing to do. >> Yeah. >> And when you come to open source you get no feedback until somebody wants support from you and it has to be done yesterday for free. >> Yeah, yeah, yeah. >> And so, yeah, getting data can be super powerful. >> Well, Alex, you bring up a great point. I think this is something that's worthy of an ongoing conversation. I think it will be, too, because GPL, Apache license, all these licenses were built when open source was a Tier 2 citizen, so the whole idea of these-- >> Alex: Yeah. >> Licenses was to create a robust sharing economy of code, and you know, with the certain nuances of those licenses. But just like stacks get updated and modernized with what we've seen the containers and now Kubernetes is serverless, the stack is changing and modernizing. 
The licenses have to, as well, so I think this is something that... I don't, I think it's kind of like we've got to get on it. (laughing) It's like I think we should just, this is a work area. It's not necessarily... It's game changing if you don't do it, right, because it could-- >> Yeah. >> It could flip it either way. So, to me that's my opinion. >> Well, I think you're under MIT, correct, is that-- >> So, it's under MIT right now. >> Lauren: Okay. >> One of the things that I didn't realize when I started the project is if you want to get into a big foundation like the Cloud Native Computing Foundation you need an Apache 2.0 license, and the main difference is that it offers some protections around patent claims, but it's basically-- >> Lauren: Okay. >> Compatible, so it is a minefield, and it's-- >> Lauren: So, that's just for the CNCF? >> Right, and the Apache Foundation, obviously as well. >> Lauren: Yes. >> And probably many others follow suit because I think it, we talk about the-- >> John: It's the dual source, it's the dual source. >> A refresh... >> John: Yeah, yeah. >> Right, it's a compatible license, it seems to help a lot of people. >> Lauren: Mm-hmm. >> That's a huge issue because you could be well down the road with committing code and then the lawyers will make you take it out. >> Right, so that's why organizations like the Open Source Program Office exist within VMWare, to help these issues and to monitor and do compliance. They may use software like Black Duck to check stuff-- >> Lauren: Yep, mm-hmm. >> Automatically because you don't want to be doing checks on your aircraft once it's in the air. >> Lauren: Mm-hmm. >> John: Yeah. >> You want to sort out everything out on the ground. >> You'll be grounding your fleet, that's for sure-- >> Right. >> When it comes to that, how do you handle that with licensing? How do you guys handle that when people contribute? >> Yeah. >> Are they aware of the license or they don't understand the implications? 
>> So, with OpenFaaS we follow a model very similar to the Linux kernel, which is a sign-off, the Developer Certificate of Origin. What you're saying is: I'm allowed to give you this code, I'm allowed for this to be a part of the project, and I wrote it, I originated it. >> Lauren: Mm-hmm. >> And that's pretty much a good balance between a full contributor license agreement and nothing at all. >> John: Yeah. >> Lauren: Mm-hmm. >> But look, there's a lot of projects in this space right now. I don't know if you've noticed that, Kubernetes serverless projects. >> Yeah, I mean, it's a lot of really interesting, it's why I like this show here. I think what Cisco's smart to do here at DevNet Create is identify the network programmability, which really takes devops, expands the aperture of what devops is, so-- >> Alex: Yeah. >> You know, as you get new applications coming online some developers want nothing to do with the infrastructure. Kubernetes has got a much more active and more prominent role with layer seven primitives, for instance, or-- >> Alex: Yeah. >> Managing things down to the network layer. You're talking about policy services inside services on the fly, so this is really a big, a good thing, in my opinion. So, you know, I think, Kubernetes, most people look at as a kind of generic orchestration, but I think there's so much more there. >> Alex: Yeah. >> I think that to me is attracting some really rockstar developers. >> Yeah, well I think, you know, the fact that you are open, you're under the MIT license, which I am a fan of-- >> Alex: Yeah. >> And you know, it is, you're on a very successful trajectory in terms of, you know, what you're building and who's engaged, and the fact that VMware is behind you means that they're going to put some money into it, hopefully, and help you guys along as it works, but it is also a project that is not... You know, it doesn't have folks just from VMware. >> Alex: Yeah.
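Stepping back to the sign-off model Alex described a moment ago: in practice it is the Developer Certificate of Origin, a `Signed-off-by:` trailer that `git commit -s` appends to the commit message. Below is a minimal sketch of the kind of check a CI bot could run to enforce it; the regex and the sample message are illustrative:

```python
import re

# The DCO convention: the author asserts they have the right to submit
# the code by adding a "Signed-off-by: Name <email>" trailer.
SIGNOFF = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_signoff(commit_message: str) -> bool:
    return bool(SIGNOFF.search(commit_message))

msg = """Add colorizer function

Signed-off-by: Jane Doe <jane@example.com>"""
print(has_signoff(msg))  # True
```

A bot like this gives the project the middle ground Alex mentions: lighter-weight than a full contributor license agreement, but more than nothing at all.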
>> It's really, really diverse in terms of who's committing the code. So, I think there's a lot of things that are really going for you. Now, who do you see, you mentioned competitors... >> Alex: Yeah. >> So, can you talk a little bit about what the ecosystem there looks like? >> Yeah, so there's a number of projects that I think have made some really good decisions about their architecture and their implementation. They all vary quite subtly, and one of the questions I get asked a lot is, you know, how is this different from X, Kubeless, Nuclio, and if you look at the CNCF landscape there used to be a very small section with OpenFaaS, Lambda, and a couple of others. It's now so big it has its own PDF just about serverless, and I think that's super confusing for people. So, part of what we're trying to do is make that simple and say, "Look, there may be many options. "Here's OpenFaaS, here's how it works. "You can get it deployed in 60 seconds. "You can have any binary or any programming language "you want and it will scale up over Kubernetes." We'll just make a really deep integration, give you everything you'd expect, really nice developer experience. >> Lauren: That's great. >> What are some of the use cases you see right now, low hanging fruit for developers that want to come in and get involved in the project? Have you guys identified any low hanging fruit use cases? >> So, what I've seen, and I talked about this a bit yesterday in the talk, is three big use cases, really. The first one was Anisha Keshavan at the University of Washington. >> Lauren: Mm-hmm. >> Now, she's doing a lot of data science with neuroinformatics, medical images. She's able to take scans of brains and give them to people like you and me, who don't know anything about medical science.
We just draw around the lesions and we train her model, and then she makes it competitive like a game, gamifies it, you get more points, but actually, what we're doing is making the world a better place by training her medical imaging database. >> Lauren: Mm-hmm. >> She'll then use that as an OpenFaaS function to test real images as part of her postdoctorate. >> So, she's crowd sourcing, wisdom of crowds. >> Alex: Right. >> Collecting some intelligence for her research. >> Now, one of the other things that I think's really cool is in the community we built out a project with two 17-year-olds. Two 17-year-olds built a really cool project, and when I think back to when I was 15, 16, I was playing with something like PHP on Windows, a LAMP-style stack. You know, I had to do everything myself. >> John: Yeah. >> They got, like, this scaffolding built up and they could just go to the tenth story and just keep adding on. >> John: Yeah, yeah. >> And they didn't have to worry about managing this infrastructure at all. >> Or architecture, foundation architecture. >> Alex: Right, right. >> Yeah, and that's exactly the reason why you want to do that. >> So, they wrote some small blocks of Python, then we found this machine learning code that could convert a black and white image to color, wrapped it in a box and said, "There's a function," then dropped it into OpenFaaS and started feeding tweets in, and that was pretty much it. >> John: Yeah. >> Now we have @ColorizeBot, a bit of a strange spelling, but you'll find it on Twitter, and it's been in Le Monde newspaper, all around the world. It was announced at KubeCon as well, and it's just a super interesting way of showing how you can take something very complex, right, and democratize it. >> Yeah, we'd love to get those people working for theCUBE and put the little cube box and throw all the tweets in there. >> Alex: Right, yeah. >> Alex, thanks for coming on, congratulations.
What's next on your project, tell us what's going on, what's next for you, what are you guys conquering next? >> So, I'm really focused on growing the team and community. We've got a recruitment position open right now and a small team that's building internally. I think the more people we can get contributing on a regular basis, the more support there's going to be for the community, and the more people are going to want to use it. We actually had 26 people join a call last week, "How to contribute to OpenFaaS," that was the name of it. >> Lauren: Mm-hmm. >> Around the world, and the best part for me was when we got to the testimonies and I had people just sharing their tips and experiences. How rewarding it is to contribute to something bigger, something that you as a developer will actually want to use. >> Yeah, and the value opportunities, to extract value out of the group-- >> Yeah. >> It's phenomenal, functions as a service. Super relevant in cloud and devops as the middleware, if you want to call it that, expands more capabilities in devops are coming. It's theCUBE coverage here at DevNet Create. We'll be back with more live coverage here in Silicon Valley in Mountain View, California, after this short break. (techy music playing)
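The developer experience Alex described, wrap a small handler and let the platform scale it over Kubernetes, can be sketched in the shape of OpenFaaS's Python template, where a function is just a module exposing `handle(req)`; the greeting logic here is illustrative, not the ColorizeBot code:

```python
def handle(req: str) -> str:
    """Entry point in the shape of an OpenFaaS Python function:
    the platform passes the request body in, and whatever this
    returns becomes the HTTP response body."""
    name = req.strip() or "world"
    return f"Hello, {name}!"

# Locally the handler is just a plain function you can call.
print(handle("DevNet"))  # Hello, DevNet!
```

This is the "scaffolding" the two 17-year-olds benefited from: swap the body for a call into a machine-learning model and the deployment story stays exactly the same.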
Key Pillars of a Modern Analytics & Monitoring Strategy for Hybrid Cloud
>> Good morning, everyone. My name is Sudip Datta. I head up product management for Infrastructure Management and Analytics at CA Technologies. Today I am going to talk about the key pillars of a modern analytics and monitoring strategy for hybrid cloud. So before we get started, let's set the context. Let's take stock of where we are today. Today, in terms of digital business, software is driving business. Software is the backbone, the driving force, for most business services. Whether you are a financial institution, a hospitality service, a health care service, or even a restaurant serving pizza, you are front-ended by software. And therefore the user experience is of paramount importance. Just to give you some factoids: Eighty-three percent of U.S. consumers say that the brand's software front end is more important than the product itself. And companies are reciprocating by putting a lot of emphasis on user experience, as you see in the second factoid. The third factoid is even more interesting: 53% of the users of a mobile app actually abandon the app if the app doesn't load within a specified time. So we all understand the importance of user experience in today's business. So what's happening to the infrastructure underneath that's hosting these applications? The infrastructure itself is evolving, right? How? First of all, as we all know, there is a huge movement, a huge shift towards cloud. Customers are adopting cloud for reasons of economy, agility and efficiency. And whether you are running on cloud or on prem, the architecture itself is getting more and more dynamic. On the server side we hear about serverless computing. More and more enterprises are adopting containers, be it Docker or other containers. And on the networking side we see an adoption of software-defined networking. The logical overlay on top of the physical underlay is abstracting the network.
While we see a huge shift, a movement towards cloud, it is also true that customers are retaining some of their assets on prem, and that's why we talk about hybrid cloud. Hybrid cloud is a reality, and it's going to be a reality for the foreseeable future. Take for example a bank that has its systems of engagement on public cloud, and systems of record on prem, deeply nested within their data center. So the transaction, the end-to-end transaction, has to traverse multiple clouds. Similarly, we talk to customers who run their production tier one applications on prem, while tier two and tier three desktop applications run on public cloud. So that's the reality. A multi-cloud, dynamic environment is the reality of today. While that's a reality, it poses a serious challenge for IT operations. What are the challenges? Because of multiple clouds, because of assets spanning multiple data centers and multiple clouds, blind spots are getting created. IT ops is often blindsided on things that are happening on the other side of the firewall. And as a result, they're late to react, and often they react to problems much later than their customers find them, and that's an embarrassment. The other thing that's happening is that because of the dynamic nature of the cloud, things are ephemeral, things come and go, assets come and go, and IT ops is constantly in the business of keeping pace with these changes. They are reacting to these changes, they are trying to keep up, and siloed tools are not the way to go. They are trying to keep up with these changes, but they are failing to do so. And as a result we see poor user experience, low productivity, capacity problems and delayed time to market. Now what's the solution? What is the solution to all these problems? What we are recommending is a four-pronged solution, what we represent as four pillars. The first pillar is dynamic policy-based configuration and discovery.
The second is unification of monitoring and analytics. The third is contextual intelligence, and the fourth is integration and collaboration. Let's go through them one by one. First of all, in terms of dynamic policy-based configuration, why is it important? I was talking to a VP of IT last week, and he commented that the time to deploy the monitoring for an application is longer than the time to deploy the application itself, and that's a shame. That's a real shame, because in today's world an application needs to be monitored straight out of the box. This is compounded by the fact that once you deploy the application, the application today is dynamic; as I said, the cloud assets are dynamic. The topology changes, and monitoring tools need to keep pace with that changing topology. So we need automated discovery, we need API-driven discovery, and we need policy-based monitoring for large-scale standardization. And last but not least, the policies need to be based on dynamic baselines. The age, the era, of static thresholds is long over, because static thresholds lead to false alerts, resulting in higher opex for IT, and IT personnel absolutely want to move away from them. Unified monitoring and analytics: this morning I stumbled upon a LinkedIn white paper which said you need 20 tools for your hybrid monitoring, and I was absolutely dumbfounded. Twenty tools? I mean, that's a conversation non-starter. So how do we rationalize the tools, minimize the silos, and bring them under a single pane of glass, or at least minimal panes of glass, for monitoring, so IT admins can have a coherent view of servers, storage, network and applications? And why is that important? It's important because it results in less of a blame game. Because of siloed tools, admins are often fighting with each other, blaming each other. Server admins think that it's a storage problem.
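The dynamic-baseline idea above, alerting on deviation from a metric's own recent history rather than a fixed threshold, can be sketched with a rolling window. Below is a stdlib-only sketch; the window size and the three-sigma band are illustrative tuning choices, not CA's actual algorithm:

```python
from collections import deque
from statistics import mean, stdev

def dynamic_baseline_alerts(readings, window=5, k=3.0):
    """Flag readings more than k standard deviations away from the
    rolling mean of the previous `window` readings."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # A static threshold fires on every workload shift; the
            # rolling baseline only fires on genuine deviation.
            if sigma > 0 and abs(value - mu) > k * sigma:
                alerts.append((i, value))
        history.append(value)
    return alerts

cpu = [20, 21, 19, 20, 22, 21, 20, 90, 21, 20]  # one genuine spike
print(dynamic_baseline_alerts(cpu))  # [(7, 90)]
```

A static threshold of, say, 50% would have to be hand-tuned per server; the baseline adapts to whatever "normal" looks like for each metric, which is what cuts the false alerts the speaker mentions.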
The storage admin thinks it's a database problem, and they are pointing at each other, right? So the tools, the management tools, should be a point of collaboration, not a point of contention. Talking about the blame game, one area that often gets ignored is fault management and monitoring. Why is it important? I will give a specific example. Let's say you have 100 VMs, and all those VMs become unreachable as a result of a router being down. The root cause of the problem therefore is not the VMs, but the router. So instead of generating 101 alarms, the management tool needs to be smart enough to generate one single alarm. And that's why fault management and root cause analysis are of paramount importance: they suppress unnecessary noise and result in less blaming. Contextual intelligence: now, cloud admins in the past were living in the cocoon of their hybrid infrastructure. They were managing the hybrid infrastructure, but in today's world, to have end-to-end visibility of the digital chain, they need to integrate with application performance management (APM) tools, as well as with what lies underneath, which is the network, so that they have end-to-end visibility of what's happening in the whole digital chain. But that's not all. They also need what we call the context of the application. I will give you a specific example. If the server runs out of memory when a lot of end users log into the system, or runs out of capacity when a particular marketing promotion is running, then the context really is the business that leads to a saturation in IT. So what you need is to capture all the data, whether it comes from logs, alarms, capacity events or business events, into a single analytics platform, and perform analytics on top of it.
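The 100-VMs-behind-one-router example can be sketched as a walk up a dependency graph: an alarm is suppressed whenever something upstream of it is also down. Below is a minimal sketch; the topology map and device names are hypothetical:

```python
def root_cause_alarms(down, depends_on):
    """Return only the alarms with no failed device upstream.
    An alarm is suppressed when anything on its path toward the
    network core is also down, so 100 unreachable VMs behind one
    dead router collapse into a single router alarm."""
    roots = set()
    for device in down:
        parent = depends_on.get(device)
        # Walk upstream until we hit a failed device or the core.
        while parent is not None and parent not in down:
            parent = depends_on.get(parent)
        if parent is None:
            roots.add(device)
    return roots

# Hypothetical topology: 100 VMs all hang off one access router.
topology = {f"vm-{i}": "router-1" for i in range(100)}
failed = set(topology) | {"router-1"}
print(root_cause_alarms(failed, topology))  # {'router-1'}
```

One alarm instead of 101: the operator sees the router, fixes it, and the VM "symptoms" clear on their own.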
And then augment it with machine learning and pattern recognition capabilities, so that it will not only perform root cause analysis for what happened in the past, but you're also able to anticipate, predict and prevent future problems. The fourth pillar is collaboration and integration. IT ops in today's world doesn't and shouldn't run in a silo. IT ops needs to interact with DevOps. Within DevOps, developers need to interact with QA. Storage admins need to collaborate with server admins, database admins and various other admins. So the tools need to encourage and provide a platform for collaboration. Similarly, IT management tools should not run standalone. They need to integrate with other tools. For example, if you want monitoring straight out of the box, the monitoring needs to integrate with provisioning processes. The monitoring downstream needs to integrate with ticketing systems. So integration with other tools, whether third party or custom developed, whatever it is, is very, very important. Having said that, having laid out what the solution should be, what the prescription should be, how is CA Technologies gearing up for it? In CA we have the industry's most comprehensive, the richest, portfolio of infrastructure management tools, capable of managing all forms of infrastructure: traditional, private cloud, public cloud. Just to give you an example, in private cloud we support traditional VMs as well as hyperconverged infrastructure like Nutanix. We support Docker and other forms of containers. In public cloud we support the monitoring of infrastructure as a service, platform as a service, and software as a service. We support all the popular clouds: AWS, Azure, Office 365 on Azure, as well as Salesforce.com. In terms of network, our NetOps tools manage the latest and greatest SDN and SD-WAN: the VMware SDN, the OpenStack SDN, and in terms of SD-WAN, Cisco and Viptela.
If you are a hybrid cloud customer, then you are no longer blindsided on things that are happening on the cloud side, because we integrate with tools like Ixia. And once we monitor all these tools, we provide value on top of it. First of all, we monitor not only performance, but also packet, flow, all the NetOps attributes. Then on top of that we provide predictive insights and learning. And because of our presence in the application performance management space, we integrate with APM to provide application-to-infrastructure correlation. Finally, our monitoring is integrally linked with our operational intelligence platform. In CA we have an operational intelligence platform built around CA Jarvis technology, which is based on open source technology, Elasticsearch, Logstash and Kibana, supplemented by Hadoop and Spark. And what we are doing is ingesting data from our monitoring tools into this data lake to provide value-added insights and intelligence. When we talk about big data we talk about the three Vs: the variety, the volume and the velocity of data. But there is a fourth V that we often ignore: the veracity of the data, the truthfulness of data. CA being a leader in the monitoring space, we have been in the business of collecting and monitoring data for ages, and we are ingesting these data into the platform and providing value-added analytics on top of them. If you can read the slide, it's also an open framework; we have APIs for ingesting data from third-party sources as well. For example, if you have your business data, your business sentiment data, and you want to correlate that with IT metrics, how your IT is keeping up with your business cycles, you can do that as well. Now, some of the applications we are building, and this product is in beta as you see, are correlations between the various events: IT events and business events, network events and server events. Contextual log analytics.
The operative word is contextual. There is a plethora of tools in the market that perform log analytics, but log analytics in the context of a problem, when you really need it, is of paramount importance. Predictive capacity analytics: again, capacity analytics is not only about trending, right? It's about what-if analysis. What will happen to your infrastructure? Can your infrastructure sustain the pressure if your business grows by 2X, for example? That kind of what-if analysis we should be able to do. And finally, machine learning; we are working on it. Out-of-the-box machine learning algorithms to make sure that problems are not only corrected after the fact, but that we can predict and prevent problems in the future. So those who may be listening to this might be wondering, where do we start? If you are already a CA customer, you are familiar with CA tools, but if you're not, what's the starting point? I would recommend the starting point is CA Unified Infrastructure Management, which is the market-leading tool for hybrid cloud management. And it's not a hollow claim that we are making, right? It has been testified to, it has been blessed by, customers and analysts alike. You can see it was voted the cloud monitoring software of the year 2016 by a third party. And here are some of the customer experiences. An MSP was able to achieve a 15% productivity improvement as a result of adopting UIM. A healthcare provider's mean time to repair, MTTR, went down by 40% as a result of UIM. And a telecom provider had a faster adoption of cloud as a result of UIM, the reason being UIM gave them for the first time a single pane of glass to manage their on-prem and cloud environments, which had been a detriment for them in adopting cloud. And once they were able to achieve that, they were able to switch to cloud much, much faster.
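The what-if question above, whether the infrastructure can sustain 2X business growth, can be sketched with a linear trend fit plus a demand multiplier. Below is a stdlib-only sketch; the utilization series and the 80% ceiling are illustrative, and a real capacity model would be more than a straight line:

```python
def fit_trend(samples):
    """Least-squares slope and intercept over equally spaced samples."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

def months_until_saturation(samples, growth=2.0, ceiling=80.0, horizon=120):
    """Project utilization forward under a demand multiplier and return
    the first future month that crosses the capacity ceiling."""
    slope, intercept = fit_trend(samples)
    month = len(samples)
    while (intercept + slope * month) * growth < ceiling and month < horizon:
        month += 1
    return month

util = [30, 32, 33, 35, 36, 38]  # percent CPU, one sample per month
print(months_until_saturation(util, growth=2.0))  # 7
```

Under a 2X demand assumption, this server crosses the 80% line almost immediately, which is exactly the kind of answer a what-if analysis should surface before the marketing promotion runs, not after.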
Finally, the infrastructure management capabilities that I talked about is now being delivered as a turnkey solution, as a SAS solution, which we call digital experience insights. And I strongly, strongly encourage you to try UIM via CA digital experience insights, and here is the URL. You can go and sign up for the trial. With that, thank you.
Eric Herzog, IBM Storage - #VMworld - #theCUBE
Live from the Mandalay Bay Convention Center in Las Vegas, it's theCUBE, covering VMworld 2016, brought to you by VMware and its ecosystem sponsors. Now here are your hosts, John Furrier and John Walls. >> Well, welcome back to Mandalay Bay here at VMworld, along with John Furrier, I'm John Walls. Glad to be with you here on theCUBE as we continue our coverage of what's happening at VMworld, an exclusive broadcast partner here for the show. And along with John, we're joined by Eric Herzog, the Vice President of Product Marketing and Management at IBM Storage. And Eric, I just found out you're one of the all-time ten most popular CUBE guests, or most prominent CUBE guests. >> Most prolific. >> Congratulations. >> Well, thank you. We always love coming to theCUBE. It's always energizing. >> You love controversy. >> And I love controversy, and you get down to the heart of it. You're the Hard Copy of high tech. They're like, oh, I loved it. >> And we could probably mark each of your appearances by the Hawaiian shirt, I think. What do you think? >> Either the Hawaiian shirt or one of my luggage sets. >> We could trace those back. First off, your vibe about the show. I mean, just your thoughts about, we've been here for three, four days now, just your general feel about the messaging here and what's actually being conveyed in the enthusiasm out on the show floor. >> Well, it's pretty clear that the world has gone cloud. The world is doing cognitive and big data analytics. VMware is leading that charge. They're a strong partner of IBM. We do a lot of things with them, both with our cloud division and our storage division, and VMware is a very strong partner of IBM. We have all kinds of integration in our storage technology products, with VAAI, with VASA, with vCenter Ops, all the various product lines that VMware offers. And the key thing is everyone wants to go to the cloud, so by working with IBM and VMware together, it makes it easier and easier for customers, whether it be the small shop, Herzog's Bar and Grill, or whether it be the giant Fortune 500 global entity. Working with us
together allows them to get to the cloud sooner, faster, and have a better cloud experience. >> So, you know, everybody talks cloud and virtualization, big themes, big topics. So why does storage still matter? >> Well, the big thing is, if you're going to go to a cloud infrastructure and you're going to run everything on the cloud, think of storage as that solid foundation. It has to be rock solid. It has to be highly resilient. It has to be able to handle error codes and error messaging and things failing and things falling off the earth. At the same time it needs to be incredibly fast, which is where things like all-flash arrays come in, and even flexible, so things like software-defined storage. So think of storage as the critical foundation underneath any cloud or virtualized environment. If you don't have a strong storage foundation, with great resiliency, great availability, great serviceability and great performance, your cloud or your virtual infrastructure is going to be mediocre, and that's a very generous term. >> So that's a key point. Controversially speaking, to get to the controversy: the whole complexity around converged infrastructure, hyperconverged, or whatever the customers are deploying for compute. They're putting the storage close to that, whether it's a SaaS in the cloud, which is basically a data center that no one knows the address of, as we were saying. It's always going to have to sit somewhere. What are the key trends right now for you? Because software is leading the way. IBM has been doing a lot of work, I know, in software. We've been covering you guys. We'll be at IBM Edge coming up shortly, in a couple of weeks. Where's the innovation on the storage side for you guys? How do you talk to the customer base to say, okay, I've got some SaaS options now for backup and recovery? We heard one of your partners earlier talking about that. Where is the physical storage innovation? Is it in the software? What are your thoughts? >> So we have a couple paths of innovation for us. First,
software-defined storage. Several of the analyst firms have named us the number one software-defined storage company in the world for several years in a row now. Software-defined storage gives you a flexible infrastructure. You don't have to buy any of the underlying media or the underlying array controller from us; just buy our software, and then you can put it on anybody else's hardware you want. You can work with your cloud provider, with your reseller, with your distributor. Enterprises create their own clouds. Software-defined storage gives you a wide swath of storage functionality: backup, archive, primary store, grid, scale-out, software only. So ultimate flexibility. So that's one area of innovation. The second is all-flash. All-flash is not expensive, essentially. I love old Schwarzenegger movies. In the 1980s it was all about tape: he was a spy, they'd go and show what is supposedly the CIA, and it was all tape. Mid-'90s Schwarzenegger, another spy movie, they show a data center, all hard drive arrays. Now, in the next Schwarzenegger movie, hopefully it'll be all-flash arrays from IBM in the background. So flash is just an evolution. >> And we do tons of humor on the shirts. >> I keep swapping them out, so as he intimated, I get one from Maui, one from Kauai, one from the Big Island. So flash is where it's at from a system-level perspective. So you've got that innovation, and then you've got converged infrastructure, as you mentioned already, where you get the server, the storage, the networking and the VMware hypervisor all packaged up. So we have a product called the VersaStack that we do jointly with Cisco and VMware. We were late to market on that, we freely admit that, but just to give you an idea, in the first half of this year we have done almost 2X what we did in the entire year of 2015. So that's another growth engine, and cloud service providers in particular love to get these pre-canned, pre-racked VersaStacks and deploy them, and a number of our public references are cloud service providers, both big and small,
essentially wheel in a versus stack when they need it whelan not own will another pre-configured ready to go and they get up and up and quit going so those are three trends we just had a client on Scott equipment not a Monroe Louisiana went to the Versa stack and singing your praises like a great example of medium size small sized businesses so we keep think about enterprises and all this and that it doesn't have to be the case their services that you're providing the companies of all sizes that are gaining new efficiencies in protocol al people everybody needs storage and you think about it is really how do you want to consume the storage and in a smaller shop you may choose one way so versus stack is converged infrastructure our software-defined storage like spectrum accelerate spectrum virtualize a software-only model several of the products like spectrum accelerate inspect can protect are available through softlayer or other cloud is he consumed it as a cloud entity so whether you want to consume an on-premises software only full array full integrated stack or cloud configuration we offer any way in which you want to eat that cake big cake small cake fruit cake chocolate cake vanilla cake we got kicked for ever you need and we can cover every base with that a good point about the diversity of choices from tape to flash and they get the multi multi integrated Universal stack so a lot of different choices I want to ask you about you know with that kind of array of options how you view the competitive strategy for IBM with storage so you know I know you're a wrestler so is there a is there a judo move on the competition how would you talk about your differentiation how do you choke hold the competition well couple ways first a lot from a technical perspective by leading with software-defined storage and we are unmatched in that capacity according the industry analysts on what we do and we have it in all areas in block storage we got scale-out file storage and 
scale-out big data analytics, we've got backup, we've got archive. Almost no one has that panoply of offerings in the software-defined space, and you don't need to buy the hardware from us; you can buy it from our competitors. >> Two things I hear: software, and then the all-flash arrays. What specifically on the software are you guys leading in and unmatched in, as you said already? >> Well, Spectrum Protect has been a leader in the enterprise for years. Spectrum Scale is approaching 5,000 customers now, and we have customers close to an exabyte in production, a single customer with an exabyte, which is pretty incredible, for big data analytic workloads, one doing astronomical research. So for us it's all about the application, the workload, and the use case. Part of the reason we have a broad offering is that anyone who comes in here, sits in front of you guys, and says my array or my software will do everything for you is smoking something that's not legal. It's just not true. >> Maybe in Colorado. >> Yeah, okay, maybe. But the reality is that workloads, applications, and use cases vary dramatically. Let's take an easy example: we have multiple all-flash arrays. Why do we have multiple all-flash arrays? We have a version for mainframe attach; everyone there wants six or seven nines, and guess what, we can provide that. It's expensive, as everything is that offers six or seven nines, but now they can get all-flash performance on the mainframe and the upper end of the Linux world; that's what you would consume there. At the other end, we have our Storwize V5030F, which can be as low as, at street price, eighteen thousand dollars for an all-flash array to get started, basically the same price as a hard drive array, and it has all the enterprise data services: snapshot, replication, data encryption at rest, migration capability, tiering capability. It's basically what a hard drive array used to cost, so why not go all-flash? >> Talk about the evolution of IBM storage. IBM was a leader in storage in the beginning, but there was a period of time there, and Dave, we've talked on theCUBE about this, where EMC took a lot of share. But there's been a huge investment in storage over the past, I'd say, maybe five years in particular, maybe the past three specifically; I think over a billion dollars has been spent. We've had Jamie Thomas and a variety of folks on from IBM. What is the update? Take a minute to explain how IBM has regained its mojo in storage. Where did that come from? Add some color to that, because I think that's something that will make people go, hmm, I hear great things from IBM, but they didn't always have it in storage. >> So, as you know, IBM invented the hard drive and essentially created the storage industry, so saying that we lost our mojo is a fair statement, but boy, do we have it back. >> Explain. >> The first thing is, when you have this cloud and analytic, cognitive era, you need a solid foundation of storage, and IBM has publicly talked about the future of the world being around cloud and cognitive: cognitive infrastructure, cognitive applications. If your storage is not the best from an availability perspective and from a performance perspective, then the reality is that the cloud and cognitive work you're trying to do is basically going to suck. So in order to have cloud and cognitive, you need this underlying infrastructure that's rock solid. Quite honestly, as you mentioned, Dave, we've invested over three and a half billion dollars in the last three years, not to mention that before that we bought a company called Texas Memory Systems, which is the grandfather of our FlashSystem line, so we've invested well over three billion dollars. We've also made a number of executive hires: Ed Walsh just joined us, CEO of several startups and a former general manager from EMC; I myself was a senior vice president at EMC; and we just hired a new VP of Sales. >> They're serious. You guys are serious, you're all in: investing, bringing on the right team, focusing on applications, workloads, and use cases. >> As much as I love storage, most CEOs hate it. There's almost no CIO who was ever a storage guy; they're all app guys. You've got to talk their lingo: application, workload, and use case, how the storage enables the availability of those apps, workloads, and use cases, and how it gives them the right performance to meet their SLAs to the business. >> What's interesting, and I want to highlight it because I think it's a good point people might not know, is that having just good storage in and of itself was the old siloed model. But now, we cover all the IBM events, World of Watson, which used to be called Insight, and Edge, and InterConnect, the cloud show, and cognitive is front and center. There's absolutely a moonshot mandate from IBM to be number one in cognitive computing, which means big data analytics integrated at the application level, and obviously Bluemix in the cloud; Robert LeBlanc was here on stage talking about IBM Cloud and the relationship with VMware. That all fails if it doesn't have good storage underneath it, if it doesn't perform well, and latency matters, right? I mean, data matters. >> Well, I'd add a couple of things there. First of all, absolutely correct, but the other thing is that we actually have cognitive storage. We automate processes automatically, for example tiering your data. Some of our competitors have tiering, but most of them tier only within their own box. We can tier not only within our own box but from our box to EMC, our box to NetApp, our box to HP, HP to Dell, Dell to Hitachi; we can tier from anything to anything. That's a huge advantage right there. And we don't just tier by setting policy, which says when data is 90 days old, automatically move it. That's automation. Cognition is where we not only watch the applications but watch the data set, and we move it from hot to cold. Let's take, for example, financial data. You're a publicly traded company, and theCUBE and SiliconANGLE are going to be public soon, I'm sure, you guys are getting so big. Your finance guy is going to say, Dave, John, team, this financial data is white hot, it's got to be on all-flash. After you guys do
your announcement of your incredible earnings, and thank God, hopefully as a friend of the company I get stock, and my stock goes way up as your stock goes way up... >> What are we smoking now? Come on. >> Let me tell you, when that happens, the data is going to go stone cold, and we see that. You don't have to set a policy to tier the data; with IBM, we automatically learn when the data is hot and when it's cold, and we move it back and forth for you. There's no policy setting. Cognition, cognitive storage: the storage understands the workload. >> So some big data mojo coming into the storage. >> Right, and that's a huge change. So again, it's critical for any cognitive application to have incredibly performant storage with incredible resiliency, availability, and reliability. When there's cognitive healthcare, true cognitive healthcare, and Dave's on the table and they bring out their cognitive wand because they found something in your chest that they didn't see before, if the storage fails, that's not going to be good for Dave. And at the same time, if the storage is too slow, that might not be good for Dave either. When they run that cognitive wand, that hospital knows it's never going to fail. The doctor says, oh, Dave, okay, we'd better take that thing out, boom, he takes it out, and Dave's healthy again. >> That's a real example, by the way, not necessarily Dave on the table, but there was a story we wrote on SiliconANGLE, one of our most popular posts last month: IBM Watson actually found a diagnosis, and helped cure a patient, that the doctor had missed. I don't know if you saw that story; it went super viral. But that's the kind of business use case that you're illuminating with the storage. >> Yeah, and in fact, at one of the recent trade shows, the Flash Memory Summit, we won an award for best enterprise application with a commercial developer called SparkCognition. They develop cybersecurity applications, they recommend IBM FlashSystem, and Watson is actually embedded in their application, and it detects security threats for enterprises. So there's an example of combining the cognition of Watson, the cognition capability of FlashSystem, and their software, which is commercially available; it's not an in-house thing, it's regular software. >> All right, now we're in big-time intoxication mode with all this awesome, futuristic, real technology. How does a customer get this now? Because back to IT: the silos are still out there, and they're breaking down the silos. How do you take this to customers? What's the use case? How do you deploy this? What are you seeing for success stories? >> Well, the key thing is to make it easy to use and deploy, which we do. If you want the cloud model, we're available in software: IBM Global Resiliency Services uses us for their resiliency service, and over 300 cloud providers use Spectrum Protect for backup. Pick the cloud guy you want; we work with all of them. If you want to deploy in-house, we have a whole set of channel partners globally, we have the IBM sales team, and IBM Global Services uses IBM's own storage, of course, to provide to the larger enterprises. So whether you're a big shop, a medium shop, or a small shop, we have a whole set of people out there, with our partner base and with our own sales guys, who can help. And then we back it up: as you know, IBM is renowned for support and service across all of our divisions and our whole product portfolio, not just storage. If customers need support and service, our storage service guys are there right away. We can install it, or our partners can install it. We try to make it as brain-dead simple as possible, as easy as possible, and being cognitive, some of our user interfaces are as easy as a Macintosh: drag and drop, move your LUNs around, run analytics on when you're going to run out of storage so you know ahead of time, all the things people want today. Remember, IT budgets were cut dramatically in the downturn of '08 and '09, and while budgets have returned, they're not hiring storage guys; they're hiring developers and they're hiring cloud guys, and those guys don't know how to use storage well. So you've got to make it easy, always fast, and always resilient so that it doesn't fail anyway; but when it does, you just go into the GUI, it tells you what's wrong, bingo, and IBM service or our partners' service comes right out and fixes it. That's what you need today, because there aren't as many storage guys as there used to be. >> No question you've got the waterfront covered, no doubt about that. And again, congratulations on cracking the top 10 most popular CUBE guests. >> We consider that an honor and a privilege to be a part of; we really appreciate it, thank you. >> We'll continue the coverage here on theCUBE at VMworld right after this.
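The "six or seven nines" of availability mentioned above is easy to make concrete: each additional nine cuts the allowed annual downtime by a factor of ten. A minimal sketch of the arithmetic (the function name is ours, purely illustrative):

```python
def annual_downtime_seconds(nines: int) -> float:
    """Allowed downtime per year for an availability of `nines` nines
    (e.g. 6 nines = 99.9999% uptime)."""
    availability = 1 - 10 ** (-nines)
    seconds_per_year = 365.25 * 24 * 3600  # Julian year
    return (1 - availability) * seconds_per_year

# Six nines allows roughly half a minute of downtime a year;
# seven nines, only a few seconds.
for n in range(3, 8):
    print(f"{n} nines -> {annual_downtime_seconds(n):.2f} s/year")
```

Which is why, as noted in the conversation, arrays rated at six or seven nines command a premium: the last seconds of downtime are the most expensive to engineer away.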
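The policy-versus-cognition distinction drawn above, moving data by observed heat rather than by a fixed age rule ("when it's 90 days old, move it"), can be sketched in a few lines. This is a toy illustration under our own assumptions, not a model of IBM's actual tiering implementation:

```python
import time
from collections import defaultdict, deque

class LearnedTierer:
    """Toy access-frequency-based tiering (hypothetical class, for
    illustration only): volumes whose recent access rate crosses a
    threshold are promoted to flash; quiet ones are demoted to disk."""

    def __init__(self, window_s=3600, hot_threshold=100):
        self.window_s = window_s            # look-back window in seconds
        self.hot_threshold = hot_threshold  # accesses/window that count as "hot"
        self.accesses = defaultdict(deque)  # volume -> recent I/O timestamps
        self.tier = defaultdict(lambda: "disk")

    def record_access(self, volume, now=None):
        now = time.time() if now is None else now
        q = self.accesses[volume]
        q.append(now)
        # Drop accesses that fell out of the look-back window.
        while q and now - q[0] > self.window_s:
            q.popleft()

    def rebalance(self, now=None):
        """Placement follows observed behaviour, not a fixed age policy:
        earnings data goes hot at announcement time and stone cold after."""
        now = time.time() if now is None else now
        for volume, q in self.accesses.items():
            while q and now - q[0] > self.window_s:
                q.popleft()
            self.tier[volume] = "flash" if len(q) >= self.hot_threshold else "disk"
        return dict(self.tier)
```

In the earnings-announcement example from the interview, the financial volume would be promoted while the announcement traffic lasts and demoted automatically once the accesses stop, with no administrator-set schedule.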
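The GUI capability mentioned above, "run analytics on when you're going to run out of storage," is at heart a trend extrapolation over usage samples. A naive least-squares sketch (our own simplification, not the product's actual forecasting):

```python
def days_until_full(capacity_tb, used_samples_tb):
    """Estimate days until a pool fills, from equally spaced daily usage
    samples (TB), via a least-squares linear trend. Illustrative only."""
    n = len(used_samples_tb)
    if n < 2:
        return None
    mean_x = (n - 1) / 2
    mean_y = sum(used_samples_tb) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(used_samples_tb))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den  # TB consumed per day
    if slope <= 0:
        return None  # usage flat or shrinking: no projected exhaustion
    return (capacity_tb - used_samples_tb[-1]) / slope
```

Surfacing a number like this in the management GUI is what lets the "developers and cloud guys" mentioned above plan capacity without being storage specialists.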