
Search Results for single paradigm:

Tony Coleman, Temenos and Boris Bialek, MongoDB | MongoDB World 2022


 

>>We're back at theCUBE's coverage of MongoDB World 2022, the first live event in three years. Pretty amazing. And I'm really excited to have Tony Coleman here from Temenos, who are changing the finance and banking industry, and Boris Bialek, MongoDB's global head of industry solutions. Boris, welcome back to theCUBE. Tony, welcome, first time. So thanks for coming on. >>Thank you. >>Thanks for having us. >>Tony, tell us what you guys are up to, disrupting the finance world. >>So Temenos is everyone's banking platform. We are a software company. We have over 3,000 financial institutions around the world; marketing tells me that works out to over 1.2 billion people relying on Temenos software for their banking and financial needs. 41 of the top 50 banks in the world run Temenos software, and we are very proud to be powering all of those institutions on their innovation journeys, bringing, you know, the digital transformation that we've seen so much of over the past few years, and enabling a lot of the world's unbanked, through digital banking, to become members of the community. >>So basically you're bringing the software platform to enable that, so somebody doesn't have to build it themselves, because they'd never get there. >>Absolutely. >>And so I don't know if you consider that disruptive; I guess I do, to the industry, to a certain extent. But when you think of disruption in this business, you think of blockchain and crypto, and NFTs. That's a completely separate world, and you guys participate in that as well. >>Well, I would say it's related, right? I was doing a podcast recently and they had this idea of, um, buzzword jail, where you could choose words to send to jail, and I said NFTs — not because I think they're intrinsically bad, but because I think at the moment they're rife for scams. It's one of these technologies and investment areas that people don't understand, and there are a lot of mistakes that can be made there. >>Yeah. >>I mean, it's a fascinating piece in that it could be truly transformative if we get it right, but it's very much emerging, so we'll see. We don't play a huge part in the blockchain industry directly; we work with partners in that space. But in terms of digital assets and that sort of thing, yeah, absolutely. >>So, Boris, you have industry solutions in your title. What does that entail? >>So basically I'm responsible for all the verticals, and that includes great partners like Temenos. And we're doing a lot of verticals by now. When you listen today to all these various talks, we have so much, ranging from banking to retail, healthcare, insurance — you name it, we have it by now. And what's obvious is the clients moving from the edge solution, like touching a little toe in the water, to going all in and building the biggest solutions — you saw the lady on stage this morning. These are not "let's do something small now" projects anymore; we're part of the transformation journey. And this is where Tony and I regularly work together on how we transform things and how we build a new way of doing banking, with microservices and the technology surrounding it. >>Yeah, but what about performance in this world? Can you tell me about that? >>Yeah, this is an interesting thing, because people are always challenging what performance means in document databases.
And Tony actually challenged us about six weeks before his own show, which was several weeks ago in London, and said, "Boris, let's do a benchmark." Maybe you tell the story, because if I get too excited I'll get ahead of myself. >>Yeah, sure. Performance and efficiency are topics close to my heart; they have been for years. Every two or three years we run a high-water benchmark, and this year we literally doubled down on everything we did previously. So this was 200 million accounts, 100 million customers, and we were thrashing through 102,875 transactions a second, which is a phenomenal number. >>Can I do that on the blockchain? >>Exactly, right. So, you know, I get asked why we do such high numbers, and the reason is very straightforward. If somebody wants 10,000 transactions a second — and we're seeing banks now that need that sort of thing — and I can give them a benchmark report that shows 100,000, I don't need to keep doing benchmarks at 10,000. >>Yeah. Tell me more about it. Any time you get into benchmarks, you want to understand the configuration, the workload. Tell me more about that. >>So we have a pretty well-established standard transaction mix. We call it a retail transaction mix, and the workload tries to simulate what you would do on a daily basis. So you're going to make payments, you're going to check your balances, you're going to see what's moved on your account. We do all of that, and we run it through a proper production-grade environment. And this is really important: this is not something we do in a lab that you couldn't go live on. This has all of the horrible non-functional requirements around high availability, >>security, security policies, private links, all these things. And one thing is, they've been doing this for a long time. This is not "let's define something new for the world now"; this is something Tony's been doing for literally 10, 15 years, right? >>It's only been 15 years. >>But this is your benchmark, you developed it. >>Okay. >>So we ran it through and, yeah, some fantastic numbers. And not just the headline top-level number of 100,000-plus transactions a second — the response time out of it was fantastic: one millisecond, which is just brilliant. It means you get these really efficient numbers, and what that helped us do, with some of the other partners involved in the benchmark as well, is that our throughput per core, which is a really good measure of efficiency, is up to four times better than when we ran it three years ago. So in terms of the sustainability piece, which is so important, that's really a huge improvement, and that's down to application changes and architecture changes, as well as using the appropriate technology in the right place. >>How important were things like the number of cores, the memory sizes, the block sizes, all that stuff? >>We are very tiny. So this is the part — when I talk to people, we have what we call a reference system in the back, and people look at me and ask, how many transactions on that one? To be fair, three-quarters went one way and one quarter somewhere else, because we're still porting some components and stored procedures, for disclosure. But when I think of 75,000 transactions on a single M80 system, which is thirty-two cores, if I'm saying it correctly, something like that — this is a tiny machine in the world of banking.
So before, this would have been mainframes, and now it's one instance on AWS. And this is really amazing — the cost and the environmental footprint are so, so important. >>And is there a heavy write environment? >>So the way we architect the solution is that it follows something called command query responsibility segregation, CQRS. What we do is all the commands go into an appropriate database for that piece, and that was running at about 25,000 transactions a second, and then we're streaming the data out of that directly into MongoDB. So MongoDB was actually doing more than the 75,000 queries a second, because part of it was also ingesting those 25,000 transactions a second at the same time. >>Okay, and did the workload have high locality, medium locality? Just give us a picture of what that's like. >>Yeah, we don't really have that. >>So explain that — that's not the mindset for a document database. >>Exactly. In a document database, you don't have the hot-spotting of one single field of a table which suddenly becomes a hot spot. What belongs together is stored together and comes out together. So the number of accesses, for example, is much, much smaller in a document system than historically in a relational one. >>So it's not a good indicator, necessarily, anymore. >>That's right, it's so much reduced. The number of access patterns is smaller, and it's highly optimized internally as well, the internal structures. >>So a traditional benchmark would have a cache in front with a high cache-hit rate — 99 percent, right? That's high locality of reference. But that's irrelevant here. >>It's gone. There's no caching in the middle anymore; it goes straight against the database. All these things are out, and that's what makes it so exciting — it's all done in a real environment. I think we really need to stress that: it's not a lab test. It's a real-life environment, out in the wild, with the benchmark driver pushing it. >>How did your customers respond? You did this for your recent event? >>Yeah, we did it for our user conference, our community forum, which was a few weeks ago in London. And, you know, it got a great reception, of course, but the main thing people were fascinated by is how much more efficient the whole platform is becoming. There's a great number the team pulled out: having doubled throughput on the platform from what we did three years ago, we're actually using 20% less infrastructure to deliver double the performance. At a macro level, that's a phenomenal achievement. And it means that these changes we make, everything we're doing, benefits all of our customers. All of the banks, when they take the latest release, get these benefits. Everything is that much more efficient, so everybody benefits from every investment. >>And this was running in the cloud, is that correct? You're running it on Atlas? >>So this was Atlas, an M80 on AWS, with AWS instances and processors. So it was a really reality-driven environment. >>Pure cloud-native, using managed services on AWS, and Atlas for that piece. >>It's awesome. And how convenient the timing, coming into MongoDB World. How are you socializing this with your community?
>>We're having a session this afternoon as well, where we talk in a little more detail about that, and Tony has a session tomorrow as well. And we see a lot of good feedback when we bring it up with clients. Obviously some clients get very specific, because this reduction in footprint is so huge when you consider a client has eight, nine environments, from early development systems to production to emergency standby, maybe in a different cloud. All these things that we talk about with the different Atlas features — multi-cloud, multi-environment — all this stuff comes into play. And this is why I'm so excited to work with them. We should bring up as well the other things which are already available with their front-end solutions, with the Infinity services, because that's the other part of the modernization: the microservices, which Tony is so politely not mentioning. There's a lot of cool technology in there, which fits how microservices work: API-first and all these, what are they called, factors — microservices, API-first, cloud-native, headless — I think that's the right order. All these things are reflected as well. But with the leadership they've shown, I think a lot of companies now have to play catch-up to what Tony and his team are delivering on the banking side. >>This gets to modernization. We really haven't explicitly talked about that, but everything you've just said speaks to modernization. Typically in financial services you find a lot of relational databases — twenty years old, hardened, etcetera, highly available; give them credit for that. But a lot of times you'll see firms just shift that into the cloud. You guys chose not to do that. What did the modernization journey look like? >>So I'm a firm believer in pragmatism and in using, as you touched on earlier, the appropriate technology. >>Horses for courses. >>Exactly — right out of my mouth. I was talking to one of the investor analysts earlier, and the exact same question comes up, right? If you've got a relational database, or you've got a big legacy system on a mainframe or whatever it is, and you want to pull that over, it's not just a case of moving the data model from one paradigm to another. You need to look at it holistically, and you need to be ambitious. The industry has got quite nervous about some of these transformation projects, but, and it might be counter-intuitive, I think being ambitious and being bold is a better way through. Look at it holistically, lay out a plan. It is hard to do these sorts of transformations, but that's what makes it the challenge; that's what makes it fun. Take those bold steps, look at the end state, and then work out a practical way you can deliver value to the business and your customers along the road. >>So did you migrate from a traditional RDBMS to MongoDB? >>So, yeah, this is a conversation. Back in the late nineties, the phrase "document model" hadn't really been coined yet, and for some of our work at the time we referred to it as a hierarchical model. And at that point in time, really, if you wanted to sell to a bank, you needed to be running Oracle. So we took this data model and we got it running on Oracle, and then on other relational databases as well — but actually, under the covers, there it is, sort of a document model already.
So there is a project we're looking at that says, well, okay, take that model, which is in a relational database — and of course, as you build over time you do come to rely on some of the features of relational databases — and move it over to something like MongoDB. You know, it's not quite as simple as just changing the data model, so there are a few bits and pieces that we need to work through. But there is a proof of concept we're running which is looking really promising, spurred on by the amazing results from the benchmark. That could be something. >>That's really — yeah, I think, you know, 20 years ago you probably wouldn't even have thought about it. It's just too risky. But today, with the modern tools and the cloud, and you're talking about microservices and containers, it becomes potentially more feasible. >>But the other side of it is, you know, it's only relatively recently that MongoDB has had transaction support across multiple documents — multi-collection transactions. And banking, as we all know, is highly regulated. That is all of your worst possible non-functional requirements: security, transactionality, atomicity, you know, the whole shebang. Your worst possible nightmare is a Monday morning for us. >>And I think one part which is exciting about this is that Tony's is a very good practical example of large-scale modernization: by cutting off that layer and going back to the hierarchical internal structures, we simplify a lot of the backing components, because obviously the translation which was done before isn't needed anymore. And that is, for me as well, an exciting example to see how long it takes and what it involves. So Tony's is my live experiment, so to speak. >>Well, you're right, because it used to be with those migrations: how many lines of code? How long do I have to freeze it? And that a lot of times led people to say, well, forget it, because the business is going to shut down. >>But now we do that, we do that. So, besides the work with a lot of financial clients, my job now is normally shift-left and skin in the game, because the result of the work is: if they move everything to the cloud and it was bad before, it will not be better in the cloud only because it's in somebody else's data center. So this modernization and innovation factor is absolutely critical. And I'm glad to say that people get it by now. This shift-left is: how can I innovate, how can I accelerate innovation? And that leads very quickly to the document model discussion. >>Yeah, I think the practitioners will tell you, if you really want to affect the operational model, have a meaningful impact on your business, you have to really modernize. You can't just lift and shift. >>Absolutely. >>You know, that's the difference between hundreds of millions, or billions in some cases, versus some nice little hits here or there. >>So we see as well a lot of clients asking for solutions like the Temenos solutions, and others, where there's no longer a discussion about whether to move; the question is how fast, how can we accelerate. We see the service requests coming in first — it's amazing. After the event we had in London, 100 clients called us. So it's not our salespeople calling on the clients; the clients are coming in saying, "I saw it. How do we get started?" And that is, for me, from the vendor perspective, an amazing moment. >>Awesome, guys. We're going to have to go. Thanks so much for that.
We'll have to have you back and see how that goes. Yeah, that's a big story. All right, keep it right there, everybody, we'll be right back. This is Dave for theCUBE. You're watching our live coverage of MongoDB World 2022 from New York City.
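For readers who want to picture the architecture Tony and Boris describe — commands written to one store, with the data streamed into a document model that serves queries, keeping "what belongs together" in a single document — here is a minimal, hypothetical Python sketch. It is not Temenos' or MongoDB's implementation; the document shape, field names, and the in-process "stream" are assumptions made purely for illustration.

```python
# Toy sketch of the CQRS pattern described above: commands land in one store,
# and a projector folds them into query-side "account" documents where the
# balance and recent entries live together in a single document.
from collections import defaultdict
from datetime import datetime, timezone

command_log = []  # command side: an append-only log of payment commands
account_views = defaultdict(lambda: {"balance": 0, "recent_entries": []})  # query side

def submit_payment(account_id, amount, reference):
    """Command side: record the intent; queries never read from here."""
    cmd = {
        "account_id": account_id,
        "amount": amount,
        "reference": reference,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    command_log.append(cmd)
    project(cmd)  # in a real system this would be an asynchronous stream, not a direct call

def project(cmd):
    """Projector: fold each command into the single document used to answer queries."""
    doc = account_views[cmd["account_id"]]
    doc["balance"] += cmd["amount"]
    doc["recent_entries"].append(
        {"amount": cmd["amount"], "reference": cmd["reference"], "ts": cmd["ts"]}
    )
    doc["recent_entries"] = doc["recent_entries"][-50:]  # keep what a balance check needs, together

def check_balance(account_id):
    """Query side: one document read returns balance and recent movement together."""
    return account_views[account_id]

if __name__ == "__main__":
    submit_payment("acct-001", -120, "groceries")
    submit_payment("acct-001", 2500, "salary")
    print(check_balance("acct-001"))
```

The point of the sketch is the shape of the read side: a balance check touches one self-contained document instead of joining and hot-spotting several relational tables, which is the property Boris credits for the absence of a cache tier in the benchmark.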

Published Date : Jun 7 2022



Brian Mullen & Arwa Kaddoura, InfluxData | AWS re:Invent 2021


 

(upbeat music) >> Everybody welcome back to theCUBE, continuous coverage of AWS 2021. This is the biggest hybrid event of the year, theCUBEs ninth year covering AWS re:Invent. My name is Dave Vellante. Arwa Kaddoura is here CUBE alumni, chief revenue officer now of InfluxData and Brian Mullen, who's the chief marketing officer. Folks good to see you. >> Thanks for having us. >> Dave: All right, great to see you face to face. >> It's great to meet you in person finally. >> So Brian, tell us about InfluxData. People might not be familiar with the company. >> Sure, yes. InfluxData, we're the company behind a pretty well-known project called Influx DB. And we're a platform for handling time series data. And so what time series data is, is really it's any, we think of it as any data that's stamped in time in some way. That could be every second, every two minutes, every five minutes, every nanosecond, whatever it might be. And typically that data comes from, you know, of course, sources and the sources are, you know, they could be things in the physical world like devices and sensors, you know, temperature gauges, batteries. Also things in the virtual world and, you know, software that you're building and running in the cloud, you know, containers, microservices, virtual machines. So all of these, whether in the physical world or the virtual world are kind of generating a lot of time series data and our platforms are designed specifically to handle that. >> Yeah so, lots to unpack here Arwa, I mean, I've kind of followed you since we met on virtually. Kind of followed your career and I know when you choose to come to a company, you start with the customer that's what your that's your... Those are your peeps. >> Arwa: Absolutely. >> So what was it that drew you to InfluxData, the customers were telling you? >> Yeah, I think what I saw happening from a marketplace is a few paradigm shifts, right? And the first paradigm shift is obviously what the cloud is enabling, right? So everything that we used to take for granted, when you know, Andreessen Horowitz said, "software was eating the world", right? And then we moved into apps are eating the world. And now you look at the cloud infrastructure that, you know, folks like AWS have empowered, they've allowed services like ours and databases, and sort of querying capabilities like Influx DB to basically run at a scale that we never would have been able to do. Just sort of with, you know, you host it yourself type of a situation. And then the other thing that it's enabled is again, if you go back to sort of database history, relational, right? Was humongous, totally transformed what we could do in terms of transactional systems. Then you moved into sort of the big data, the Hadoops, the search, right. The elastic. And now what we're seeing is time series is becoming the new paradigm. That's enabling a whole set of new use cases that have never been enabled before, right? So people that are generating these large volumes of data, like Brian talked about and needing a platform that can ingest millions of points per second. And then the ability to query that in real time in order to take that action and in order to power things like ML and things like sort of, you know, autonomous type capabilities now need this type of capability. So that's all to know >> Okay so, it's the real timeness, right? It's the use cases. Maybe you could talk a little bit more about those use cases and--- >> Sure, sure. 
So, yeah, we think about things in both the kind of virtual world, where people are pulling data off of sources that are in infrastructure, software infrastructure. We have a number — like PayPal is a customer of ours, and Apple. They pull time series data from the infrastructure that runs their payments platform, so you can imagine the volume that they're dealing with. Think about how much data you might have in, like, a regular relational scenario, now multiply that, every piece of data, times however often you're looking at it. Every one second, every 10 minutes, whatever it might be. You're talking about an order of magnitude larger volume, higher volume of data. And so the tools that people were using were just not really equipped to handle that kind of volume, which is unique to time series. So we have customers like PayPal on kind of the software infrastructure side. We also have quite a bit of activity among customers on the IoT side. So Tesla is a customer, they're pulling telematics and battery data off of the vehicle, pulling that back into their cloud platform. Nest is also our customer. So we're pretty used to seeing, you know, connected thermostats in homes. Think of all the data that's coming from those individual units, it's all time series data and they're pulling it into their platform using Influx. >> So, that's interesting. So Tesla, take that example, they will maybe persist some of the data, maybe not all of it. It's ephemeral, and they end up putting some of it back to the cloud, probably a small portion percentage-wise, but it's a huge amount of data, right? >> Brian: Yeah. >> So, they might want to track some anomalies, okay, capture every time an animal runs across, you know, and put that back into the cloud. So where do you guys fit in that analysis and what makes you sort of the best platform for a time series database? >> Yeah, it's interesting you say that because it is ephemeral and there are really two parts of it. This is one of the reasons that time series is such a challenge to handle with something that's not really designed to handle it. In a moment, in that minute, in the last hour, you really want to see all the data, you want all of what's happening and have full context for what's going on and see these fluctuations, but then maybe a day later, a week later, you may not care about that level of fidelity. And so you down-sample it, you have, like, kind of more of a summarized view of what happened in that moment. So being able to kind of toggle between high fidelity and low fidelity, it's a super hard problem to solve. And our platform Influx DB really allows you to do that. >> So-- >> And that is different from relational databases, which are great at ingesting, but not great at kicking data out. >> Right. >> And I think what you're pointing to is, in order to optimize these platforms, you have to ingest and get rid of data as quickly as you can. And that is not something that a traditional database can do.
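Brian's point about toggling between high and low fidelity is essentially down-sampling: keep every raw point for the recent window, and keep only aggregates for older data. Below is a small, illustrative Python sketch of that idea; the window sizes and the in-memory lists are assumptions for the example, and it does not use the actual InfluxDB engine or client.

```python
# Minimal down-sampling sketch: full fidelity for the recent window,
# a summarized (mean-per-bucket) view for everything older.
from statistics import mean

def downsample(points, now, keep_raw_seconds=3600, bucket_seconds=600):
    """points: list of (timestamp_seconds, value). Returns (raw_recent, summarized_old)."""
    raw_recent = [(ts, v) for ts, v in points if now - ts <= keep_raw_seconds]
    old = [(ts, v) for ts, v in points if now - ts > keep_raw_seconds]

    buckets = {}
    for ts, v in old:
        bucket_start = ts - (ts % bucket_seconds)  # e.g. one summary row per 10 minutes
        buckets.setdefault(bucket_start, []).append(v)

    summarized_old = [(start, mean(vals)) for start, vals in sorted(buckets.items())]
    return raw_recent, summarized_old

if __name__ == "__main__":
    now = 100_000
    pts = [(now - i, 20.0 + (i % 7)) for i in range(7200)]  # one reading per second for two hours
    recent, summary = downsample(pts, now)
    print(len(recent), "raw points kept,", len(summary), "summary buckets for older data")
```

In a managed time series platform this kind of aggregation and expiry is handled by retention and continuous-aggregation features rather than application code; the sketch is only meant to show why the workload differs from a relational one that never "kicks data out."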
We are trying to allow you to build an application with the minimum amount of code and the greatest amount of integrations, right. So we really power you to do more with less and get rid of unnecessary code or, you know, give you that simplicity. Because for us, it's all about speed to market. You want an application, you have an idea of what it is that you're trying to measure or monitor or instrument, right? We give you the tools, we give you the integrations. We allow you to have to work in the IDE that you prefer. We just launched VS Code Integration, for example. And that then allows these technical audiences that are solving really hard problems, right? With today's technologies to really take our product to market very quickly. >> So, I want to follow up on that. So I like the term builder. It's an AWS kind of popularized that term, but there's sort of two vectors of that. There's the hardcore developers, but there's also increasingly domain experts that are building data products and then more generalists. And I think you're saying you serve both of those, but you do integrations that maybe make it easier for the latter. And of course, if the former wants to go crazy they can. Is that a right understanding? >> Yes absolutely. It is about accessibility and meeting developers where they are. For example, you probably still need a solid technical foundation to use a product like ours, but increasingly we're also investing in education, in videos and templates. Again, integrations that make it easier for people to maybe just bring a visualization layer that they themselves don't have to build. So it is about accessibility, but yes obviously with builders they're a technical foundation is pretty important. But, you know, right now we're at almost 500,000 active instances of Influx DB sort of being out there in the wild. So that to me shows, that it's a pretty wide variety of audiences that are using us. >> So, you're obviously part of the AWS ecosystem, help us understand that partnership they announced today of Serverless for Kinesis. Like, what does that mean to you as you compliment that, is that competitive? Maybe you can address that. >> Yeah, so we're a long-time partner of AWS. We've been in the partner network for several years now. And we think about it now in a couple of ways. First it's an important channel, go to market channel for us with our customers. So as you know, like AWS is an ecosystem unto itself and so many developers, many of these builders are building their applications for their own end users in, on AWS, in that ecosystem. And so it's important for us to number one, have an offering that allows them to put Influx on that bill so we're offered in the marketplace. You can sign up for and purchase and pay for Influx DB cloud using or via AWS marketplace. And then as Arwa mentioned, we have a number of integrations with all the kind of adjacent products and services from Amazon that many of our developers are using. And so when we think about kind of quote and quote, going to where the developer, meeting developers where they are that's an important part of it. If you're an AWS focused developer, then we want to give you not only an easy way to pay for and use our product but also an easy way to integrate it into all the other things that you're using. >> And I think it was 2012, it might've even been 11 on theCUBE, Jerry Chen of Greylock. We were asking him, you think AWS is going to move up the stack and develop applications. He said, no I don't think so. 
I think they're going to enable developers and builders to do that, and then they'll compete with the traditional SaaS vendors. And that's proved to be true, at least thus far. You never say never with AWS. But then recently he wrote a piece called "Castles on the Cloud," and the premise was essentially that the ISVs will build on top of clouds. And that seems to be what you're doing with Influx DB. Maybe you could tell us a little bit more about that. We call it super clouds. >> Arwa: That's right. >> You know, leveraging the 100 billion dollars a year that the hyperscalers spend to develop an abstraction layer that solves a particular problem — but maybe you could describe what that is from your perspective, with Influx DB. >> Yeah, well, we grew up originally as an open source software company. >> Dave: Yeah, right. >> People download Influx DB, run it locally on a laptop, put it up on a server. And, you know, that's our kind of origin as a company, but increasingly what we recognized is our customers, our developers, were building in and on the cloud. And so it was really important for us to kind of meet them there. And so we think about, first of all, offering a product that is easily consumed in the cloud and really just allows them to essentially hit an endpoint. So with Influx DB cloud, they really don't have to worry about any of that kind of deployment and operation of a cluster or anything like that. Really, from a usage perspective, they just pay for three things. The first is data in: how much data are you putting in? Second is query count: how many queries are you making against it? And then third is storage: how much data do you have and how long are you storing it? And really, it's a pretty simple proposition for the developer to kind of see and understand what their costs are going to be as they grow their workload. >> So it's a managed service, is that right? >> Brian: It is a managed service. >> Okay, and how do you guys price? Is it kind of usage based? >> Totally usage based, yeah — again, the data ingestion, the query count and the storage that Brian talked about. But to your point, back to what the hyperscalers are doing in terms of creating this global infrastructure that can easily be tapped into: we then extend above that, right? We effectively become a platform-as-a-service builder tool. Many of our customers actually use InfluxData to then power their own products, which they then commercialize into a SaaS application. Right, we've got customers that are doing, you know, Kubernetes monitoring or DevOps monitoring solutions, right? That monitor, you know, people's infrastructure or web applications or any of those things. We've got people building us into, you know, Industrial IoT such as PTC's ThingWorx, right? Where they've developed their own platform >> Dave: Very cool. >> Completely backed by our time series database, right. Rather than them having to build everything, we become that key ingredient. And then of course the fully cloud-managed service means that they can go to market that much quicker. Nobody's procuring servers, nobody is managing, you know, security patches, any of that, it's all fully done for you. And it scales up beautifully, which is the key. And some of our customers also want to scale up or down, right. They know when their peak hours or peak times are, and they need something that can handle that load.
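To make the usage-based model concrete, here's a tiny, hypothetical cost estimator combining the three dimensions Brian lists — data in, query count, and storage. The rate constants are invented placeholders purely to show how the dimensions compose; they are not InfluxData's actual prices.

```python
# Back-of-the-envelope sketch of a three-dimension usage bill.
# All rates below are made-up placeholders for illustration only.
HYPOTHETICAL_RATE_PER_MB_IN = 0.002        # $ per MB written
HYPOTHETICAL_RATE_PER_100_QUERIES = 0.01   # $ per 100 query executions
HYPOTHETICAL_RATE_PER_GB_HOUR = 0.002      # $ per GB stored per hour

def estimate_monthly_cost(mb_written, query_count, gb_stored_avg, hours_in_month=730):
    """Combine data in, query count, and storage into one illustrative monthly estimate."""
    ingest = mb_written * HYPOTHETICAL_RATE_PER_MB_IN
    queries = (query_count / 100) * HYPOTHETICAL_RATE_PER_100_QUERIES
    storage = gb_stored_avg * hours_in_month * HYPOTHETICAL_RATE_PER_GB_HOUR
    return round(ingest + queries + storage, 2)

if __name__ == "__main__":
    # e.g. 50 GB written, 2 million queries, 30 GB retained on average
    print(estimate_monthly_cost(mb_written=50_000, query_count=2_000_000, gb_stored_avg=30))
```

The takeaway is simply that a builder can project costs from the same three usage numbers they already track as their workload grows.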
>> So looking ahead to next year, so anyway, I'm glad AWS decided to do re:Invent live. (Arwa mumbling) >> You know, that's weird, right? We thought in June, at Mobile World Congress, we were going to, it was going to be the gateway to returning but who knows? It's like two steps forward, one step back. One step forward, two steps back but we're at least moving in the right direction. So what about for you guys InfluxData? Looking ahead for the coming year, Brian, what can we expect? You know, give us a little view of sharp view of (mumbles) >> Well kind of a keeping in the theme of meeting developers where they are, we want to build out more in the Amazon ecosystem. So more integrations, more kind of ease of use for kind of adjacent products. Another is just availability. So we've been, we're now on actually three clouds. In addition to AWS, we're on Azure and Google cloud, but now expanding horizontally and showing up so we can meet our customers that are working in Europe, expanding into Asia-Pacific which we did earlier this year. And so I think we'll continue to expand the platform globally to bring it closer to where our customers are. >> Arwa: Can I. >> All right go ahead, please. >> And I would say also the hybrid capabilities probably will also be important, right? Some of our customers run certain workloads locally and then other workloads in the cloud. That ability to have that seamless experience regardless, I think is another really critical advancement that we're continuing to invest in. So that as far as the customer is concerned, it's just an API endpoint and it doesn't matter where they're deploying. >> So where do they go, can they download a freebie version? Give us the last word. >> They go to influxdata.com. We do have a free account that anyone can sign up for. It's again, fully cloud hosted and managed. It's a great place to get started. Just learn more about our capabilities and if you're here at AWS re:Invent, we'd love to see you as well. >> Check it out. All right, guys thanks for coming on theCUBEs. >> Thank you. >> Dave: Great to see you. >> All right, thank you. >> Awesome. >> All right, and thank you for watching. Keep it right there. This is Dave Vellante for theCUBEs coverage of AWS re:Invent 2021. You're watching the leader in high-tech coverage. (upbeat music)

Published Date : Nov 30 2021



Breaking Analysis: Data Mesh...A New Paradigm for Data Management


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. Data mesh is a new way of thinking about how to use data to create organizational value. Leading-edge practitioners are beginning to implement data mesh in earnest, and importantly, data mesh is not a single tool or a rigid reference architecture, if you will. Rather, it's an architectural and organizational model that's really designed to address the shortcomings of decades of data challenges and failures, many of which we've talked about on theCUBE. As important, by the way, it's a new way to think about how to leverage data at scale across an organization and across ecosystems. Data mesh, in our view, will become the defining paradigm for the next generation of data excellence. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we welcome the founder and creator of data mesh, author, thought leader, technologist Zhamak Dehghani. Zhamak, thank you for joining us today, good to see you. >>Hi Dave, it's great to be here. >>All right, real quick, let's talk about what we're going to cover. I'll introduce, or reintroduce, you to Zhamak. She joined us earlier this year in our CUBE on Cloud program. She's the director of emerging tech at Thoughtworks North America, and a thought leader, practitioner, software engineer, architect, and a passionate advocate for decentralized technology solutions and data architectures. And Zhamak, since we last had you on as a guest, which was less than a year ago, I think you've written two books in your spare time, one on data mesh and another called Software Architecture: The Hard Parts, both published by O'Reilly. So how are you? You've been busy. >>I've been busy, yes. Good, it's been a great year, it's been a busy year. I'm looking forward to the end of the year and the end of these two books, but it's great to be back and speaking with you. >>Well, you've got to be pleased with the momentum that data mesh has. Let's just jump back to the agenda for a bit and get that out of the way. We're going to set the stage by sharing some ETR data, from our data partner, on the spending profile in some of the key data sectors, and then we're going to review the four key principles of data mesh — it's always worthwhile to sort of set that framework. We'll talk a little bit about some of the dependencies and the data flows, and we're really going to dig today into principle number three, and a bit around the self-serve data platforms. To that end, we're going to talk about some of the learnings that Zhamak has captured since she embarked on the data mesh journey with her colleagues and her clients, and we specifically want to talk about some of the successful models for building the data mesh experience. Then we're going to hit on some practical advice, and we'll wrap with some thought exercises, maybe a little tongue-in-cheek, from some of the community questions that we get. So the first thing I want to do, just to get this out of the way, is introduce the spending climate. We use this XY chart to do this, we do this all the time. It shows the spending profiles in the ETR data set for some of the more data-related sectors of the ETR taxonomy. They dropped their October data last Friday, so I'm using the July survey here — we'll get into the October survey in future weeks — but about 1,500 respondents, and I don't see a dramatic change coming in the October survey. The y-axis is net score, or spending momentum; the horizontal axis
is market share or presence in the data set and that red line that 40 percent anything over that we consider elevated so for the past eight quarters or so we've seen machine learning slash ai rpa containers and cloud is the four areas where cios and technology buyers have shown the highest net scores and as we've said what's so impressive for cloud is it's both pervasive and it shows high velocity from a spending standpoint and we plotted the three other data related areas database edw analytics bi and big data and storage the first two well under the red line are still elevated the storage market continues to kind of plot along and we've we've plotted the outsourced it just to balance it out for context that's an area that's not so hot right now so i just want to point out that these areas ai automation containers and cloud they're all relevant to data and they're fundamental building blocks of data architectures as are the two that are directly related to data database and analytics and of course storage so it just gives you a picture of the spending sector so i wanted to share this slide jamark uh that that we presented in that you presented in your webinar i love this it's a taxonomy put together by matt turk who's a vc and he called this the the mad landscape machine learning and ai and data and jamock the key point here is there's no lack of tooling you've you've made the the data mesh concept sort of tools agnostic it's not like we need more tools to succeed in data mesh right absolutely great i think we have plenty of tools i think what's missing is a meta architecture that defines the landscape in a way that it's in step with organizational growth and then defines that meta architecture in a way that these tools can actually interoperable and to operate and integrate really well like the the clients right now have a lot of challenges in terms of picking the right tool regardless of the technology they go down the path either they have to go in and big you know bite into a big data solution and then try to fit the other integrated solutions around it or as you see go to that menu of large list of applications and spend a lot of time trying to kind of integrate and stitch this tooling together so i'm hoping that data mesh creates that kind of meta architecture for tools to interoperate and plug in and i think our conversation today around self-subjective platform um hopefully eliminate that yeah we'll definitely circle back because that's one of the questions we get all the time from the community okay let's review the four main principles of data mesh for those who might not be familiar with it and those who are it's worth reviewing jamar allow me to introduce them and then we can discuss a bit so a big frustration i hear constantly from practitioners is that the data teams don't have domain context the data team is separated from the lines of business and as a result they have to constantly context switch and as such there's a lack of alignment so principle number one is focused on putting end-to-end data ownership in the hands of the domain or what i would call the business lines the second principle is data as a product which does cause people's brains to hurt sometimes but it's a key component and if you start sort of thinking about it you'll and talking to people who have done it it actually makes a lot of sense and this leads to principle number three which is a self-serve data infrastructure which we're going to drill into quite a bit today and then the question we always 
get is when we introduce data meshes how to enforce governance in a federated model so let me bring up a more detailed slide jamar with the dependencies and ask you to comment please sure but as you said the the really the root cause we're trying to address is the siloing of the data external to where the action happens where the data gets produced where the data needs to be shared when the data gets used right in the context of the business so it's about the the really the root cause of the centralization gets addressed by distribution of the accountability end to end back to the domains and these domains this distribution of accountability technical accountability to the domains have already happened in the last you know decade or so we saw the transition from you know one general i.t addressing all of the needs of the organization to technology groups within the itu or even outside of the iit aligning themselves to build applications and services that the different business units need so what data mesh does it just extends that model and say okay we're aligning business with the tech and data now right so both application of the data in ml or inside generation in the domains related to the domain's needs as well as sharing the data that the domains are generating with the rest of the organization but the moment you do that then you have to solve other problems that may arise and that you know gives birth to the second principle which is about um data as a product as a way of preventing data siloing happening within the domain so changing the focus of the domains that are now producing data from i'm just going to create that data i collect for myself and that satisfy my needs to in fact the responsibility of domain is to share the data as a product with all of the you know wonderful characteristics that a product has and i think that leads to really interesting architectural and technical implications of what actually constitutes state has a product and we can have a separate conversation but once you do that then that's the point in the conversation that cio says well how do i even manage the cost of operation if i decentralize you know building and sharing data to my technical teams to my application teams do i need to go and hire another hundred data engineers and i think that's the role of a self-serve data platform in a way that it enables and empowers generalist technologies that we already have in the technical domains the the majority population of our developers these days right so the data platform attempts to mobilize the generalist technologies to become data producers to become data consumers and really rethink what tools these people need um and the last last principle so data platform is really to giving autonomy to domain teams and empowering them and reducing the cost of ownership of the data products and finally as you mentioned the question around how do i still assure that these different data products are interoperable are secure you know respecting privacy now in a decentralized fashion right when we are respecting the sovereignty or the domain ownership of um each domain and that leads to uh this idea of both from operational model um you know applying some sort of a federation where the domain owners are accountable for interoperability of their data product they have incentives that are aligned with global harmony of the data mesh as well as from the technology perspective thinking about this data is a product with a new lens with a lens that all of those 
policies that need to be respected by these data products such as privacy such as confidentiality can we encode these policies as computational executable units and encode them in every data product so that um we get automation we get governance through automation so that's uh those that's the relationship the complex relationship between the four principles yeah thank you for that i mean it's just a couple of points there's so many important points in there but the idea of the silos and the data as a product sort of breaking down those cells because if you have a product and you want to sell more of it you make it discoverable and you know as a p l manager you put it out there you want to share it as opposed to hide it and then you know this idea of managing the cost you know number three where people say well centralize and and you can be more efficient but that but that essentially was the the failure in your other point related point is generalist versus specialist that's kind of one of the failures of hadoop was you had these hyper specialist roles emerge and and so you couldn't scale and so let's talk about the goals of data mesh for a moment you've said that the objective is to extend exchange you call it a new unit of value between data producers and data consumers and that unit of value is a data product and you've stated that a goal is to lower the cognitive load on our brains i love this and simplify the way in which data are presented to both producers and consumers and doing so in a self-serve manner that eliminates the tapping on the shoulders or emails or raising tickets so how you know i'm trying to understand how data should be used etc so please explain why this is so important and how you've seen organizations reduce the friction across the data flows and the interconnectedness of things like data products across the company yeah i mean this is important um as you mentioned you know initially when this whole idea of a data-driven innovation came to exist and we needed all sorts of you know technology stacks we we centralized um creation of the data and usage of the data and that's okay when you first get started with where the expertise and knowledge is not yet diffused and it's only you know the privilege of a very few people in the organization but as we move to a data driven um you know innovation cycle in the organization as we learn how data can unlock new new programs new models of experience new products then it's really really important as you mentioned to get the consumers and producers talk to each other directly without a broker in the middle because even though that having that centralized broker could be a cost-effective model but if you if we include uh the cost of missed opportunity for something that we could have innovated well we missed that opportunity because of months of looking for the right data then that cost parented the cost benefit parameters and formula changes so um so to to have that innovation um really embedded data-driven innovation embedded into every domain every team we need to enable a model where the producer can directly peer-to-peer discover the data uh use it understand it and use it so the litmus test for that would be going from you know a hypothesis that you know as a data scientist i think there is a pattern and there is an insight in um you know in in the customer behavior that if i have access to all of the different informations about the customer all of the different touch points i might be able to discover that pattern 
and personalize the experience of my customer the liquid stuff is going from that hypothesis to finding all of the different sources be able to understanding and be able to connect them um and then turn them them into you know training of my machine learning and and the rest is i guess known as an intelligent product got it thank you so i i you know a lot of what we do here in breaking it house is we try to curate and then point people to new resources so we will have some additional resources because this this is not superficial uh what you and your colleagues in the community are creating but but so i do want to you know curate some of the other material that you had so if i bring up this next chart the left-hand side is a curated description both sides of your observations of most of the monolithic data platforms they're optimized for control they serve a centralized team that has hyper-specialized roles as we talked about the operational stacks are running running enterprise software they're on kubernetes and the microservices are isolated from let's say the spark clusters you know which are managing the analytical data etc whereas the data mesh proposes much greater autonomy and the management of code and data pipelines and policy as independent entities versus a single unit and you've made this the point that we have to enable generalists to borrow from so many other examples in the in the industry so it's an architecture based on decentralized thinking that can really be applied to any domain really domain agnostic in a way yes and i think if i pick one key point from that diagram is really um or that comparison is the um the the the data platforms or the the platform capabilities need to present a continuous experience from an application developer building an application that generates some data let's say i have an e-commerce application that generates some data to the data product that now presents and shares that data as as temporal immutable facts that can be used for analytics to the data scientist that uses that data to personalize the experience to the deployment of that ml model now back to that e-commerce application so if we really look at this continuous journey um the walls between these separate platforms that we have built needs to come down the platforms underneath that generate you know that support the operational systems versus supported data platforms versus supporting the ml models they need to kind of play really nicely together because as a user i'll probably fall off the cliff every time i go through these stages of this value stream um so then the interoperability of our data solutions and operational solutions need to increase drastically because so far we've got away with running operational systems an application on one end of the organization running and data analytics in another and build a spaghetti pipeline to you know connect them together neither of the ends are happy i hear from data scientists you know data analyst pointing finger at the application developer saying you're not developing your database the right way and application point dipping you're saying my database is for running my application it wasn't designed for sharing analytical data so so we've got to really what data mesh as a mesh tries to do is bring these two world together closer because and then the platform itself has to come closer and turn into a continuous set of you know services and capabilities as opposed to this disjointed big you know isolated stacks very powerful 
observations there so we want to dig a little bit deeper into the platform uh jamar can have you explain your thinking here because it's everybody always goes to the platform what do i do with the infrastructure what do i do so you've stressed the importance of interfaces the entries to and the exits from the platform and you've said you use a particular parlance to describe it and and this chart kind of shows what you call the planes not layers the planes of the platform it's complicated with a lot of connection points so please explain these planes and how they fit together sure i mean there was a really good point that you started with that um when we think about capabilities or that enables build of application builds of our data products build their analytical solutions usually we jump too quickly to the deep end of the actual implementation of these technologies right do i need to go buy a data catalog or do i need you know some sort of a warehouse storage and what i'm trying to kind of elevate us up and out is to to to force us to think about interfaces and apis the experiences that the platform needs to provide to run this secure safe trustworthy you know performance mesh of data products and if you focus on then the interfaces the implementation underneath can swap out right you can you can swap one for the other over time so that's the purpose of like having those lollipops and focusing and emphasizing okay what is the interface that provides a certain capability like the storage like the data product life cycle management and so on the purpose of the planes the mesh experience playing data product expense utility plan is really giving us a language to classify different set of interfaces and capabilities that play nicely together to provide that cohesive journey of a data product developer data consumer so then the three planes are really around okay at the bottom layer we have a lot of utilities we have that mad mac turks you know kind of mad data tooling chart so we have a lot of utilities right now they they manage workflow management you know they they do um data processing you've got your spark link you've got your storage you've got your lake storage you've got your um time series of storage you've got a lot of tooling at that level but the layer that we kind of need to imagine and build today we don't buy yet as as long as i know is this linger that allows us to uh exchange that um unit of value right to build and manage these data products so so the language and the apis and interface of this product data product experience plan is not oh i need this storage or i need that you know workflow processing is that i have a data product it needs to deliver certain types of data so i need to be able to model my data it needs to as part of this data product i need to write some processing code that keeps this data constantly alive because it's receiving you know upstream let's say user interactions with a website and generating the profile of my user so i need to be able to to write that i need to serve the data i need to keep the data alive and i need to provide a set of slos and guarantees for my data so that good documentation so that some you know someone who comes to data product knows but what's the cadence of refresh what's the retention of the data and a lot of other slos that i need to provide and finally i need to be able to enforce and guarantee certain policies in terms of access control privacy encryption and so on so as a data product developer i just work with 
this unit a complete autonomous self-contained unit um and the platform should give me ways of provisioning this unit and testing this unit and so on that's why kind of i emphasize on the experience and of course we're not dealing with one or two data product we're dealing with a mesh of data products so at the kind of mesh level experience we need a set of capabilities and interfaces to be able to search the mesh for the right data to be able to explore the knowledge graph that emerges from this interconnection of data products need to be able to observe the mesh for any anomalies did we create one of these giant master data products that all the data goes into and all the data comes out of how we found ourselves the bottlenecks to be able to kind of do those level machine level capabilities we need to have a certain level of apis and interfaces and once we decide and decide what constitutes that to satisfy this mesh experience then we can step back and say okay now what sort of a tool do i need to build or buy to satisfy them and that's that is not what the data community or data part of our organizations used to i think traditionally we're very comfortable with buying a tool and then changing the way we work to serve to serve the tool and this is slightly inverse to that model that we might be comfortable with right and pragmatists will will to tell you people who've implemented data match they'll tell you they spent a lot of time on figuring out data as a product and the definitions there the organizational the getting getting domain experts to actually own the data and and that's and and they will tell you look the technology will come and go and so to your point if you have those lollipops and those interfaces you'll be able to evolve because we know one thing's for sure in this business technology is going to change um so you you had some practical advice um and i wanted to discuss that for those that are thinking about data mesh i scraped this slide from your presentation that you made and and by the way we'll put links in there your colleague emily who i believe is a data scientist had some really great points there as well that that practitioners should dig into but you made a couple of points that i'd like you to summarize and to me that you know the big takeaway was it's not a one and done this is not a 60-day project it's a it's a journey and i know that's kind of cliche but it's so very true here yes um this was a few starting points for um people who are embarking on building or buying the platform that enables the people enables the mesh creation so it was it was a bit of a focus on kind of the platform angle and i think the first one is what we just discussed you know instead of thinking about mechanisms that you're building think about the experiences that you're enabling uh identify who are the people like what are the what is the persona of data scientists i mean data scientist has a wide range of personas or did a product developer the same what is the persona i need to develop today or enable empower today what skill sets do they have and and so think about experience as mechanisms i think we are at this really magical point i mean how many times in our lifetime we come across a complete blanks you know kind of white space to a degree to innovate so so let's take that opportunity and use a bit of a creativity while being pragmatic of course we need solutions today or yesterday but but still think about the experiences not not mechanisms that you need to buy so that 
was kind of the first step and and the nice thing about that is that there is an evolutionary there is an iterative path to maturity of your data mesh i mean if you start with thinking about okay which are the initial use cases i need to enable what are the data products that those use cases depend on that we need to unlock and what is the persona of my or general skill set of my data product developer what are the interfaces i need to enable you can start with the simplest possible platform for your first two use cases and then think about okay the next set of data you know data developers they have a different set of needs maybe today i just enable the sql-like querying of the data tomorrow i enable the data scientists file based access of the data the day after i enable the streaming aspect so so have this evolutionary kind of path ahead of you and don't think that you have to start with building out everything i mean one of the things we've done is taking this harvesting approach that we work collaboratively with those technical cross-functional domains that are building the data products and see how they are using those utilities and harvesting what they are building as the solutions for themselves back into the back into the platform but at the end of the day we have to think about mobilization of the large you know largest population of technologies we have we'd have to think about diffusing the technology and making it available and accessible by the generous technologies that you know and we've come a long way like we've we've gone through these sort of paradigm shifts in terms of mobile development in terms of functional programming in terms of cloud operation it's not that we are we're struggling with learning something new but we have to learn something that works nicely with the rest of the tooling that we have in our you know toolbox right now so so again put that generalist as the uh as one of your center personas not the only person of course we will have specialists of course we will always have data scientists specialists but any problem that can be solved as a general kind of engineering problem and i think there's a lot of aspects of data michigan that can be just a simple engineering problem um let's just approach it that way and then create the tooling um to empower those journalists great thank you so listen i've i've been around a long time and so as an analyst i've seen many waves and we we often say language matters um and so i mean i've seen it with the mainframe language it was different than the pc language it's different than internet different than cloud different than big data et cetera et cetera and so we have to evolve our language and so i was going to throw a couple things out here i often say data is not the new oil because because data doesn't live by the laws of scarcity we're not running out of data but i get the analogy it's powerful it powered the industrial economy but it's it's it's bigger than that what do you what do you feel what do you think when you hear the data is the new oil yeah i don't respond to those data as the gold or oil or whatever scarce resource because as you said it evokes a very different emotion it doesn't evoke the emotion of i want to use this i want to utilize it feels like i need to kind of hide it and collect it and keep it to myself and not share it with anyone it doesn't evoke that emotion of sharing i really do think that data and i with it with a little asterisk and i think the definition of data changes and that's 
why i keep using the language of data product or data quantum data becomes the um the most important essential element of existence of uh computation what do i mean by that i mean that you know a lot of applications that we have written so far are based on logic imperative logic if this happens do that and else do the other and we're moving to a world where those applications generating data that we then look at and and the data that's generated becomes the source the patterns that we can exploit to build our applications as in you know um curate the weekly playlist for dave every monday based on what he has listened to and the you know other people has listened to based on his you know profile so so we're moving to the world that is not so much about applications using the data necessarily to run their businesses that data is really truly is the foundational building block for the applications of the future and then i think in that we need to rethink the definition of the data and maybe that's for a different conversation but that's that's i really think we have to converge the the processing that the data together the substance substance and the processing together to have a unit that is uh composable reusable trustworthy and that's that's the idea behind the kind of data product as an atomic unit of um what we build from future solutions got it now something else that that i heard you say or read that really struck me because it's another sort of often stated phrase which is data is you know our most valuable asset and and you push back a little bit on that um when you hear people call data and asset people people said often have said they think data should be or will eventually be listed as an asset on the balance sheet and i i in hearing what you said i thought about that i said well you know maybe data as a product that's an income statement thing that's generating revenue or it's cutting costs it's not necessarily because i don't share my my assets with people i don't make them discoverable add some color to this discussion i think so i think it's it's actually interesting you mentioned that because i read the new policy in china that cfos actually have a line item around the data that they capture we don't have to go to the political conversation around authoritarian of um collecting data and the power that that creates and the society that leads to but that aside that big conversation little conversation aside i think you're right i mean the data as an asset generates a different behavior it's um it creates different performance metrics that we would measure i mean before conversation around data mesh came to you know kind of exist we were measuring the success of our data teams by the terabytes of data they were collecting by the thousands of tables that they had you know stamped as golden data none of that leads to necessarily there's no direct line i can see between that and actually the value that data generated but if we invert that so that's why i think it's rather harmful because it leads to the wrong measures metrics to measure for success so if you invert that to a bit of a product thinking or something that you share to delight the experience of users your measures are very different your measures are the the happiness of the user they decrease lead time for them to actually use and get value out of it they're um you know the growth of the population of the users so it evokes a very different uh kind of behavior and success metrics i do say if if i may that i probably 
come back and regret the choice of word around product one day because of the monetization aspect of it but maybe there is a better word to use but but that's the best i think we can use at this point in time why do you say that jamar because it's too directly related to monetization that has a negative connotation or it might might not apply in things like healthcare or you know i think because if we want to take your shortcuts and i remember this conversation years back that people think that the reason to you know kind of collect data or have data so that we can sell it you know it's just the monetization of the data and we have this idea of the data market places and so on and i think that is actually the least valuable um you know outcome that we can get from thinking about data as a product that direct cell an exchange of data as a monetary you know exchange of value so so i think that might redirect our attention to something that really matters which is um enabling using data for generating ultimately value for people for the customers for the organizations for the partners as opposed to thinking about it as a unit of exchange for for money i love data as a product i think you were your instinct was was right on and i think i'm glad you brought that up because because i think people misunderstood you know in the last decade data as selling data directly but you really what you're talking about is using data as a you know ingredient to actually build a product that has value and value either generate revenue cut costs or help with a mission like it could be saving lives but in some way for a commercial company it's about the bottom line and that's just the way it is so i i love data as a product i think it's going to stick so one of the other things that struck me in one of your webinars was one of the q a one of the questions was can i finally get rid of my data warehouse so i want to talk about the data warehouse the data lake jpmc used that term the data lake which some people don't like i know john furrier my business partner doesn't like that term but the data hub and one of the things i've learned from sort of observing your work is that whether it's a data lake a data warehouse data hub data whatever it's it should be a discoverable node on the mesh it really doesn't matter the the technology what are your your thoughts on that yeah i think the the really shift is from a centralized data warehouse to data warehouse where it fits so i think if you just cross that centralized piece uh we are all in agreement that data warehousing provides you know interesting and capable interesting capabilities that are still required perhaps as a edge node of the mesh that is optimizing for certain queries let's say financial reporting and we still want to direct a fair bit of data into a node that is just for those financial reportings and it requires the precision and the um you know the speed of um operation that the warehouse technology provides so i think um definitely that technology has a place where it falls apart is when you want to have a warehouse to rule you know all of your data and model canonically model your data because um it you have to put so much energy into you know kind of try to harness this model and create this very complex the complex and fragile snowflake schemas and so on that that's all you do you spend energy against the entropy of your organization to try to get your arms around this model and the model is constantly out of step with what's happening in reality 
because reality the model the reality of the business is moving faster than our ability to model everything into into uh into one you know canonical representation i think that's the one we need to you know challenge not necessarily application of data warehousing on a node i want to close by coming back to the issues of standards um you've specifically envisioned data mesh to be technology agnostic as i said before and of course everyone myself included we're going to run a vendor's technology platform through a data mesh filter the reality is per the matt turc chart we showed earlier there are lots of technologies that that can be nodes within the data mesh or facilitate data sharing or governance etc but there's clearly a lack of standardization i'm sometimes skeptical that the vendor community will drive this but maybe like you know kubernetes you know google or some other internet giant is going to contribute something to open source that addresses this problem but talk a little bit more about your thoughts on standardization what kinds of standards are needed and where do you think they'll come from sure i mean the you write that the vendors are not today incentivized to create those open standards because majority of the vet not all of them but some vendors operational model is about bring your data to my platform and then bring your computation to me uh and all will be great and and that will be great for a portion of the clients and portion of environments where that complexity we're talking about doesn't exist so so we need yes other players perhaps maybe um some of the cloud providers or people that are more incentivized to open um open their platform in a way for data sharing so as a starting point i think standardization around data sharing so if you look at the spectrum right now we have um a de facto sound it's not even a standard for something like sql i mean everybody's bastardized to call and extended it with so many things that i don't even know what this standard sql is anymore but we have that for some form of a querying but beyond that i know for example folks at databricks to start to create some standards around delta sharing and sharing the data in different models so i think data sharing as a concept the same way that apis were about capability sharing so we need to have the data apis or analytical data apis and data sharing extended to go beyond simply sql or languages like that i think we need standards around computational prior policies so this is again something that is formulating in the operational world we have a few standards around how do you articulate access control how do you identify the agents who are trying to access with different authentication mechanism we need to bring some of those our ad our own you know our data specific um articulation of policies uh some something as simple as uh identity management across different technologies it's non-existent so if you want to secure your data across three different technologies there is no common way of saying who's the agent that is acting uh to act to to access the data can i authenticate and authorize them so so those are some of the very basic building blocks and then the gravy on top would be new standards around enriched kind of semantic modeling of the data so we have a common language to describe the semantic of the data in different nodes and then relationship between them we have prior work with rdf and folks that were focused on i guess linking data across the web with the um kind of the 
data web, I guess, work that we had in the past. We need to revisit those and see their practicality in the enterprise context. So data modeling, a rich language for data semantic modeling, and, most importantly, data connectivity. I think those are some of the items on my wish list. That's good. Well, we'll do our part to try to keep the standards, you know, push that movement. Zhamak, we're going to leave it there. I'm so grateful to have you come on to the Cube. I really appreciate your time. It's always a pleasure. You're such a clear thinker, so thanks again. Thank you, Dave. It's wonderful to be here. Now, we're going to post a number of links to some of the great work that Zhamak and her team have done, and to her books, so check that out. Remember, we publish each week on siliconangle.com and wikibon.com, and these episodes are all available as podcasts wherever you listen. Just search Breaking Analysis podcast. Don't forget to check out etr.plus for all the survey data. Do keep in touch. I'm @dvellante, follow Zhamak, d z h a m a k d, or you can email me at david.vellante@siliconangle.com. Comment on the LinkedIn post. This is Dave Vellante for the Cube Insights powered by ETR. Be well, and we'll see you next time.
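To make the data product discussion above a bit more concrete, here is a minimal, hypothetical Python sketch of a data product descriptor that carries its output schema, its SLOs (refresh cadence, retention), and its policies as executable code in one self-contained unit. Every name and field here is invented for illustration; this is not Dehghani's specification, nor any vendor's API.

```python
# Illustrative only: a hypothetical, minimal "data product descriptor" showing how
# output data, SLOs, and computational policies might travel together as one
# self-contained unit. Names and fields are invented for this sketch; they are not
# part of any data mesh standard or product.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class OutputPort:
    name: str                 # e.g. "customer_touchpoints"
    schema: Dict[str, str]    # column name -> type, the shared "API" of the data
    refresh_cadence: str      # SLO: how often consumers can expect new data
    retention: str            # SLO: how long history is kept


@dataclass
class Policy:
    """A policy expressed as code so it can be evaluated, not just documented."""
    name: str
    check: Callable[[dict], bool]   # given an access request, allow or deny


@dataclass
class DataProduct:
    domain: str
    name: str
    outputs: List[OutputPort]
    policies: List[Policy] = field(default_factory=list)

    def authorize(self, request: dict) -> bool:
        # Every embedded policy must pass for the request to be allowed.
        return all(p.check(request) for p in self.policies)


# Usage sketch: a domain team publishes a product with an SLO and a privacy policy.
touchpoints = DataProduct(
    domain="customer",
    name="customer-touchpoints",
    outputs=[OutputPort(
        name="touchpoints_daily",
        schema={"customer_id": "string", "channel": "string", "ts": "timestamp"},
        refresh_cadence="daily",
        retention="365 days",
    )],
    policies=[Policy(
        name="no-raw-pii-outside-domain",
        check=lambda req: req.get("domain") == "customer" or req.get("pii") is False,
    )],
)

print(touchpoints.authorize({"domain": "marketing", "pii": False}))  # True
print(touchpoints.authorize({"domain": "marketing", "pii": True}))   # False
```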

Published Date : Oct 25 2021


Kostas Roungeris & Matt Ferguson, Cisco | Cisco Live EU Barcelona 2020


 

>>live from Barcelona, Spain. It's the Cube covering Cisco Live 2020 right to you by Cisco and its ecosystem partners >>back. This is the Cube's coverage >>of Cisco Live 2020 here in Barcelona, doing about three and 1/2 days of wall to wall coverage here. Stew Minimum. My co host for this segment is Dave Volante. John Furrier is also here, scaring the floor and really happy to welcome to the program. Two first time guests. I believe so. Uh, Derek is the product manager of product marketing for Cloud Computing with Cisco, and sitting to his left is Matt Ferguson, who's director of product development, also with the Cisco Cloud Group. David here from Boston. Matt is also from the Boston area, and customers is coming over from London. So thanks so much for joining us. Thank you. All right, so obviously, cloud computing something we've been talking about many years. We've really found fascinating relationship. Cisco's had, with its customers a zealous through the partner ecosystem, had many good discussions about some of the announcements this week. Maybe start a little bit, you know, Cisco's software journey and positioning in the cloud space right now. >>So it's a really interesting dynamic when we started transitioning to multi cloud and we actually deal with Cloud and Compute coming together and we've had whether you're looking at three infrastructure ops organization or whether you're looking at the APS up operations or whether you're looking at, you know, your DEV environment, your security operations. Each organization has to deal with their angle, which they view, you know, multi cloud. Or they view how they actually operate within those the cloud computing context. And so whether you're on the infrastructure side, you're looking at compute. You're looking at storage. You're looking at resources. If you're on an app operator, you're looking at performance. You're looking at visibility assurance. If you are in the security operations you're looking at, maybe governance. You're looking at policy, and then when you're a developer, you really sort of thinking about see I CD. You're talking about agility, and there's very few organizations like Cisco that actually is looking at from a product perspective. All those various angles >>of multi cloud >> is definitely a lot of pieces. Maybe up level it for us a little bit. There's so many pieces way talk for so long. You know you don't talk to any company that doesn't have a cloud. Strategy doesn't mean that it's not going to change over time. And it means every company's got a known positioning. But talk about the relationship Cisco has with its customers and really the advisory condition that you want to have with >>its actually a very relevant question. To what? To what Matt is talking about, because Wei talked a lot about multi cloud as a trend and hybrid clouds and this kind of relationship between the traditional view of looking at computing data centers and then expanding to different clouds. You know, public cloud providers have now amazing platform capabilities. And if you think about it, the it goes back to what Matt said about I t ops and development kind of efforts. Why is this happening? Really, there's There's the study that we did with with an analyst, and there was an amazing a shocking stats around how within the next three years, organizations will have to support 50% more applications than they do now. 
And we have been trying to test this, that our events that made customer meetings etcetera, that is a lot of a lot of change for organizations. So if you think about why are they use, why do they need to basically let go and expand to those clouds? Is because they want to service. I T ops teams want to servers with capabilities, their developers faster, right? And this is where you have within the I T ops kind of theme organization. You have the security kind of frame through the compute frame, the networking where, you know Cisco has a traditional footprint. How do you blend all this? How do you bring all this together in a linear way to support individual unique application modernization efforts? I think that's what we're hearing from customers in terms of the feedback. And this is what influences our >>strategy to converts the different business units. And it's an area engineering effort, right? >>I want to poke at that a little bit. I mean, a couple years ago, I have to admit I was kind of a multi cloud skeptic. I always said that I thought it was more of a symptom that actually strategy a symptom of shadow I t and different workloads and so forth, but now kind of buying in because I think I t in particular has been brought in to clean up the crime scene. I often say so I think it is becoming a strategy. So if you could help us understand what you're hearing from customers in terms of their strategy towards multi cloud and how Cisco it was mapping into that, >>yeah, so So when we talk to customers, it comes back to the angle at which they're approaching the problem. And, like you said, that shadow I t. Has been probably around for longer than anybody want, cares to admit, because people want to move faster. Organizations want to get their product out to market sooner. And so what? What really is we're having conversations now about, you know, how do I get the visibility? How do I get you know, the policies in the governance so that I can actually understand either how much I'm spending in the cloud or whether I'm getting the actual performance that I'm looking for, that I need that connectivity. So I get the bandwidth, and so these are the kinds of conversations that we have with customers is going. I realized that this is going on now I actually have to Now put some governance and controls around. That is their products is their solutions, is there? You know, they're looking to Cisco to help them through this journey because it is a journey. Because as much as we talk about cloud and you know, companies that were born in the cloud cloud native there is a tremendous number of I see organizations that are just starting that journey that are just entering into this phase where they have to solve these problems. >>Yeah, I agree. And they're starting the journey with a deliberate strategy as opposed to Okay, we got this thing. But if you think about the competitive landscape, it's kind of interesting. And I want to try to understand where Cisco fits because again, you initially had companies that didn't know in a public cloud sort of pushing multi cloud. You say? Well, I guess they have to do that. But now you see, and those come out with Google, you see Microsoft leaning in way. Think eventually aws is gonna lead in. And then you say I'm kind of interested in working with some of these cloud agnostic not trying to force Now, now Cisco. A few years ago, you didn't really think about Cisco as a player. Now this goes right in the middle. 
I have said often that Cisco's in a great position John Furrier as well to connect businesses and from a source of networking strength, making a strong argument that we have the most cost effective, most secure, highest performance networks to connect clouds. That seems to be a pretty fundamental strength of yours. And does that essentially summarize your strategy? And And how does that map into the actions that you're taking in terms of products and services that you're bringing to market? >>I would say that I can I can I can take that. Yeah, for sure. It's chewy question for hours. So I was thinking about satellite you mentioned before. Like Okay, that's, you know, the world has turned around completely way seem to talk about Target satellite Is something bad happening? And now, suddenly we completely forgot about it, like let let free, free up the developers and let them do whatever they want. And basically that is what I think is happening out there in the market. So all of the solutions you mentioned in the go to market approaches and the architectures that the public cloud providers at least our offering out there. Certainly the Big Three have differences have their strengths on. And I think those things are closer to the developer environment. Basically, you know, if you're looking into something like AI ml, there's one provider that you go with. If you're looking for a mobile development framework, you're gonna go somewhere else. If you're looking for a D, are you gonna go somewhere else? Maybe not a big cloud, but your service provider. But you've been dealing with all this all this time, so you know that they have their accreditation that you're looking for. So where does Cisco come in? You know, we're not a public cloud provider way offer products as a service from our data centers and our partners data centers. But at the the way that the industry sees a cloud provider a public cloud like AWS Azure, Google, Oracle, IBM, etcetera, we're not that we don't do that. Our mission is to enable organizations with software hardware products SAS products to be able to facilitate their connectivity, security, visibility, observe ability, and in doing business and in leveraging the best benefits from those clubs. So way kind of way kind of moved to a point where we flip around the question, and the first question is, Who is your club provider? What? How many? Tell us the clouds you work with, and we can give you the modular pieces you can put we can put together for you. So these, so that you can make the best out of >>your club. Being able to do that across clouds in an environment that is consistent with policies that are consistent, that represent the edicts of your organization, no matter where your data lives, that's sort of the vision and the way >>this is translated into products into Cisco's products. You naturally think about Cisco as the connectivity provider networking. That's that's really sort of our, you know, go to in what we're also when we have a significant computing portfolio as well. So connectivity is not only the connectivity of the actual wire between geography is point A to point B. In the natural routing and switching world, there's connectivity between applications between compute and so this week. 
You know, the announcements were significant in that space when you talk about the compute and the cloud coming together on a single platform, that gives you not only the ability to look at your applications from an experience journey map so you can actually know where problems might occur in the application domain. You can actually, then go that next level down into the infrastructure level and you can say, Okay, maybe I'm running out of some sort of resource, whether it's compute resource, whether it's memory, whether it's on your private cloud that you have enabled on Prem, or whether it's in the public cloud, that you have that application residing and then, quite candidly, you have the actual hardware itself. So inter site. It has an ability to control that entire stack so you can have that visibility all the way down to the hardware layer. >>I'm glad you brought up some of the applications. I wonder if we could stay there for a moment. Talk about some of the changing patterns for customers. A lot of talk in the industry about cloud native often gets conflated with micro services, container ization and lots of the individual pieces there. But when one of Our favorite things have been talking about this week is software that really sits at the application layer and how that connects down through some of the infrastructure pieces. So help us understand what you're hearing from customers and how you're helping them through this transition to cost. You're saying, Absolutely, there's going to be lots of new applications, more applications and they still have the old stuff that they need to continue to manage because we know in I t nothing ever goes away. Yeah, >>that's that's definitely I was I was thinking, you know, there's there's a vacuum at the moment on and there's things that Cisco is doing from from a technology perspective to fill that gap between application. What you see when it comes to monitoring, making sure your services are observable. And how does that fit within the infrastructure stack, You know, everything upwards of the network layer. Basically, that is changing dramatically. Some of the things that matter touched upon with regards to, you know, being able to connect the networking, the security and the infrastructure of the compute infrastructure that the developers basically are deploying on top. So there's a lot of the desert out of things on continue ization. There's a lot of, in fact, it's one part of the off the shelf inter site of the stack that you mentioned and one of the big announcements. Uh huh. You know that there's a lot of discussion in the industry around. Okay, how does that abstract further the conversation on networking, for example? Because that now what we're seeing is that you have a huge monoliths enterprise applications that are being carved down into micro services. Okay, they know there's a big misunderstanding around what is cloud native? Is it related to containers? Different kind of things, right? But containers are naturally the infrastructure defacto currency for developers to deploy because of many, many benefits. But then what happens between the kubernetes layer, which seems to be the standard and the application? Who's going to be managing services talking to each other that are multiplying? You know, things like service mesh, network service mess? How is the never evolving to be able to create this immutable infrastructure for developers to deploy applications? 
So there's so many things happening at the same time where Cisco has actually a lot of taking a lot of the front seat. Leading that conversation >>is where it gets really interesting. Sort of hard to squint through because you mentioned kubernetes is the de facto standard, but it's a defacto standard that's open everybody's playing with. But historically, this industry has been defined by a leader comes out with a de facto standard kubernetes, not a company. It's an open standard, so but there's so many other components than containers. And so history would suggest that there's going to be another defacto standard or multiple standards that emerge. And your point earlier. You got to have the full stack. You can't just do networking. You can't just do certain if you so you guys are attacking that whole pie. So how do you think this thing will evolve? I mean, you guys obviously intend to put out a stat cast a wide net as possible, captured not only your existing install basement attract, attract others on you're going aggressively at it as a czar. Others How do you see it shaking out? You see you know, four or five pockets, you see one leader emerging. I mean, customers would love all you guys to get together and come up with standards. That's not going to happen. So where it's jump ball right now? >>Well, yeah. You think about, you know, to your point regarding kubernetes is not a company, right? It is. It is a community driven. I mean, it was open source by a large company, but it's community driven now, and that's the pace at which open source is sort of evolving. There is so much coming at I t organizations from a new paradigm, a new software, something that's, you know, the new the shiny object that sort of everybody sort of has to jump onto and sort of say, that is the way we're gonna function. So I t organizations have to struggle with this influx of just every coming at them and every angle. And I think what starting toe happen is the management and the you know that Stack who controls that or who is helping i t organizations to manage it for them. So really, what we're trying to say is there's elements that have to put together that have to function, and kubernetes is just one example Docker, the operating system that associated with it that runs all that stuff then you have the application that goes right sides on top of it. So now what we have to have is things like what we just announced this week. Hx AP the application platform for a check so you have the Compute cluster, but then you have the stack on top of that that's managed by an organization that's looking at the security that's looking at the the actual making opinions about what should go in the stack and managing that for you. So you don't have to deal with that because you just focus on the application development. Yeah, >>I mean, Cisco's in a strong position to do. There's no question about it. To me, it comes down to execution. If you guys execute and deliver on the products and services that you say, you know, you announced, for instance, this weekend previously, and you continue on a road map, you're gonna get a fair share of this market place. I think there's no question >>so last topic before we let you go is love your viewpoint on customers. What's separating kind of leaders from you know, the followers in this space, you know, there's so much data out there. And I'm a big fan of the State of Dev Ops report Help separate, You know, some not be not. 
Here's the technology or the piece, but the organizational and, you know, dynamics that you should do. So it sounds like you like that report also, love. What do you hear from customers? How do you help guide them towards becoming leaders in the cloud space? >>Yeah, The State of Dev Ops report was fascinating. I mean, they've been doing that for a number of years now. Yeah, exactly. And really what? It's sort of highlighting is two main factors that I think that are in this revolution or the third paradigm shift. Our journey we're going through, there's the technology side for sure, and so that's getting more complex. You have micro services, you have application explosion. You have a lot of things that are occurring just in technology that you're trying to keep up. But then it's really about the human aspect of human elements, the people about it. And that's really I think, what separates you know, the elites that are really sort of, you know, just charging forward and ahead because they've been able to sort of break down the silos because really, what you're talking about in cloud Native Dev Ops is how you take the journey of the experience of the service from end end from the development all the way to production. And how do you actually sort of not have organizations that look at their domain their data, set their operations and then have to translate that or have to sort of you have another conversation with another organization that that doesn't look at that, That has no experience of that? So that is what we're talking about, that end and view. >>And in addition to all the things we've been talking about, I think security's a linchpin here. You guys are executing on security. You got a big portfolio and you've seen a lot of M and A and a lot of companies trying to get in, and it's gonna be interesting to see how that plays out. But that's going to be a key because organizations are going to start there from a strategy standpoint, and they build out >>Yeah, absolutely. If you follow Dev ops methodologies, security gets baked in along the way so that you're not having to 100% gone after anything, just give you the final word. >>I was just a follow up with You. Got some other model was saying, There's so many, there's what's happening out there Is this democracy around? Standards with is driven by communities and way love that in fact, Cisco is involved in many open sores community projects. But you asked about customers and just right before you were asking about you know who is gonna be the winner. There's so many use cases. >>Uh huh. >>There's so much depth in Tim's off. You know what customers want to do with on top of kubernetes, you know, take Ai Ml, for example, something that we have way have some, some some offering services on there's cast. A mother wants to ai ml their their container stuck. Their infrastructure will be so much different to someone else, is doing something just hosting. And there's always going to be a SAS provider that is niche servicing some oil and gas company, you know, which means that the company of that industry will go and follow that instead of just going to a public cloud provider that is more agnostic. Does that make sense? Yeah. >>Yeah. There's relationships that exist that are just gonna get blown away. That add value today. And they're not going to just throw him out. Exactly. >>Well, thank you so much for helping us understand the updates where your customers are driving super exciting space. Look forward to keeping an eye on it. 
Thanks so much. Alright, there's still lots more coming here from Cisco Live 2020 in Barcelona. People are standing watching all the developer events, lots going on the floor and we still have more. So thank you for watching the Cube. Yeah, yeah.

Published Date : Jan 30 2020


Val Bercovici, PencilDATA & Ed Yu, StrongSalt | AWS re:Inforce 2019


 

>> live from Boston, Massachusetts. It's the Cube covering A W s reinforce 2019. Brought to you by Amazon Web service is and its ecosystem partners. >> Hey, welcome back and run cubes. Live coverage of A W S Amazon Webster's reinforced their inaugural conference around security here in Boston. Messages. I'm John for a day. Volante Day we've been talking about Blockchain has been part of security, but no mention of it here. Amazon announced a Blockchain intention, but was more of a service model. Less of a pure play infrastructure or kind of a new game changes. So we thought we would get our friends to come on, the Cuban tell. Tell us about it. Val Birch, Avicii CEO and founder. A pencil day that Cube alumni formerly of NetApp, among other great companies, and Ed You, founder and CEO of Strong Salt. Welcome to the Q. Tell us why aren't we taught him a Blockchain at a security conference on cloud computing, where they always resource is different. Paradigm is decentralized. What's your take? >> So maybe having been in this world for about 18 24 months now, Enterprise lodging reinvents about six months ago and jazz he mentioned that he finally understood US enterprise an opportunity, and it was the integrity value, finest complex, even announced a specific product announced database available, >> maybe bythe on cryptographic verifiability of transactions minus the complexity of smart contract wallets. Wait, you party with Amazon way too. Versions right? One for distributed use cases. When I call, everyone rises. Never like you need to know what >> the Amazon wants to be that hard on top like complexity. But the reality is, they're they're They're world is targeting a new generation star 14 show is the new generation of developing >> a >> new generation of David. They were. Some of those are in trouble, and I'm hard core on this because it's just so obvious. >> I just can't get him behind myself if you don't >> see this out quicker. The new developers are younger and older systems people. There's a range of ages doing it. They're they're seeing the agility, and it's a cultural shift, not just the age thing. Head this. They're not here right now. This is the missing picture of this show, and my criticism of reinforces big, gaping hole around crypto and blocks, >> and I actually know that people I don't see anything here because it is difficult to currency. >> Blocking is very important that people understand way. Launch strong allows you to see the launching. I don't think that works. Basically, Just like Well, well said everything you do, you always have a single source. I think that's something that people doing this thing here. You want to get your thoughts on this because you made a comment >> about security native being the team here and security native implying that Dev ops what they did for configuration hardening the infrastructures code. You have to consider this token economic business model side of it with the apple cases, a decision application is still an application. Okay. Blockchain is still in infrastructure dynamic their software involved. I mean, we're talking about the same thing is they're lost in translation. In your opinion? >> Well, yeah, I think that you know, to your point, Val, if you can abstract that complexity away, But the fundamentals of of cryptography and software engineering and game theory coming together is what always has fascinated me about this space. And so you're right. 
I think certainly enterprise customers don't wanna you know, they hear crypto, though no, although it's interesting it was just a conference IBM yesterday. They talk a lot about Blockchain. Don't talk about crypto to me. They go together. Of course, IBM. They don't like to talk a lot about job loss and automation, but But the reality is it's there and it's it's it's has a lot of momentum, which is why you started the company. >> Yeah, we're actually seeing it all over right now. And again, our thing is around reducing, If not eliminating the friction towards adopting Blockchain so less is more. In our case, we're explicitly choosing not to do crypto wallets or currency transactions. It's that Andy Jassy observation the integrity value, the core integrity, value for financial reconciliation, for detecting supply chain counterfeiting for tracking assets and inventory across to your distribution. Unifying multiple source systems of record into a shared state. Those are the kinds of applications received >> culture, and there's so many different use cases, obviously, so >> an Amazon likes to use that word. Words raised the bar, which is more functionality, but on the other, phrases undifferentiated, heavy lifting. There's a lot of details involved in some of those complexity exactly what you're talking about that can be automated away. That's goodness. But you still have a security problem of mutability, which is a beautiful thing with Blockchain. >> Actually, a lot of times people actually forgot to mention one thing that blotchy and all you do that's actually different before was Actually privacy is actually not just security is also privacy, which actually is getting bigger and bigger. As we know, it's something that people feel very strongly about because it's something they feel personal about. And that's something that, in fact, took economics encourages a lot of things that enables privacy that was not able to do before. >> Well, look at Facebook. What do you think about >> face? I'm wonder that you know, I'm a public face book critic. I think they've been atrocious job on the privacy front so far in protecting our data. On the other hand, if you know it's kind of like the mullahs report, if you actually read Facebook's white paper, it's a it's not a launch. It's an announcement. That's a technical announcement. It's so well written, designed so far, and it's Facebook doesn't completely control it. They do have a vision for program ability. They're evolving it from being a permissions toe, ultimately a permission less system. So on paper, I like what I read. And I think it will start to, you know, popularizing democratize the notion of crypto amongst the broader population. I'm going to take a much more weight see approach. Just you know, >> I always love Facebook. I think the den atrocious job. But I'm addicted. I have all my stuff on there, um, centralized. They're bringing up, they bring in an education. Bitcoin is up for a reason. They're bringing the masses. They're showing that this is real market. This is kind of like when the web was still viewed as Kitty Playground for technologists say, Oh, well, it's so slow. And that was for dummies. And you had the Web World Wide Web. So when that hit, that same arguments went down right this minute, crypto things for years. But with Facebook coming, it really legitimizes that well, you bring 2,000,000,000 people to the party. Exactly a lot of good. 
Now the critics of Facebook is copied pass craft kind of model and there's no way they're gonna get it through because the world's not gonna let Facebook running run commerce and currents. It's like it's like and they don't do it well anyway. So I think it's gonna be a game changing market making move. I think they'll have a play in there, but I don't think that's not gonna have a global force. Says a >> lot that you get 100 companies to put up 10 >> 1,000,000 Starship is already the first accomplice. >> They don't need any more money. We have my dear to us, but >> still the power but the power of that ecosystem to me. I was a big fan of this because I think it gives credibility. So many companies get get interested in it, and I'm not sure exactly what's gonna come out of it. It's interesting that, you know, Bitcoins up. They said, Oh, cell, you're becoming like No, no, no, this is This is a very mature >> Well, I I think open is gonna always win. If you look at you know, the Web's kind of one example of kind of maturity argument. I think the rial analog for me, at least my generation value probably relate to this. David, you as well, you know, I've been born yet you are But, you know, T c p I p came after S n a which IBM on the deck net was the largest network at that time to >> not serious. Says >> mammal. Novell was land all three proprietary network operating systems. So proprietary Narcisse decimated by T c p i p. So to me, I think even their Facebook does go in there. They will recognize that unless they stay open, I think open will always win. I think I think this is the beginning of the death of the closed platform. >> Yeah, they're forced her. I think they have to open it up because if you didn't open up, people won't trust them, and people will use them. And if a Blockchain if you don't have a community behind it, there will be nothing. >> Well, so the thing about the crypto spraying everywhere with crypto winter, But but to your point d c p i p h t t p d >> N s SMTP >> Those were government funded or academic funded protocols. People stop spending money on him, and then the big Internet companies just co opted. No, no, that's what G mails built on. >> Well, I've always said >> so But when you finish the thought, is all this crypto money that came in drove innovation? Yeah, So you're seeing, you know, this new Internet emerge, and I think it's it's really think people, you know, sort of overlooked a lot of the innovation that's >> coming. I have always said, Dave, that Facebook is what the Web would look like if Tim Berners Lee took venture financing. Okay, because what they had at the time was a browser and the way that stand up websites for self service information. They kept it open and it drives. Facebook became basically the Web's version of a, well, lengthen does the same Twitter has opened. They have no developer community. So yeah, I think it is the only company in my opinion, actually does a good job opening up their data. Now they charge you for that. It brings up way still haven't encrypt those. The only community that's entire ethos is based on openness and community you mentioned. And that is a key word >> in traditional media. Of course, focus on the bad stuff that happens, but you know those of us in the business who will pay attention to it, see There's a lot of goodness to is a lot of mission driven, a lot of openness, and it's a model for innovation. What do you guys think about the narrative now to break up big tech? 
You know, you're hearing Facebook, Amazon, Google coming under fire. What are your thoughts on that? >> So I wrote a blog, maybe it was ahead of its time, about 18 months ago. It coincided with Ginni Rometty at Davos in 2018, 2019, talking about data responsibility. The reason we're having this conversation is that the tech industry, by and large, and especially the FANG stocks or whatever we're calling them now, have been irresponsible with our data. The backlash is palpable in Europe; it's law in Europe. The backlash, we knew, was going to start at the state level here, and it's already ahead of my personal schedule: federal discussions, the FTC and DOJ just a couple of weeks ago. So it's inevitable that this sort of tech reckoning is coming. More responsibility is going to have to be demonstrated by all the custodians of our data, and that's why we're positioning ChainKit as chain-of-custody-as-a-service: to demonstrate to the regulators, your customers, your partners, your suppliers, transparency, irrefutable transparency, using blockchain for how you're handling data. If you don't have that transparency and can't prove it, we're back to the same old discussions, back to uninformed old legislators making, you know, "the Internet is a series of tubes" type regulations. >> Hear, hear. And the DOJ, you could argue, may have been too slow to respond to Microsoft back in the nineties. I'm not sure breaking up big tech is the right thing, because I think it's almost like AT&T: the little techs will become big techs again. But they should not be breaking the law. >> I think there's a reason why: there's actually a limitation on what is possible in technology. What Congress understands, and Facebook understands as well, is that it's actually very, very hard to have data that's owned by your customers while you are the one keeping track of everything and you are the one using the data, right? It's like a no-win, because if you think about encryption, cryptography, yes, you can make the data encrypted so that the customer has the key and they control it, but then Facebook can't offer the services. So now you have Congress thinking, well, if there's no technological way of doing this, what can you do from a legal perspective, from a law perspective, to make it so that the customer actually owns the data? We actually think that is a perfect reason why, to actually fix this, Facebook should be built on our platform, because we allow them to have data that's encrypted and still be able to operate on that data if the customer gives them the permission to do so. And I think that's the perfect way to go forward. And I think blockchain is the fundamental thing that brings everybody together in a way that actually benefits everyone. >> Ed, take a minute to explain StrongSalt, your project. What's it about? What's the mission? Where are you? >> So we see StrongSalt as privacy first. We literally are building a platform where developers, including Facebook, LinkedIn and Salesforce, can build on top of the platform, right? So what happens when you do this is that they actually give the data governance to the customers; the customers own their data. But because of our cryptography, they can still offer services to the customers when a customer allows them to do so. For example, we have something called searchable encryption that allows you to encrypt the data and still run a search, a query, on the data without decrypting the data.
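The searchable-encryption idea described here can be sketched with a blind keyword index: documents are encrypted client-side, and search terms are matched as keyed hashes so the server never sees plaintext. This is a toy illustration only, not StrongSalt's actual scheme or API; it assumes the third-party `cryptography` package, and all names are made up for the example.

```python
import hmac, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# Client-side secrets: one key encrypts documents, another blinds index terms.
enc_key = Fernet.generate_key()
index_key = b"separate-secret-for-the-blind-index"
fernet = Fernet(enc_key)

def blind(term: str) -> str:
    """Deterministic keyed hash of a search term; only the key holder can produce it."""
    return hmac.new(index_key, term.lower().encode(), hashlib.sha256).hexdigest()

# What the "server" stores: ciphertexts plus an index of opaque tokens.
ciphertexts, index = {}, {}

def client_put(doc_id: str, text: str) -> None:
    ciphertexts[doc_id] = fernet.encrypt(text.encode())
    for term in set(text.lower().split()):
        index.setdefault(blind(term), set()).add(doc_id)

def server_search(token: str) -> set:
    # The server matches tokens without ever decrypting anything.
    return index.get(token, set())

client_put("doc1", "quarterly fraud review for the audit team")
client_put("doc2", "holiday schedule for the audit team")

print(server_search(blind("fraud")))                 # {'doc1'}
print(fernet.decrypt(ciphertexts["doc1"]).decode())  # only the key holder can read it
```

In a production design the index would also be randomized per document to resist frequency analysis; the point of the sketch is only that search can happen against encrypted data when the customer holds the keys.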
First, by giving the power to developers and also to the community there, our APIs can abstract what you currently use, but they're not hard to use, they're frictionless, and they still offer the same services that, frankly, Facebook or Salesforce offer today. >> You could do some discovery on it. >> You can do things-- >> Some programmability around it-- >> Exactly, even though the data is encrypted. But the customer owns the data, so the customer has to give them permission to do so, right? This way... in fact, we actually launched the first app that I told you about; it's called StrongVault. You can download it on iOS or Android, and you can see the blockchain play a little role. You can see, right at your fingertips, what happens to your data. You see everything that happens when you share a file or open a file or something like that. >> Congratulations. Val, give a quick plug for your project, ChainKit, under the new branding, formerly Pencil Data. Where are you on your project? >> So after nine months of hard selling, we're finding out what customers are actually paying for right now. In our case, it's hardening their apps, their data and their logs, and wrapping the chain of custody around those things. And the use case at a security conference like this is actually quite existential when you think about it. One of the things the industry doesn't talk enough about is that every attack we read about in the headlines happened via privilege escalation. So the attackers somehow hacked your web server, managed to get administrative credentials, then network or domain administrative credentials. And here's what professional attackers do once they have godlike authority on your network: they identify all the installed security solutions and they make themselves invisible, because they can. After that, they operate with impunity. Our technology, the security use case where we're seeing a lot of traction, is that we can detect that. We're applying blockchain, and we're agnostic, so bring your own blockchain in our case. But we're able-- >> Is ChainKit a product? Is it a development environment? >> It's a globally available service hosted on AWS, RESTful APIs, and fundamentally we're enabling developers to harden their apps, to wrap a chain of custody around key data or logs in their apps, so that when attackers attempt to leverage that administrative authority and tamper with logs, tamper-- >> So it's a service, not software. >> It's an API. It's a developer-oriented service, but-- >> This is one of the biggest problems and challenges in security today. You see the stat: after you get infiltrated, it takes 250 or 300 days to even detect it, and I have not heard that number shrink. I've heard people aspire to shrinking it. >> We can get it down to real time. That's the tip of the spear; that's why we're excited to be here and excited to talk about it. One of the dirty secrets of the security industry is that it shouldn't take a year to detect an advanced attack. >> Guys, thanks for coming on theCUBE and sharing your insight. Congratulations on your success. Great to see you. >> Likewise. And thank you for having us on here; we're looking forward to coming back. We appreciate it. >> Absolutely. Thank you. Thanks for having us. >> We're always paying it forward here, of course. Really, the most important conversation is that security is going to include a blockchain type of implementation. This is a reality that's coming very soon, and we're here at AWS re:Inforce, the first conference from Amazon Web Services dedicated to cloud security.
CSOs and CIOs around security, stay with theCUBE for more coverage after this short break. >> My name is David.
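The chain-of-custody idea discussed above, wrapping logs so that an attacker with admin rights cannot tamper with them undetected, can be illustrated with a toy hash chain. This is not ChainKit's actual API or design, just a minimal sketch of the underlying technique: each entry folds into a running digest, and the head digest is anchored somewhere outside the attacker's reach so any later edit breaks verification.

```python
import hashlib, json

def entry_hash(prev_digest: str, entry: dict) -> str:
    payload = prev_digest + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, chain: list, entry: dict) -> str:
    prev = chain[-1] if chain else "GENESIS"
    log.append(entry)
    chain.append(entry_hash(prev, entry))
    return chain[-1]  # anchor this head digest externally, out of the attacker's reach

def verify(log: list, anchored_head: str) -> bool:
    digest = "GENESIS"
    for entry in log:
        digest = entry_hash(digest, entry)
    return digest == anchored_head

log, chain = [], []
for event in [{"user": "svc", "action": "login"},
              {"user": "svc", "action": "sudo"},
              {"user": "svc", "action": "disable_endpoint_agent"}]:
    head = append(log, chain, event)

print(verify(log, head))        # True: the log matches the anchored digest
log[2]["action"] = "noop"       # an attacker with admin rights rewrites history...
print(verify(log, head))        # False: tampering is detected
```

The design point is that detection does not require trusting the compromised host: as long as the anchored digest lives somewhere the attacker cannot reach, any edit, deletion, or reordering of the log is visible on the next verification pass.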

Published Date : Jun 25 2019



Stephan Fabel, Canonical | KubeCon 2018


 

>> Live, from the Seattle, Washington. It's theCUBE, covering KubeCon and CloudNativeCon, North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation and it's ecosystem partners. >> Welcome back everyone. We're live here in Seattle for theCUBE's exclusive coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier at Stuart Miniman. Our next guest Stephan Fabel, who is the Director of Product Management at Canonical. CUBE alumni, welcome back. Good to see you. >> Thank you. Good to see you too. Thanks for having me. >> You guys are always in the middle of all the action. It's fun to talk to you guys. You have a pulse on the developers, you have pulse on the ecosystem. You've been deep in it for many, many years. Great value. What's hot here, what's the announcement, what's the hard news? Let's get to the hard news out of the way. What's happening? What's happening here at the show for you guys? >> Yeah, we've had a great number of announcements, a great number of threads of work that came into fruition over the last couple of months, and now just last week where we announced hardware reference architectures with our hardware partners, Dell and SuperMicro. We announced ARM support, ARM64 support for Kubernetes. We released our version 1.13 of our Charmed Distribution of Kubernetes, last week And we also released, very proud to release, MicroK8s. Kubernetes in a single snap for your workstation in the latest release 1.13. >> Maybe explain that, 'cause we often talk about scale, but there is big scale, and then we're talking about edge, we're talking about so many of these things. >> That's right. >> That small scale is super important, so- >> It really is, it really is, so, MicroK8s came out of this idea that we want to enable a developer to just quickly standup a Kubernetes cluster on their workstation. And it really came out of this idea to really enable, for example, AIML work clouds, locally from development on the workstation all the way to on-prem and into the public cloud. So that's kind of where this whole thing started. And it ended up being quite obvious to us that if we do this in a snap, then we actually can also tie this into appliances and devices at the edge. Now we're looking at interesting new use cases for Kubernetes at the edge as an actual API end point. So it's a quite nice. >> Stephan talk about ... I want to take a step back. There's kind of dynamics going on in the Kubernetes wave, which by the way is phenomenal, 8000 people here at KubeCon, up from 4000. It's got that hockey stick growth. It's almost like a Moore's Law, if you will, for the events. You guys have been around, so you have a lot of existing big players that have been in the space for a while, doing a lot of work around cloud, multi-cloud, whatever ... That's the new word, but again, you guys have been there. You got like the Cisco's of the world, you guys, big players actively involved, a lot of new entrants coming in. What's your perspective of what's happening here? A lot of people looking at this scratching their head saying: Okay I get Kubernetes, I get the magic. Kubernetes enables a lot of things. What's the impact to me? What's in it for me as an enterprise or a developer? How do you guys see this market place developing? What's really going on here? >> Well I think that the draw to this conference and to technology and all the different vendors et cetera, it's ultimately a multi-cloud experience, right? 
It is about enabling workload portability and enabling the operator to operate Kubernetes, independently of where that is being deployed. That's actually also the core value proposition of our charmed Kubernetes. The idea that a single operational paradigm allows you to experience, to deploy, lifecycle manage and administer Kubernetes on-prem, as well as any of the public clouds, as well as on other virtual substrates, such as VMware. So ultimately I think the consolidation of application delivery into a single container format, such as Docker and other compatible formats, OCI formats right? That was ultimately a really good thing, 'cause it enabled that portability. Now I think the question is, I know how to deploy my applications in multiple ways, 'cause it's always the same API, right? But how do I actually manage a lot of Kubernetes clusters and a lot of Kubernetes API end points all over the place? >> So break down the hype and reality, because again, a lot of stuff looks good on paper. Love the soundbites of people saying, "Hey, Kubernetes," all this stuff. But people admitting some things that need to be done, work areas. Security is a big concern and people are working on that. Where is the reality? Where does the rubber meet the road when it comes down to, "Okay, I'm an enterprise. What am I buying into with Kubernetes? How do I get there?" We heard Lyft take an approach that's saying, "Look, it solved one problem." Get a beachhead and take the incremental approach. Where's the hype, where's the reality? Separate that for us. >> I think that there is certainly a lot of hype around the technology aspect of Kubernetes. Obviously containerization is invoked. This is how developers choose to engage in application development. We have Microservices architecture. All of those things we're very well aware of and have been around for quite some time and in the conversation. Now looking at container management, container orchestration at scale, it was a natural fit for something like Kubernetes to become quite popular in this space. So from a technology perspective I'm not surprised. I think the rubber meets the road, as always, in two things: In economics and in operations. So if I can roll out more Kubernetes clusters per day, or more containers per day, then my competitor ... I gain a competitive advantage, that the cost per container is ultimately what's going to be the deciding factor here. >> Yeah, Stephan, when I think about developers how do I start with something and then how do I scale it out in the economics of that? I think Canonical has a lot of experience with that to share. What are you seeing ... What's the same, what's different about this ecosystem, CloudNative versus, when we were just talking about Linux or previous ways of infrastructure? >> Well I think that ultimately Kubernetes, in and of itself, is a mechanism to enable developers. It plays one part in the whole software development lifecycle. It accelerates a certain part. Now it's on us, distributors of Kubernetes, to ensure that all the other portions of this whole lifecycle and ecosystem around Kubernetes, where do I deploy it? How do I lifecycle manage it? If there's a security breach like last Monday, what happens to my existing stack and how does that go down? That acceleration is not solved by Kubernetes, it's solved for Kubernetes. >> Your software lives in lots and lots of environments. 
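One way to picture the "same API everywhere" point, from a MicroK8s cluster on a workstation to a managed cloud cluster, is that identical client code runs against either, with only the kubeconfig changing. Below is a hedged sketch using the official `kubernetes` Python client; the kubeconfig path and the `microk8s config` export are assumptions for the example, not tooling Canonical prescribes.

```python
# pip install kubernetes
from kubernetes import client, config

# For a local MicroK8s cluster this might be the config exported by
# `microk8s config`; for GKE/AKS/EKS it is whatever kubeconfig the cloud CLI wrote.
config.load_kube_config(config_file="~/.kube/config")   # assumed path

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"),
                 "Unknown")
    print(f"{node.metadata.name}: Ready={ready}, "
          f"kubelet={node.status.node_info.kubelet_version}")

# The script is unchanged whether the kubeconfig points at a workstation,
# an on-prem cluster, or a public cloud -- which is the portability argument above.
```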
Maybe you can help clarify for people trying to understand how Kubernetes fits, and when you're playing with the public cloud, your Kubernetes versus their Kubernetes. The distinction I think is, there's a lot of nuance there that people may need help with. >> That's true, yeah. So I think that, first of all, we always distance ourself from the notion of having our Kubernetes. I think we have a distribution of Kubernetes. I think there is conformance, tests that are in place that they're in place for a reason. I think it is the right approach, and we won't install a fourth version of Kubernetes anytime soon. Certainly, that is one of the principles we adhere to. What is different about our distribution of Kubernetes is the operational tooling and the ability to really cookie-cutter out Kubernetes clusters that feel identical, even though they're distributed and spread across multiple different substrates. So I think that is really the fundamental difference of our Kubernetes distribution versus others that are out there on the market. >> The role of developers now, 'cause obviously you're seeing a lot of different personas emerging in this world. I'm just going to lay them out there and I want to get your reaction. The classic application developer, the ones who are sitting there writing code inside a company. It could be a consumer company like Lyft or an enterprise company that needs ... They're rebuilding inside, so it's clear that CIOs or enterprises, CXOs or whatever the title is, they're bringing more software in-house, bringing that competitive advantage under application development. You have the IT pro expert, practitioner kind of role, classic IT, and then you got the opensource community vibe, this show. So you got these three things inter-playing with each other, this show, to me feels a lot like an opensource show, which it is, but it also feels a lot like an IT show. >> Which it also is. >> It also is, and it feels like an app development show, which it also is. So, opportunity, challenge, is this a marketplace condition? What's you thoughts on these kind of personas? >> Well I think it's really a question of how far are you willing to go in your implementation of devops cultural change, right? If you look at that notion of devops and that movement that has really taken ahold in people's minds and hearts over the last couple of years, we're still far off in a lot of ways and a lot of places, right? Even the places who are saying they're doing devops, they're still quite early, if at all, on that adoption curve. I think bringing operators, developers and IT professionals together in a single show is a great way for the community and for the market to actually engage in a larger devops conversation, without the constraint of the individual enterprise that those teams find themselves in. If you can just talk about how you should do something better and how would that work, and there is other kinds of personas and roles at the same table, it is much better that you have the conversation without the constraint of like a deadline or a milestone, or some outage somewhere. Something is always going on. Being able to just have that conversation around a technology and really say, "Hey, this is going to be the one, the vehicle that we use to solve this problem and further that conversation," I think it's extremely powerful. >> Yeah, and we always talk about who's winning and who's losing. It's what media companies do. We do it on theCUBE, we debate it. 
At the end of the day we always like ... There's no magic quadrant for this kind of market, but the scoreboard can be customers. Amazon's got over 5000 reputable customers. I don't know how many CNCF has. It's probably a handful, not 5000. The customer implications are really where this is going. Multi-cloud equals choice. What's your conversations like with customers? What do you see on the customer landscape in terms of appetite, IQ, or progress for devops? We were talking, not everyone's on server lists yet and that's so obvious that's going to be a big thing. Enterprises are hot right now and they want the tech. Seeing the cloud growth, where's your customer-base? What are those conversations like? Where are they in the adoption of CloudNative? >> It's an extremely interesting question actually, because it really depends on whether they started with PaaS or not. If they ever had a PaaS strategy then they're mostly disillusioned. They came out, they thought it was going to solve a huge problem for them and save them a lot of money, and it turns out that developers want more flexibility than any PaaS approach really was able to offer them. So ultimately they're saying, "You know what, let's go back to basics." I'll just give you a Kubernetes API end point. You already know how to deal with everything else beyond that, and actually you're not cookie-cuttering out post ReSQueL- >> Kubernetes is a reset to PaaS. >> It really does. It kind of disrupted that whole space, and took a step back. >> All right, Stephan, how about Serverless. So a lot of discussion about Knative here. We've been teasing out where that fits compared to functions from AWS and Azure. What's the canonical take on this? What are you hearing from your customers? >> So Serverless is one of those ... Well it's certainly a hot technology and a technology of interest to our customers, but we have longstanding partnerships with Galactic Fog and others in place around Serverless. I haven't seen real production deployments of that yet, and frankly it's probably going to take a little bit longer before that materializes. I do think that there's a lot of efforts right now in containerization. Lots of folks are at that point where they are ready to, and are already running containerized workloads. I think they're busy now implementing Kubernetes. Once they have done that, I think they'll think a little bit more about Serverless. >> One of the things that interest me about this ecosystem is the rise of Kubernetes, the rise of choice, the rise of a lot of tools, a lot of services, trying to fend off the tsunami wave that's hit the beach out of Amazon. I've always said in theCUBE that that's ... They're going to take as much inland territory on this tsunami unless someone puts up a sea wall. I think this is this community here. The question is, is that ... And I want to get your expert opinion on this, because the behemoths, the big guys are getting richer. The innovation's coming from them, they have scale. You mentioned that as a key point in the value of Kubernetes, is scale, as one of those players, I would consider in the big size, not like a behemoth like an Amazon, you got a unique position. How can the industry move forward with disruption and innovation, with the big guys dominating? What has to happen? Is there going to change the size of certain TAMs? Is there going to be new service providers emerging? 
Something's got to give: either the big guys get richer at the expense of the little guys, or the market expands with new categories. How do you guys look at that? Developers are out there, so is it promising to look to new categories? What are your thoughts? >> I think it's ... So from a technology perspective, certainly, there could be a disruptive technology that comes in and just eats their lunch, which I don't believe is going to happen, but I think it might actually be more of a market function. If it comes down to the economics, then as they start to compete there will be a limit to the race to the bottom. So if I go in on an economic advantage as a public cloud, then I can only take that so far. Now, I can still take it a lot further, but there's going to be a limit to that ultimately. So, I would say that all of the public clouds, and we see this increasingly happening, are starting to differentiate. They're saying, "Come to me for AI/ML." "Come to me for a rich service catalog." "Come to me for workload portability," or something like that, right? And we'll see more differentiation as time goes on. I think that will develop in a little bit of a bubble, to the point where other players they are not watching, for example the Chinese clouds, right? Very large, very influential, very rich in services; they can come in and disrupt their market in a totally different way than a technology ever could. >> So a key point you mentioned earlier, I want to pivot on that and get to the AI conversation, but scale is a competitive advantage. We've seen that on theCUBE, we see it in the marketplace. Kubernetes by itself is great, but at scale it gets better; it's got knobs and policy. AI is a great example of a dormant computer science concept that had not yet been unleashed ... Well, it gets unleashed by cloud. Now that's proliferating. AI, what else is out there? How do you see this trend around just large-scale Kubernetes, AI and machine learning coming around the corner? That's going to be unique, and is new. So you mentioned the Chinese clouds could be a disruptor here. It's a lever. >> Absolutely, we've been involved with Kubeflow since the early days. Early days... it's barely a year old, so what early days? It's a year old. >> It's yesterday. >> So a year ago we started working with Kubeflow, and we published one of the first tutorials on how to actually get it up and running on Ubuntu and with our distribution of Kubernetes, and it has since been a focal point of our distribution. We do a couple of things with Kubeflow. So the first thing, something that we can bring as a unique value proposition, is that because we're the operating system for almost all of GKE, all of AKS, all of EKS, we have such a strong standing as an operating system, and we have strong partnerships with folks like NVIDIA. One of the big milestones that we tried to achieve, and have since completed, actually as another announcement since last week, is the fully automatic deployment of GPU enablement on Kubernetes clusters, and having that identical experience happen across the public clouds. So, GPGPU enablement on Kubernetes is one of the key enablers for projects like Kubeflow, which gives you machine learning stacks on demand, right?
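As a concrete, hypothetical illustration of what GPU enablement buys a workload, the sketch below submits a pod that requests one `nvidia.com/gpu` through the Kubernetes API, the resource the NVIDIA device plugin advertises once GPU support is enabled. The pod name, image, and use of the `kubernetes` Python client are assumptions for the example, not Canonical's tooling.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),        # placeholder name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:11.0.3-base-ubuntu20.04",     # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}               # served by the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Submitted gpu-smoke-test; its logs should show the attached GPU.")
```

Frameworks like Kubeflow rely on exactly this resource request under the hood, which is why identical GPU enablement across clouds matters for portable ML stacks.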
And then in parallel, we've been working with Kubeflow in the community, very actively; we formed a steering committee to really get the industry perspective into the needs of Kubeflow as a community, and to work with everybody else in that community to make sure that Kubeflow releases on time and hopefully soon gets to a 1.0, which is due this summer, but right now they're focused on 0.4. That's a key area of innovation though, and opportunity. >> Oh, absolutely. >> I see Amazon's certainly promoting that. What else is new? I've got one last question for you. What's next for you guys? Give a quick plug for Canonical. What's coming around the corner, what's up? >> We're definitely happy to continue to work on GPGPU enablement. I think that is one of the key aspects that we need to stay on top of. We're looking at Kubernetes across many different use cases now, especially with our IoT-focused Ubuntu Core operating system, which we'll release shortly, and we're actually seeing new use cases there for AI/ML inference, for example out at the edge, looking at drones, robots, self-driving cars, et cetera. We're working with a bunch of different industry partners as well. So increased focus on the devices side of the house can be expected in 2019. >> And that's key, it's the data, in a way that's really relevant. >> Absolutely. >> All right, Stephan, thanks for coming on theCUBE. I appreciate it. Canonical's great insight here, bringing more commentary to the conversation here at KubeCon, CloudNativeCon. Large-scale deployments as a competitive advantage; Kubernetes really does well there. Data, machine learning, AI, all a part of the value above and below Kubernetes. We're seeing a lot of great advances. CUBE coverage here in Seattle. We'll be back with more after this short break. (digital music)

Published Date : Dec 13 2018



Stephan Ewen, data Artisans | Flink Forward 2018


 

>> Narrator: Live from San Francisco. It's the CUBE covering Flink Forward brought to you by data Artisans. >> Hi, this is George Gilbert. We are at Flink Forward. The conference put on by data Artisans for the Apache Flink community. This is the second Flink Forward in San Francisco and we are honored to have with us Stephan Ewen, co-founder of data Artisans, co-creator of Apache Flink, and CTO of data Artisans. Stephan, welcome. >> Thank you, George. >> Okay, so with others we were talking about the use cases they were trying to solve but you put together the sort of all the pieces in your head first and are building out, you know, something that's ultimately gets broader and broader in its applicability. Help us, now maybe from the bottom up, help us think through the problems you were trying to solve and and let's start, you know, with the ones that you saw first and then how the platform grows so that you can solve more and more a broader scale of problems. >> Yes, yeah, happy to do that. So, I think we have to take a bunch of step backs and kind of look at what is the let's say the breadth or use cases that we're looking at. How did that, you know, influence some of the inherent decisions and how we've built Flink? How does that relate to what we presented earlier today, the in Austrian processing platform and so on? So, starting to work on Flink and stream processing. Stream processing is an extremely general and broad paradigm, right? We've actually started to say what Flink is underneath the hood. It's an engine to do stateful computations over data streams. It's a system that can process data streams as a batch processor processes, you know, bounded data. It can process data streams as a real-time stream processor produces real-time streams of events. It can handle, you know, data streams as in sophisticated event by event, stateful, timely, logic as you know many applications that are, you know, implemented as data-driven micro services or so and implement their logic. And the basic idea behind how Flink takes its approach to that is just start with the basic ingredients that you need that and try not to impose any form of like various constraints and so on around the use of that. So, when I give the presentations, I very often say the basic building blocks for Flink is just like flowing streams of data, streams being, you know, like received from that systems like Kafka, file systems, databases. So, you route them, you may want to repartition them, organize them by key, broadcast them, depending on what you need to do. You implement computation on these streams, a computation that can keep state almost as if it was, you know, like a standalone java application. You don't think necessarily in terms of writing state or database. Think more in terms of maintaining your own variables or so. Sophisticated access to tracking time and progress or progress of data, completeness of data. That's in some sense what is behind the event time streaming notion. You're tracking completeness of data as for a certain point of time. And then to to round this all up, give this a really nice operational tool by introducing this concept of distributed consistent snapshots. And just sticking with these basic primitives, you have streams that just flow, no barrier, no transactional barriers necessarily there between operations, no microbatches, just streams that flow, state variables that get updated and then full tolerance happening as an asynchronous background process. 
Now that is what is in some sense the I would say kind of the core idea and what helps Flink generalize from batch processing to, you know, real-time stream processing to event-driven applications. And what we saw today is, in the presentation that I gave earlier, how we use that to build a platform for stream processing and event-driven applications. That's taking some of these things and in that case I'm most prominently the fourth aspect the ability to draw like some application snapshots at any point in time and and use this as an extremely powerful operational tool. You can think of it as being a tool to archive applications, migrate applications, fork applications, modify them independently. >> And these snapshots are essentially your individual snapshots at the node level and then you're sort of organizing them into one big logical snapshot. >> Yeah, each node is its own snapshot but they're consistently organized into a globally consistent snapshot, yes. That has a few very interesting and important implications for example. So just to give you one example where this makes really things much easier. If you have an application that you want to upgrade and you don't have a mechanism like that right, what is the default way that many folks do these updates today? Try to do a rolling upgrade of all your individual nodes. You replace one then the next, then the next, then the next but that has this interesting situation where at some point in time there's actually two versions of the application running at the same time. >> And operating on the same sort of data stream. >> Potentially, yeah, or on some partitions of the data stream, we have one version and some partitions you have another version. You may be at the point we have to maintain two wire formats like all pieces of your logic have to be written in understanding both versions or you try to you know use the data format that makes this a little easier but it's just inherently a thing that you don't even have to worry about it if you have this consistent distributed snapshots. It's just a way to switch from one application to the other as if nothing was like shared or in-flight at any point in time. It just gets many of these problems just out of the way. >> Okay and that snapshot applies to code and data? >> So in Flink's architecture itself, the snapshot applies first of all only to data. And that is very important. >> George: Yeah. >> Because what it actually allows you is to decouple the snapshot from the code if you want to. >> George: Okay. >> That allows you to do things like we showed earlier this morning. If you actually have an earlier snapshot where the data is correct then you change the code but you introduce the back. You can just say, "Okay, let me actually change the code "and apply different code to a different snapshot." So, you can actually, roll back or roll forward different versions of code and different versions of state independently or you can go and say when I'm forking this application I'm actually modifying it. That is a level of flexibility that's incredible to, yeah, once you've actually start to make use of it and practice it, it's incredibly useful. 
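A plain-Python toy can make this snapshot and decoupling idea concrete: a keyed, stateful operator processes a stream, a snapshot of its state is taken at a well-defined point, and that state can later be restored into different (for example, upgraded) code. This mimics the ingredients described above; it is not Flink's API, and in Flink the snapshot is taken asynchronously and consistently across a distributed job rather than by a deep copy.

```python
import copy

class KeyedCounter:
    """Toy stateful operator: counts events per key; state can be snapshotted."""
    def __init__(self, state=None):
        self.state = dict(state or {})   # plain "state variables", not a database

    def process(self, event):
        key = event["key"]
        self.state[key] = self.state.get(key, 0) + 1
        return key, self.state[key]

    def snapshot(self):
        # Stand-in for a globally consistent, asynchronous checkpoint.
        return copy.deepcopy(self.state)

stream = [{"key": k} for k in "abacabb"]
operator, saved = KeyedCounter(), None

for i, event in enumerate(stream):
    operator.process(event)
    if i == 3:                 # pretend a checkpoint barrier passed here
        saved = operator.snapshot()

print(operator.state)          # state after the whole stream: {'a': 3, 'b': 3, 'c': 1}
# "Fork" or roll back: restore the saved state into new (possibly changed) code,
# illustrating that state versions and code versions evolve independently.
forked = KeyedCounter(state=saved)
print(forked.state)            # state as of the 4th event: {'a': 2, 'b': 1, 'c': 1}
```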
It's been actually almost, it's been one of the maybe least obvious things once you start to look into stream processing but once you actually started production as stream processing, this operational flexibility that you get there is I would say very high up for a lot of users when they said, "Okay this is "why we took Flink to streaming production and not others." The ability to do for example that. >> But this sounds then like with some stream processors the idea of the unbundling the database you have derived data you know at different sync points and that derived data is you know for analysis, views, whatever, but it sounds like what you're doing is taking a derived data of sort of what the application is working on in progress and creating essentially a logically consistent view that's not really derived data for some other application use but for operational use. >> Yeah, so. >> Is that a fair way to explain? >> Yeah, let me try to rephrase it a bit. >> Okay. >> When you start to take this streaming style approach to things, which you know it's been called turning the database inside out, unbundling the database, your input sequence of event is arguably the ground truth and what the stream processor computes is as a view of the state of the world. So, while this sounds you know this sounds at first super easy and you know views, you can always recompute a few, right? Now in practice this view of the world is not just something that's just like a lightweight thing that's only derived from the sequence of events. it's actually the, it's the state of the world that you want to use. It might not be fully reproducible just because either the sequence of events has been truncated or because the sequence events is just like too plain long to feasibly recompute it in a reasonable time. So, having a way to work with this in a way that just complements this whole idea of you know like event-driven, log-driven architecture very cleanly is kind of what this snapshot tool also gives you. >> Okay, so then help us think so that sounds like that was part of core Flink. >> That is part of core Flink's inherent design, yes. >> Okay, so then take us to the the next level of abstraction. The scaffolding that you're building around it with the dA platform and how that should make that sort of thing that makes stream processing more accessible, how it you know it empowers a whole other generation. >> Yeah, so there's different angles to what the dA platform does. So, one angle is just very pragmatically easing rollout of applications by having a one way to integrate the you know the platform with your metrics. Alerting logins, the ICD pipeline, and then every application that you deploy over there just like inherits all of that like every edge in the application developer doesn't have to worry about anything. They just say like this is my piece of code. I'm putting it there and it's just going to be hooked in with everything else. That's not rocket science but it's extremely valuable because there's like a lot of tedious bits here and there that you know otherwise eat up a significant amount of the development time. Like technologically maybe more challenging part that this solves is the part where we're really integrating the application snapshot, the compute resources, the configuration management and everything into this model where you don't think about I'm running a Flink job here. That Flink job has created a snapshot that is running around here. 
There's also a snapshot here which probably came from that Flink application. Also, that Flink application was running; this one is actually just a new version of that Flink application, which is, let's say, the testing or acceptance run for the version that we're about to deploy here, and so on, tying all of these things together. >> So, it's not just the artifacts from one program, it's how they all interrelate? >> It gives you exactly the idea of how they all interrelate, because an application over its lifetime will correspond to different configurations, different code versions, different deployments in production, A/B testing and so on, and how all of these things work together, how they interplay, right? Flink, like I said before, deliberately couples checkpoints and code and so on in a rather loose way to allow you to evolve the code differently and still be able to match a previous snapshot to a newer code version and so on. We make heavy use of that, but we can now give you a good way of, first of all, tracking all of these things together: how do they relate, when was which version running, what code version was that? Having the snapshots, we can always go back and reinstate earlier versions; having the ability to always move a deployment from here to there, fork it, drop it, and so on. That is one part of it, and the other part is the tight integration with Kubernetes. Initially the container sweet spot was stateless compute, and the way stream processing works, how the architecture works, is that the nodes are inherently not stateless; they have a view of the state of the world. This is always recoverable. You can also change the number of containers, and with Flink and other frameworks you have the ability to adjust this and so on. >> Including repartitioning the-- >> Including repartitioning the state, but it's a thing where you often have to be quite careful how you do that so that it all integrates exactly, consistently: the right containers are running at the right point in time with the exact right version, and there's not a split-brain situation where some other partitions happen to still be running at the same time, or a container goes down and is this a situation where you're supposed to recover or to rescale? Figuring all of these things out together is what the idea of integrating these things in a very tight way gives you. So think of it the following way, right? Initially you just start with Docker. Docker is a way to say, I'm packaging up everything that a process needs, all of its environment, to make sure that I can deploy it here and here and here and it just always works. It's not like, "Oh, I'm missing the correct version of the library here," or "I'm interfering with that other process on a port."
On top of Docker, people added things like Kubernetes to orchestrate many containers together forming an application and then on top of Kubernetes there are things like Helm or for certain frameworks there's like Kubernetes Operators and so on which try to raise the abstraction to say, "Okay we're taking care of these aspects that this needs in addition to a container orchestration," we're doing exactly that thing like we're raising the abstraction one level up to say, okay we're not just thinking about the containers the computer and maybe they're like local persistent storage but we're looking at the entire state full application with its compute, with its state with its archival storage with all of it together. >> Okay let me sort of peel off with a question about more conventionally trained developers and admins and they're used to databases for a batch and request response type jobs or applications do you see them becoming potential developers of continuous stream processing apps or do you see it only mainly for a new a new generation of developers? >> No, I would I would actually say that that a lot of the like classic... Call it request/response or call it like create update, delete create read update delete or so application working against the database, there's this huge potential for stream processing or that kind of event-driven architectures to help change this view. There's actually a fascinating talk here by the folks from (mumbles) who implemented an entire social network in this in this industry processing architecture so not against the database but against a log in and a stream processor instead it comes with some really cool... with some really cool properties like very unique way of of having operational flexibility too at the same time test, and evolve run and do very rapid iterations over your-- >> Because of the decoupling? >> Exactly, because of the decoupling because you don't have to always worry about okay I'm experimenting here with something. Let me first of all create a copy of the database and then once I actually think that this is working out well then, okay how do I either migrate those changes back or make sure that the copy of the database that I did that bring this up to speed with a production database again before I switch over to the new version and so like so many of these things, the pieces just fall together easily in the streaming world. >> I think I asked this of Kostas, but if a business analyst wants to query the current state of what's in the cluster, do they go through some sort of head node that knows where the partitions lay and then some sort of query optimizer figures out how to execute that with a cost model or something? In other words, if you want it to do some sort of batcher interactive type... >> So there's different answers to that, I think. First of all, there's the ability to log into the state of link as in you have the individual nodes that maintains they're doing the computation and you can look into this but it's more like a look up thing. >> It's you're not running a query as in a sequel query against that particular state. If you would like to do something like that, what Flink gives you as the ability is as always... 
There's a wide variety of connectors so you can for example say, I'm describing my streaming computation here, you can describe in an SQL, you can say the result of this thing, I'm writing it to a neatly queryable data store and in-memory database or so and then you would actually run the dashboard style exploratory queries against that particular database. So Flink's sweet spot at this point is not to run like many small fast short-lived sequel queries against something that is in Flink running at the moment. That's not what it is yet built and optimized for. >> A more batch oriented one would be the derived data that's in the form of a materialized view. >> Exactly, so this place, these two sites play together very well, right? You have the more exploratory better style queries that go against the view and then you have the stream processor and streaming sequel used to continuously compute that view that you then explore. >> Do you see scenarios where you have traditional OLTP databases that are capturing business transactions but now you want to inform those transactions or potentially automate them with machine learning. And so you capture a transaction, and then there's sort of ambient data, whether it's about the user interaction or it's about the machine data flowing in, and maybe you don't capture the transaction right away but you're capturing data for the transaction and the ambient data. The ambient data you calculate some sort of analytic result. Could be a model score and that informs the transaction that's running at the front end of this pipeline. Is that a model that you see in the future? >> So that sounds like a formal use case that has actually been run. It's not uncommon, yeah. It's actually, in some sense, a model like that is behind many of the fraud detection applications. You have the transaction that you capture. You have a lot of contextual data that you receive from which you either built a model in the stream processor or you built a model offline and push it into the stream processor. As you know, let's say a stream of model updates, and then you're using that stream of model updates. You derive your classifiers or your rule engines, or your predictor state from that set of updates and from the history of the previous transactions and then you use that to attach a classification to the transaction and then once this is actually returned, this stream is fed back to the part of the computation that actually processes that transaction itself to trigger the decision whether to for example hold it back or to let it go forward. >> So this is an application where people who have built traditional architectures would add this capability on for low latency analytics? >> Yeah, that's one way to look at it, yeah. >> As opposed to a rip and replace, like we're going to take out our request/response in our batch and put in stream processing. >> Yeah, so that is definitely a way that stream processing is used, that you you basically capture a change log or so of whatever is happening in either a database or you just immediately capture the events, the interaction from users and devices and then you let the stream processor run side by side with the old infrastructure. And just exactly compute additional information that, even a mainframe database might in the end used to decide what to do with a certain transaction. So it's a way to complement legacy infrastructure with new infrastructure without having to break off or break away the legacy infrastructure. 
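The fraud-scoring pattern outlined here, one stream of transactions and one stream of model updates, with the latest model state used to classify each transaction and the decision fed back, can be sketched in a few lines. This is a toy with hard-coded lists and a threshold "model"; in a real deployment the streams would come from Kafka or change data capture, and the operator state would be checkpointed by the stream processor.

```python
# Two interleaved input "streams": model updates and transactions, in event order.
events = [
    {"type": "model", "threshold": 500},
    {"type": "txn", "id": 1, "amount": 120},
    {"type": "txn", "id": 2, "amount": 900},
    {"type": "model", "threshold": 1000},          # contextual data refreshes the model
    {"type": "txn", "id": 3, "amount": 900},
]

model = {"threshold": float("inf")}                # operator state: the latest model

def classify(txn: dict, model: dict) -> str:
    return "HOLD" if txn["amount"] > model["threshold"] else "APPROVE"

for event in events:
    if event["type"] == "model":
        model = {"threshold": event["threshold"]}  # update the broadcast/model state
    else:
        decision = classify(event, model)
        # Feed the decision back to the transaction-processing side.
        print(f"txn {event['id']} amount={event['amount']} -> {decision}")
# txn 2 is held under the old model; txn 3 passes after the model update.
```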
>> So let me ask in a different direction more on the complexity that forms attacks for developers and administrators. Many of the open source community products slash projects solve narrow sort of functions within a broader landscape and there's a tax on developers and admins and trying to make those work together because of the different security models, data models, all that. >> There is a zoo of systems and technologies out there and also of different paradigms to do things. Once systems kind of have a similar paradigm, or a tier in mind, they usually work together well, but there's different philosophical takes-- >> Give me some examples of the different paradigms that don't fit together well. >> For example... Maybe one good example was initially when streaming was a rather new thing. At this point in time stream processors were very much thought of as a bit of an addition to the, let's say, the batch stack or whatever ever other stack you currently have, just look at it as an auxiliary piece to do some approximate computation and a big reason why that was the case is because, the way that these stream processors thought of state was with a different consistency model, the way they thought of time was actually different than you know like the batch processors of the databases at which use time stem fields and the early stream processors-- >> They can't handle event time. >> Exactly, just use processing time, that's why these things you know you could maybe complement the stack with that but it didn't really go well together, you couldn't just say like, okay I can actually take this batch job kind of interpret it also as a streaming job. Once the stream processors got a better interpretation. >> The OEM architecture. >> Exactly. So once the stream processors adopted a stronger consistency model a time model that is more compatible with reprocessing and so on, all of these things all of a sudden fit together much better. >> Okay so, do you see that vendors who are oriented around a single paradigm or unified paradigm, do you see them continuing to broaden their footprint so that they can essentially take some of the complexity off the developer and the admin by providing something that, one throat to choke with the pieces that were designed to work together out-of-the-box, unlike some of the zoos with the former Hadoop community? In other words, lot of vendors seem to be trying to do a broader footprint so that it's something that's just simpler to develop to and to operate? >> There there are a few good efforts happening in that space right now, so one that I really like is the idea of standardizing on some APIs. APIs are hard to standardize on but you can at least standardize on semantics, which is something, that for example Flink and Beam have been very keen on trying to have an open discussion and a road map that is very compatible in thinking about streaming semantics. This has been taken to the next level I would say with the whole streaming sequel design. Beam is adding adding stream sequel and Flink is adding stream sequel, both in collaboration with the Apache CXF project, so very similar standardized semantics and so on, and the sequel compliancy so you start to get common interfaces, which is a very important first step I would say. Standardizing on things like-- >> So sequel semantics are across products that would be within a stream processing architecture? 
>> Yes and I think this will become really powerful once other vendors start to adopt the same interpretation of streaming sequel and think of it as, yes it's a way to take a changing data table here and project a view of this changing data table, a changing materialized view into another system, and then use this as a starting point to maybe compute another derive, you see. You can actually start and think more high-level about things, think really relational queries, dynamic tables across different pieces of infrastructure. Once you can do something like interplay in architectures become easier to handle, because even if on the runtime level things behave a bit different, at least you start to establish a standardized model, in thinking about how to compose your architecture and even if you decide to change on the way, you frequently saved the problem of having to rip everything out and redesign everything because the next system that you bring in just has a completely different paradigm that it follows. >> Okay, this is helpful. To be continued offline or back online on the CUBE. This is George Gilbert. We were having a very interesting and extended conversation with Stephan Ewen, CTO and co-founder of data Artisans and one of the creators of Apache Flink. And we are at Flink Forward in San Francisco. We will be back after this short break.

Published Date : Apr 12 2018


Action Item | March 30, 2018


 

>> Hi, I'm Peter Burris and welcome to another Wikibon Action Item. (electronic music) Once again, we're broadcasting from theCUBE studios in beautiful Palo Alto. Here in the studio with me are George Gilbert and David Floyer. And remote, we have Neil Raden and Jim Kobielus. Welcome everybody. >> David: Thank you. >> So this is kind of an interesting topic that we're going to talk about this week. And it really is how are we going to find new ways to generate derivative use out of many of the applications, especially web-based applications, that have been built over the last 20 years. A basic premise of digital business is that the difference between business and digital business is the data and how you craft data as an asset. Well, as we all know, in any universal Turing machine, data is the basis for representing both the things that you're acting upon but also the algorithms, the software itself. Software is data, and the basic principles of how we capture software-oriented data assets or software assets and then turn them into derivative sources of value and then reapply them to new types of problems is going to become an increasingly important issue as we think about how the world of digital business is going to play out over the course of the next few years. Now, there are a lot of different domains where this might work, but one in particular that's especially important is in the web application world, where we've had a lot of application developers and a lot of tools be a little bit more focused on how we use web-based services to manipulate things and get software to do the things we want to do, and also it's a source of a lot of the data that's been streaming into big data applications. And so it's a natural place to think about how we're going to be able to create derivative use or derivative value out of crucial software assets. How are we going to capture those assets, turn them into something that has a different role for the business, performs different types of work, and then reapply them? So to start the conversation, Jim Kobielus. Why don't you take us through what some of these tools start to look like? >> Hello, Peter. Yes, so really what we're looking at here, in order to capture these assets, the web applications, we first have to generate those applications, and the bulk of that work of course is and remains manual. And in fact, there is a proliferation of web application development frameworks on the market and the range of them continues to grow. Everything from React to Angular to Ember and Node.js and so forth. So one of the core issues that we're seeing out there in the development world is... are there too many of these? Is there any prospect for simplification and consolidation and convergence on web application development frameworks to make the front-end choices for developers a bit easier and more straightforward, in terms of the front-end development of JavaScript and HTML as well as the back-end development of the logic to handle the interactions, not only with the front-end on the UI side but also with the infrastructure web services and so forth? Once you've developed the applications, you, a professional programmer, then and only then can we consider the derivative uses you're describing, such as incorporation or orchestration of web apps through robotic process automation and so forth. So the issue is how can we simplify, or is there a trend toward simplification, or will there soon be a trend towards simplification of front-end manual development. 
And right now, I'm not seeing a whole lot of action in this direction of simplification on the front-end development. It's just a fact. >> So we're not seeing a lot of simplification and convergence on the actual frameworks for creating software or creating these types of applications. But we're starting to see some interesting trends for stuff that's already been created. How can we generate derivative use out of it? And also, per some of our augmented programming research, new ways of envisioning the role that artificial intelligence, machine learning, etc., can play in identifying patterns of utilization so that we are better able to target those types of things that could be applied to derivative use. Have I got that right, Jim? >> Yeah, exactly. AI within robotic process automation, anything that has already been built can be captured through natural language processing, through computer image recognition, OCR, and so forth. And then, in that way, it's an asset that can be repurposed in countless ways, and that's the beauty of RPA, or where it's going. So the issue is then not so much capture of existing assets but how can we speed up and really automate the original development of all that UI logic? I think RPA is part of the solution but not the entire solution, meaning RPA provides visual front-end tools for the rest of us to orchestrate more of the front-end development of the application UI and interaction logic. >> And it's also popping up-- >> That's part of broader low-code-- >> Yeah, it's also popping up at a lot of the interviews that we're doing with CIOs about related types of things, but I want to scope this appropriately. So we're not talking about how we're going to take those transaction processing applications, David Floyer, and envelope them and containerize them and segment them and apply new software. That's not what we're talking about, nor are we talking about the machine-to-machine world. Robotic process automation really is a tool for creating robots out of human time interfaces that can scale the amount of work and recombine it in different ways. But we're not really talking about the two extremes, the hardcore IoT or the hardcore systems of record. Right? >> Absolutely. But one question I have for Jim and yourself is that the philosophy for most people developing these days is mobile first. The days of having an HTML layout on a screen have gone. If you aren't mobile first, that's going to be pretty well a disaster for any particular development. So Jim, how does RPA and how does your discussion fit in with mobile and all of the complexity that mobile brings? All of the alternative ways that you can do things with mobile. >> Yeah. Well David, of course, low-code tools, there are many. There are dozens out there. There are many of those that are geared primarily towards supporting fast automated development of mobile applications to run on a variety of devices and, you know, mobile UIs. That's part of the solution, as it were, but also in the standard web application development world, you know, there's these frameworks that I've described. Everything from React to Angular to Vue to Ember, everything else, are moving towards a concept, more than a concept, it's a framework or paradigm called progressive web apps. 
And what progressive web apps are all about, and that's really the mainstream of web application development now, is blurring the distinction between mobile and web and desktop applications, because you build applications, JavaScript applications, for browsers. The apps look and behave as if they were real-time, interactive, in-memory mobile apps. What that means is that they download fresh content throughout a browsing session progressively. I'm putting that in air quotes because that's where the 'progressive' in progressive web app comes in. And they don't require the end-user to visit an app store or download software. They don't require any special capabilities in terms of synchronizing data from servers to run in memory natively inside of web-accessible containers that are local to the browser. They just feel mobile even though they, excuse me, they may be running on a standard desktop with narrowband connectivity and so forth. So they scream, and they scream in the context of a standard JavaScript Ajax browser session. >> So when we think about this, jeez Jim, it almost sounds like client-side Java, but I think we're talking about something, as you said, that evolves as the customer uses it, and there's a lot of techniques and approaches that we've been using to do some of those things. But George Gilbert, the reason I bring up the notion of client-side Java is because we've seen other initiatives over the years try to do this. Now, partly they failed because, David Floyer, they focused on too much and tried to standardize or presume that everything required a common approach, and we know that that's always going to fail. But what are some of the other things that we need to think about as we think about ways of creating derivative use out of software or digital assets? >> Okay, so. I come at it from two angles. And as Jim pointed out, there's been a Cambrian explosion of creativity and innovation, frankly, on client-side development and server-side development. But if you look at how we're going to recombine our application assets, we tried 20 years ago with EAI, but that was, it's sort of like MuleSoft but it was only for on-prem apps. And it didn't work because every app was bespoke essentially-- >> Well, it worked for point-to-point classes of applications. >> Yeah, but it required bespoke development for every-- >> Peter: Correct. >> Every instance because the apps were so customized. >> Peter: And the interfaces were so customized. >> Yes. At the same time we were trying to build higher-level application development capabilities on desktop productivity tools with macros and then scripting languages, cross-application, and visual development, or using applications as visual development building blocks. Now, you put those two things together and you have the ability to work with user interfaces by building on, I'm sorry, to work with applications that have user interfaces, and you have the functionality that's in the richer enterprise applications, and now we have the technology to say let's program by example on essentially a concrete use case and a concrete workflow. And then you go back in and you progressively generalize it so it can handle more exception conditions and edge conditions. In other words, you start with... it's like you start with the concrete and you get progressively more abstract. >> Peter: You start with the work that the application performs. >> Yeah. >> And not knowledge of the application itself. >> Yes. 
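A minimal sketch of the progressive web app behavior Jim describes: a service worker that pre-caches an application shell and answers requests cache-first, so the app keeps responding under intermittent or narrowband connectivity. The cache name and asset paths are placeholders, and the TypeScript typing follows one common convention for service worker files.

```typescript
// sw.ts: a minimal service worker sketch for a progressive web app.
// Cache name and asset paths below are placeholders, not a real project.
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;
export {};

const CACHE = "app-shell-v1";
const SHELL = ["/", "/index.html", "/app.js", "/styles.css"];

// Pre-cache the application shell at install time.
self.addEventListener("install", (event: ExtendableEvent) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
});

// Serve cached content first, falling back to the network, so the app
// keeps feeling local even over a slow or flaky connection.
self.addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

The page would register it with something like navigator.serviceWorker.register('/sw.js') once the TypeScript is compiled; that registration call is standard browser API, but the file layout here is assumed.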
But the key thing is, as you said, recombining assets, because we're sort of marrying the best of the EAI world with the best of the visual client-side development world. Where, as Jim points out, machine learning is making it easier for the tools to stay up to date as the user interfaces change across releases. This means that, I wouldn't say this is as easy as spreadsheet development, it's just not. >> It's not like building spreadsheet macros but it's more along those lines. >> Yeah, but it's not as low-level as just building raw JavaScript because, and this is Jim's great example of JavaScript client-side frameworks, look at our Gmail inbox application that millions of people use. That just downloads a new version whenever they want to drop it, and they're just shipping JavaScript over to us. But the key thing, and this is, Peter, your point about digital business: by combining user interfaces, we can bridge applications that were silos, then we can automate the work the humans were doing to bridge those silos, and then we can reconstitute workflows in much more efficient-- >> Around the digital assets, which is kind of how business ultimately evolves. And that's a crucial element of this whole thing. So let's change direction a little bit, because we're talking about, as Jim said, we've been talking about the fact that there are all these frameworks out there. There may be some consolidation on the horizon, we're researching that right now. Although there's not a lot of evidence that it's happening, there clearly is an enormous number of digital assets that are in place inside these web-based applications, whether it be relative to mobile or something else. And we want to find derivative use of, or we want to create derivative use out of them, and there's some new tools that allow us to do that in a relatively simple, straightforward way, like RPA, and there are certainly others. But that's not where this ends up. We know that this is increasingly going to be a target for AI, what we've been calling augmented programming, and the ability to use machine learning and related types of technologies to be able to reveal, make transparent, gain visibility into, patterns within applications and within the use of data, and then have that become a crucial feature of the development process. And increasingly even potentially to start actually creating code automatically based on very clear guidance about what work needs to be performed. Jim, what's happening in that world right now? >> Oh, let's see. So basically, I think what's going to happen over time is that more of the development cycle for web applications will incorporate not just the derivative assets, the AI being able to decompose existing UI elements and recombine them, enable flexible and automated recombination in various ways, but also will enable greater tuning of the UI in an automated fashion through A/B testing that's in line with the development cycle, based on metrics that AI is able to sift through. In terms of... different UI designs can be put out into production applications in real time and then really tested with different categories of users, and then the best suited or best fit design, based on things like reducing user abandonment rates and speeding up access to commonly required capabilities and so forth. The metrics can be rolled in line into the automation process to automatically select the best fit UI design that had been developed through automated means. 
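A rough sketch of the metric-driven selection Jim describes: candidate UI designs are served in production, the outcome metric is fed back, and the best-performing variant gets most of the traffic. This is a generic epsilon-greedy selector, not any vendor's experimentation framework; the variant names and the notion of "success" are made up for the example.

```typescript
// Epsilon-greedy selection over candidate UI designs.
// Variant ids and the success metric are illustrative placeholders.

interface VariantStats { shown: number; succeeded: number; }

class UiVariantSelector {
  private stats = new Map<string, VariantStats>();

  constructor(variants: string[], private epsilon = 0.1) {
    variants.forEach((v) => this.stats.set(v, { shown: 0, succeeded: 0 }));
  }

  // Mostly serve the best-performing design; occasionally explore others.
  choose(): string {
    const ids = [...this.stats.keys()];
    if (Math.random() < this.epsilon) {
      return ids[Math.floor(Math.random() * ids.length)];
    }
    return ids.reduce((best, id) => (this.rate(id) > this.rate(best) ? id : best));
  }

  // Feed production metrics back in (e.g. "did the user complete the task?").
  record(id: string, succeeded: boolean): void {
    const s = this.stats.get(id);
    if (!s) return;
    s.shown += 1;
    if (succeeded) s.succeeded += 1;
  }

  private rate(id: string): number {
    const s = this.stats.get(id)!;
    return s.shown === 0 ? 0 : s.succeeded / s.shown;
  }
}

// Usage: serve a design, observe the outcome, record it.
const selector = new UiVariantSelector(["design-a", "design-b", "design-c"]);
const served = selector.choose();
selector.record(served, /* task completed? */ true);
```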
In other words, this real-world experimentation of the UI has been going on for quite some time in many enterprises, and often, increasingly, it involves data scientists who are managing the predictive models to sort of very much drive the whole promotion process of promoting the best fit design to production status. I think this will accelerate. We'll take more of these in-line metrics on UI and then bring them, I believe, into more RPA-style environments, so the rest of us building out these front ends are automating more of our transactions, and many more of the UIs can take advantage of the fact that we'll let the infrastructure choose the best fit of the designs for us without us having to worry about doing A/B testing and all that stuff. The cloud will handle it. >> So it's a big vision. This notion of it, even eventually through more concrete, standard, well understood processes, to apply some of these AI and ML technologies to being able to choose options for the developer and even automate some elements of those options based on policy and rules. Neil Raden, again, we've been looking at similar types of things for years. How's that worked in the past, and let's talk a bit about what needs to happen now to make sure that if it's going to work, it's going to work this time. >> Well, it really hasn't worked very well. And the reason it hasn't worked very well is because no one has figured out a representational framework to really capture all the important information about these objects. It's just too hard to find them. Everybody knows that when you develop software, 80% of it is grunt work. It's just junk. You know, it's taking out the trash and it's setting things up and whatever. And the real creative stuff is a very small part of it. So if you could alleviate the developer from having to do all that junk by just picking up pieces of code that have already been written and tested, that would be big. But the idea of this has been overwhelmed by the scale and the complexity. And people have tried to create libraries like JavaBeans and object-oriented programming and that sort of thing. They've tried to create catalogs of these things. They've used relational databases, doesn't work. My feeling, and I hate to use the word because it always puts people to sleep, is some kind of ontology that's deep enough and rich enough to really do this. >> Oh, hold on Neil, I'm feeling... (laughs) >> Yeah. Well, I mean, what good is it, I mean go to Git, right. You can find a thousand things but you don't know which one is really going to work for you because it's not rich enough, it doesn't have enough information. It needs to have quality metrics. It needs to have reviews by people who have used it, and whatever. So that's where I think we run into trouble. >> Yeah, I know. >> As far as robots, yeah? >> Go ahead. >> As far as robots writing code, you're going to have the same problem. >> No, well here's where I think it's different this time and I want to throw it out to you guys and see if it's accurate and we'll get to the action items. Here's where I think it's different. In the past, partly perhaps because it's where developers were most fascinated, we try to create object-oriented database and object oriented representations of data and object oriented, using object oriented models as a way of thinking about it. And object oriented code and object oriented this and and a lot of it was relatively low in the stack. 
And we tried to create everything from scratch, and it turned out that whenever we did that, it was almost like CASE from many years ago. You create it in the tool and then you maintain it out of the tool and you lose all organization of how it worked. What we're talking about here, and the reason why I think this is different, I think Neil is absolutely right. It's because we're focusing our attention on the assets within an application that create the actual business value. What does the application do? And try to encapsulate those actions and render those as things that are reusable without necessarily doing an enormous amount of work on the back-end. Now, we have to be worried about the back-end. It's not going to do any good to do a whole bunch of RPA or related types of stuff on the front-end that kicks off an enormous number of transactions that goes after a little server that's 15 years old, that's historically only handled a few transactions a minute. So we have to be very careful about how we do this. But nonetheless, by focusing more attention on what is generating value in the business, namely the actions that the application delivers as opposed to knowledge of the application itself, namely how it does it, then I think that we're constraining the problem pretty dramatically, subject to the realities of what it means to actually be able to maintain and scale applications that may be asked to do more work. What do you guys think about that? >> Now Peter, let me say one more thing about this, about robots. I think you're all a lot more sanguine about AI and robots doing these kinds of things. I'm not. Let me read to you three pickup lines that a deep neural network developed after being trained to do pickup lines. You must be a tringle? 'Cause you're the only thing here. Hey baby, you're to be a key? Because I can bear your toot? Now, what kind of code would-- >> Well look, the problem is, look, we go back 50 years to ELIZA and the whole notion of, whatever it was, the interactive psychology. Look, let's be honest about this. Neil, you're making a great point. I don't know that any of us are more or less sanguine, and that probably is a good topic for a future action item: what are the practical limits of AI and how that's going to change over time. But let's be relatively simple here. The good news about applying AI inside IT problems is that you're starting with engineered systems, with engineered data forms, and engineered data types, and you're working with engineers, and a lot of that stuff is relatively well structured. Certainly more structured than the outside world, and it starts with digital assets. That's why AI for IT operations management is more likely. That's why AI for application programming is more likely to work, as opposed to AI to do pickup lines, which, as you said, semantically is all over the place. There are very, very few people that are going to conform to a set of conventions for... Well, I want to move away from the concept of pickup lines and set conventions for other social interactions that are very, very complex. We don't look at a face and get excited or not in a way that corresponds to an obvious well-understood semantic problem. >> Exactly, the value that these applications deliver is in their engagement with the real world of experience, and that's not the, you can't encode the real world of human lived experience in a crisp, clear way. 
It simply has to be proven out in the applications or engagement through people or not through people, with the real world outcome and then some outcomes like the ones that Neil read off there, in terms of those ridiculous pickup lines. Most of those kinds of automated solutions won't make a freaking bit of sense because you need humans with their brains. >> Yeah, you need human engagement. So coming back to this key point, the constraint that we're putting on this right now and the reason why certainly, perhaps I'm a little bit more ebullient than you might be Neil. But I want to be careful about this because I also have some pretty strong feelings about where what the limits of AI are, regardless of what Elon Musk says. That at the end of the day, we're talking about digital objects, not real objects, that are engineered, not, haven't evolved over a few billion years, to deliver certain outputs and data that's been tested and relatively well verified. As opposed to have an unlimited, at least from human experience standpoint, potential set of outcomes. So in that small world and certainly the infrastructure universe is part of that and what we're saying is increasingly the application development universe is going to be part of that as part of the digital business transformation. I think it's fair to say that we're going to start seeing AI machine learning and some of these other things being applied to that realm with some degree of success. But, something to watch for. All right, so let's do action item. David Floyer, why don't we start with you. Action item. >> In addressing this, I think that the keys in terms of business focus is first of all mobiles, you have to design things for mobile. So any use of any particular platform or particular set of tools has to lead to mobile being first. And the mobiles are changing rapidly with the amount of data that's being generated on the mobile itself, around the mobile. So that's the first point I would make from a business perspective. And the second is that from a business perspective, one of the key things is that you can reduce cost. Automation must be a key element of this and therefore designing things that will take out tasks and remove tasks, make things more efficient, is going to be an incredibly important part of this. >> And reduce errors. >> And reduce errors, absolutely. Probably most important is reduce errors. Is to take those out of the of the chain and where you can speed things up by removing human intervention and human tasks and raising what humans are doing to a higher level. >> Other things. George Gilbert, action item. >> Okay, so. Really quickly on David's point that we have many more application forms and expressions that we have to present like mobile first. And going back to using RPA as an example. The UiPath product that we've been working with, the core of its capability is to be able to identify specific UI elements in a very complex presentation, whether it's on a web browser or whether it's on a native app on your desktop or whether it's mobile. I don't know how complete they are on mobile because I'm not sure if they did that first but that core capability to identify in a complex, essentially collection and hierarchy of UI elements, that's what makes it powerful. Now on the AI part, I don't think it's as easy as pointing it at one app and then another and say go make them talk. 
It's more like helping you on the parts where they might be a little ambiguous, like if pieces move around from release to release, things like that. So my action item is to say start prototyping with the RPA tools, because they're probably robust enough to start integrating your enterprise apps. And the only big new wrinkle that's come out in the last several weeks that is now in everyone's consciousness is the MuleSoft acquisition by Salesforce, because that's going back to the EAI model. And we will see more app-to-app integration at the cloud level that's now possible. >> Neil Raden, action item. >> Well, you know, Mark Twain said there's only two kinds of people in the world: the kind who think there are only two kinds of people in the world and the ones who know better. I'm going to deviate from that a little and say that there's really two kinds of software developers in the world. There are the true computer scientists who want to write great code. It's elegant, it's maintainable, it adheres to all the rules, it's creative. And then there's an army of people who are just trying to get something done. So the boss comes to you and says we've got to get a new website up apologizing for selling the data of 50 million of our customers and you need to do it in three days. Now, those are the kind of people who need access to things that can be reused. And I think there's a huge market for that, as well as all these other software development robots, so to speak. >> Jim Kobielus, action item. >> Yeah, for simplifying web application development, I think that developers need to distinguish between back-end and front-end frameworks. There's a lot of convergence around the back-end framework, specifically Node.js. So you can basically decouple the decision in terms of front-end frameworks from that, and you need to, right up front, make sure that you have a back-end that supports many front ends, because there are many front ends in the world. Secondly, the front ends themselves seem to be moving towards React and Angular and Vue as being the predominant ones. You'll find more programmers who are familiar with those. And then thirdly, as you move towards consolidation onto fewer frameworks on the front-end, move towards low-code tools that allow you, just with the push of a button, you know, visual development, to deploy the built-out UI to a full range of mobile devices and web applications. And to close my action item... I'll second what David said. Move toward a mobile first development approach for web applications with a focus on progressive web applications that can run on mobiles and others, where they give a mobile experience, with intermittent connectivity, with push notifications, with a real-time, in-memory, fast experience. Move towards a mobile first development paradigm for all of your browser-facing applications, and that really is the simplification strategy you can and should pursue right now on the development side, because web apps are so important, you need a strategy. >> Yeah, so mobile irrespective of the... irrespective of the underlying biology or what have you of the user. All right, so here's our action item. Our view on digital business is that a digital business uses data differently than a normal business. And a digital business transformation ultimately is about how do we increase our visibility into our data assets and find new ways of creating new types of value so that we can better compete in markets. 
Now, that includes data, but it also includes application elements, which also are data. And we think increasingly enterprises must take a more planful and purposeful approach to identifying new ways of deriving additional streams of value out of application assets, especially web application assets. Now, this is a dream that's been put forward for a number of years, and sometimes it's worked better than others. But in today's world we see a number of technologies emerging that are likely, at least in this more constrained world, to present a significant new set of avenues for creating new types of digital value. Specifically tools like RPA, robotic process automation, that are looking at the outcomes of an application and allow programmers to use a by-example approach to start identifying what are the UI elements, what those UI elements do, how they could be combined, so that they can be composed into new things and thereby provide a new application approach, a new application integration approach, which is not at the data and not at the code but more at the work that a human being would naturally do. These allow for greater scale and greater automation and a number of other benefits. The reality though is that you also have to be very cognizant as you do this, even though you can find these assets, find a new derivative form, and apply them very quickly to new potential business opportunities, that you have to know what's happening at the back-end as well. Whether it's how you go about creating the assets, with some of the front-end tooling, and being very cognizant of which front ends are going to be better or worse at creating these more reusable assets. Or whether you're still talking about relatively mundane things, like how a database serializes access to data and will fall over because you've created an automated front-end that's just throwing a lot of transactions at it. The reality is there's always going to be complexity. We're not going to see all the problems being solved, but some of the new tools allow us to focus more attention on where the real business value is created by apps, find ways to reuse that, and apply it, and bring it into a digital business transformation approach. All right. Once again. George Gilbert, David Floyer, here in the studio. Neil Raden, Jim Kobielus, remote. You've been watching Wikibon Action Item. Until next time, thanks for joining us. (electronic music)
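George's action item leans on the RPA capability discussed above: finding a UI element by stable attributes rather than by its position on screen, so an automation survives layout changes between releases. The sketch below is a generic TypeScript illustration of that idea, not UiPath's API; the element tree and attribute names are invented.

```typescript
// Generic illustration of selector-based UI element matching: find an
// element by stable attributes (role, label, automation id) rather than
// by screen coordinates, so the automation survives layout changes.
// The element tree and attribute names are invented for the example.

interface UiElement {
  attrs: Record<string, string>;
  children: UiElement[];
}

type Selector = Record<string, string>;

// Depth-first search for the first element whose attributes contain
// every key/value pair in the selector.
function findElement(root: UiElement, selector: Selector): UiElement | null {
  const matches = Object.entries(selector).every(
    ([k, v]) => root.attrs[k] === v
  );
  if (matches) return root;
  for (const child of root.children) {
    const hit = findElement(child, selector);
    if (hit) return hit;
  }
  return null;
}

// A toy screen: even if the button moves within the tree between releases,
// the selector below still finds it.
const screen: UiElement = {
  attrs: { role: "window", title: "Invoices" },
  children: [
    { attrs: { role: "panel" }, children: [
      { attrs: { role: "button", label: "Submit", automationId: "btn-submit" }, children: [] },
    ]},
  ],
};

const submit = findElement(screen, { role: "button", automationId: "btn-submit" });
console.log(submit?.attrs.label); // "Submit"
```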

Published Date : Mar 30 2018


Dr. Angel Diaz, IBM - IBM Interconnect 2017 - #ibminterconnect - #theCUBE


 

>> Announcer: Live from Las Vegas, it's theCUBE, covering Interconnect 2017. Brought to you by IBM. >> Hey, welcome back everyone. We're live here in Las Vegas at the Mandalay Bay for IBM InterConnect 2017 exclusive Cube coverage. I'm John Furrier, my co-host Dave Vellante, our next guest Dr. Angel Diaz who is the vice president of developer technology. Also you know him from the open source world. Great to see you again. >> Nice to see you. Thanks for spending time with us. >> Thank you. >> Boy, Blockchain, open source, booming, cloud-native, booming, hybrid cloud, brute force but rolling strong. Enterprise strong, if you will, as your CEO Ginni Rometty started talking about yesterday. Give us the update on what's going on with the technology and developers because this is something that you guys, you personally, have been spending a lot of time with. Developer traction, what's the update? >> Well you know if you look at history there's been this democratization of technology. Right, everything from object oriented programming to the internet where we realize if we created open communities you can build more skill, more value, create more innovation. And each one of these layers you create abstractions. You reduce the concept count of what developers need to know to get work done and it's all about getting work done faster. So, you know, we've been systematically around cloud, data, and AI, working really hard to make sure that you have open source communities to support those. Whether it's in things like compute, storage, and network, platform as a service like say Cloud Foundry, what we're doing around the open container initiatives and the Cloud Native Computing Foundation to all the things you see in the data space and everywhere else. So it's real exciting and it's real important for developers. >> So two hot trends that we're tracking obviously, one's pretty obvious. That's machine learning in cloud. Really hand and glove together. You see machine learning really powering the AI, hitting IOT all the way up to apps and wearables and what not, autonomous vehicles. Goes on and on. The other one is Kubernetes, and Kubernetes, the rise of Kubernetes has really brought the containers to a whole nother level around multi-cloud. People might not know it, but you are involved in the CNCF formation, which is Kubernetes movement, which was KubeCon, then it became part of the Linux Foundation. So, IBM has had their hand in these two trends pretty heavily. >> Angel: Oh yeah, absolutely. >> Give the perspective, because the Kubernetes one, in particular, we'll come back to the machine learning, but Kubernetes is powering a whole nother abstraction layer around helping containers go to the next level with microservices, where the develop equation has changed. It's not just the person writing code anymore, a person writing code throws off an application that has it's own life in relationship to other services in the community, which also has analytics tied to it. So, you're seeing a changing dynamic on this potential with Kubernetes. How important is Kubernetes, and what is the real impact? >> No, it is important. And what there actually is, there's a couple of, I think, application or architecture trends that are fundamentally changing how we build applications. So one of them I'll call, let's call it Code First. This is where you don't even think about the Kubernetes layer. All you do is you want to write your code and you want to deploy your code, and you want it to run. 
That's kind of the platform. Something like Cloud Foundry addresses the Code First approach. Then there's the whole event-driven architecture world. Serverless, right? Where it has a particular use case, event-driven, standing stuff up and down, dealing with many types of inputs, running rules. Then you have, let's say, the more transactional type applications. Microservices, right? These three things, when combined, allow you to kind of break the shackles of the monolith of old application architectures, and build things the way that best suits your application model, and then come together in a much more coherent way. Specifically in Kubernetes, and that whole container stuff. You think about it: containers have been around a long time, as we all know, and Docker did a great job in making containers accessible and easy, right? And we worked really closely with them to create some multisource activities around the base container definitions, the open container initiative in the Linux Foundation. But of course, that wasn't enough. We need to then start to build the management and the orchestration around that. So we teamed up with others and started to kind of build this Kubernetes-based community. You know, Docker just recently brought ContainerD into the CNCF as well, as another layer. They are within the equation. But by building this, it's almost a Russian doll of capability, right, you know, you're able to go from one paradigm, whether it's a serverless paradigm running containers, or having your microservices be used in serverless, or having Code First kick off something, you can have these things work well together. And I think that's the most exciting part of what we're doing in Kubernetes, what we're doing in serverless, and what we're doing, say, in this Code First world. >> So, development's always been kind of an art form. How is that art form evolving and changing as these trends that you're describing-- >> Oh, that's a great, I love that. 'Cause I always think of ourselves as computer science artists. You and I haven't spoken about that. That's awesome. Yeah, because, you know, it is an art form, right? Your screen is your canvas, right, and colors are the services that you can bring in to build, and the API calls, right? And what's great is that your canvas never ends, because you have, say, a cloud infrastructure, which is infinitely scalable or something, right? So, yeah. But the definition of the developer is changing because we're kind of in this next phase of lowering concept count. Remember I told you about this lowering of concept count. You know, I love those O'Reilly books. The little cute animals. You know, as a developer today, you don't have to buy as many of those books, because a lot of it is done in the API calls that you've used. You don't write sorting algorithms anymore. Guess what, you don't need to do speech to text algorithms. You don't need to do some analysis algorithms. So the developer is becoming a cognitive developer and a data science developer, in addition to an application developer. And that is the future. And it's really important that folks skill up. Because the demand has increased dramatically in those areas, and the need has increased as well. So it's very exciting. 
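The three styles Diaz describes (Code First, event-driven or serverless, and microservices) can wrap the same business logic; what differs is how that logic is invoked. A rough TypeScript sketch, assuming a Node.js runtime; the event shape and the conversion example are invented, and the handler signature is generic rather than tied to any serverless platform.

```typescript
// One piece of business logic, exposed two ways: as an event-driven
// (serverless-style) handler and as a small HTTP microservice.
// Assumes a Node.js runtime; the handler signature is generic, not tied
// to any particular serverless platform.
import { createServer } from "node:http";

// The business logic itself knows nothing about how it is invoked.
function convert(amount: number, rate: number): number {
  return Math.round(amount * rate * 100) / 100;
}

// Event-driven style: a stateless function, stood up and torn down per event.
export function onConversionEvent(event: { amount: number; rate: number }) {
  return { converted: convert(event.amount, event.rate) };
}

// Microservice style: the same logic behind a long-running HTTP endpoint.
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const amount = Number(url.searchParams.get("amount") ?? 0);
  const rate = Number(url.searchParams.get("rate") ?? 1);
  res.setHeader("content-type", "application/json");
  res.end(JSON.stringify({ converted: convert(amount, rate) }));
}).listen(8080);
```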
>> So the thing about that, that point about cognitive developer, is that in the API calls, and the reason why we don't buy all those books, is that the code's already out there in open source, and machine learning is a great example, if you look at what machine learning is doing. 'Cause now you have machine learning. It used to be an art and a science. You had to be a great computer scientist and understand algorithms, and almost have that artistic view. But now, as more and more machine learning comes out, you can still write custom machine learning, but still build on libraries that are already out there. >> Exactly. So what does that do? That reduces the time it takes to get something done. And it increases the quality of what you're building, right? Because, you know, this subroutine or this library has been used by thousands and thousands of other people, it's probably going to work pretty well for your use case, right? But I can't stress enough the importance of this moment you brought up. The cognitive data application developer coming together. You know, when the Web happened, the development market blew up in orders of magnitude. Because everybody is sort of learning HTML, CSS, JavaScript, you know, J2EE, whatever. All the things they needed to build, you know, Web UIs and transactional applications. Two phase commit apps in the back, right? Here we are again, and it's starting to explode with the microservices, et cetera, all the stuff you mentioned, but when you add cognitive and data to the equation, it's just going to be a bigger explosion than the Web days. >> So we were talking with some of the guys from IBM's GBS, the Global Business Services, and the GTS, Global Technology Services, and interesting things coming out. So if you take what you're saying forward, in an open innovation model, you've got business model stacks and technology stacks. So process, stacks, you know, business process, and then technology, and they now have to go hand-in-hand. So if you take what you're saying about, you know, open source, open all of this innovation, and add, say, Blockchain to it, you now have another developer type. So the cognitive piece is also contributing to what looks to be a home run with Blockchain going open source, with the ledger. So now you have the process and the stacks coming together. So now, it's almost the Holy Grail. It used to be this: "Hey, those business process guys, they did stuff, and then the guys coded it out, built stacks." Now they're interdependent a bit. >> Yeah. Well I mean, what's interesting to me about Blockchain, I always think about this point about business processes: you know, business processes have always been hard to change, right? You know, once you have partners in your ecosystem, it's hard to change. Things like APIs and all the technology allows it to be much quicker now. But with Blockchain, you don't need a human involved in the decision of who's in your partner network as long as they're trusted, right? I remember when Jerry Cuomo and Chris Ferris, on my team, he's the chairman of the Blockchain, of the Hyperledger group, were talking initially when we kind of brought it to the Linux Foundation. We were talking a lot about transactions, because you know, that was one of the initial use cases. But we always kind of knew that there's a lot of other use cases for this, right, in addition to that. I mean, you know, the government of China is using Blockchain to deal with carbon emissions. 
And they have, essentially, an economy where folks can trade, essentially, carbon units to make sure that as an industry segment, they don't go over, as an example. So you can have people coming in and out of your business process in a much more fluid way. What fascinates me about Blockchain, and it's a great point, is it takes the whole ecosystem to another level, because now that they've made Blockchain successful, the ecosystem component's huge. That's a community model, that's just like open source. So now you've got the confluence of open source software, now with people not just writing software, and now microservices that interact with other microservices. Not agile within a company, agile with other developers. >> Angel: Right. >> So you have a data piece that ties that together, but you also have the process and potential business model disruption, with Blockchain. So those two things are interesting to me. But it's a community role. In your expert opinion on the community piece, how do you think the community will evolve to this new dynamic? Do you think it's going to take the same straight-line growth of open source, or do you think there's going to be a different twist to it? You mentioned this new persona is already developing with cognitive. How do you see that happening? >> Yes, I do. There's two, let's say three points. The first, circling back to the community: what we've been trying to do, architecturally, is build an open innovation platform. So all these elements that make up cloud, data, AI, are open so that people can innovate, skills can grow, everything can grow faster. So the communities are actually working together. So you see lots of interlocks and subcommittees and subgroups within teams, right? Just this kind of nesting of technology. So I think that's one megatrend that will continue-- >> Integrated communities, basically. >> Integrated communities. They do their own thing. >> Yeah. >> But to your point earlier, they don't reinvent the wheel. If I'm in Cloud Foundry and I need a container model, why am I going to create my own? I'll just use the open container initiative container model, you know what I'm saying? >> Dave: And the integration point is that collaboration-- >> Is that collaboration, right. And so we've started to see this a lot, and I think that's the next megatrend. The second is, we just look at developers. In all this conversation, we've been talking about the what? All the technology. But the most important thing, even more so than all of this stuff, is the how. How do I actually use the technology? What is the development methodology of how I add scale, build these applications? People call that DevOps, you know, that whole area. We at IBM announced about a year and a half ago, at Gene Kim's summit, he does DevOps, the garage method, and we open sourced it, which is a methodology of how you apply Agile and all the stuff we've learned in open source, to actually using this technology in a productive way at scale. Oftentimes people talk about working in these little squads and so forth, but once you hire, say you've got 10 people in San Francisco, and you're going to hire one in San Ramon, that person might as well be on Mars. Because if you're not on the team there, you're not in the decision process. Well, that's not reality. Open source is not that way, the world doesn't behave that way. So this is the methodology that we talked about. The how is really important. 
And then the third thing is, if you can help developers, interlock communities, and teach them about the how, so they can do this effectively, then they want samples to fork and go. Technology journeys, physical code. So what you start to see a lot of us in open source, and even IBM, do is provide starters that show people how to use the technology, add the methodology, and then help them on their journey to get value. >> So at the base level, there's a whole new set of skills that are emerging. You mentioned the O'Reilly books before, it was sort of a sequential learning process, and it seems very nonlinear now, so what do you recommend for people, how do they go about capturing knowledge, where do they start? >> I think there's probably two or three places. The first one is directly in the open source communities. You go to any open source community and there's a plethora of information, but more so, if you hang out in the right places, you know, IRC channels or wherever, people are more than willing to help you. So you can get education for free if you participate and contribute and become a good member of a community. And, in fact, from a career perspective today, that's what developers want. They want that feeling of being part of something. They want the merit badge that you get for being a core committer, the pride that comes with that. And frankly, the marketability of yourself as a developer, so that's probably the first place. The second is, look, at IBM, we spend a huge amount of time trying to help developers be productive, especially in open source, as we started this conversation. So we have a place, developer.ibm.com. You go there and you can get links to all the relevant open source communities in this open innovation platform that I've talked about. You can see the methodologies that I spoke about, which are open. And then you could also get these starter code journeys that I spoke about, to help you get started. So that's one place-- >> That's coming out in April, right? >> That's right. >> The journeys. >> Yeah, but you can go now and start looking at that, at developer.ibm.com, and not all of it is IBM content. This is not IBM propaganda here, right? It is-- >> John: Real world examples. >> Real world examples, it's real open source communities that either we've helped, we've shepherded along. And it is a great place, at least, to get your head around the space and then you can subset it, right? >> Yeah. So tell us, in the last couple of minutes we have, what IBM's doing right now from a technology standpoint, and for developers, what are you guys doing to help developers today, give us the message of what IBM's doing. What are you guys doing? What's your North Star? What's the vision and some of the things you're doing in the marketplace people can get involved in? You mentioned the garage as one. I'm sure there's others. >> Yeah, I mean look, we are maniacally focused on helping developers get value, get stuff done. That's what they want to do, that's what our clients want to do, and that's what turns us on. You build your art, going back to art, you build your drawing, you want to look at it. You want it to be beautiful. You want others to admire it, right? So if we could help you do that, you'll be better for it, and we will be better for it. >> As long as they don't eat their ear, then they're going to be fine. >> It's subjective, but give value to what they do. So how do they give value? 
They give value by open technologies and how we've built, essentially, cloud, data, AI, right? So art, arts technology adds value. We get value out of the methodology. We help them do this, it's around DevOps, tooling around it, and then these starters, these on-ramps, right, to getting started. >> I got to ask you my final question, a more personal one, and Dave and I talk about this all the time off camera, being an older guy, computer science guy, you're seeing stuff now that was once a major barrier, whether it's getting access to massive compute, machine learning, libraries, the composability of the building blocks that are out there, to create art, if you will, it's phenomenal. To me, it's just like the most amazing time to be be a computer scientist, or in tech, in general, building stuff. So I'm going to ask you, what are you jazzed up about? Looking back, in today's world, the young guns that are coming onto the scene not knowing that we walked barefoot in the snow to school, back in the old days. This is like, it's a pretty awesome environment right now. Give us personal color on your take on that, the change and the opportunity. >> Yeah, so first of all, when you mentioned older guys, you were referring to yourselves, right? Because this is my first year at IBM. I just graduated, there's nothing old here, guys. >> John: You could still go to, come on (laughs). >> What does that mean? Look you know, there's two things I'm going to say. Two sides of the equation. First of all, the fundamentals of computer science never go away. I still teach, undergrad seminars and so forth, and you have to know the fundamentals of computer science. That does not go away because you can write bad code. No matter what you're doing or how many abstractions you have, there are fundamental principles you need to understand. And that guides you in building better art, okay? Now putting that aside, there is less that you need to know all the time, to get your job done. And what excites me the most, so back when we worked on the Web in the early 90s, and the markup languages, right, and I see some in the audience there, Arno, hey, Arno, who helped author some of the original Web standards with me, and he was with the W3C. The use cases for math, for the Web, was to disseminate physics, that's why Tim did it, right? The use case for XML. I was co-chair of the mathematical markup language. That was a use case for XML. We had no idea that we would be using these same protocols, to power all the apps on your phone. I could not imagine that, okay? If I would have, trust me, I would have done something. We didn't know. So what excites me the most is not being able to imagine what people will be able to create. Because we are so much more advanced than we were there, in terms of levels of abstraction. That's what's, that's the exciting part. >> All right. Dr. Angel Diaz, great to have you on theCUBE. Great inspiration. Great time to be a developer. Great time to be building stuff. IOT, we didn't even get to IOT, I mean, the prospects of what's happening in industrialization, I mean, just pretty amazing. Augmented intelligence, artificial intelligence, machine learning, really the perfect storm for innovation. Obviously, all in the open. >> Angel: Yes. Awesome stuff. Thanks for coming on the theCUBE. Really appreciate it. >> Thank you guys, appreciate it. >> IBM, making it happen with developers. Always have been. Big open source proponents. And now they got the tools, they got the garages for building. 
I'm John Furrier, stay with us, there's some great interviews. Be right back with more after this short break. (tech music)

Published Date : Mar 22 2017


Don Tapscott | IBM Interconnect 2017


 

>> Narrator: Live from Las Vegas, it's the Cube. Covering Interconnect 2017. Brought to you by IBM.

>> OK, welcome back everyone. We're here live in Las Vegas. I'm wearing the Blockchain Revolution hat right here. Of course, I'm John Furrier with the Cube, and my co-host Dave Vellante, and we're excited to have celebrity author, thought leader, futurist, and fill in the blank on the title, Don Tapscott, who's the author of the Blockchain Revolution. Legend in the industry, thought leader, you and your son wrote a compelling new book, but you've been on the fringe of all the game-changing technologies going back to social media, we've been following your work, it's been great. Now we're at the front range of Blockchain, OK? Now it's becoming pretty clear to some of the innovators like IBM and others that it's not about Bitcoin alone, it's about the Blockchain Revolution, the Blockchain itself. Welcome to the Cube, and what's going on? What is Blockchain? (laughing)

>> Well, it's great to be here. The one thing you didn't mention is I play keyboards in a rock band. So.

>> The most interesting man on the Cube right now.

>> We used to do a concert every year whether our public demanded it or not, but no, we're a charity event. We've raised a few million dollars for good causes. Anyway. I think, along with my son Alex, we figured this out a couple of years ago, that this is the second era of the internet. For the first few decades, we've had the internet of information. And if I send you some information, a PDF, a PowerPoint, an e-mail, even with the website, I keep the original. I'm sending you a copy. That doesn't work so great for assets. Like money, stocks, bonds. Identities, votes. Music, art. Loyalty points. If I send you $100, it's really important I don't still have the money, and I can't send it to you. So this has been called the double-spend problem by cryptographers for a long time. And Blockchain solves this problem. We've had the internet of information, now we're getting the internet of value. Where anything of value, from money to votes to music, can be exchanged peer to peer. And where we can transact, keep records, and trust each other without powerful intermediaries. Now, that doesn't mean intermediaries are going to go away, but they're going to have to embrace this technology or they will be toast.

>> I mean, this is clear, you see the distributed computing paradigm, I mean, we're all network guys by training, you can follow this revolution. But now, when you start thinking about trust and value, you talk about digitizing the world. So, if you go to digital transformation, that's the thesis, that we're in this digital transformation, you're digitizing money, you're digitizing transactions. Explain more on the value piece, because now, if everything's going digital, there needs to be a new model around how to handle the transactions at scale, and with security problems, hackers.

>> Yeah, OK. Well, that gets to a couple of really good points. First of all, what is digital? You know, you think, "Well, I tap my card at Starbucks and bits go through all these networks and different companies with different computer systems, and three days later a settlement occurs." But that's actually a bunch of messages. It's not money. Money, cash, is a bearer instrument. If you have cash in your pocket, you are the bearer of that instrument, which means that you own it. And what we're talking about is something very different here, of creating digital cash. That's stored on a global ledger.
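(The double-spend problem Tapscott describes above is easy to see in code. Below is a toy, append-only ledger in Python -- a sketch with invented names and amounts, not how Bitcoin or any production ledger is actually implemented -- where a transfer only goes through if the sender still holds the funds, so the same $100 cannot be sent twice.)

```python
class Ledger:
    """A toy, append-only shared ledger (illustrative only)."""

    def __init__(self, opening_balances):
        self.balances = dict(opening_balances)    # who currently holds what
        self.history = []                         # append-only transaction record

    def transfer(self, sender, receiver, amount):
        # The double-spend guard: you can only send money you still hold.
        if self.balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} cannot send {amount}: funds already spent")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.history.append((sender, receiver, amount))

ledger = Ledger({"don": 100, "john": 0})
ledger.transfer("don", "john", 100)               # fine: Don no longer holds the $100
try:
    ledger.transfer("don", "dave", 100)           # the same $100 cannot be spent twice
except ValueError as err:
    print(err)                                    # don cannot send 100: funds already spent
```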
So, rather than there being a three-day settlement period, there's no settlement period, because you're just making a change in the database. And this is a very revolutionary concept. And as for security, I mean, think about, I don't know, you're right, it's not about Bitcoin. But if we took the case of the Bitcoin Blockchain: if I wanted to hack that, I'd have to hack that 10-minute block that has all those transactions, which is linked to the previous block and the previous block. I'd have to hack the entire history of commerce on that Blockchain, not just on one computer, but simultaneously across millions of computers, all using the highest level of cryptography, while the most powerful computing resource in the world, the miners, are watching me to make sure I don't mess around. Now, I won't say it's impossible, just like I suppose it's not impossible to take a Chicken McNugget and turn it back into a chicken, but it's really hard to do. A lot. And so these systems are way more secure than our current systems.

>> Yes, it's fundamentally impossible, and you don't have a third-party verification system that's also an exposure area, it's globally distributed, right? So let's go back to: what is Blockchain? What's the Blockchain 101?

>> Well, Blockchain is a distributed ledger where anything of value, from money to votes to music, can be stored, transacted, and managed in a secure and confidential way, and where trust between parties is established not by a big intermediary, but by cryptography, by collaboration, and some clever code.

>> So, talk about the premise of the book. Sort of why you wrote it and what the fundamental premise is.

>> Well, three years ago, three years and five weeks ago, at a father-son ski trip, over a large piece of beef and a very nice bottle of wine, Alex and I started thinking about what all this means. And we decided to work together. And he wrote a very cogent paper about how this new ecosystem could govern itself, and my publisher got wind of it and said, "That sounds like a book." So we launched a dozen projects, a couple of years ago, on how this technology changes not just financial services, but how it changes the corporation and the deep structure and architecture of the firm. How it changes every industry. How it changes government. Democracy, there's an opportunity to end the crisis of legitimacy of our democratic institutions. But what it means for culture and so on. And then we wrote the book. It was published on May 10th last year, it's been a big best seller, it's the best-selling book on Blockchain. It's actually the only real book on Blockchain. In some countries it was ridiculous. For a while, in Canada, it was competing with Harry Potter and an adult coloring book as the best-selling book in the country.

>> That's the state of our culture right there. (laughing)

>> What is an adult coloring book, anyway? (laughing)

>> That's the million-dollar question right there.

>> There are a lot of geeky books on Blockchain, but this--

>> Well, actually, there aren't, there are books on cryptocurrency, on Bitcoin.

>> Yeah, absolutely.

>> But the only real book on Blockchain is Blockchain Revolution.

>> So, but you're really focusing on the business impact, organizational impact, even societal impact, so explain the premise.

>> Well, where do we start? Let's start with the firm. The corporation, the foundation of capitalism, is based on double-entry accounting. That's what enabled capitalism.
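(Tapscott's earlier point about hacking "that 10-minute block... linked to the previous block and the previous block" is also what would make a shared third entry trustworthy. Here is a minimal Python sketch of a hash-linked chain -- illustrative only, with none of the proof-of-work or network-wide consensus a real Blockchain adds -- showing that tampering with old history breaks every later link.)

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of the block's contents, including the previous hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a small chain: each block commits to the hash of the block before it.
chain = []
prev = "0" * 64                                   # placeholder "genesis" hash
for txs in [["don->john:100"], ["john->dave:40"], ["dave->alex:10"]]:
    block = make_block(txs, prev)
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:            # the link to the prior block is broken
            return False
        prev = block_hash(block)
    return True

print(verify(chain))                              # True: history is intact
chain[0]["transactions"] = ["don->john:1"]        # rewrite an old transaction...
print(verify(chain))                              # False: every later link now fails
```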
Well, with Blockchain, you get a third entry onto the ledger, so you have triple-entry accounting, so you don't need, say, audits every year, the annual audit. That's just the beginning. Because the reason that we have firms, according to the Nobel Prize-winning economist Ronald Coase, is that the transaction costs in an open market, like the cost of search, finding all the right people and information; the cost of contracting, where a contract for every little activity would be prohibitive; the cost of coordination, getting all these people who didn't know each other to work together; the cost of establishing trust, all of that in an open market is prohibitive, so we bring it inside the boundaries of a firm. Well, Blockchain will devastate those transaction costs. So we're talking about a fundamental change in how we orchestrate capability, in our economy, to innovate, to create goods and services. And, for that matter, to create public value. So this is not some interesting little technology. This is the second era of the internet. I think it's going to be bigger than the first era was.

>> So the internet, I mean, the value creation side. So let's take the digital asset side. So assume everything's digitized, you've got IoT out there, industrial IoT, wearables, smart cars, smart cities, smart everything, but now you've got to create value as a firm. So let's roll that forward: we have the now somewhat frictionless transactional environment in an open market, how do firms create value out of those digital assets?

>> Well, they'll create value in some ways that are radically different than today. So let me give you an example. Who are the big digital value disrupters today? Well, you can start with the so-called sharing economy. You know, Uber, Airbnb, Lyft.

>> The Cube.

>> Sorry?

>> The Cube. (laughing) We're disrupting the world right now.

>> Well, you're actually not a sharing economy company in the sense that I think.

>> In the traditional sense.

>> Actually, I don't think they are, either. I mean, the reason that Uber's successful is precisely because it doesn't share. It's a service aggregator. So, why do you need a $70,000,000,000 corporation to do what Uber does? It could be done by a distributed ledger with some smart contracts and autonomous agents. Everything that the corporation does could be done by software. Airbnb. You know, how about, we'll call it B-Airbnb, Blockchain Airbnb. So, you go onto your mobile device, and you're looking for a place, and you're going to be in Vegas, and all the hotels are booked because of IBM, and then you find a place, you book it, and then you show up, you turn your key, and that starts a smart contract payment to the owner of the apartment or the room, and when you check out, you turn your key, and it's closed. The software has a payment system built into it, so the renter of the room gets paid. You enter a five-star rating on your device, and that's immutable, it's a five-star rating on a Blockchain. Everything that Airbnb as a company does could actually be done by this software. So, Bob Dylan, there's something going on here and you don't know what it is. I mean, people are all locked in an old paradigm about what's disruption. Get ready for this.

>> So what's the impact, I mean, not the impact, what's the inhibitor? So, obviously, with any new technology you see all the naysayers, so obviously this is a great vision, what's going to be the impediment?
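(The "B-Airbnb" flow Tapscott just walked through -- check in and a smart contract payment opens, check out and it closes, and the rating lands on an immutable record -- can be sketched as plain Python standing in for a smart contract. The class and names below are invented for illustration; they are not any real Airbnb, Ethereum, or Hyperledger interface.)

```python
class RentalContract:
    """Plain Python standing in for a 'B-Airbnb' smart contract (illustrative only)."""

    def __init__(self, guest, host, nightly_rate):
        self.guest, self.host, self.rate = guest, host, nightly_rate
        self.escrow = 0
        self.ratings = []                  # append-only: entries are never edited

    def check_in(self, nights):
        # Turning the key "starts a smart contract payment": funds move into escrow.
        self.escrow = self.rate * nights

    def check_out(self, stars):
        # Turning the key again closes the contract: escrow pays out to the host
        # and the rating is recorded on the shared, immutable log.
        payout, self.escrow = self.escrow, 0
        self.ratings.append({"guest": self.guest, "stars": stars})
        return payout

stay = RentalContract(guest="john", host="dave", nightly_rate=120)
stay.check_in(nights=3)
print(stay.check_out(stars=5))         # 360 paid straight to the host
print(stay.ratings)                    # [{'guest': 'john', 'stars': 5}]
```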
>> Well, there are all kinds of impediments and inhibitors, and there are all kinds of ways that this can get messed up. A big one that we're overcoming now is that people think, well, this is about Bitcoin. Well, it's not. The real pony here is the underlying technology of Blockchain, and that's the biggest innovation in computer science in a generation, I think. But also, you know, I wrote this in 1992 in Paradigm Shift, I said, when you get a new paradigm, it's a new mental model, and these things cause dislocation and disruption and uncertainty, and they're nearly always received with coolness. I mean, you guys know what it's like to be received with coolness as you introduce a new idea, as do I, going back to the '70s. And vested interests fight against change. And leaders of old paradigms have great difficulties embracing the new. So you think about a company like Western Union that can charge 10% for remittances that take four to seven days. Well, with new tools, they don't take four to seven days, they take minutes, and they charge, based on Blockchain, a point and a half. So, it's the old--

>> The inhibitors, they got to get their solutions out there so that they could go after and eat some of the lunch of the older guys.

>> Well, they have to eat their own lunch, that's--

>> Western Union could be disrupted by a new entrant, right? So you got a new entrant coming in, they got to cannibalize themselves--

>> And at that point, it tips, there are enough disruptive entrants, right?

>> So, it's all those inhibitors to change. And for the IT people that are at this event, this is an exciting opportunity, but you do need to learn a new kind of knowledge base to function in this distributed ledger environment. You need to learn about Hyperledger, for starters, because that's the real enterprise platform.

>> All right, so folks watching, like my son who helps us out sometimes as well, you have a father-son relationship, which is super inspirational. Say he wants to get involved in Blockchain. He wants to jump right in, he's kind of a hacker type, what does he do? How does he get involved? Obviously read the book, Blockchain Revolution, get the big picture. Are there other things you'd advise?

>> Well, buying the book in massive volume is always a good first step, no? Seriously. Well, one thing I always say to people is that personal use is a precondition for any kind of comprehension. So just go get yourself a wallet for some cryptocurrency, download it, and you'll learn all about public key encryption and so on. But I think in a company there are a number of things that managers need to do. They need to start doing pilots, sandboxes, developing and understanding use cases, and our new Blockchain research institute is going to be a big help in that. But also, for an IT person, is your son an IT guy or is he more an entrepreneur?

>> No, he's 21 years old.

>> He's 21.

>> He doesn't know anything about IT.

>> He's a computer science guy.

>> He's born in the cloud. IT, can't spell IT.

>> Well. (laughing)

>> IT's for old guys like us. (laughing)

>> We're telling him what he should do, he should be here telling us what we should do.

>> John: That's why we hired him, he's a little guinea pig.

>> Digital natives, you know, we're digital immigrants, we had to learn the language. But, for the IT people, it's all about not just experimenting, but about moving towards operational systems and about architecture.
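(On the "go get yourself a wallet and you'll learn all about public key encryption" advice above: a wallet boils down to a key pair. The sketch below uses the third-party Python ecdsa package, chosen here only as a convenient stand-in, to show the signing and verification a wallet does under the hood -- sign a transaction with the private key, let anyone verify it with the public key, and any tampering makes the signature fail.)

```python
# Requires the third-party package:  pip install ecdsa
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# A "wallet" is essentially a key pair: the private key signs transactions,
# the public key lets anyone on the network check the signature.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

payment = b"don pays john 100"
signature = private_key.sign(payment)

print(public_key.verify(signature, payment))        # True: the payment checks out
try:
    public_key.verify(signature, b"don pays john 999")
except BadSignatureError:
    print("tampered payment rejected")               # altering the message breaks it
```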
Because our architectures are based on traditional computing environments. And this is something from Paradigm Shift, you remember, I interviewed Max Hopper, who invented the Sabre Reservation System for American Airlines, and he says, "The big problem, Don, is that if I don't have a target architecture, every time I spend a dollar, I'm building up my legacy and making it worse by investing in IT." And so that's where I came up with this formulation: yeah, God may have created the world in six days, but he didn't have an installed base to start with. (laughing) So, what we need to do is to start to think about architectures that embrace Blockchain. And this is an historic new opportunity for anybody who cares about IT.

>> Is the disruptive enabler for Blockchain the fact that we're now fully connected as a society, or is it something else that we don't see? What's your view on, what's the real wealth-creating disruptive enabler?

>> Well, you can sense that the rate of change is a lot faster for the second generation than the first. 1993, '94, when I wrote The Digital Economy, it was dial-up. Ebay.

>> 14.4.

>> Amazon didn't exist.

>> Actually, '98 I think it was.

>> When I wrote that book. Google was five years away. Facebook was 10 years away. But now we've got wireless, we've got IP everywhere. We've got mobility. We've got the cloud, we've got all the preconditions for this new innovation to happen a lot faster. And that's why, I mean, a year ago, there wasn't a lot of talk at this event about Blockchain. Today it's the big buzz.

>> I wonder if you could talk about other applications. You talk about Hyperledger, it's a great place for a starting point, especially for IBM, but one of the areas I'm excited about is security. You know, like the MIT Enigma Project, and there are others, you know, security is such a problem. Every year we look back, John and I, and we say, do we feel more secure? And no, we feel less secure. What about the application of Blockchain in security use cases?

>> Well, Blockchains are more secure in a number of ways. One is they're harder to hack than traditional servers. And people say, "No, our company, we're bulletproof." Right, tell that to JP Morgan and Home Depot--

>> Target, Fidelity--

>> The Democratic National Convention, but also tell it to the CIA. I mean, if the CIA can be hacked, then any of these traditional server technologies can be hacked. So that, alone, is a huge case to move towards Hyperledger and these other types of platforms. But you said, "I feel less secure these days." And that's a really interesting statement. Because I think that, in many ways, the security of the person has been undermined by the internet of information as well. First of all, we don't own the data that we create. That's a crazy situation. We all create this massive new asset. It's a new asset class. Probably more important than industrial plant in the industrial age. Maybe more important than land in the agrarian age. We create it, but these data frackers, you know, like--

>> Facebook.

>> --Facebook. They own it, and that's a big problem. The virtual you is not owned by you. So we need to get our identity back and manage it responsibly, and to people who say to me, "Well, Don, privacy's dead, get over it," this is foolishness. Privacy is the foundation of freedom. And all these things are happening in our world today that undermine our basic security. Our identity's being taken away from us.
Or the fact that things happen in this digital world where we don't know what the underlying algorithms are. If I take this and I drop it, that's called gravity. I know what's going to happen. But if I go onto Facebook and I do certain things, I have no idea what the algorithms are that determine what's happening with that and how the data is used. So--

>> Hello, fake news. That's how fake news came about.

>> Well, yeah, totally.

>> People don't know what to trust, and it's like, wait a minute.

>> Exactly. And, well, this has led, also, to a total fragmentation of public discourse, where we've all ended up in these little self-reinforcing echo chambers, where the purpose of information is not to inform us, it's to, I don't know, give us comfort.

>> Divide people.

>> Yeah. So, I'm not saying that Blockchains can fix everything, in fact, they can't fix anything, it's humans that fix things. But the key point that Alex and I make in the book is that once again the technology genie has escaped from the bottle, and it was summoned by this person that we don't even know who they are. At a very uncertain time in history. But it's giving us another kick at the can. To sort of fix these problems. To make a world where trust is embedded in everything, and where things are trustworthy, and where people are trustworthy, and maybe we can rewrite the whole economic power grid and the old order of things for the better. And that's really important.

>> My final question for you, and this is kind of a thought-provoking question. Every major revolution, you see, big one, you've seen a counter culture: the '60s, the computer revolution, the PC revolution. Are we on the edge now of a new counter culture developing? Because the things you're kind of teasing out is this new generation, is it the '60s version of tech hippies, or is there going to be a, because you're getting at radical reconfiguration, radical value creation, this is good evolution, and fast. So you can almost see the young generation, like my son you're talking about, teaching us how to do it. That's a counter culture. Do you see that happening?

>> Well, first of all, I see this changing culture profoundly, so artists can get fairly compensated for the work they create. Imogen Heap puts her song on a Blockchain platform, and the song's inside a smart contract that specifies the IP rights. If you want to listen to it, maybe it's free; if you want to put it in your movie, it costs more. The way she describes it is the song acts as a business, and it has a bank account. So, we can profoundly change many aspects of culture, bringing more justice to our culture. But I'm not sure there'll be a counter culture in the traditional sense, because you've got people embracing Blockchain that want to fix a bunch of problems, but also people who want to make large organizations more competitive and more effective. The smart banks are embracing this because they know they can cut their transaction costs in half, probably. And they know that if they don't do it, somebody else will.

>> And IBM's embracing it because they write software and they service all those firms with technology.

>> Well, IBM, the case of IBM is really interesting, and I'll end on that one. If you think about it, and I go back, I mean, there were only mainframes when I started, and IBM was the leader of the bunch, right?
And then all the bunch died, but IBM somehow reinvented itself and got into minicomputers, and then we saw the rise of the PC and IBM invented the IBM PC, and then we got into the internet, and once again, all these companies died off, but somehow IBM was able to find within itself the leadership to transform itself. And I'm, I won't say I'm shocked, but I have to tell you, I'm really delighted that IBM has figured this one out and is driving hard to be a leader of this next generation of the internet.

>> And they're driving open source, too, to give IBM a plug. Don Tapscott, great to have you on the Cube. Good luck with your speech today. A legend in the industry, great thinker, futurist. Amazing work. Blockchain is the next revolution, it will impact, it's an opportunity for entrepreneurs, this is a disruptive enabler, you can literally take down incumbent businesses. Changing the nature of the firm, radical economic change. Thanks so much for sharing the insight.

>> Nice hat, too.

>> I got a nice hat. I got a free bowl of soup with this hat, as they say--

>> Don: It's all about the Blockchain, baby.

>> It's all about the Blockchain.

>> It's all about the Blockchain.

>> More Blockchain Cube analysis as we disrupt you with more coverage. I'm John Furrier, with Dave Vellante, stay with us. (musical sting)

Published Date: Mar 21, 2017
