
Search Results for Monolith:

Radhika Krishnan, Hitachi Vantara and Peder Ulander, MongoDB | MongoDB World 2022


 

(upbeat music) >> Welcome back to the Javits in the big apple, New York City. This is theCUBE's coverage of MongoDB World 2022. We're here for a full day of coverage. We're talking to customers, partners, executives and analysts as well. Peder Ulander is here. He's the Chief Marketing Officer of MongoDB and he's joined by Radhika Krishnan, who's the Chief Product Officer at Hitachi Ventara. Folks, welcome back to theCUBE. Great to see you both again. >> Good to see you. >> Thank you David, it's good to be back again. >> Peder, first time since 2019, we've been doing a lot of these conferences and many of them, it's the first time people have been out in a physical event in three years. Amazing. >> I mean, after three years to come back here in our hometown of New York and get together with a few thousand of our favorite customers, partners, analysts, and such, to have real good discussions around where we're taking the world with regards to our developer data platform. It's been great. >> I think a big part of that story of course, is ecosystem and partnerships and Radhika, I remember I was at an event when Hitachi announced its strategy and it's name change, and really tried to understand why and the what's behind that. And of course, Hitachi's a company that looks out over the long term, and of course it has to perform tactically, but it thinks about the future. So give us the update on what's new at Hitachi Ventara, especially as it relates to data. >> Sure thing, Dave. As many, many folks might be aware, there's a very strong heritage that Hitachi has had in the data space, right. By virtue of our products and our presence in the data storage market, which dates back to many decades, right? And then on the industrial side, the parent company Hitachi has been heavily focused on the OT sector. And as you know, there is a pretty significant digital transformation underway in the OT arena, which is all being led by data. So if you look at our mission statement, for instance, it's actually engineering the data driven because we do believe that data is the fundamental platform that's going to drive that digital transformation, irrespective of what industry you're in. >> So one of the themes that you guys both talk about is modernization. I mean, you can take a cloud, I remember Alan Nance, who was at the time, he was a CIO at Philips, he said, look, you could take a cloud workload, or on-prem workload, stick it into the cloud and lift it and shift it. And in your case, you could just put it on, run it on an RDBMS, but you're not going to affect the operational models. >> Peder Ulander: It's just your mess for less, man. >> If you do that. >> It's your mess, for less. >> And so, he goes, you'll get a few, you know, you'll get a couple of zeros out of that. But if you want to have, in his case, billion dollar impact to the business, you have to modernize. So what does modernize mean to each of you? >> Maybe Peder, you can start. >> Yeah, no, I'm happy to start. I think it comes down to what's going on in the industry. I mean, we are truly moving from a world of data centers to centers of data, and these centers of data are happening further and further out along the network, all the way down to the edges. And if you look at the transformation of infrastructure or software that has enabled us to get there, we've seen apps go from monoliths to microservices. We've seen compute go from physical to serverless. We've seen networking go from old wireline copper to high powered 5G networks. 
They've all transformed. What's the one layer that hasn't completely transformed yet, data, right? So if we do see this world where things are getting further and further out, you've got to rethink your data architecture and how you basically support this move to modernization. And we feel that MongoDB with our partners, especially with Hitachi, we're best suited to really kind of help with this transition for our customers as they move from data centers to centers of data. >> So architecture. And at the failure, I will say this and you tell me if you agree or not. A lot of the failures of sort of the big data architectures of today are there's, everything's in this monolithic database, you've got to go through a series of hyper-specialized professionals to get to the data. If you're a business individual, you're so frustrated because the market's changing faster than you can get answers. So you guys, I know, use this concept of data fabric, people talk about data mesh. So how do you think, Radhika, about modernization in the future of data, which by its very nature is distributed? >> Yeah. So Dave, everybody talks about the hybrid cloud, right? And so the reality is, every one of our customers is having to deal with data that's straddled across on-prem as well as the public cloud and many other places as well. And so it becomes incredibly important that you have a fairly seamless framework, that's relatively low friction, that allows you to go from the capture of the data, which could be happening at the edge, could be happening at the core, any number of places, all the way to publish, right. Which is ultimately what you want to do with data because data exists to deliver insights, right? And therefore you dramatically want to minimize the friction in the process. And that is exactly what we're attempting to do with our data fabric construct, right. We're essentially saying, customers don't have to worry about, like you mentioned, they may have federated data structures, architectures, data lakes, fitting in multiple locations. How do you ensure that you're not having to double up custom code in order to drive the pipelines, in order to drive the data movement from one location to the other and so forth. And so essentially what we're providing is a mechanism whereby they can be confident about the quality of the data at the end of the day. And this is so paramount. Every customer that I talk to is most worried about ensuring that they have data that is trustworthy. >> So this is a really important point because I've always felt like, from a data quality standpoint, you know you get the data engineers who might not have any business context, trying to figure out the quality problem. If you can put the data responsibility in the hands of the business owner, who, he or she, has context, that maybe starts to solve this problem. There's some buts though. So infrastructure becomes an operational detail. Let's hide that. Don't worry about it. Figure it out, okay, so the business can run, but you need self-service infrastructure and you have to figure out how to have federated governance so that the right people can have access. So how do you guys think about that problem in the future? 'Cause it's almost like this vision creates those two challenges. Oh, by the way, you got to get your organization behind it. Right, 'cause there's an organizational construct as well. But those are, to me, wonderful opportunities but they create technology challenges. 
So how are you guys thinking about that and how are you working on it? >> Yeah, no, that's exactly right, Dave. As we talk to data practitioners, the recurring theme that we keep hearing is, there is just a lot of use cases that require you to have deep understanding of data and require you to have that background in data sciences and so on, such as data governance and vary for their use cases. But ultimately, the reason that data exists is to be able to drive those insights for the end customer, for the domain expert, for the end user. And therefore it becomes incredibly important that we be able to bridge that chasm that exists today between the data universe and the end customer. And that is what we essentially are focused on by virtue of leaning into capabilities like publishing, right? Like self, ad hoc reporting and things that allow citizen data scientists to be able to take advantage of the plethora of data that exists. >> Peder, I'm interested in this notion of IT and OT. Of course, Hitachi is a partner, established in both. Talk about Mongo's position in thinking. 'Cause you've got on-prem customers, you're running now across all clouds. I call it super cloud connecting all these things. But part of that is the edge. Is Mongo running there? Can Mongo run there, sort of a lightweight version? How do you see that evolve? Give us some details there. >> So I think first and foremost, we were born on-prem, obviously with the origins of MongoDB, a little over five years ago, we introduced Atlas and today we run across a hundred different availability zones around the globe, so we're pretty well covered there. The third bit that I think people miss is we also picked up a product called Realm. Realm is an embedded database for mobile devices. So if you think about car companies, Toyota, for example, building connected cars, they'll have Realm in the car for the telemetry, connects back into an Atlas system for the bigger operational side of things. So there's this seamless kind of, or consistency that runs between data center to cloud to edge to device, that MongoDB plays across all the way through. And then taking that to the next level. We talked about this before we sat down, we're also building in the security elements of that because obviously you not only have that data in rest and data in motion, but what happens when you have that data in use? And announced, I think today? We purchased a little company, Aroki, experts in encryption, some of the smartest security minds on the planet. And today we introduce query-able encryption, which basically enables developers, without any security background, to be able to build searchable capabilities into their applications to access data and do it in a way where the security rules and the privacy all remain constant, regardless of whether that developer or the end user actually knows how that works. >> This is a great example of people talk about shift left, designing security in, for the developer, right from the start, not as a bolt-on. It's a great example. >> And I'm actually going to ground that with a real life customer example, if that's okay, Dave. We actually have a utility company in North Carolina that's responsible for energy and water. And so you can imagine, I mean, you alluded to the IO to use case, the industrial use case and this particular customer has to contend with millions of sensors that are constantly streaming data back, right. And now think about the challenge that they were encountering. 
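To picture the ingest pattern being described here, the following is a minimal, illustrative Python sketch of how streaming meter readings might be written into and read back from MongoDB. The connection string, database, collection and field names are all hypothetical; the conversation does not describe the utility's actual schema, and this is not their implementation.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection; a real deployment would point at an Atlas or self-managed cluster.
client = MongoClient("mongodb://localhost:27017")
db = client["utility"]

# MongoDB 5.0+ time series collections suit high-volume sensor telemetry.
if "sensor_readings" not in db.list_collection_names():
    db.create_collection(
        "sensor_readings",
        timeseries={"timeField": "ts", "metaField": "meta", "granularity": "seconds"},
    )

readings = db["sensor_readings"]

# Simulate a small batch of the kind of readings millions of sensors might stream back.
batch = [
    {
        "ts": datetime.now(timezone.utc),
        "meta": {"sensor_id": f"meter-{i}", "kind": "electricity"},
        "kwh": 0.42 + i * 0.01,
    }
    for i in range(100)
]
readings.insert_many(batch)

# A simple read-back: the most recent readings for one hypothetical meter.
for doc in readings.find({"meta.sensor_id": "meter-7"}).sort("ts", -1).limit(5):
    print(doc["ts"], doc["kwh"])
```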
They had all this data streaming in and in large quantities and they were actually resident on numerous databases, right. And so they had this very real challenge of getting to that quality data that I, data quality that I talked about earlier, as well, they had this challenge of being able to consolidate all of it and make sense of it. And so that's where our partnership with MongoDB really paid off where we were able to leverage Pentaho to integrate all of the data, have that be resident on MongoDB. And now they're leveraging some of the data capabilities, the data fabric capabilities that we bring to the table to actually deliver meaningful insights to their customers. Now their customers are actually able to save on their electricity and water bills. So great success story right there. >> So I love the business impact there, and also you mentioned Pentaho, I remember that acquisition was transformative for Hitachi because it was the beginning of sort of your new vector, which became Hitachi Ventara. What is Lumada? That's, I presume the evolution of Pentaho? You brought in organic, and added capabilities on top of that, bringing in your knowledge of IOT and OT? Explain what Lumada is. >> Yeah, no, that's a great question, Dave. And I'll say this, I mentioned this early on, we fundamentally believe that data is the backbone for all digital transformation. And so to that end, Hitachi has actually been making a series of acquisitions as well as investing organically to build up these data capabilities. And so Pentaho, as you know, gives us some of that front-end capability in terms of integrations and so forth. And the Lumada platform, the umbrella brand name is really connoting everything that we do in the data space that allow customers to go through that, to derive those meaningful insights. Lumada literally stands for illuminating data. And so that's exactly what we do. Irrespective of what vertical, what use case we're talking about. As you know very well, Hitachi is very prominent in just about every vertical. We're in like 90% of the Fortune 500 customers across banking and financial, retail, telecom. And as you know very well, very, very strong in the industrial space as well. >> You know, it's interesting, Peder, you and Radhika were both talking about this sort of edge model. And so if I understand it correctly, and maybe you could bring in sort of the IOT requirements as well. You think about AI, most of the AI that's done today is modeling in the cloud. But in the future and as we're seeing this, it's real-time inferencing at the edge and it's massive amounts of data. But you're probably not, you're going to persist some, I'm hearing, probably not going to persist all of it, some of it's going to be throwaway. And then you're going to send some back to the cloud. I think of EVs or, a deer runs in front of the vehicle and they capture that, okay, send that back. The amounts of data is just massive. Is that the right way to think about this new model? Is that going to require new architectures and hearing that Mongo fits in. >> Yeah. >> Beautifully with that. >> So this is a little bit what we talked about earlier, where historically there have been three silos of data. Whether it's classic system of record, system of engagement or system of intelligence and they've each operated independently. 
But as applications are pushing in further and further to the edge and real time becomes more and more important, you need to be able to take all three types of workloads or models, data models and actually incorporate it into a single platform. That's the vision we have behind our developer data platform. And it enables us to handle those transactional, operational and analytical workloads in real time, right. One of the things that we announced here this week was our columnar indexing, which enables some of that step into the analytics so that we can actually do in-app analytics for those things that are not going back into the data warehouse or not going back into the cloud, real time happening with the application itself. >> As you add, this is interesting, as basically Mongo's becoming this all-in-one database, as you add those capabilities, are you able to preserve, it sounds like you've still focused on simplicity, developer product productivity. Are there trade off, as you add, does it detract from those things or are you able to architecturally preserve those? >> I think it comes down to how we're thinking through the use case and what's going to be important for the developers. So if you look at the model today, the legacy model was, let's put it all in one big monolith. We recognize that that doesn't work for everyone but the counter to that was this explosion of niche databases, right? You go to certain cloud providers, you get to choose between 15 different databases for whatever workload you want. Time series here, graph here, in-memory here. It becomes a big mess that is pushed back on the company to glue back together and figure out how to work within those systems. We're focused on really kind of embracing the document model. We obviously believe that's a great general purpose model for all types of workloads. And then focusing in on not taking a full search platform that's doing everything from log management all the way through in-app, we're optimizing for in-app experiences. We're optimizing analytics for in-app experiences. We're optimizing all of the different things we're doing for what the developer is trying to go accomplish. That helps us maintain consistency on the architectural design. It helps us maintain consistency in the model by which we're engaging with our customers. And I think it helps us innovate as quickly as we've been been able to innovate. >> Great, thank you. Radhika, we'll give you the last word. We're seeing this convergence of function in the data based, data models, but at the same time, we're seeing the distribution of data. We're not, you're clearly not fighting that, you're embracing that. What does the future look like from Hitachi Ventara's standpoint over the next half decade or even further out? >> So, we're trying to lean into what customers are trying to solve for, Dave. And so that fundamentally comes down to use cases and the approaches just may look dramatically different with every customer and every use case, right? And that's perfectly fine. We're leaning into those models, whether that is data refining on the edge or the core or the cloud. We're leaning into it. And our intent really is to ensure that we're providing that frictionless experience from end to end, right. And I'll give a couple of examples. 
We had this very large bank, one of the top 10 banks here in the US, that essentially had multiple data catalogs that they were using to essentially sort through their metadata and make sense of all of this data that was coming into their systems. And we were able to essentially, dramatically simplify it. Cut down on the amount of time that it takes to deliver insights to them, right. And it was like, the metric shared was 600% improvement. And so this is the kind of thing that we're manically focused on is, how do we deliver that quantifiable end-customer improvement, right? Whether it's in terms of shortening the amount to drive the insights, whether it's in terms of the number of data practitioners that they have to throw at a problem, the level of manual intervention that is required, so we're automating everything. We're trying to build in a lot of security as Peder talked about, that is a common goal for both sides. We're trying to address it through a combination of security solutions at varying ends of the spectrum. And then finally, as well, delivering that resiliency and scale that is required. Because again, the one thing we know for sure that we can take for granted is data is exploding, right? And so you need that scale, you need that resiliency. You need for customers to feel like there is high quality, it's not dirty, it's not dark and it's something that they can rely upon. >> Yeah, if it's not trusted, they're not going to use it. The interesting thing about the partnership, especially with Hitachi, is you're in so many different examples and use cases. You've got IT. You've got OT. You've got industrial and so many different examples. And if Mongo can truly fit into all those, it's just, the rocket ship's going to continue. Peder, Radhika, thank you so much for coming back in theCUBE, it's great to see you both. >> Thank you, appreciate it. >> Thank you, my pleasure. >> All right. Keep it right there. This is Dave Vellante from the Javits Center in New York City at MongoDB World 2022. We'll be right back. (upbeat music)
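For readers who want to see what the "in-app analytics" idea discussed above can look like in practice, here is a small, hypothetical sketch of an analytical query run directly against an operational MongoDB collection with the aggregation framework. The collection and field names are invented, and the columnar (column store) indexing mentioned in the interview is a server-side capability that is simply assumed to be available; nothing here is MongoDB's or Hitachi's production code.

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]  # hypothetical operational collection

cutoff = datetime.now(timezone.utc) - timedelta(days=30)

# An "in-app" analytical query: revenue per region over the last 30 days,
# computed against the operational data rather than a separate warehouse.
pipeline = [
    {"$match": {"status": "completed", "created_at": {"$gte": cutoff}}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$total"}, "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
]

for row in orders.aggregate(pipeline):
    print(row["_id"], row["revenue"], row["orders"])
```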

Published Date : Jun 7 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Hitachi | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Toyota | ORGANIZATION | 0.99+
Radhika Krishnan | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Alan Nance | PERSON | 0.99+
600% | QUANTITY | 0.99+
Radhika | PERSON | 0.99+
US | LOCATION | 0.99+
North Carolina | LOCATION | 0.99+
90% | QUANTITY | 0.99+
New York City | LOCATION | 0.99+
Mongo | ORGANIZATION | 0.99+
today | DATE | 0.99+
Pentaho | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
Aroki | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
MongoDB | ORGANIZATION | 0.99+
both sides | QUANTITY | 0.99+
Philips | ORGANIZATION | 0.99+
third bit | QUANTITY | 0.99+
two challenges | QUANTITY | 0.99+
15 different databases | QUANTITY | 0.98+
three years | QUANTITY | 0.98+
both | QUANTITY | 0.98+
Hitachi Ventara | ORGANIZATION | 0.98+
Lumada | ORGANIZATION | 0.98+
Peder Ulander | PERSON | 0.98+
2019 | DATE | 0.98+
Peder | PERSON | 0.98+
this week | DATE | 0.97+
billion dollar | QUANTITY | 0.97+
first | QUANTITY | 0.97+
first time | QUANTITY | 0.97+
One | QUANTITY | 0.96+
one layer | QUANTITY | 0.95+
each | QUANTITY | 0.95+
millions of sensors | QUANTITY | 0.94+
single platform | QUANTITY | 0.94+
over five years ago | DATE | 0.93+
theCUBE | ORGANIZATION | 0.92+
next half decade | DATE | 0.92+
MongoDB | TITLE | 0.91+
10 banks | QUANTITY | 0.91+
Atlas | TITLE | 0.9+
Javits Center | LOCATION | 0.89+
one location | QUANTITY | 0.88+
Hitachi Vantara | ORGANIZATION | 0.87+

Sebastian Mass, Bitmarck | Red Hat Summit 2022


 

>>Welcome back to Boston. We're down in the Seaport. This is the Cube's coverage of red hat summit, 2022. I'm Dave ante with my co-host Paul Gillon, Sebastian Moes. Here he is a senior enterprise architect at bit mark Sebastian. Thanks for coming to the queue. Welcome to the United States. Good to have you in Boston. >>Thank you. Thank you for the invitation. It's uh, good to be on a live summit again after, uh, those, uh, testing two years >>Strange, isn't it? I mean, people kind of don't know what to do. Shake, bump this bump, >>And >>It's like, but where everybody wants to get out of the, the home, the lockdown and, you know, there's a real pent up demand. Tell us about bit mark. >>Um, bit mark is a managed service, uh, provider for, um, German statutory health insurance companies. Um, we manage about our software that we develop, um, is for about 85% of the, uh, German health insurance companies. Um, we have, uh, not only do we build the software, we also have data centers where we run software for, for our customers. Um, and it's everything that a health insurance company is, uh, mandatory to have to run their business, so to >>Speak what, what's the life of an enterprise architect like these days and how, how has it evolved? How has it changed? Uh, I mean, independent of the pandemic, will we get to that, but, but, you know, technology changes, organizational objectives of, of changed the public policy changes. How, how was your, the life of an enterprise architect changed? >>Um, well we, we have this, uh, big monolith JG E application that is, uh, run on JBO. Um, and now we want to, we want to change that into a more modern environment and using, uh, OpenShift to do that. Um, and yeah, there's, uh, there's a lot of reg regulatory things that come up that need to be, um, need to be figured in. Uh, there is new demands that our customers have that we need to figure out how to get to market, uh, and to be able to deliver software more faster and, you know, make the turnaround, uh, or have the turnaround be less. >>So kind of following the technology trends of going from big monolith to microservices and containerization and distributed data, the, the, >>The whole, the scalability, uh, you know, and quick turnaround, that is, that is the main focus. >>So the application that you're here talking about this pace to face in application, kind of a new market for you, a new direction, is this part of that overall shift to a more modular microservices based, uh, >>Structure? Um, well, we, we, we had applications like this before, but this is a new branch of it because, um, there's a strong drive in Germany too, for more digital digitalization. Um, and to have a new interacting model with the customer from basic things to more advanced features like medication services, vaccination status, um, managing your allergies, and that's an edit value that we want to give, uh, for our customers. So they can, their customers can benefit. >>I dunno what it's like in Germany, but in the United States used to call up the doctor and say, Hey, can I just, can we do this over the phone? No, you gotta come into the office. Mm-hmm <affirmative> and then of course, with the pandemic, it was like, you can't come into the office. It was just total flipping, cuz you could get 80% of what you needed done, and this is what your app enabled essentially. Right? >>Yeah. 
And, and some that and some added value as well, uh, to, to give, um, yeah, a benefit for using this, uh, online interaction for, um, the insured people, the, the patients, >>Essentially a digital gateway, including your data. Well, that's the other thing you can't get right. As a patient, you can't ever get through your data, it's like right. You >>Get it, but nobody else can >>Get it. <laugh> sometimes it's hard for you to get it cuz of again, in the United States, HIPAA and the, and the, and the requirements for privacy restrict often access to, to data, you have to go through hoops to get it. So, uh, so, so that experience is what you codified in your application. Yes. >>Um, yes, we have this, uh, unique data set of all health related information that people have to, uh, interact with in, in when they're sick or when they deal with their healthcare company. Um, and yeah, we wanna provide that data to the customers. So they're able to look at it. Um, there's also the, uh, electronic patient folder. You can say, um, where there's data like medical exams and stuff in there that they have access to. We provide that as well for, for our customers. Um, but, uh, yeah, it is about the interaction and that I can see when I put something in to my insurance company via email or the doctor put something in that I have the interaction on my phone and see when it was delivered, um, to them when it's active, when I get the money, stuff like that. >>Now this application is built on OpenShift, it's cloud native, uh, has all the constructs. How different was that for your development team from building something like you mentioned, the monolithic Jbos application that you already have, how different was building the cloud native, uh, >>Constructs. Um, it is quite different. I mean, it's building software, there's a lot of the same things involved. We've been, we've done agile and scrum, uh, before and so on, but we now have a, um, we're trying to be, or no, we're actually achieved to be faster in bringing this to market, um, deploying it in different data centers, doing it all automatically doing automatic tested, uh, right as part of the pipeline. Um, there's, there's a lot of huge steps that we can, we're able to take because of the technology. And that's why we did go there in the first place. That's why we said, okay, this is, it needs to be, uh, cloud native. >>You found that red, red hat had the full suite of tools that you needed. >>Um, yeah, I mean, we, there's some open source stuff that we've also integrated into the pipeline and everything, but there's a lot of, for example, we are using the, uh, three scale, uh, the API management from, from red hat, um, just to be able to, um, use the functionality that we build, that the customers can use the functionality in other products that they use that serve partner people, uh, uh, certain partner companies can, are able to use the services as well. >>Okay. So the, the, the dumb question is, but I'll ask it anyway is you could get this stuff for free Kubernetes, open source, you know, you get E Ks for free. Why didn't you just use the freebie? >>Why? Um, well, we're, we're on a scale with so many, um, uh, customers and data centers that we have to take that we do need support in, in a way. Um, and I usually say, so if we take software from whoever, whatever company it is, we're gonna break it. Yeah. 
<laugh> um, the, the, the transaction load that we have is, is quite, uh, intense and the performance that we need, uh, especially in the, in the business to business, um, market is, is so big that we do need the interaction with, with a vendor and that they're able to help us, uh, with certain escalations >>German Germans play rough. So <laugh>, um, you know, when a, when a vendor announces an innovation lab, I always go, okay, that's an EBC, like an executive briefing center. It's all gonna be used for sales. But my understanding is you actually leveraged the innovation labs. It was actually helpful in building this application. Is that true? >>I, I, I actually, uh, to part in the open innovation that we did with RA hat, and we knew we knew what we wanted to do. We, we knew the technology, we knew what we wanted to have done, um, but they helped us to, to get there step by step with the, with the tools they have, the, um, uh, you know, the ways of working and how this is, this is built. It really lends itself to, to build that step by step and worry about some stuff later and just do it. Um, yeah, piecemeal, >>This is Al is also a new market for you. It's your first real business to consumer facing application. That's that implies a very different approach to experience design, uh, to how you >>And performance yeah, >>Yeah. Perform exactly. Uh, how did your development team adapt to that? >>Um, well, there's, there's, you know, certain things that you build into the process, like integration, testing, automated integration testing, where the application just gets checked right after you check in your software. Um, we built in low testing to, you know, we have an idea of how many transactions per second, there will be. And so the low testing takes care of that as well. Um, and that is easier if you have a small piece of software instead of the whole monolith that we usually have. And so you, we are able to, to build it quicker and get it out quick in, in hours. >>How, how have you, um, accessed customer feedback, you do your, you know, net promoter score surveys, what, what's the been the customer reaction, your, your consumer >>Reaction? Um, they, they, I mean, I'm kind of the wrong guy to talk to, to, uh, about <laugh> to >>Talk about, come on the architected, the thing. >>Yeah, I, I did. And, and then the feedback has been, it's been very good so far, uh, and we are pretty happy with it. Uh, it's it's running, uh, very well. Um, I don't quite know how they got there. Our customer does, uh, you know, uh, questionnaires and, and stuff like that. Yeah. We have a, a different depart, uh, department to, to solicit feedback on that. But from what I hear, uh, it's, it's received very well. >>One of the cloud native features, I understand you used extensively with APIs, uh, for integrations. How are you making this application accessible to partners? What, I mean, what are you exposing? How will you use those APIs to enhance the value through, through an ecosystem of >>Partners? Um, well, we document them, um, and so they're out there to use. And as long as there's a, um, a security process within, um, em that we have in front of it, um, they're open source, um, APIs. So, uh, as I said, they have other programs that they wrote themselves or that they bought that are able to use those APIs, um, from an open API document. 
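To illustrate the consumption model Sebastian is describing, partner-facing services documented with OpenAPI and reachable only by authenticated callers through an API management layer such as the 3scale product mentioned earlier, here is a hedged Python sketch of what a partner client might look like. The host, paths, credentials and token flow are all hypothetical; Bitmarck's actual endpoints are not public and are not shown here.

```python
import requests

# Hypothetical values; the real gateway host, paths and credentials are not public.
API_BASE = "https://api.example-insurer.de"
TOKEN_URL = f"{API_BASE}/auth/token"
CLIENT_ID = "partner-app"
CLIENT_SECRET = "change-me"

# 1. Obtain an access token (assuming an OAuth2 client-credentials style flow).
token_resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=10,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call an OpenAPI-documented endpoint on behalf of an authenticated partner.
resp = requests.get(
    f"{API_BASE}/v1/insured/12345/interactions",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json().get("interactions", []):
    print(item.get("submittedAt"), item.get("status"))
```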
Uh, and, and just interact with that as long as the user is, uh, authenticated, they're able to, to get this data and show it in a different context and use it in a different context. >>Did you play golf? >>Um, I used to time ago, not anymore. >>Now, do you know what a Mulligan is? Yes, I did. Okay. If you had a Mulligan, you'd do this all over again. What, what would you do differently? >>Um, an interesting question. I, don't not sure. Um, you, you say you're smarter after, after you've done that. Yeah. And, and of course there's, uh, there's, there, there there's certainly were things that I didn't expect that would happen. Um, like how, how really you need to go modular and on, on everything and need your own resource and infrastructure. Um, because we came from a very centralized, um, uh, scope. We had a database that is a big DB database, um, and now we're going into smaller database and not decentralized a lot. Um, and that was something that the extent of it, I didn't expect, I, I wanted to use more smaller things. And, and that was something that we very quickly learned that no, we need really need to separate stuff out. >>Was that an organizational sort of mindset shift? Um, are you, are you rethinking or rearchitecting your data, um, your data architecture as part of that, or is that more, or is this more just sort of tactical for this app? >>Um, no, we're definitely need to need to do this because, uh, it really gets, um, or it really is a, um, something to handle a, a big pool of data is, is really a challenge or can be a challenge at times >>To scale, >>To, to, to scale that up. Right. Yeah. Um, and so, yeah, we are going to, to separate that out and double some data. That's, that's gonna be a thing it's gonna be more data at the end, but since it's scaled out and, and decentralized, that will >>Help a lot of organizations would say, well, we wanna keep it centralized monolithic, which is kind of a negative term, but I think it's true, uh, because it's more cost effective. We're not gonna duplicate things as much. We're gonna have roles that are dedicated, but it sounds like you're seeing a business advantage of distributing those functions, decentralizing those functions to a >>Extent, right, right. Because if you, if you have a centralized Mon monolith, then it, I, yeah, it might be negative, but it really is. It's a good working software. Um, but to have that, it's, um, it's really hard to release new features and new, new, you know, even buck fixes it, it just takes time. It, it is, uh, uh, a time consuming process. And if you have it decentralized and in smaller packages, you can just do, Afix run it through the pipeline, have the testing done and just put that out within hours. >>How important was it to bit more to build this application on an open source platform? >>Um, the open source didn't come so much in our perspective of things, or we didn't consider it that much. It was just, this is there. This works. We have a good support behind that. Um, we are, our, our coach is not open sourced, then we're not going to anytime soon tell about it. Um, we're actually thinking about having parts that might be, uh, a kind of open source dish, uh, just in the healthcare community kind of thing. Um, but, uh, yeah, no, that didn't F factor in as much. Um, it was just something that we had >>Experienced another architecture question. So you've got the application stack, right. 
If I can use that term, although application development tools that you build use to build the application, and then you've got the data that the application needs, how are those architected, are they sort of separate entities? Are they coming together? >>Um, we used to have, we used to have, uh, uh, data, um, net a, a, um, an MDA approach, a J hue. Um, so they're very strong connected. That is, there's just in the database. There are models and entities that we use in the, in the JBO. Um, and well, we're still gonna use hibernate to, to, uh, to do the G GPA, but it's, uh, yeah, it's something that needs to be restructured because it just takes a lot of resources to manage data from different parts of the application, bringing them together, um, that will, will need to change. >>And what about new data sources? If I came to and say, Sebastian, I need to inject new data into the, the app. I need to get this to how, how, how difficult or, or fast easy is that, >>Uh, now in the, in the world now, or actually we wanna >>Compare, can you compare before and now, I mean, it wouldn' have to happen before would be >>Like, in the time in the timeframe it's, it's, it's not, it's hard to say. I mean, but if you have a project right now, we're talking, uh, months, um, like a year to, to get it done, get it tested, and then it even takes, um, up to a month to before it's out to every customer. Yeah. The rollout process takes some time. Yeah. Um, and we're planning on, or we, we developed the new, uh, the new software we developed in a couple of months. Uh, and then it is deployed and then it's in production and it's in production for all the customers that wanted to use it for now. I mean, it's not deployed to all customers yet, uh, because they need to adapt it and in their way. Um, but they have it, you know, it's, it's right there. It's deployed. Yeah. When we fix it, it's in a, you know, hours, couple days it's out and it's out in production, in different data centers for different customers. >>And we've come full circle the life of a, of an architect. It's, uh, it sounds like it's much better today. Sebastian, thanks so much for coming to the cube. Appreciate your time and your insights. And thank you for watching. Keep it right there that you watching the Cube's coverage of red hat summit, 2022 from Boston, Dave Valante for Paul Gillon, we'll be right back.
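Earlier in the conversation Sebastian mentions that automated integration testing and load testing are built into the delivery pipeline, with a rough idea of the transactions per second the application must sustain. The following is a deliberately small, hypothetical Python sketch of that kind of check; the endpoints, response shape and throughput numbers are invented for illustration and are not Bitmarck's actual tests.

```python
import time
import statistics
import concurrent.futures
import requests

BASE_URL = "https://staging.example-insurer.de"  # hypothetical test environment


def test_health_endpoint():
    """A minimal integration check a pipeline might run after every build."""
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "UP"  # assumed response shape


def small_load_check(requests_total=200, workers=20):
    """A crude load probe: issue concurrent requests and report latency percentiles."""
    def one_call(_):
        start = time.perf_counter()
        requests.get(f"{BASE_URL}/v1/insured/12345/interactions", timeout=5)
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(one_call, range(requests_total)))

    print("median:", statistics.median(latencies))
    print("p95:   ", latencies[int(len(latencies) * 0.95) - 1])


if __name__ == "__main__":
    test_health_endpoint()
    small_load_check()
```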

Published Date : May 10 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Paul Gillon | PERSON | 0.99+
Boston | LOCATION | 0.99+
Dave Valante | PERSON | 0.99+
Germany | LOCATION | 0.99+
80% | QUANTITY | 0.99+
Sebastian Moes | PERSON | 0.99+
Sebastian | PERSON | 0.99+
United States | LOCATION | 0.99+
first | QUANTITY | 0.99+
two years | QUANTITY | 0.99+
Sebastian Mass | PERSON | 0.99+
Dave | PERSON | 0.99+
Seaport | LOCATION | 0.98+
OpenShift | TITLE | 0.97+
red hat summit | EVENT | 0.97+
about 85% | QUANTITY | 0.96+
today | DATE | 0.95+
HIPAA | TITLE | 0.95+
Red Hat Summit 2022 | EVENT | 0.95+
Bitmarck | PERSON | 0.94+
German | OTHER | 0.93+
pandemic | EVENT | 0.92+
2022 | DATE | 0.92+
Afix | ORGANIZATION | 0.91+
agile | TITLE | 0.89+
up to a month | QUANTITY | 0.87+
JG E | TITLE | 0.86+
EBC | ORGANIZATION | 0.85+
Mulligan | PERSON | 0.81+
2022 | EVENT | 0.81+
JBO | ORGANIZATION | 0.74+
a year | QUANTITY | 0.72+
E Ks | TITLE | 0.71+
second | QUANTITY | 0.71+
One | QUANTITY | 0.69+
hat | EVENT | 0.68+
Germans | PERSON | 0.68+
double | QUANTITY | 0.67+
German | LOCATION | 0.66+
Cube | ORGANIZATION | 0.66+
three | QUANTITY | 0.65+
red hat | ORGANIZATION | 0.65+
Cube | PERSON | 0.63+
monolith | ORGANIZATION | 0.63+
Kubernetes | TITLE | 0.63+
bit mark | ORGANIZATION | 0.61+
Jbos | ORGANIZATION | 0.59+
couple of months | QUANTITY | 0.59+
bit mark Sebastian | PERSON | 0.54+
red | ORGANIZATION | 0.51+
bit | ORGANIZATION | 0.47+
Mulligan | OTHER | 0.46+
mark | PERSON | 0.45+

Pavlo Baron, Instana (an IBM Company) | IBM Think 2021


 

>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >>Everybody, welcome back to theCUBE's continuous coverage of IBM Think 2021, the virtual edition. My name is Dave Vellante, and we're going to talk about observability, front and center for DevOps and developers. Things are really changing. We're going from monitoring and logs and metrics and just this mess, and now we're bringing in AI and machine intelligence. And with us is Pavlo Baron, who is the CEO of Instana, which is an IBM company that IBM acquired in November of 2020. Pavlo, great to see you. Thanks for joining us from Munich. >>Thanks for having me. Thanks a lot. >>You're very welcome. So you know, I always love to talk to founders and co-founders and try to understand why they started their companies, and congratulations on the exit. That's awesome. After, I'm sure, five grinding but relatively short years. Why did you guys start Instana? And what were some of the trends that you saw then, and that you're seeing now, in the observability space? >>Yeah, that's a very good question. So the journey began as we worked in a company called codecentric, the majority of the founders, and we actually specialized in troubleshooting really hard customer performance problems. We used all different kinds of APM solutions for that. You know, we've built expertise, like collectively maybe 300 years in the whole company. So we would go from one adventure into the other and see customers suffer and help them, you know, overcome this trouble. At some point we started seeing architectures coming up that were not well covered by the classic APM vendors. People went after virtualization, put everything in containers, just dropping random workloads into containers, maybe running this in Kubernetes as well. Not now, actually, but five, six years ago, years ago. But you get the point: we started seeing this heavy, continuous containerization, and we've seen that the classic APM solutions, which are heavily machine-oriented, and some of them, you know, counted by the number of CPUs, etcetera, etcetera, were not very well suited for this. Plus all of the workloads are so dynamic. They keep coming and going. You cannot really, you know, place your agent there if it is not adapting to change continuously. We've seen this coming, and we've really seen the trouble, that we cannot support the customers properly. So after looking around, we just said, hey, I think it's time to just implement a new one, right? So we started that adventure with the idea of constant change, with the idea of everything is containers, with the idea of everything goes towards Kubernetes. People just run random workloads of all different versions that are linked all together. Then this whole microservices trend came up, where people would just break down their monoliths into literally very small components that could be deployed independently. Everything keeps changing all the time. The classic solution cannot keep up with that. >>So let me pick it up from there if I can. So it's interesting, your timing is quite amazing, because as you mentioned, there really wasn't Kubernetes when you started in the middle part of the last decade. You know, containers had been around for a long time, but Kubernetes wasn't mainstream back then.
So you had some foresight, and the market has just come right into your vision. But maybe talk a little bit about the way APM used to work. It was, I started to talk about this, it was metrics, it was traces, it was logs, it was make-your-eyes-bleed type of stuff. And maybe you can talk about how you guys are different and how you're accommodating the rapid changes in the market today. >>Right. Well, there are very, very many aspects to this. So first of all, we have always seen work being done by hand that you should not be doing by hand. I mean, we already said that you should not be doing this, and you should be automating as much as possible. We see this everywhere in the industry, that everything gets more and more automated. We want to automate through the whole continuous delivery cycle. Unfortunately, monitoring was the space that probably never was automated before Instana came into place. So our idea was, hey, just get rid of the unnecessary work, because you keep people busy with stuff they should not be doing, like manually watching dashboards, setting up agents with every single software change, adapting configuration, etcetera, etcetera, etcetera. All of these things can be done automatically, you know, to a very, very large extent. And that's what we did. We did this from the beginning. Everything we approached, we think twice about: can we automate the maximum out of it? And only if we see that it's too much of an effort, etcetera, will we probably not do it; but otherwise we don't compromise on that. The other aspect, and this is different to the classic APM world, which is typically very expert-heavy, where the expert comes into the project and really starts configuring, etcetera, etcetera, etcetera. This is a totally different approach. The other approach is continuous change, and, you know, adapting to that continuous change. A container comes up, you need to know what kind of workload this thing is and how it is connected to all the others. And then at some point it's probably going to go through a change and get new versions, etcetera, etcetera. You need to capture this whole life cycle without really changing your monitoring system. Plus, if you move your workloads from the classic monolith, through microservices, on to Kubernetes, you are kind of transitioning, you know, it's a journey, and in this journey you want to keep your business abstractions as stable as possible. The term application is nothing that you should be reconfiguring. Once you figure out what is payment in your system, this is a stable abstraction. It doesn't matter if you deliver it on containers. It doesn't matter if this is just a huge JVM that owns the whole box alone. It simply doesn't matter. So we decoupled everything infrastructure from everything logic, and the foundation for this is what we call the dynamic graph. It technically is pretty much a data structure, a regular graph data structure with, you know, connections in multiple directions from different nodes. But the point is that we actually decompose the whole IT geography, this is the term I like to use, because there is no "other": its infrastructure and its topology are just, you know, different sides of the same thing.
When you have a Linux process, it can be a JVM at the same time, it can be approached as an application, it's the same thing, just given different names, and these different faces of the thing can be linked with everything else in totally different ways. So we're decomposing this from the beginning of the product, which allows us to have a very deep and hierarchical understanding of problems when they appear. So we can nail it down, not to a metric that probably doesn't make sense to any user, but really name the cause: look, in this JVM, this particular Dropwizard metric is misbehaving. This indicates that this particular piece of technology is broken, and here's how it's broken. So there's a built-in explanation to a problem. So the classic APM, as I said, is a very expert-heavy territory. We try to automate the expert. We have this guy called Stan. This is your, you know, kind of virtual DevOps engineer. It has AI in there, some artificial brain, it never sleeps, it observes all of the problems. He really is an amazing guy, because nobody likes him, because he always tells you what's broken. You don't need to invite him to the party and give him a raise; he's just there, observing your systems. >>I like Stan. I like Stan better than Fred, no offense to Fred, but Fred's the guy in the lab coat that I have to call every time to help me fix my problems. And what you're describing is end-to-end visibility, or observability, in terms that either normal people can understand, or certainly Stan can understand and can automate. And that kind of leads me to this notion of anti-patterns. In software, we think of anti-patterns as, you know, software hairballs and software bloat, you've got stovepipe systems, and, you're a data guy by background, so you will understand stovepiped data systems, and there are organizational examples of anti-patterns like micromanagement or analysis by paralysis. If you will, how do anti-patterns fit into this world of observability? What do you see? >>Oh, there are many. I could write a whole book, actually, about that. Let me just list a few. So first of all, and this is valid for any kind of automation: what you can automate, you should not be doing by hand. This is a very common anti-pattern. People are just doing work by hand, just because, the lazy word, you know, it's repetitive work, or there is no kind of foundation to automate it; whatever the reason, this is clearly an anti-pattern. What we also see in the monitoring space are very interesting things. Normally, since the problems in the observability and monitoring space are so hard, you send your best people to watch graphs, when you want them to contribute to the business value rather than waste their time observing charts where, like, 99% of them are normal. The other aspect, of course, what we also have seen, is the other side of the spectrum, where people just send total novices into the problem of observability and let them learn on the subject. Which is also not a good thing, because there are so many unknown unknowns for people who are not experts in the space. They will not catch the problem. You will go through pain, right? So it's not a learning project, it's not a research project. This is very essential to the operation of your business. And there are many examples like that. >>Right, yeah.
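As an aside, the "dynamic graph" idea described above, where one runtime entity carries several facets (a Linux process that is also a JVM backing an application) and a problem is explained by walking its dependencies, can be pictured with a toy sketch like the following. This is purely illustrative Python; the entity names, metrics and traversal rule are invented here and have nothing to do with Instana's actual implementation.

```python
from collections import defaultdict

class Entity:
    """One node in a toy dependency graph. The same runtime thing can carry
    several facets, e.g. a Linux process that is also a JVM backing a service."""
    def __init__(self, name, facets):
        self.name = name
        self.facets = set(facets)
        self.metrics = {}

# Edges are directed "supports" relationships: infrastructure -> application.
edges = defaultdict(set)

def link(lower, upper):
    edges[lower].add(upper)

host = Entity("host-17", {"host"})
proc = Entity("pid-4211", {"process", "jvm"})
payment = Entity("payment-service", {"application"})

link(host, proc)
link(proc, payment)

proc.metrics["jvm.gc.pause.ms"] = 900  # a hypothetical misbehaving metric

def explain(entity, threshold=500):
    """Walk upward from a low-level signal to the components it affects."""
    findings = []
    for metric, value in entity.metrics.items():
        if value > threshold:
            affected, stack = [], [entity]
            while stack:
                node = stack.pop()
                for upper in edges[node]:
                    affected.append(upper.name)
                    stack.append(upper)
            findings.append((entity.name, metric, value, affected))
    return findings

for name, metric, value, affected in explain(proc):
    print(f"{name}: {metric}={value} looks abnormal; likely impact on {affected}")
```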
So I want to end by just sort of connecting the dots, so this makes a lot of sense. And if you think about it, you know, Arvind Krishna said that IBM has got to win the architectural battle for hybrid cloud. And when I think of hybrid cloud, I think of on-prem connecting to public cloud, not only the IBM public cloud but other public clouds, going across clouds, going to the edge, bringing OpenShift and Kubernetes to the edge, and supporting new workloads. So IT is like the universe, it keeps expanding and it gets more and more and more complicated. So to your point, humans are not going to be able to solve the classic performance problems in the classic way. They're going to need automation. So it really does fit well into IBM's hybrid cloud strategy. Your thoughts, and I'll give you the last word. >>Yeah, totally. I mean, IBM generally is of course very far ahead in regards to research, AI and all these things, and those can be combined with Instana very, very, you know, natively, right? We are prepared to automate, using AI, well, I would want to claim all of the monitoring and observability problems. Of course, there is manual work in some cases; you simply don't know what people want to observe, so you kind of need to give things names, and that's where people come in. But this is more creative work. You don't want to do the stupid work with people; it just doesn't make any sense. And IBM, of course, in acquiring Instana, gets, you know, the foundation for all of the things that used to be done by hand, now fully automated, combined with Instana, combined with Watson. This is huge. This is a really great story, like the best research in the world meeting probably the best APM. >>That's great, Pavlo. Really appreciate you taking us through Instana and the trends in observability and what's going on at IBM. And congratulations on your success, and thanks for hanging with us with all the craziness going on at your abode. And really, it was a pleasure having you on. Thank you. >>Thanks a lot. >>All right, and thank you for watching, everybody. This is Dave Vellante and our ongoing coverage of IBM Think 2021. You're watching theCUBE.
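To make the "don't have people watch graphs" anti-pattern concrete, here is a deliberately simple, hypothetical Python example of the kind of automated baseline check that replaces a human staring at a dashboard. The metric series, window and threshold are made up, and this is nothing like Instana's actual models; it only illustrates the idea of codifying the judgment instead of doing it by hand.

```python
import statistics

def detect_anomalies(series, window=30, tolerance=3.0):
    """Flag points that deviate from a rolling baseline by more than
    `tolerance` standard deviations. A deliberately simple stand-in for
    the judgment a human watching a chart would otherwise apply."""
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        spread = statistics.pstdev(baseline) or 1e-9
        if abs(series[i] - mean) > tolerance * spread:
            alerts.append((i, series[i]))
    return alerts

# A made-up latency series: mostly steady, with one obvious spike.
latencies_ms = [120 + (i % 5) for i in range(60)] + [480] + [120 + (i % 5) for i in range(20)]

for index, value in detect_anomalies(latencies_ms):
    print(f"sample {index}: {value} ms deviates from its recent baseline")
```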

Published Date : Apr 16 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Ivan Kushner | PERSON | 0.99+
Dave Volonte | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Pablo Baron | PERSON | 0.99+
november of 2020 | DATE | 0.99+
Munich | LOCATION | 0.99+
Pablo | PERSON | 0.99+
2021 | DATE | 0.99+
300 years | QUANTITY | 0.99+
last decade | DATE | 0.99+
twice | QUANTITY | 0.98+
fred | PERSON | 0.97+
eight pm | DATE | 0.96+
Sudha | ORGANIZATION | 0.95+
first | QUANTITY | 0.92+
J. V. M | PERSON | 0.9+
code centric | ORGANIZATION | 0.84+
Monolith | TITLE | 0.84+
years ago | DATE | 0.83+
55 | QUANTITY | 0.81+
99 of them | QUANTITY | 0.78+
KPM | ORGANIZATION | 0.77+
Astana | LOCATION | 0.75+
iBMS | TITLE | 0.69+
56 ago | DATE | 0.67+
today | DATE | 0.63+
stana | LOCATION | 0.61+
stan | PERSON | 0.6+
unknown | QUANTITY | 0.58+
HIV | OTHER | 0.57+
single software | QUANTITY | 0.56+
think | COMMERCIAL_ITEM | 0.45+
Watson | PERSON | 0.42+
Pavlo Baron | ORGANIZATION | 0.39+
Cube | TITLE | 0.38+

ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN


 

>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud like experience with the flexibility, speed and security of modern software delivering. Hi, I'm Matt Bentley and I run our technical pre-sales team here at Mirantis. I spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focused platforms, I generally see two different mindsets in terms of where their responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure, yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach falls more of the DevOps philosophy, where a developer and application teams are the owners of their applications from the development through their life cycle, all the way to production. I would refer to this more of a self service model of application delivery and promotion when deployed to a container platform. This is fairly common, organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self service application deployment model work very well in lab development or non production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include dock at the development layer or be more traditional, throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach where we can take container platforms and be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner of which application teams would consume it. Yeah when examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization. And a true infrastructure as a code experience, requires DevOps, related coding skills that teams often have in pockets, but maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this. 
Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, no matter if it is full-stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback on the health of your applications, observability for your clusters, fast context switching between environments, and allows you to choose the best tool for the task at hand, whether it is graphical user interface or command line interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users and to the overall perceived value. Now let's look and see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, the Docker Trusted Registry for our secure container registry, and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. You work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience for your developers, one that works for any application, Brownfield or Greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services, calls to GitHub repositories, Jenkins access and jobs, Universal Control Plane and Docker Trusted Registry teams and organizations, Kubernetes namespaces with access control, creating Docker Trusted Registry namespaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. So let's start with Docker Desktop. From the Docker Desktop standpoint, we'll actually be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process through Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository. In this case, we'll be using GitHub. Then Jenkins picks up, checks out that code from our source code repository, builds our Docker containers, tests the application in the image that it builds, and then it takes the image and pushes it to our Docker Trusted Registry.
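To make that pipeline concrete, here is a rough, hypothetical sketch of the commands such a Jenkins job might run at each stage. The registry address dtr.example.com, the repository path, the image tag and the deployment name are illustrative placeholders rather than values from the demo, and the scanning, signing and promotion steps are handled by Docker Trusted Registry policies rather than by these commands.

    # Hypothetical sketch of the Jenkins stages; registry, repository and names are placeholders.
    git clone https://github.com/example-org/simple-nginx.git && cd simple-nginx
    docker build -t dtr.example.com/engineering/simple-nginx:1.1 .          # same Dockerfile as on the desktop
    docker run --rm dtr.example.com/engineering/simple-nginx:1.1 nginx -t   # quick smoke test of the built image
    docker push dtr.example.com/engineering/simple-nginx:1.1                # DTR scans the image on push
    kubectl --namespace dev set image deployment/simple-nginx nginx=dtr.example.com/engineering/simple-nginx:1.1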
From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application deployed in a real environment. Jenkins will then test the deployed application, and if all tests show it as good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make to our application. Our marketing team says we need to change "containerized NGINX" to something more Mirantis branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. So here's our application. We have our code loaded and we're going to be able to use Docker Desktop in our local environment, with our Docker plugin for Visual Studio Code, to be able to build our application inside of Docker without needing to run any command-line-specific tools. Here with our code, we'll be able to interact with Docker, make our changes, see it live, and be able to quickly see if our changes actually made the impact that we're expecting in our application. So let's find our updated titles for the application and let's go ahead and change that to our Mirantis-ized NGINX instead of containerized NGINX. So we'll change it in the title and on the front page of the application. Now that we've saved that change to our application, we can actually take a look at our code here in VS Code. And as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image and VS Code will take care of the automatic building of our application. So now we have a Docker image that has everything we need for our application inside of that image. From here, we can actually just right-click on that image tag that we just created and do run. This will interactively run the container for us. And then once our container is running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can actually verify that our application is working as expected, we can stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here, we're going to go ahead and write a commit message to say that we updated to our Mirantis branding. We will commit that change and then we'll push it to our source code repository. Again, in this case, we're using GitHub as our source code repository. So here in VS Code, we'll have that pushed to our source code repository. And then, we'll move on to our next environment, which is Jenkins. Jenkins is going to be picking up those changes for our application once it has checked them out from our source code repository. So GitHub notifies Jenkins that there's a change. Jenkins checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and then in Jenkins, where we're actually building our application, doing our tests, pushing it into our Docker Trusted Registry, scanning it and signing our image in our Docker Trusted Registry, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed.
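For reference, the local loop shown in Visual Studio Code maps roughly onto the following terminal commands; a minimal sketch, assuming a static NGINX site and an image called simple-nginx:dev, neither of which is taken from the demo repository.

    # Hypothetical equivalent of the right-click build and run actions in VS Code.
    # The Dockerfile for such a site could be as small as:
    #   FROM nginx:alpine
    #   COPY site/ /usr/share/nginx/html/
    docker build -t simple-nginx:dev .                                      # build the image locally
    docker run --rm -d -p 8080:80 --name simple-nginx-test simple-nginx:dev
    # browse to http://localhost:8080 to check the updated title, then stop the container
    docker stop simple-nginx-test
    git add . && git commit -m "Update to Mirantis branding" && git push origin main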
So, here we can see that our title has been updated on our application, so we can verify that it looks good in development. If we jump back to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. We're then also going to sign that image, signing off that yes, it has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now, let's take a look at our Docker Trusted Registry, where we can see our namespace for our application and our simple NGINX repository. From here, we'll be able to see information about our application image that we've pushed into the registry, such as the image signature and when it was pushed and by whom, and then we'll also be able to see the scan results for our image. In this case, we can actually see that there are vulnerabilities in our image, and we'll take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. From here, these image layers give us details about where the vulnerabilities were located and what those vulnerabilities actually are. So if we click on the vulnerability, we can see specific information about that vulnerability to give us details around the severity, and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly to remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that and update it. One of the ways that we can help secure that as a part of the supply chain is to actually take a look at where we get the base layers of our images. Docker Hub really provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to security. The official images from Docker Hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than to build your own and also try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Docker Trusted Registry using our mirroring feature. Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements. And then, based off of the scan result, we promote the image to a public repository, where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment.
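A hedged sketch of what that base-layer remediation can look like from the build side: the Dockerfile references the curated copy of the base image in the internal registry instead of Docker Hub, and a rebuild picks up the patched layers. The registry host, repository paths and tags here are assumptions for illustration.

    # Hypothetical example of consuming the curated base image rather than Docker Hub directly.
    #   Dockerfile before:  FROM alpine:3.12
    #   Dockerfile after:   FROM dtr.example.com/official/alpine:3.12
    docker pull dtr.example.com/official/alpine:3.12                        # mirrored, scanned, promoted and signed copy
    docker build -t dtr.example.com/engineering/simple-nginx:1.2 .          # rebuild on the patched base layer
    docker push dtr.example.com/engineering/simple-nginx:1.2                # the new scan should show the vulnerability resolved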
So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker Trusted Registry. In this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. And then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. From here, we can actually take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry that makes it so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found or not found inside of the Docker image. So for our actual users, the way they would consume this content is by taking a look at the official images that we've made public to them. Here again, looking at our Alpine image, we can take a look at the tags that exist and we can see that we have our content that has been made available. So we've pulled in all sorts of content from Docker Hub. In this case, we've even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to give developers a quick, opinionated view that focuses on how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively, out of the box, with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application. And when we click on that, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a Helm chart and is made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart. So we can publish whatever information may be relevant about our application. And with one click, we can install our Helm chart. Here, it will show us the actual details of the Helm chart. So before we install it, we can actually look at those individual components.
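The same pre-install inspection can also be done outside of Lens with the Helm command line; a minimal sketch, assuming the chart has been published as simple-nginx in a chart repository already added as example-charts (both names are placeholders).

    # Hypothetical commands to inspect a chart's values and rendered components before installing.
    helm show values example-charts/simple-nginx            # default configuration values for the chart
    helm template simple-nginx example-charts/simple-nginx  # render the Deployment, Service and Ingress manifests locally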
So in this case, we can see this created an ingress rule, and this will tell Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to, and in this case, we're actually going to do a quick test here, because we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy. Because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, so the Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see that simple NGINX application, and in this case we'll get details around the actual deployed Helm chart. The nice thing is that Lens provides us this capability with Helm to be able to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components that were created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule that we just talked about, and it also gives us the resources such as the service, the deployment and the ingress that have been created within Kubernetes to actually have the application exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations control processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
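To round out that last step, here is a rough sketch of the same install from the command line, with the image source pointed at the trusted registry, which is what lets it pass the Docker Content Trust policy enforcement. The value names image.repository and image.tag, the namespace and the registry host are assumptions, since they depend on how the chart was authored.

    # Hypothetical install: fails while the chart points at unsigned Docker Hub content,
    # succeeds once the image values point at the signed copy in Docker Trusted Registry.
    helm install simple-nginx example-charts/simple-nginx \
      --namespace dev \
      --set image.repository=dtr.example.com/engineering/simple-nginx \
      --set image.tag=1.1
    helm list --namespace dev                                # confirm the release, as in Lens' Helm releases view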

Published Date : Sep 14 2020


Converged Infrastructure: Past Present and Future


 

>> Narrator: From theCUBE's studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> You know, businesses have a staggering number of options today to support mission-critical applications. And much of the world's mission-critical data happens to live on converged infrastructure. Converged infrastructure is really designed to support the most demanding workloads. Words like resilience, performance, scalability, recoverability, et cetera. Those are the attributes that define converged infrastructure. Now with COVID-19, the digital transformation mandate, as we all know, has been accelerated, and buyers are demanding more from their infrastructure, and in particular converged infrastructure. Hi everybody, this is Dave Vellante and welcome to this power panel where we're going to explore converged infrastructure, look at its past, its present and its future. And we're going to explore several things. The origins of converged infrastructure, why CI even came about, and what its historic role has been in terms of supporting mission-critical applications. We're going to look at modernizing workloads. What are the opportunities and the risks, and what's converged infrastructure's role in that regard? How has converged infrastructure evolved? And how will it support cloud and multicloud? And ultimately, what does the future of converged infrastructure look like? And to examine these issues, we have three great guests. Trey Layton is here. He is the senior vice president for converged infrastructure and software engineering and architecture at Dell Technologies. And he's joined by Joakim Zetterblad, who's the director of the SAP practice for EMEA at Dell Technologies. And our very own Stu Miniman. Stu is a senior analyst at Wikibon. Guys, great to see you all, welcome to theCUBE. Thanks for coming on. >> Thanks for having us. >> Great. >> Trey, I'm going to start with you. Take us back to the early days of converged infrastructure. Why was it even formed? Why was it created? >> Well, if you look back just over a decade ago, a lot of organizations were deploying virtualized environments. Everyone was consolidating on virtualization. A lot of technologies were emerging to enhance that virtualization outcome, meaning acceleration capabilities in storage arrays, networking. And there was a lot of complexity in integrating all of those underlying infrastructure technologies into a solution that would work reliably. You almost had to have a PhD in all of the best practices of many different companies' integrations. And so we decided as Dell EMC, Dell Technologies to invest heavily in this area of manufacturing best practices and packaging them, so that customers could acquire those technologies in an already integrated, fully regression-tested architecture that could sustain virtually any type of workload that a company would run. And candidly, that packaging, that rigor around testing, produced a highly reliable product that customers now rely on heavily to operationalize greater efficiencies and run their most critical applications that power their business and ultimately the world economy. >> Now Stu, 'cause you were there, I was as well, at the early days of the original announcement of CI. Looking back and sort of bringing it forward, Stu, what was the business impact of converged infrastructure?
>> Well, Dave, as Trey was talking about, it was that wave of virtualization that had gone from, you know, just supporting many applications to being able to support all of your applications. And especially if you talk about those high-value, you know, business mission-critical applications, you want to make sure that you've got a reliable foundation. What the Dell tech team has done for years is make sure that they fully understand, you know, the life cycle of testing that needs to happen. And you don't need to worry about, you know, what integration testing you need to do, looking at support matrices and doing a lot of your own sandbox testing, which for the most part was what enterprises needed to do. You said, okay, you know, I get the gear, I load the virtualization and then I have to, you know, tweak everything to figure out how my application works. The business impact, Dave, is you want to spend more time focusing on the business, not having to turn all the dials and worry about, do I get the performance I need? Does it have the reliability and uptime that we need? And especially if we're talking about those business-critical applications, of course, these are the ones that are running 24 by seven, and if they go down, my business goes down with it. >> Yeah, and of course, you know, one of the other major themes we saw with converged infrastructure was really attacking the IT labor problem. You had separate compute or server teams, storage teams, networking teams, and they oftentimes weren't talking together. So there was a lot of inefficiency that converged infrastructure was designed to attack. But I want to come to the SAP expert. Joakim, that's really your wheelhouse. What is it about converged infrastructure that makes it suitable for SAP applications specifically? >> You know, if you look at a classic SAP client today, there's really three major transformational waves that all SAP customers are faced with today. It's the move to S/4HANA, the introduction of this new platform, which needs to happen before 2027. It's the introduction of a multicloud operating model. And last but not least, it is the introduction of new digitization or intelligent technologies such as IOT, machine learning or artificial intelligence. And that drove the need for a platform that could address all three of these transformational waves. They came with a lot of complexity, increased costs, increased risk. And what CI did so uniquely was to provide that Edge to Core to Cloud strategy, fully certified for both HANA and non-HANA workloads, for the classical analytical and transactional workloads as well as the new modernization technologies such as IOT, machine learning, big data and analytics. And that created a huge momentum for converged in our SAP accounts. >> So Trey, I want to go to you 'cause you're the deep technical expert here. Joakim just mentioned uniqueness. So what are the unique characteristics of converged infrastructure that really make it suitable for handling the most demanding workloads? >> Well, converged infrastructure by definition is the integration of an external storage array with a highly optimized compute platform. And when we build best practices around integrating those technologies together, we essentially package optimizations that allow a customer to increase the quantity of users that are accessing those workloads, or the applications that are driving database access, in such a way where you can predictably understand consumption and utilization in your environment.
Those packaged integrations are kind of like this: you know, I have a friend who owns a race car shop, and he has all kinds of expertise to build cars, but the vehicle he uses as his daily driver is one that he buys. The customizations that they've created to build race cars are great for the race cars that go on the track, but for him to build his daily driver on his own wouldn't make any sense. And so what customers found was the ability to acquire a packaged infrastructure with all these infrastructure optimizations, where we package these best practices, that gave customers a reliable, predictable and fully supported integration. So they didn't have to spend 20-hour support calls trying to discover and figure out what particular customization they had employed for their application that had some issue they needed to troubleshoot and solve. This became a standard, out-of-the-box integration that the best and the brightest packaged so that customers can consume it at scale. >> So Joakim, I want to ask you, let's take the sort of application view. Let's sort of flip the picture a little bit and come at it from that prism. If you think about core business applications, how have they evolved over the better part of the last decade, and specifically with regard to the mission-critical processes? >> So what we're seeing in the process industry and in the industry of mission-critical applications is that they have gone from being very monolithic systems, where we literally saw single ERP components such as R/3 or ECC, whereas today customers are faced with a landscape of multiple components. Many of them working both on and off premise, and there are multicloud strategies in place. And as we mentioned before, with the introduction of new IOT technologies, we see that there is a flow of information, of data, that requires a whole new set of infrastructure, of components, of tools to make these new processes happen. And of course, the focus at the end of the day is all on business outcomes. So what industries and companies don't want to do is to spend all their time making sure that these new technologies are working together, instead of really focusing on, how can I make an impact? How can I start to work in a better way with my clients? So the focus on business outcomes, the focus on integrating multiple systems into a single consolidated approach, has become so much more important, which is why the modernization of the underlying infrastructure is absolutely key. Without consolidation, without a simplification of the management and orchestration, and without a cloud-enabled platform, you won't get there. >> So Stu, that's key, what Joakim just said in terms of modernizing the applications, being able to manage them not as one big monolith, but with integration with other key systems. So what are the options? Wikibon has done some research on this, but what are the options for modernizing workloads, whether it's on-prem or off-prem, and what are some of the trade-offs there? >> Yeah, so Dave, first of all, you know, one of the biggest challenges out there is you don't just want to, you know, lift and shift. If anybody's read research from Wikibon, Dave, over the 10 years I've been part of it, it talks about the challenges if you just do a migration, because while it sounds simple, we understand that there are individual customizations that every customer's made. So you might get part of the way there, but there are often challenges that will get in the way that could cause failure.
And as we talked about, for your mission-critical applications especially, those are the ones where you can't have downtime. So absolutely, customers are reevaluating their application portfolios. You know, there are a lot of things to look at. First of all, if you can, certain things can be moved to SaaS. You've seen certain segments of the market where SaaS can absolutely be the preferred methodology, if you can go there. One of the biggest hurdles for SaaS, of course, is that there's retraining of the workforce. For certain applications there will be embracing of that, because they can take advantage of new features and get to use them wherever they are. But in other cases, SaaS doesn't have the capability, or it doesn't fit into the workflow of the business. The cloud operating model is something we've been talking about with you, Dave, for many years. You've seen rapid maturation of what originally was called "private cloud", but really was just virtualization plus a little bit of a management layer on top. But now, with much of the automation that you build in and AI technologies, you know, Trey's got a whole team working on things where, if you talk to his team, it sounds very similar to the same conversation you would have with cloud providers. So "cloud" as an operating model, not a destination, is what we're going for, and being able to take advantage of automation and the like. So where your application sits is absolutely some consideration. And what we've talked about, Dave, you know, the governance, the security, the reliability, the performance are all reasons why being able to keep things, you know, in my environment, with an infrastructure that I have control over, is absolutely one of the reasons why I might keep things more along a converged infrastructure, rather than, say, going through the challenge of migration and optimizing and changing to something in more of a cloud-native methodology. >> What about technical debt? Trey, people talk about technical debt as a bad thing, what is technical debt? Why do I want to avoid it? And how can I avoid it? And specifically, I know, Trey, I've thrown a lot of questions at you here, but what is it about converged infrastructure and its capabilities that helps me avoid that technical debt? >> Well, it's an interesting thing, when you deploy an environment to support a mission-critical application, you have to make a lot of implementation decisions. Some of those decisions may take you down a path that may have a finite life. And once you've reached the life expectancy of that particular configuration, you now have debt that you have to reconcile. You have to change that architecture, that configuration. And so what we do with converged infrastructure is we dedicate a team of product management, an entire product management organization, and a team of engineers that treat the integrations of the architecture as releases. And we think long range about how we avoid having to change the underlying architecture. And one of the greatest testaments to this is that in our converged infrastructure products over the last 11 years, we've only seen two major architectural changes, while supporting generational changes in underlying infrastructure capabilities well beyond where we first started. So the converged infrastructure approach is about how we build an architecture that allows you to avoid those dead-end pathways and those integration decisions that you would normally have to make on your own.
>> Joakim, I wanted to ask you, you've mentioned monolithic applications before. We're sort of evolving beyond that with application architectures, but there are still a lot of monoliths out there. And a lot of customers want to modernize those applications and workloads. In your view, what are you seeing as the best path and the best practice for modernizing some of those monolithic workloads? >> Yeah, so Dave, as clients today are trying to build the new intelligent enterprise, which is one of SAP's leading pieces of guidance today, they need to start to look at how to integrate all these different systems and applications that we talked about before into the common business process framework that they have. So consolidating workloads, from big data to HANA and non-HANA systems, cloud and non-cloud applications, into a single framework is an absolute key to that modernization strategy. The second thing, which I also mentioned before, is to take a new grip around orchestration and management. We know that as customers seek this intelligent approach, with both analytical data as well as experience and transactional data, we must look for new ways to orchestrate and manage those application workloads and data flows. And this is where we slowly, slowly enter into the world of an enterprise data strategy. And that's again where converged has a very important part to play, in order to build these next-generation platforms that can both consolidate and simplify, and at the same time enable us to work in a cloud-enabled fashion, with the cloud operating model that most of our clients seek today. >> So Stu, why can't I just shove all this stuff into the public cloud and call it a day? >> Yeah, well, Dave, we've seen some people that, you know, have a cloud-first strategy, and often those are the same companies that are quickly doing what we call "repatriation". I bristle a little bit when I hear these, because often it's a case of, I've gone to the cloud without understanding how I take advantage of it, not understanding the full financial ramifications of what I'm going to need to do. And therefore they quickly go back to a world that they understand. So, cloud is not a silver bullet. We understand in technology, Dave, you know, things are complicated. There are all the organizational and operational pieces too. There are excellent cloud services, and really it's innovation. You know, how do I take advantage of the data that I have, how do I allow my applications to move forward and respond to the business? And really, that is not something that only happens in the public clouds. If I can take advantage of infrastructure that gets me along that journey to more of a cloud model, I get the business results. So, you know, automation and APIs and the various Ops movements are not something that exist only in the public clouds, but something that we should be embracing holistically. And absolutely, that ties into where today's and tomorrow's converged infrastructure are going. >> Yeah, and to me, it comes down to the business case too. I mean, you have to look at the risk-reward. The risk of changing something that's actually working for your business versus what the payback is going to be. You know, if it ain't broken, don't fix it, but you may want to update it, change the oil every now and then, you know, maybe prune some deadwood and modernize it. But Trey, I want to come back to you. Let's take a look at some of the options that customers have. And there are a lot of options, as I said at the top.
You've got do-it-yourself, you've got hyper-converged infrastructure and, of course, converged infrastructure. What are you seeing as the use case for each of these deployment options? >> So, build your own. We're really talking about an organization that has the expertise in-house to understand the integration standards that they need to deploy to support their environment. And candidly, there are a lot of customers that have very unique application requirements that they have very much customized to their environment. And they've invested in the expertise to be able to sustain that on an ongoing basis. Build your own is great for those folks. The next is converged infrastructure, where we're really talking about an external storage array with applications that need to use data services native to a storage array, and self-selected compute, scaling that compute for their particular need, and owning that three-tier architecture and its associated integration, but not having to sustain it because it's converged. There are an enormous number of applications out there that benefit from that. I think the third one was, you talked about hyper-converged. I'll go back to when we first introduced our hyper-converged product to the market, which has now been leading the industry for quite some time, VxRail. We had always said that customers will consume hyper-converged and converged for different use cases and different applications. The maturity of hyper-converged has come to the point where you can run virtually any application that you would like on it. And this comes down to really two vectors of consideration. One, am I going to run hyper-converged versus converged based on my operational preference? You know, hyper-converged incorporates software-defined storage, predominantly a compute operating plane. Converged, as mentioned previously, uses that external storage array, has some type of systems fabric, and dedicated compute resources with access into those arrays. Your operational preference is one aspect of it. And then having applications that need the data services of an external, primary storage array is the other aspect of deciding whether those two things are needed in your particular environment. We find more and more customers out there that have an investment in both, not one versus the other. That's not to say that there aren't customers that only have one, they exist, but a majority of customers have both. >> So Joakim, I want to come back to the sort of attributes from the application requirements perspective. When you think about mission-critical, you think about availability, scale, recoverability, data protection. I wonder if you could talk a little bit about those attributes. And again, what is it about converged infrastructure that is the best fit and the right strategic fit for supporting those demanding applications and workloads? >> Now, when it comes to SAP, we're talking about clients' and customers' most mission-critical data and information and applications. And hence the requirements on the underlying infrastructure are absolutely at the very top of what the IT organization needs to deliver. This is why, when we talk about SAP, the requirements for high availability, protection and disaster recovery are very, very high. And it doesn't only involve a single system. As mentioned before, SAP is not a standalone application, but rather a landscape of systems that needs to be kept consistent. And that's what a CI platform does so well.
It can consolidate workloads, whether it's big data or the standard transactional workloads of SAP ERP or ECC. The converged platforms are able to put the very highest of availability and protection standards into this whole landscape, making CI a really unique platform for these workloads. And at the same time, it enables our customers to accelerate those modernization journeys into things such as ML, AI, IOT, even blockchain scenarios, where we've built out our capabilities to accelerate these implementations with the help of the underlying CI platforms and the rest of the SAP environment. >> Got it. Stu, I want to go to you. You had mentioned before the cloud operating model, something that we've been talking about for a long time at Wikibon. So can converged infrastructure substantially mimic that cloud operating model, and how so? What are the key ingredients of being able to create that experience on-prem? >> Yeah, well, Dave, as we've watched for more than the last decade, the cloud has looked more and more like some of the traditional enterprise things that we would look for, and the infrastructure in private clouds has gone more and more cloud-like and embraced that model. So, you know, I think back to the early days, Dave, when we talked about how cloud was supposed to just be, you know, "simple". If you look at deploying in the cloud today, it is not simple at all. There are so many choices out there, you know, way more than I had in the initial data center. In the same way, you know, I think, you know, the original converged infrastructure from Dell, if you look at the feedback, the criticism was, you know, oh, you can have it in any color you want, as long as it's black, just like the Ford Model T. But it was that simplicity and consistency that helped build out most of what we were talking about with the cloud models. I wanted to know that I had a reliable substrate, a platform to build on top of. But if you talk about today and the future, Dave, what do we want? First of all, I need that operating model in a multicloud world. So, you know, we look at environments that can spread beyond just a single cloud, because customers today have multiple environments, and absolutely hybrid is a big piece of that. We look at what VMware's doing; look at Microsoft, Red Hat, even Amazon, which have extended beyond just a cloud and are going into hybrid and multicloud models. Automation is a critical piece of that. And we've seen, you know, great leaps and bounds in the last couple of generations of what's happening in CI to take advantage of automation, because we know we've gone beyond what humans can just manage themselves, and therefore, you know, true automation is helping along those environments. So yes, absolutely, Dave. You know, the lines are blurred between the private cloud and the public cloud. And it's just that overall cloud operating model, and helping customers to deal with their data and their applications regardless of where they live. >> Well, you know, Trey, in the early days of cloud and converged infrastructure, that homogeneity that Stu was talking about, any color as long as it's black, that was actually an advantage in removing labor costs, that consistency and that standardization. But I'm interested in how CI has evolved, it's, you know, added in optionality. I mean, Joakim was just talking about blockchain, so all kinds of new services.
But how has CI evolved over the better part of the last decade, and what are some of the most recent innovations that people should be thinking about or aware of? >> So I think the underlying experience of CI has remained relatively constant. And we talk about the experience that customers get. So if you just look at the data that we've analyzed for over a decade now, you know, one of the data points that I love is that 99% of our customers who buy CI say they have virtually no downtime anymore. And that's a great testament. 84% of our customers say that their IT operations run more efficiently. The reality around how we delivered that in the past was through services and humans performing these integrations and the upkeep associated with sustaining the architecture. What we've focused on at Dell Technologies is really bringing technologies that allow us to automate those human integrations and best practices, in such a way where they can become more repeatable and consumable by more customers. We don't have to have as many services folks deploying these systems as we did in the past, because we're using software intelligence to embed that human knowledge that we used to rely on individuals exclusively for. So that's one of the aspects of the architecture. And then there's just taking advantage of all the new technologies that we've seen introduced over the last several years, from all-flash architectures to NVMe and, on the horizon, NVMe over fabrics. All of these things, as we orchestrate them in software, will enable them to be more consumable by the average everyday customer. Therefore it becomes more economical for them to deploy infrastructure on premises to support mission-critical applications. >> So Stu, what about cloud and multicloud, how does CI support that? Where do those fit in? Are they relevant? >> Yeah, Dave, so absolutely. As I was talking about before, you know, customers have hybrid and multicloud environments, and managing across these environments is pretty important. If I look at the Dell family, obviously they're leveraging VMware heavily as the virtualization layer. And VMware has been moving heavily toward how to support containerized and Kubernetes environments and extend their management to not only what's happening in the data center, but into the cloud environment with VMware Cloud. So, you know, management in a multicloud world, Dave, is one of those areas where we definitely have some work to do. Something we've looked at at Wikibon for the last few years is how multicloud will be different than multi-vendor, because that was not something that the industry had done a great job of solving in the past. But you know, customers are looking to take advantage of the innovation, wherever it is in those services. And you know, a data-first architecture is something that we see, and therefore that will bring them to many services in many places. >> Oh yeah, I was talking before about the early days of CI, and even now in a lot of organizations, some organizations anyway, there are still these sorts of silos of, you know, storage, networking, compute resources. And you think about DevOps, where does DevOps fit into this whole equation? Maybe Stu, you could take a stab at it, and anybody else who wants to chime in. >> Yeah, so Dave, great, great point there. So, you know, when we talk about those silos, DevOps is one of those movements to really be the unifying force to help customers move faster, so that the development team and the operations team are working together.
Things like security are not a bolt-on, but something that can happen along the entire path. A more recent addition to the DevOps movement also is something like FinOps. So, you know, how do we make sure that we're not just having finance sign off on things and look back every quarter, but in real time understand how we're architecting things, especially in the cloud, so that we remain responsible for that model. So, you know, speed is, you know, one of the most important pieces for business, and therefore the DevOps movement is helping customers move faster and, you know, leverage and get value out of their infrastructure, their applications and their data. >> Yeah, I would add to this that I think the big transition for organizations, 'cause I've seen it in developing my own organization, is getting IT operators to think programmatically instead of configuration based. Instead of using a tool to configure a device, think about how we create programmatic instructions that interact with all of the devices, that create that cloud-like adaptation, that feed in application-level signaling to adapt and change the underlying configuration of that infrastructure to better run the application, without relying upon an IT operator, a human, to make a change. This sort of thinking programmatically is, I think, one of the biggest obstacles that the industry faces. And I feel really good about how we've attacked it, but there is a transformation within that dialogue that every organization is going to navigate through at their own pace. >> Yeah, infrastructure as code, automation, this is fundamental to digital transformation. Joakim, I wonder if you could give us some insight, as you talk to SAP customers, you know, in Europe, across EMEA, how does the pandemic change this? >> I think the pandemic has accelerated some of the movements that we already saw in the SAP world. There is obviously a force for making sure that we get our financial budgets in shape and that we don't overspend on our cost levels. And therefore it's going to be very important to see how we can manage all these new revenue-generating projects that IT organizations and business organizations have planned around new customer experience initiatives, new supply chain optimizations. They know that they need to invest in these projects to stay competitive and to gain a new competitive edge. And where CI plays an important part is in, first of all, keeping costs down in all of these projects, making sure to deliver a standardized, common platform upon which all these projects can be introduced. And then of course, making sure that availability is kept high and risk at a minimum, right? Risk low and availability at a record high, because we need to stay on top of our clients and their demands. So I think again, CI is going to play a very important role as we see customers go through this pandemic situation and needing to put pressure on both innovation and cost control at the same time. And this is where also our new upcoming data strategies will play a really important part, as we need to leverage the data we have in a better, smarter and more efficient way. >> Got it. Okay guys, we're running out of time, but Trey, I wonder if you could, you know, break out your telescope or your crystal ball and give us some visibility into the future of converged infrastructure. What should we be expecting? >> So if you look at the last release of technology that we delivered in PowerOne, it was all about automation.
We'll build on that platform to integrate other converged capabilities. So if you look at the converged systems market, hyper-converged is very much an element of that. And I think where we're trending to is recognizing that we can deliver an architecture that has hyper-converged and converged attributes all in a single architecture, and then dial up the degrees of automation to create more adaptations for different types of application workloads, not just your traditional three-tier application workloads, but also those microservices-based applications that one may historically think are best run off premises. We feel very confident that we are delivering platforms out there today that can run more economically on premises and provide better security and better data governance, and a lot of the adaptations, the enhancements, the optimizations that we'll deliver in our converged platforms of the future are about colliding new infrastructure models together and introducing more levels of automation to have greater adaptations for the applications that are running on them. >> Got it. Trey, we're going to give you the last word. You know, if you're an architect at a large organization, you've got some mission-critical workloads that, you know, you're really trying to protect. What's the takeaway? What's really the advice that you would give those folks thinking about the sort of near and midterm and even long term? >> My advice is to understand that there are many options. We sell a lot of independent component technologies into data centers that run every organization's environment around the world. We sell packaged outcomes in hyper-converged and converged. And a lot of companies buy a little bit of build-your-own, they buy some converged, they buy some hyper-converged. I would implore everyone, especially in this climate, to really evaluate the packaged offerings and understand how they can benefit their environment. And we recognize that there's not one hammer and that not everything is a nail. That's why we have this broad portfolio of products that are designed to be utilized in the most efficient manners for those customers who are consuming our technologies. And converged and hyper-converged are merely another way to simplify the ongoing challenges that organizations have in managing their data estate and all of the technologies they're consuming at a rapid pace, in concert with the investments that they're also making off premises. So the technologies that we talked about today are very much things that organizations should research, investigate and utilize where they best fit in their organization. >> Awesome guys, and of course there's a lot of information at dell.com about that. Wikibon.com has written a lot about this, and there are many, many sources of information out there. Trey, Joakim, Stu, thanks so much for the conversation. Really meaty, a lot of substance, really appreciate your time, thank you. >> Thank you guys. >> Thank you Dave. >> Thanks Dave. >> And thank you everybody for watching. This is Dave Vellante for theCUBE and we'll see you next time. (soft music)

Published Date : Jul 30 2020

Converged Infrastructure: Past Present and Future


 

>> Narrator: From theCUBE's studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> You know, businesses have a staggering number of options today to support mission-critical applications. And much of the world's mission-critical data happens to live on converged infrastructure. Converged infrastructure is really designed to support the most demanding workloads. Words like resilience, performance, scalability, recoverability, et cetera. Those are the attributes that define converged infrastructure. Now with COVID-19 the digital transformation mandate, as we all know has been accelerated and buyers are demanding more from their infrastructure, and in particular converged infrastructure. Hi everybody this is Dave Vellante and welcome to this power panel where we're going to explore converged infrastructure, look at its past, its present and its future. And we're going to explore several things. The origins of converged infrastructure, why CI even came about. And what's its historic role been in terms of supporting mission-critical applications. We're going to look at modernizing workloads. What are the opportunities and the risks and what's converged infrastructures role in that regard. How has converged infrastructure evolved? And how will it support cloud and multicloud? And ultimately what's the future of converged infrastructure look like? And to examine these issues, we have three great guests, Trey Layton is here. He is the senior vice president for converged infrastructure and software engineering and architecture at Dell Technologies. And he's joined by Joakim Zetterblad. Who's the director of the SAP practice for EMEA at Dell technologies. And our very own Stu Miniman. Stu is a senior analyst at Wikibon. Guys, great to see you all welcome to theCUBE. Thanks for coming on. >> Thanks for having us. >> Great. >> Trey, I'm going to start with you. Take us back to the early days of converged infrastructure. Why was it even formed? Why was it created? >> Well, if you look back just over a decade ago, a lot of organizations were deploying virtualized environments. Everyone was consolidated on virtualization. A lot of technologies were emerging to enhance that virtualization outcome, meaning acceleration capabilities and storage arrays, networking. And there was a lot of complexity in integrating all of those underlying infrastructure technologies into a solution that would work reliably. You almost had to have a PhD and all of the best practices of many different companies integrations. And so we decided as Dell EMC, Dell Technologies to invest heavily in this area of manufacturing best practices and packaging them so that customers could acquire those technologies and already integrated fully regression tested architecture that could sustain virtually any type of workload that a company would run. And candidly that packaging, that rigor around testing produced a highly reliable product that customers now rely on heavily to operationalize greater efficiencies and run their most critical applications that power their business and ultimately the world economy. >> Now Stu, cause you were there. I was as well at the early days of the original announcement of CI. Looking back and sort of bringing it forward Stu, what was the business impact of converged infrastructure? 
>> Well, Dave, as Trey was talking about, it was that wave of virtualization had gone from, you know, just supporting many applications to being able to support all of your applications. And especially if you talk about those high-value, you know, business mission-critical applications, you want to make sure that you've got a reliable foundation. What the Dell tech team has done for years is make sure that they fully understand, you know, the life cycle of testing that needs to happen. And you don't need to worry about, you know, what integration testing you need to do, looking at support matrices and doing a lot of your own sandbox testing, which for the most part was what enterprises needed to do. You said, okay, you know, I get the gear, I load the virtualization and then I have to, you know, tweak everything to figure out how my application works. The business impact, Dave, is you want to spend more time focusing on the business, not having to turn all the dials and worry about, do I get the performance I need? Does it have the reliability and uptime that we need? And especially if we're talking about those business-critical applications, of course, these are the ones that are running 24 by seven and if they go down, my business goes down with it. >> Yeah, and of course, you know, one of the other major themes we saw with converged infrastructure was really attacking the IT labor problem. You had separate compute or server teams, storage teams, networking teams, and they oftentimes weren't talking together. So there was a lot of inefficiency that converged infrastructure was designed to attack. But I want to come to the SAP expert. Joakim, that's really your wheelhouse. What is it about converged infrastructure that makes it suitable for SAP applications specifically? >> You know, if you look at a classic SAP client today, there's really three major transformational waves that all SAP customers are faced with today, it's the move to S/4HANA, the introduction of this new platform, which needs to happen before 2027. It's the introduction of a multicloud or cloud operating model. And last but not least, it is the introduction of new digitization or intelligent technologies such as IOT, machine learning or artificial intelligence. And that drove the need for a platform that could address all these three transformational waves. It came with a lot of complexity, increased costs, increased risk. And what CI did so uniquely was to provide that Edge to Core to Cloud strategy. Fully certified for both HANA and non-HANA workloads, for the classical analytical and transactional workloads, as well as the new modernization technologies such as IOT, machine learning, big data and analytics. And that created a huge momentum for converged in our SAP accounts. >> So Trey, I want to go to you 'cause you're the deep technical expert here. Joakim just mentioned uniqueness. So what are the unique characteristics of converged infrastructure that really make it suitable for handling the most demanding workloads? >> Well, converged infrastructure by definition is the integration of an external storage array with a highly optimized compute platform. And when we build best practices around integrating those technologies together, we essentially package optimizations that allow a customer to increase the quantity of users that are accessing those workloads or the applications that are driving database access, in such a way where you can predictably understand consumption and utilization in your environment. 
Those packaged integrations are kind of like. You know, I have a friend that owns a race car shop and he has all kinds of expertise to build cars, but he has a vehicle that he buys is his daily driver. The customization that they've created to build race cars are great for the race cars that go on the track, but he's building a car on his own, it didn't make any sense. And so what customers found was the ability to acquire a packaged infrastructure with all these infrastructure optimizations, where we package these best practices that gave customers a reliable, predictable, and fully supported integration, so they didn't have to spend 20 hour support calls trying to discover and figure out what particular customization that they had employed for their application, that had some issue that they needed to troubleshoot and solve. This became a standard out of the box integration that the best and the brightest package so that customers can consume it at scale. >> So Joakim, I want to ask you let's take the sort of application view. Let's sort of flip the picture a little bit and come at it from that prism. How, if you think about like core business applications, how have they evolved over the better part of the last decade and specifically with regard to the mission-critical processes? >> So what we're seeing in the process industry and in the industry of mission-critical applications is that they have gone from being very monolithic systems where we literally saw a single ERP components such as all three or UCC. Whereas today customers are faced with a landscape of multiple components. Many of them working both on and off premise, there are multicloud strategies in place. And as we mentioned before, with the introduction of new IOT technologies, we see that there is a flow of information of data that requires a whole new set of infrastructure of components of tools to make these new processes happen. And of course, the focus in the end of the day is all on business outcomes. So what industries and companies doesn't want to do is to focus all their time in making sure that these new technologies are working together, but really focusing on how can I make an impact? How can I start to work in a better way with my clients? So the focus on business outcome, the focus on integrating multiple systems into a single consolidated approach has become so much more important, which is why the modernization of the underlying infrastructure is absolutely key. Without consolidation, without a simplification of the management and orchestration. And without the cloud enabled platform, you won't get there. >> So Stu that's key, what Joakim just said in terms of modernizing the application as being able to manage them, not as one big monolith, but integration with other key systems. So what are the options? Wikibon has done some research on this, but what are the options for modernizing workloads, whether it's on-Prem or off-prem and what are some of the trade offs there? >> Yeah, so Dave, first of all, you know, one of the biggest challenges out there is you don't just want to, you know, lift and shift. If anybody's read research for it from Wikibon, Dave, for a day, for the 10 years, I've been part of it talks about the challenges, if you just talk about migrating, because while it sounds simple, we understand that there are individual customizations that every customer's made. So you might get part of the way there, but there's often the challenges that will get in the way that could cause failure. 
And as we talked about, for you, especially your mission-critical applications, those are the ones where you can't have downtime. So absolutely customers are reevaluating their application portfolio. You know, there are a lot of things to look at. First of all, if you can, certain things can be moved to SaaS. You've seen certain segments of the market where absolutely SaaS can be the preferred methodology, if you can go there. One of the biggest hurdles for SaaS of course, is there's retraining of the workforce. Certain applications, there will be embracing of that because they can take advantage of new features, get to be able to use that wherever they are. But in other cases, the SaaS doesn't have the capability or it doesn't fit into the workflow of the business. The cloud operating model is something we've been talking about with you Dave, for many years. We've seen rapid maturation of what originally was called "private cloud", but really was just virtualization plus with a little bit of a management layer on top. But now with much of the automation that you build in, AI technologies, you know, Trey's got a whole team working on things where if you talk to his team, it sounds very similar to the same conversation you'd have with cloud providers. So "cloud" as an operating model, not a destination, is what we're going for, and being able to take advantage of automation and the like. So where your application sits, absolutely some consideration. And what we've talked about Dave, you know, the governance, the security, the reliability, the performance are all reasons why being able to keep things, you know, in my environment with an infrastructure that I have control over is absolutely one of the reasons why I might keep things more along a converged infrastructure, rather than saying to go through the challenge of migration and optimizing and changing to something in more of a cloud native methodology. >> What about technical debt? Trey, people talk about technical debt as a bad thing, what is technical debt? Why do I want to avoid it? And how can I avoid it? And specifically, I know, Trey, I've thrown a lot of questions at you, but what is it about converged infrastructure and its capabilities that helps me avoid that technical debt? >> Well, it's an interesting thing, when you deploy an environment to support a mission-critical application, you have to make a lot of implementation decisions. Some of those decisions may take you down a path that may have a finite life. And once you've reached the life expectancy of that particular configuration, you now have debt that you have to reconcile. You have to change that architecture, that configuration. And so what we do with converged infrastructure is we dedicate a team of product management, an entire product management organization, a team of engineers that treat the integrations of the architecture as releases. And we think long range about how we avoid having to change the underlying architecture. And one of the greatest testaments to this is in our converged infrastructure products over the last 11 years, we've only seen two major architectural changes while supporting generational changes in underlying infrastructure capabilities well beyond when we first started. So the converged infrastructure approach is about how do we build an architecture that allows you to avoid those dead-end pathways in those integration decisions that you would normally have to make on your own. 
>> Joakim, I wanted to ask you, you've mentioned monolithic applications before. That's sort of, we're evolving beyond that with application architectures, but there's still a lot of monoliths out there so. And a lot of customers want to modernize those application and workloads. What, in your view, what are you seeing as the best path and the best practice for modernizing some of those monolithic workloads? >> Yeah, so Dave, as clients today are trying to build a new intelligent enterprise, which is one of SAP's leading a guidance today. They needed to start to look at how to integrate all these different systems and applications that we talked about before into the common business process framework that they have. So consolidating workloads from big data to HANA, non HANA systems, cloud, non-cloud applications into a single framework is an absolute key to that modernization strategy. The second thing which I also mentioned before is to take a new grip around orchestration and management. We know that as customers seek this intelligent approach with both analytical data, as well as experience and transactional data, we must look for new ways to orchestrate and manage those application workloads and data flows. And this is where we slowly, slowly enter into the world of a enterprise data strategy. And that's again, where converged as a very important part to play in order to build these next generation platforms that can both consolidate, simplify. And at the same time enable us to work in a cloud enabled fashion with our cloud operating model that most of our clients seek today. >> So Stu, why can't I just shove all this stuff into the public cloud and call it a day? >> Yeah, well, Dave, we've seen some people that, you know, I have a cloud first strategy and often those are the same companies that are quickly doing what we call "repatriation". I bristle a little bit when I hear these, because often it's, I've gone to the cloud without understanding how I take advantage of it, not understanding the full financial ramifications what I'm going to need to do. And therefore they quickly go back to a world that they understand. So, cloud is not a silver bullet. We understand in technology, Dave, you know, things are complicated. There's all the organizational operational pieces they do. There are excellent cloud services and it's really it's innovation. You know, how do I take advantage of the data that I have, how I allow my application to move forward and respond to the business. And really that is not something that only happens in the public clouds. If I can take advantage of infrastructure that gets me along that journey to more of a cloud model, I get the business results. So, you know, automation and APIs and everything and the Ops movement are not something that are only in the public clouds, but something that we should be embracing holistically. And absolutely, that ties into where today and tomorrow's converge infrastructure are going. >> Yeah, and to me, it comes down to the business case too. I mean, you have to look at the risk-reward. The risk of changing something that's actually working for your business versus what the payback is going to be. You know, if it ain't broken, don't fix it, but you may want to update it, change the oil every now and then, you know, maybe prune some deadwood and modernize it. But Trey, I want to come back to you. Let's take a look at some of the options that customers have. And there are a lot of options, as I said at the top. 
You've got do it yourself, you got a hyper-converged infrastructure, of course, converged infrastructure. What are you seeing as the use case for each of these deployment options? >> So, build your own. We're really talking about an organization that has the expertise in-house to understand the integration standards that they need to deploy to support their environment. And candidly, there are a lot of customers that have very unique application requirements that have very much customized to their environment. And they've invested in the expertise to be able to sustain that on an ongoing basis. And build your own is great for those folks. The next in converged infrastructure, where we're really talking about an external storage array with applications that need to use data services native to a storage array. And self-select compute for scaling that compute for their particular need, and owning that three tiers architecture and its associated integration, but not having to sustain it because it's converged. There are enormous number of applications out there that benefit from that. I think the third one was, you talked about hyper-converged. I'll go back to when we first introduced our hyper-converged product to the market. Which is now leading the industry for quite some time, VxRail. We had always said that customers will consume hyper-converged and converged for different use cases and different applications. The maturity of hyper-converged has come to the point where you can run virtually any application that you would like on it. And this comes down to really two vectors of consideration. One, am I going to run hyper-converged versus converged based on my operational preference? You know, hyper-converged incorporates software defined storage, predominantly a compute operating plane. Converge as mentioned previously uses that external storage array has some type of systems fabric and dedicated compute resources with access into those your operational preference is one aspect of it. And then having applications that need the data services of an external storage, primary storage array are the other aspect of deciding whether those two things are needed in your particular environment. We find more and more customers out there that have an investment of both, not one versus the other. That's not to say that there aren't customers that only have one, they exist, but a majority of customers have both. >> So Joakim, I want to come back to the sort of attributes from the application requirements perspective. When you think about mission-critical, you think about availability, scale, recoverability, data protection. I wonder if you could talk a little bit about those attributes. And again, what is it about converged infrastructure that that is the best fit and the right strategic fit for supporting those demanding applications and workloads? >> Now, when it comes to SAP, we're talking about clients and customers, most mission-critical data and information and applications. And hence the requirements on the underlying infrastructure is absolutely on the very top of what the IT organization needs to deliver. This is why, when we talk about SAP, the requirements for high availability protection disaster recovery is very, very high. And it doesn't only involve a single system. As mentioned before, SAP is not a standalone application, but rather a landscape of systems that needs to be kept consistent. And that's what a CI platform does so well. 
It can consolidate workloads, whether it's big data or the standard transactional workloads of SAP, ERP or UCC. The converged platforms are able to put the very highest of availability and protection standards into this whole landscape, making a really unique platform for CI workloads. And at the same time, it enables our customers to accelerate those modernization journeys into things such as ML, AI, IOT, even blockchain scenarios, where we've built out our capabilities to accelerate these implementations with the help of the underlying CI platforms and the rest of the SAP environment. >> Got it. Stu, I want to go to you. You had mentioned before the cloud operating model, something that we've been talking about for a long time at Wikibon. So can converged infrastructure substantially mimic that cloud operating model and how so? What are the key ingredients of being able to create that experience on-prem? >> Yeah, well, Dave, as we've watched for more than the last decade, the cloud has looked more and more like some of the traditional enterprise things that we would look for, and the infrastructure in private clouds has gone more and more cloud-like and embraced that model. So, you know, I think back to the early days, Dave, we talked about how cloud was supposed to just be, you know, "simple". If you look at deploying in the cloud today, it is not simple at all. There are so many choices out there, you know, way more than I had in an initial data center. In the same way, you know, I think, you know, the original converged infrastructure from Dell, if you look at the feedback, the criticism was, you know, oh, you can have it in any color you want, as long as it's black, just like the Ford Model T. But it was that simplicity and consistency that helped build out most of what we were talking about with the cloud models; I wanted to know that I had a reliable substrate platform to build on top of. But if we talk about, Dave, today and in the future, what do we want? First of all, I need that operating model in a multicloud world. So, you know, we look at the environments that can spread beyond just a single cloud, because customers today have multiple environments, and absolutely hybrid is a big piece of that. We look at what VMware's doing, look at Microsoft, Red Hat, even Amazon extending beyond just a cloud and going into hybrid and multicloud models. Automation, a critical piece of that. And we've seen, you know, great leaps and bounds in the last couple of generations of what's happening in CI to take advantage of automation. Because we know we've gone beyond what humans can just manage themselves and therefore, you know, true automation is helping along those environments. So yes, absolutely, Dave. You know, the lines are blurred between the private cloud and the public cloud. And it's just that overall cloud operating model and helping customers to deal with their data and their applications, regardless of where it is. >> Well, you know, Trey, in the early days of cloud and converged infrastructure, that homogeneity that Stu was talking about, any color as long as it's black, that was actually an advantage to removing labor costs, that consistency and that standardization. But I'm interested in how CI has evolved, its, you know, added-in optionality. I mean Joakim was just talking about blockchain, so all kinds of new services. 
But how has CI evolved in the better part of the last decade and what are some of the most recent innovations that people should be thinking about or aware of? >> So I think the underlying experience of CI has remained relatively constant. And we talk about the experience that customers get. So if you just look at the data that we've analyzed for over a decade now, you know, one of the data points that I love is 99% of our customers who buy CI say they have virtually no downtime anymore. And that's a great testament. 84% of our customers say that their IT operations run more efficiently. The reality around how we delivered that in the past was through services and humans performing these integrations and the upkeep associated with sustaining the architecture. What we've focused on at Dell Technologies is really bringing technologies that allow us to automate those human integrations and best practices, in such a way where they can become more repeatable and consumable by more customers. We don't have to have as many services folks deploying these systems as we did in the past, because we're using software intelligence to embed that human knowledge that we used to rely on individuals exclusively for. So that's one of the aspects of the architecture. And then just taking advantage of all the new technologies that we've seen introduced over the last several years, from all-flash architectures and NVMe to, on the horizon, NVMe over Fabrics. All of these things, as we orchestrate them in software, will become more consumable by the average everyday customer. Therefore it becomes more economical for them to deploy infrastructure on premises to support mission-critical applications. >> So Stu, what about cloud and multicloud, how does CI support that? Where do those fit in? Are they relevant? >> Yeah, Dave, so absolutely. As I was talking about before, you know, customers have hybrid and multicloud environments and managing across these environments is pretty important. If I look at the Dell family, obviously they're leveraging VMware heavily as the virtualization layer. And VMware has been moving heavily as to how it supports containerized and Kubernetes environments and extends their management to not only what's happening in the data center, but into the cloud environment with VMware Cloud. So, you know, management in a multicloud world, Dave, is one of those areas where we definitely have some work to do. Something we've looked at at Wikibon for the last few years is how will multicloud be different than multi-vendor? Because that was not something that the industry had done a great job of solving in the past. But you know, customers are looking to take advantage of the innovation, wherever it is in the services. And you know, the data-first architecture is something that we see and therefore that will bring them to many services and many places. 
Things like security are not a built-in but something that can happen along the entire path. A more recent addition to the DevOps movement also is something like FinOps. So, you know, how do we make sure that we're not just having finance sign off on things and look back every quarter, but in real time understand how we're architecting things, especially in the cloud, so that we remain responsible for that model. So, you know, speed is, you know, one of the most important pieces for business and therefore the DevOps movement is helping customers move faster and, you know, leverage and get value out of their infrastructure, their applications and their data. >> Yeah, I would add to this that I think the big transition for organizations, 'cause I've seen it in developing my own organization, is getting IT operators to think programmatically instead of configuration-based, where you use a tool to configure a device. Think about how we create programmatic instruction to interact with all of the devices; that creates that cloud-like adaptation. That feeds in application-level signaling to adapt and change the underlying configuration of that infrastructure to better run the application without relying upon an IT operator, a human, to make a change. This sort of thinking programmatically is, I think, one of the biggest obstacles that the industry faces. And I feel really good about how we've attacked it, but there is a transformation within that dialogue that every organization is going to navigate through at their own pace. >> Yeah, infrastructure as code, automation, this is fundamental to digital transformation. Joakim, I wonder if you could give us some insight as you talk to SAP customers, you know, in Europe, across EMEA, how does the pandemic change this? >> I think the pandemic has accelerated some of the movements that we already saw in the SAP world. There is obviously a force for making sure that we get our financial budgets in shape and that we don't overspend on our cost levels. And therefore it's going to be very important to see how we can manage all these new revenue-generating projects that IT organizations and business organizations have planned around new customer experience initiatives, new supply chain optimization. They know that they need to invest in these projects to stay competitive and to gain new competitive edge. And where CI plays an important part is in order to, first of all, keep costs down in all of these projects, make sure to deliver a standardized common platform upon which all these projects can be introduced. And then of course, making sure that availability is kept high and risks at a minimum, right? Risk low and availability at a record high, because we need to stay on with our clients and their demands. So I think again, CI is going to play a very important role as we see customers go through this pandemic situation and needing to put pressure on both innovation and cost control at the same time. And this is where also our new upcoming data strategies will play a really important part, as we need to leverage the data we have in a better, smarter and more efficient way. >> Got it. Okay guys, we're running out of time, but Trey, I wonder if you could, you know, break out your telescope or your crystal ball, give us some visibility into the future of converged infrastructure. What should we be expecting? >> So if you look at the last release of this latest technology that we released in PowerOne, it was all about automation. 
We'll build on that platform to integrate other converged capabilities. So if you look at the converged systems market, hyper-converged is very much an element of that. And I think what we're trending to is recognizing that we can deliver an architecture that has hyper-converged and converged attributes all in a single architecture, and then dial up the degrees of automation to create more adaptations for different types of application workloads, not just your traditional three-tier application workloads, but also those microservices-based applications that one may historically think, maybe it's best to run that off premises. We feel very confident that we are delivering platforms out there today that can run more economically on premises, provide better security, better data governance. And a lot of the adaptations, the enhancements, the optimizations that we'll deliver in our converged platforms of the future are about colliding new infrastructure models together, and introducing more levels of automation to have greater adaptations for applications that are running on it. >> Got it. Trey, we're going to give you the last word. You know, if you're an architect of a large organization, you've got some mission-critical workloads that, you know, you're really trying to protect. What's the takeaway? What's really the advice that you would give those folks thinking about the sort of near and midterm and even long term? >> My advice is to understand that there are many options. We sell a lot of independent component technologies in data centers that run every organization's environment around the world. We sell packaged outcomes in hyper-converged and converged. And a lot of companies buy a little bit of build your own, they buy some converged, they buy some hyper-converged. I would implore everyone, especially in this climate, to really evaluate the packaged offerings and understand how they can benefit their environment. And we recognize that there's not one hammer for which everything is a nail. That's why we have this broad portfolio of products that are designed to be utilized in the most efficient manners for those customers who are consuming our technologies. And converged and hyper-converged are merely another way to simplify the ongoing challenges that organizations have in managing their data estate and all of the technologies they're consuming at a rapid pace, in concert with the investments that they're also making off premises. So the technologies that we talked about today are very much things that organizations should research, investigate and utilize where they best fit in their organization. >> Awesome guys, and of course there's a lot of information at dell.com about that. Wikibon.com has written a lot about this and there are many, many sources of information out there. Trey, Joakim, Stu, thanks so much for the conversation. Really meaty, a lot of substance, really appreciate your time, thank you. >> Thank you guys. >> Thank you Dave. >> Thanks Dave. >> And thank you everybody for watching. This is Dave Vellante for theCUBE and we'll see you next time. (soft music)

Published Date : Jul 6 2020


Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020


 

>> Announcer: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We got over 50,000 people registered so there's clearly a ton of interest in the world of Docker and Eddie's as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shawn Conley, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks and I'm broadcasting from my hometown of Philly. Our panelists include Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia. We have Kelsey Hightower, Principal Developer Advocate at Google Cloud, joining us from Washington State, and we have Chris Aniszczyk, CTO/COO at the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Docker and Kubernetes wave since the beginning with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed from both a community and a tech perspective? Just curious to get things sized up. >> Sure, when CNCF started roughly four, over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring Kubernetes with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes. So things like KubeVirt run VMs on Kubernetes, which is crazy, not just containers. You have folks at Microsoft experimenting with a project called Krustlet which is trying to run WebAssembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security and different ways to run different types of workloads. And at least that's kind of what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting projects in, is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now and Michelle could speak a little bit more to this being on the TOC, the pipeline for new projects is quite extensive and it covers all sorts of spaces from service meshes to security projects and so on. So it's ever so expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, let's head to you. But before we actually dive in, let's talk a little glory days. A rumor has it that you are the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are, my speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I wasn't captain 'cause I wasn't really great at anything else apart from constantly cheering on the team. >> A little better than my eighth grade Spelling Champ Award, so I think I'd rather have the kickball. 
But you've definitely spent a lot of time leading in Open Source, you've been across many projects for many years. So how does the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency. Just come in and tell people what you really need to do and clearly articulate your problem; the more clearly you articulate your problem and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to, and clearly communicating those objectives. >> Yeah, and the various foundations, I think one of the things that I've seen, particularly at the Apache Software Foundation and others, is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute, and claiming victory in one way or the other might make for interesting marketing drama. But, I think that's a little bit of the challenge. In some of the standards-based work you're doing, I know with CNI and some other things, are they similar, are they different? How would you compare and contrast that to something a little more structured like CNCF? >> Yeah, so most of what I do is in the CNCF, but there's specs and there's projects. I think what CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments. Because if you're in a project, you have to say, "Okay, I want this feature or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box and like, what framework do you want to work in? You have to think a little farther ahead in terms of, is this solution or this decision we're going to make going to last for the next how many years? You have to get more of a buy-in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you have this really solid standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of so-- >> Gotcha. So Kelsey, we'll head over to you, as your focus is on developer advocacy, you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, Cloud Service primitives, including container services, server-less platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice? 
I think minimalism, I hear you talk about that periodically, I know you're a fan of that. How do you pass that on in your developer advocacy in your day to day work? >> Yeah, I think, for most developers, most of this is not really top of mind for them. It's something you may see in a post on Hacker News, and you might double click into it. Maybe someone on your team brought one of these tools in and maybe it leaks up into your workflow so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with login, they help them run their application. But for most people, this isn't the core idea of the job for them. For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as, number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source, some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an aisle in, like, Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and maybe you study it to see if you have a problem. And for most people, if you don't have that problem that that tool solves, you should be happy. No one needs every project and I think that's where the foundation for confusion is. So my main job is to help people not get stuck and confused in the landscape and just be pragmatic and just use the tools that work for 'em. >> Yeah, and you've spent the last little while in the server-less space really diving into that area, compare and contrast, I guess, what you found there, minimalist approach, who are you speaking to from a server-less perspective versus that of the broader CNCF? >> The thing that really pushed me over, I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like GeoCities from the 90s but look, she's making a website. And she wanted her friends to take a look. So she copied and pasted from her browser 127.0.0.1 and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down and teach my daughter about Linux and Docker and Kubernetes?" That isn't her main goal, her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, Firebase deploy. And our site was up in a few minutes, and she sent it over to her friend and there you go, she was off and running. The whole server-less movement has that philosophy as one of the stated goals, that needs to be the workflow. So, I think server-less is starting to get closer and closer, you start to see us talk about, and Chris mentioned this earlier, moving up the stack. Where we're going with up the stack, the North Star there is that feel where you get to focus on what you're doing, and not necessarily how to do it underneath. 
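Kelsey's Firebase story is the serverless workflow in miniature: write a small stateless app, run one deploy command, and let the platform own building, scaling and serving it. A minimal sketch of that kind of app in Go, assuming a container-based serverless platform that injects a PORT environment variable; that convention is borrowed here for illustration and is not something the panel specified:

```go
// A rough sketch, not production code: a stateless HTTP app of the kind a
// serverless platform can build, deploy and scale on the developer's behalf.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// The app only worries about the "what": returning a page.
	fmt.Fprintln(w, "hello from my site")
}

func main() {
	// Many container-based serverless platforms tell the app which port to
	// listen on via PORT; defaulting to 8080 keeps local runs simple.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Everything else, the build, the rollout, the scaling to zero, is the platform's job, which is the "focus on the what" experience being described.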
And I think server-less is not quite there yet, but every type of workload, stateless web apps check, event-driven workflows check, but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run, so there's still work to do there. So server-less for me serves as the North Star for why all these projects exist, for people that may have to roll their own platform to provide the experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's a ton of individual projects, each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the Service Mesh Interface, standardizing interfaces so things can snap together more easily, I think, are some of the approaches, but are you doing anything specifically to encourage this cross-fertilization, collaboration and pluggability, because there's just a ton of projects, not only at the CNCF but outside the CNCF, that need to plug in? >> Yeah, I mean, a lot of this happens organically. CNCF really provides the neutral home where companies, competitors, could trust each other to build interesting technology. We don't force integration or collaboration, it happens on its own. We essentially allow the market to decide what a successful project is long term or what an integration is. We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called OpenTracing, and an effort called OpenCensus, which were basically trying to standardize how you're going to deal with metrics, tracing and so on in a cloud native world, that were essentially competing with each other. The CNCF TOC and the communities came together and merged those projects into one parent effort called OpenTelemetry, and so that to me is a case study of how our committee helps bridge things. But we don't force things, we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job, writing code, delivering code. What's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the (mumbles) project was to make it easier to deploy an application on Kubernetes so that you could see what the finished product looks like. And then dig into all of the things that that application is composed of, all the resources. So we've been really passionate about this kind of stuff for a while now. And I love seeing projects that come into the space that have this same goal and just iterate and make things easier. I think we have a ways to go still, I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code. 
So one of the projects that I really like working with is Tilt. It gives you this dashboard in your CLI, aggregates all your logs from your applications, and it kind of watches your application changes and reconfigures those changes in Kubernetes so you can see what's going on, it'll catch errors; anything with a dashboard I love these days. So Kiali is like a metrics dashboard that's integrated with Istio, a service graph of your service mesh, and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest it has been really exciting to see. And I'm seeing more and more dashboards come to light and I'm very excited about that. >> Yeah, as part of DockerCon, just as a person who will be attending some of the sessions, I'm really looking forward to seeing where Docker Compose is going, I know they opened up the spec to broader input. I think your point, a good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned and I'm glad you bring that up. I want to see programming language libraries integrate with the Compose spec. I really want to see what happens with that. I think it's great that they opened that up and made that a spec, because obviously people really like using Compose. 
So the libraries that interface to the structured logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID? A lot of those things are still left for developers to figure out on their own. So while we have things like Kubernetes and Fluentd, we have all of these tools to deploy apps onto those targets, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe as an industry, we're over-correcting. So we had, you said you come from the JBoss world, I started a lot of my career in Cisco administration, there's where we focused a little bit more on the actual application needs, maybe from a router side as well. But now what we're seeing is things like Spring Boot start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are, what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to that, it gives you this very low-level framework, but you still have to understand the concepts around what does it mean to allow someone to do something, and one missed configuration, all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization, run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece, you're going to somehow find the ability to manage 1000 pieces with these tools underneath, but still not solving the actual developer problems. So this is where you've seen it already with a couple of popular blog posts from other companies. They cut too deep. They're going from 2000, 3000 microservices back to maybe 100 or 200. So to my world, it's going to be not just one monolith, but end up maybe having 10 or 20 monoliths that maybe reflect the organization that you have versus the architectural pattern that you're at.
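Kelsey's authorization example is worth making concrete. The sketch below shows the shape of an in-app check that hands the "can this user do this?" decision to a policy engine over HTTP, in the style of an OPA sidecar; the localhost address, the policy path and the response shape are illustrative assumptions, not a drop-in integration:

```go
// A hedged sketch of delegating an authorization decision to a policy
// engine (for example an OPA sidecar). The URL and policy path below are
// placeholders; a real deployment would point at its own policy.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type authzRequest struct {
	Input map[string]string `json:"input"`
}

type authzResponse struct {
	Result bool `json:"result"`
}

// allowed asks the policy engine whether a user may perform an action on a resource.
func allowed(user, action, resource string) (bool, error) {
	payload, err := json.Marshal(authzRequest{Input: map[string]string{
		"user":     user,
		"action":   action,
		"resource": resource,
	}})
	if err != nil {
		return false, err
	}
	// Assumes a sidecar listening locally with a policy exposed at this path.
	resp, err := http.Post("http://localhost:8181/v1/data/app/allow",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var out authzResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return out.Result, nil
}

func main() {
	ok, err := allowed("alice", "read", "/orders/42")
	if err != nil {
		log.Fatalf("authorization check failed: %v", err)
	}
	fmt.Println("allowed:", ok)
}
```

The warning in the answer still stands: the policy itself, and what a single misconfiguration means for security, remains the developer's problem.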
So continuous delivery is more of infrastructure notice notion, progressive delivery, feature flags, those types of things, or app level, concepts, minimizing the blast radius of your, the new features you're deploying, that type of stuff, I think begins to speak to the pain of application delivery. So I'll guess I'll put this up. Michelle, I might aim it to you, and then we'll go around the horn, what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk opened on something that's pretty important. >> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool, you can just deploy something, and people have been asking for so many years, ever since I've been in this space, it's like, "How do I do AB deployment?" "How do I do Canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example, for example, it's a really good way to execute these deployment strategies but then, make sure that everything's happening correctly via observing metrics, rollback if you need to, so you don't just throw your whole system. I think it solves the problem and allows you to take risks but also keeps you safe in that you can be confident as you roll out your changes that it all works, it's metrics driven. So I'm just really looking forward to seeing more tools like that. And dashboards, enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling and in fact, if you remember the Eclipse IDE and that whole integrated experience, I was blown away recently by a demo from GitHub. They have something called code spaces, which a long time ago, I was trying to build development environments that essentially if you were an engineer that joined a team recently, you could basically get an environment quickly start it with everything configured, source code checked out, environment properly set up. And that was a very hard problem. This was like before container days and so on and to see something like code spaces where you'd go to a repo or project, open it up, behind the scenes they have a container that is set up for the environment that you need to build and just have a VS code ID integrated experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of problems when you're going to work with a project attribute, that whole initial bootstrap of, "Oh you need to make sure you have this library, this install," it's so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have powered by a lot of this cloud native technology behind the scenes that people may not know about. 
>> Yeah, 'cause I've been talking with the team over at Docker, the work they're doing with Docker Desktop, enabling a local environment and making sure it matches as closely as possible the deployed environments that you might be targeting. These are some of the pains that I see. It's hard for developers to get bootstrapped up, it might take them a day or two to actually just set up their local laptop and development environment, particularly if they change teams. So corralling that complexity down, and not necessarily being overly prescriptive as to what tool you use. So if you're in Visual Studio Code, great, it should feel integrated into that environment; if you use a different environment, or if you feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started? I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating, you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera. >> Yeah, it really depends on what you're looking for and what you want to learn. So, if you're someone who's new to the space, honestly, check out the case studies on cncf.io, those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they came across. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that to drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the Cloud Native Computing Foundation's YouTube channel, so they have playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy talking, excuse me, I really enjoy watching older talks, just because they explain why things were done the way they were done, and that helps me build the tools I build. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find that in the CNCF Technical Oversight Committee, the TOC GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a SIG called App Delivery, there's a SIG for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working, and what are the next changes we want to see in the next months. So if you want that kind of granularity and discussion on what's happening, then definitely join those meetings. Check out those meeting notes and recordings. >> Gotcha. So Kelsey, as you look at 2020 and beyond, I know you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now?
>> I don't think any of the raw technology pieces get me super excited anymore. Like, I've seen the circle go around three or four times; in five years there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People are able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a webcam and have their own podcast and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal and they're hard to come by. So I think we've done a good job democratizing these ideas, distributed systems, one company got really good at packaging applications to share with each other, I think that's great, and that's never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, then we're going to be talking about another digital transformation 10 years from now because, it's going to be funny, but Kubernetes will be the new legacy. It's going to be the thing that, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV looking for old school Kubernetes engineers to migrate them to some new thing, that's going to happen. You got to know that. So at some point the merry-go-round will stop. And we're going to be focused on what you do with this. So the internet is there, most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies, to comprehend. You're at the point now where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one, it's about what you do with it. So that's going to be interesting. And we're just not there yet with this tech, it's still so much infrastructure stuff. We're so in the weeds that most people almost burn out just getting to the point where you can start to look at what you do with this stuff. So that's what I keep my eye on, is when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're an iOS developer or an Android developer, you use the SDK that they gave you, every year there's some new device that enables some new thing, speech to text, VR, AR, and you import an SDK and it just works. And you can put it in one place and 100 million people can download it at the same time with no DevOps team, that's amazing. When can we do that for server side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I could definitely relate. I was at Hortonworks in 2011, so Hadoop, in many respects, was sort of the precursor to the Kubernetes era, in that it was, as I like to refer to it, a bunch of animals in the zoo, it wasn't just the yellow elephant. And when things matured beyond that, it was basically about what kind of analytics you're driving, what type of machine learning algorithms and applications you're delivering. You know, that's when things tip over into a real solution space. So I definitely see that.
I think the other cool thing, even just outside of the container space, is there's just such a wealth of data related services. And I think about how those two worlds come together; you brought up the fact that, in many respects, server-less is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed as these become richer applications from a data processing and actionable insights perspective. >> I also want to be clear on one thing. So some people confuse two things here, what Michelle said earlier about, for the first time, a whole group of people get to learn about distributed systems and things that were reserved for white papers and PhDs, this stuff is now super accessible. You go to the CNCF site, and all the things that you read about, or we used to read about, you can actually download, see how it's implemented and actually change how it works. That is something we should never say is a waste of time. Learning is always good because someone has to build these types of systems and whether they sell it under the guise of server-less or not, this will always be important. Now the other side of this is that there are people who are not looking to learn that stuff, the majority of the world isn't looking. And in parallel, we should also make this accessible, which should enable people that don't need to learn all of that before they can be productive. So that's two sides of the argument that can be true at the same time; a lot of people get caught up thinking everything should just be server-less, and that everyone learning about distributed systems, and contributing and collaborating, is wasting time. We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel, so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how you see this playing out; so DockerCon is going to virtual events, and inherently the Open Source community is distributed and is used to non face-to-face collaboration. But there's a lot of value that comes from assembling a tent where people can meet, so what's the best way? How do you see things playing out? What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place. Different themes, verticals, I've already attended a handful of virtual events the last few weeks, from Red Hat Summit to Open Compute Summit to Cloud Native Summit, and you'll see more and more of these. I think, in the long term, once the world either gets past COVID or there's a vaccine or something, I think the innate nature for people to want to get together and meet face to face and deal with all the serendipitous activities you would see in a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before, DockerCon can have 50,000 people at it. I don't remember what the last physical DockerCon had, but that's definitely an order of magnitude more.
So being able to do these virtual events to augment potential physical events in the future, so you can build a more inclusive community, so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of the event, to me is awesome, and I hope it's something that we take away from all these virtual events: when we get back to physical events, we find a way to ensure that these things are inclusive for everyone and not just folks that can physically make it there. So those are my thoughts on the topic. And I wish you the best of luck planning DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people, and that just terrifies me from a KubeCon + CloudNativeCon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey and Michelle for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)

Published Date : May 29 2020


Laetitia Cailleteau & Pete Yao, Accenture | Boomi World 2019


 

>> Narrator: Live, from Washington, D.C. It's theCube! Covering Boomi World 19. Brought to you by Boomi. >> Welcome back to the Cube's coverage of Boomi World 2019, from D.C. I'm Lisa Martin. John Furrier is my cohost, and we're pleased to be welcome a couple of guests from Accenture, Boomi partner. To my right, we've got Pete Yao, Global Managing Director of Integration, and Laetitia Cailleteau, Accenture's Global Lead for Conversational AI. Welcome, both of you. >> Thank you. It's great to be here. >> Thank you so much. So, big news. You can't go anywhere these days without talking about AI. I mean, there's even commercials on TV, that, you know, any generation knows something about AI. But, Laetitia, let's start with you. Some big news coming out this morning, with what Boomi and Accenture are doing for conversational AI. Give our audience, kind of an overview of what you guys announced this morning. >> So, thank you very much. So, conversational AI is booming in the market. It's at the top of the agenda for a number of our C-Suites. It's a new way to make system more human. So, instead of having to learn the system you can actually speak. Ask them direct question. Have a conversation. And actually, what we are doing, what we announced this morning, is Accenture and Boomi are going to partner together to deliver that kind of services for our client. Much faster. Cause we have the expertise and the know how, of designing those conversational experience, and Boomi, obviously, integrates really fast with Beacon system. And the two, together, can really be accelerating, you know, the value delivered to our client. >> And the technology piece, I just want to sure of something. Cause, you guys are providing a front end, so, real technology, with Boomi. So, it's a together story? >> Yeah, it's definitely a together story. And as you say, we are quite expert in designing those experience on the front end. And Boomi, obviously, kind of powers up the integration in the background. >> So, this is going to be enabler of, something you said a minute ago, is, instead of us humans having to learn the tech the tech's going to learn us. Is that fair to say? >> Very fair to say. That's exactly how we want to see it. And I think we call that trend, radically human systems. So, systems are going to become more radically human as we go on. And conversational AI is one enabler of that. >> Is it going to be empathetic? Like, when, you were saying this morning something I loved, on stage. We've all had these interactions with AI, with bots, whether we're on a dot com site, trying to fix something for our cable provider. Or we're calling into a call center. You're starting to get, your voice changes, your agent! And you want that. Is it going to be able to understand, oh, all right, this person, maybe we need to escalate this. There's anger coming through the voice. Is it going to be able to detect that? >> On voice, you can definitely start detecting tone much better than on text. Cause on texts it's very small snippets. And it's quite difficult to define somebody's mood by one small interaction. Typically, you need a number of interactions to kind of see the build up of the person's emotion. But, on voice, definitely. You know, your intonation definitely defines your state of communication. >> You can tell someone's happy, sad, and then use the text meta data to add to it. This is fascinating, cause we all see Apple with Siri front end. That's a different system. They have a back end to Apple. 
This is a similar thing. You guys have a solution at Accenture. Can you explain how people engage with Accenture? Cause, the Boomi story is a great announcement, congratulations on that. But still, you can deploy this technology to any back end. Is that right? >> Yeah, to any back end. We have a number of live deployment running at the moment. I think the key thing is, you know, especially in the call center. Call center is an area that has not been invested in for, like decades, yeah. And, very often, the scripts are very inward driven. So they would describe the company's processes rather than think about the end user. So, what we do in Accenture, is we try to reinvent the experience, be much more user driven. And then we have a low code, no code, kind of interface, to be able to craft some of those conversation on all the variation. But, more importantly, we actually store all those conversation and can learn. And so we have assisted learning module to make a natural language processor cleverer and cleverer. And as you were saying, before we started to be on air, the user is contributing training data. Yeah, I was just sharing one of recent stories, of an ISP that I was trying to interact with, and frustrated that I couldn't just solve this problem on my own. And then after I was doing some work for theCube, a few months ago I realized, oh, actually I have to be calm here. I have an opportunity, as does everybody, to help train the models. Because that's what they need, right? It takes a tremendous amount of training data before our voices can become like fingerprints. So, I think, if more of us just kind of flip that, maybe our tone will get better, and obviously the machines will detect that, right? >> Yeah, no definitely. I think they key with conversational AI is not to see it as just plain tech, but really an opportunity to be more human centered. And, you know, obviously knowing who peoples are and how they interact in different kind of problems and scenario is absolutely critical. >> Pete, I want to get your thoughts on digital transformation, because we've done, I've done thousands of interviews on theCube, and many, many shows. Digital transformation has been around for awhile It all stops in one area. Okay, process technology, great areas, we've got visibility on that. Automation's excellent for processes. Technology, a plethora of activity. The people equations always broken down. Culture, has stopped dev ops. Maybe not enough data scientists or linguistic engineers to do conversational AI. You guys fill that void. Great technology. The people equation changes when there's successes. It all comes down to integration. Because that's where, either I don't believe in it, I don't want to do it, the culture doesn't want it. Time to value. The integration piece is critical. Can you guys explain how the Boomi Accenture integration works? And what should enterprises take away from this? >> Well, yeah, one of the key things when we started our relationship with Boomi more than five years ago now, really, Boomi was the leader, kind of the ones who invented iPad, right, the integration platform as a service. So, in the small and medium business, a lot of those companies had already moved a lot of the critical apps to the cloud. But, in the enterprise we see that it's taken a lot longer, right, so, certain departments may move certain pieces, but it's still very much a hybrid, right, between a cloud and on-prem based. 
So, taking a platform like Boomi, and being able to use that with the atomsphere platform has really allowed us to move forward. We've done quite a bit of work in Europe. And, now, in the last year, we've been focusing on North America, along with Europe. So, really, the platform has allowed us to focus on the integration. >> It's interesting, you bring up, you guys have been at Accenture for a long time, you've seen the waves. Oh, big 18 month deployment, eight years. Sometimes years, going back to the 80s and 90s. But now, the large enterprise kind of looks like SMB's because the projects all look, they're different now. You could have a plethora of projects out there, hundreds of projects, not one monolith. So, this seems to be a trend. Do you guys see it that away? Do you agree? Could you, like, share some insight as to what's going on in these large companies. Is it still the same game of a lot of big projects? Or, are things being broken down into smaller chunks with cloud platform? Can you guys just share your insights on this? >> Do you want to take that one first? >> You can do first, yeah. >> Okay. So the days of the big bang, big transformation, multi year programs, we don't see very many of those. A lot of our clients have moved away, towards lean, agile delivery. So, it's really being able to deliver value in shorter periods of time. And in that sense, you do see these big companies acting more like SMBs. Cause you really have to deliver that value. And, with Boomi's platform it's not just the integration aspect, and though our relationship started there, it's with some of the other pieces of technology, like flow and low code or no code as well, which has allowed Boomi customers and our clients and our teams to be able to get those applications out to production much quicker. >> Lisa: A big enabler, sorry, of the citizen developer. >> Yeah, absolutely. >> John: Thoughts on this trend. >> Yeah, so I guess my thought I will come with the innovation angle. So, obviously, we are in a very turbulent time, where company, you know, like a number of the Fortune 500 of 20 years ago, they're not there any longer. And there's quite a heavy rotation on some of the big corporation. And, what's really important is to size the market, and innovate all the time. And I think that's one of the reason why we have much smaller project. Because if you want to innovate you need to go to market really fast, try things up, and pivot ideas really fast, to try to see if people like it and want it. And, I think, that's also one of the key driver of smaller, kind of projects, that would just go much faster to like... >> We had a guy on theCube say, data is the new software. Kind of provocative, bringing a provocative statement around data's now part of the programatic element. And integration speaks volumes. I want to get your reaction to the idea of glue layers. I mean, people kick that term around. That's a glue layer. Basically integration layer with data. Control plane. This isn't really a big part of the integration story for Boomi but for other customers. What's your guys thoughts on this data layer, glue layer, that software and data come together? You're showing it with the conversational AI. It's voice, in terms of software, connects to another system. There's glue. >> Yeah, so, that's a very interesting angle. 
Cause I think, you know, in the old integration world people would just build an interface, and then it would go live, and they wouldn't necessarily know exactly what's going on the bonnet. And I think, adding that insight, of what you flow, or how often they use, when they're kicked off, is something that becomes quite important when you have a lot of integration to manage. I would remember, I was working for a bank, a major bank in the UK, where we trying to make a mainframe system go real time. But we had all those batch schedule, kind of running, and nobody really knew when, what, and the dependency in between each other. So, I think it definitely helps a lot. You know, bubbling up that level of visibility you need to transform truly. >> Yeah, and you're seeing lot of companies now have Chief Data Officers. Right, but data really is important. And with big data data links, unstructured data, structured data, tradional RDMS databases, being able to access that information. Is it just read only? Is it read and write? You're really seeing, kind of, how all of it has to come together. >> So, if we look at the go-to-market for Boomi and Accenture. Pete, talk to us about how that go-to-market strategy has evolved during the partnership. And where you see it going with respect to emerging technologies like conversational AI. >> Oh, yeah, we've got great opportunities. So, we've started off, really just, hey, there was integration opportunity. Are we doing much work with Boomi and the enterprise. Five years ago, we hadn't. And we started doing more work, kind of in AsiaPac, and then in Europe. Three years ago we entered a formal relationship to accelerate the growth. It was accelerated growth platform which started at Amia. And this last year we formally signed one in North America as well. And in the last three years we've done four times the amount of work. The number of customers, we've got more than 40 joint customers together. The number of trained professionals within Accenture. We have more than 400 people certified, with more than 600 certifications. Some of them may be a developer as well as an architect. And so, a lot of that is really that awareness and the education, training and enablement, as well as some joint go-to-market activities. >> Any of those in a specific, I was reading some US cases in healthcare and utilities? >> Yeah, we're definitely, we've seen quite a bit in utilities and our energy practice. We've seen it in transportation. Because Accenture covers all the different industry groups we're really seeing it in all of them. >> You know, I'm fascinated by the announcement you guys had with Boomi. The big news. Conversational AI. Because it just makes so much sense. But I worry people will pigeon hole this into, you know, voice, like telephone call centers only. Cause the US cases you guys were showing on stage was essentially like, almost like a query engine, and using voices. Versus like an agent call center work flow, which is an actual work flow. Big market there, I have no doubt about it. But, there's other US cases. I mean, this is a big, wide topic. Can you just share the vision of conversational AI a little further? >> So, meaning, I think the capability we have is to kind of go on any channel. Voice is an interesting one, cause it's, I think, it's very common still, you know, to have a call center, when you dip into challenges. And this is kind of the most emerging and challenging from a technology perspective. So, that's the one that was showcased. 
But there's a number of chat channels that are also very important. On the web, or a synchronous channel, like Whatsapp and Facebook and all of that kind of thing. So, it's really kind of, really offering a broad choice to the end consumer. So they can pick and choose what they want at the moment they want. I think what we see in the market is a big shift from synchronous kind of interaction, like on the web. You go on the web, you chat with something, and you just need to be there to finish it. To actually text. Because you can just send a text, get a response, go to a meeting, and on the back of the meeting, when you have five minutes, you just kind of do the reply. And you actually solve your problem on your own terms. But really when you have the time. So, there is a lot coming there. And, you know, with Apple Business Chat, you know, there's a number of mechanisms that are coming up, and new channels. Before company tended to be, you know, we do digital, we do call center, and maybe we have chat, but actually all of that is broadening up. You know, people want multi channel experts. >> So, synchronous is key. Synchronous and synchronous communication. So, is there a tell sign for a client that says I'm ready for conversational AI? Would I have to have a certain data set? I mean, is it interface? What are some of the requirements, someone says, hey, I really want this. I want to do this. >> Yeah, so, the way we deal with all of that, very often, is if you have call center recording or chat recording, we have a set of routines that we pass through. So, we transcribe everything and we do what we'd call intend discovery. And from that we can know, you know, what are the most, kind of critical, kind of processes kicked off. And from that, we know if it's transactional, or if it's an interaction, or an attendant's emotionally loaded, like people not happy with their bill. And then we have different techniques to address all of those different, kind of processes, if you want, and transform them into new experiences. And we can very easily, kind of look at the potential value we can get out of it. So, for instance, with one of our client, we identify, you know, if you do that kind of transformation you can get 25 million off your call center. You know, like, which is very sizeable. And it's very precise cause it's data driven. So, it's based on kind of, real calls, recordings and data. >> Can't hide from data. I mean, it's either successful or not. You can't hide anymore. >> Yeah, and I think one of the extra value add is very often call center agent or chat agent, they're not really paid to classify properly, so they would just pick up the most easy one all time. So, they will misclassify some of those recordings. Choose what's easiest for them. But when you actually go into what was said it's a very different story. >> John: Well, great insight. >> So, AI becoming, not just IQ, but EQ, in the future? >> Yes, definitely. That's the whole idea. That why we need our users to emrace it. (laughing) >> Exactly. And turn those frustrating experiences into I have the opportunity to influence the model. >> Last question, Pete, for you. In terms of conversational AI, and the business opportunities that this partnership with Boomi is going to give to you guys, at Accenture. >> Oh, definitely looking forward to joint go-to-market, taking this globally. We were named, earlier this week, yesterday, the worldwide partner of the year. Second time that Accenture's been awarded that. Which we appreciate. 
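Laetitia's description of intent discovery, transcribing call-center recordings and tagging them with intents so the most common and most emotionally loaded processes surface, can be illustrated with a toy sketch. The intent labels and keywords below are invented for the example and have no relation to Accenture's actual routines, which are not described in detail in this conversation.

```go
// Toy sketch of "intent discovery" over transcribed call-center text: tag each
// transcript with an intent based on keyword matches, then count how often
// each intent occurs. Labels and keywords are invented for illustration only.
package main

import (
	"fmt"
	"strings"
)

// intent pairs a label with keywords that suggest it.
type intent struct {
	name     string
	keywords []string
}

var intents = []intent{
	{"billing_dispute", []string{"bill", "charge", "refund"}},
	{"outage_report", []string{"down", "outage", "not working"}},
	{"plan_change", []string{"upgrade", "downgrade", "switch plan"}},
}

// classify returns the first intent whose keywords appear in the transcript.
func classify(transcript string) string {
	t := strings.ToLower(transcript)
	for _, in := range intents {
		for _, kw := range in.keywords {
			if strings.Contains(t, kw) {
				return in.name
			}
		}
	}
	return "unknown"
}

func main() {
	calls := []string{
		"Hi, I was charged twice on my last bill and I want a refund.",
		"My internet has been down since this morning.",
		"Can I switch plan to something cheaper?",
	}

	counts := map[string]int{}
	for _, c := range calls {
		counts[classify(c)]++
	}
	fmt.Println(counts) // e.g. map[billing_dispute:1 outage_report:1 plan_change:1]
}
```

A production system would obviously use trained language models rather than keyword lists, but the output shape is the same: a ranked view of which intents drive call volume, which is what makes the business-case sizing described here possible.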
And that we look forward to working with Boomi and taking conversational AI to our joint clients. >> Awesome. Laetitia, Pete, thank you so much for joining John and me. Really interesting conversation. Can't wait to see where it goes. >> Great. Thank you very much. >> Our pleasure. >> Great conversational. >> Very conversational. >> Got some AI here, come on. >> Next time we give you a bot to sit in our seat. (all laughing) >> Cube conversations. >> Exactly. For our guests, and for John Furrier, I'm Lisa Martin. You're watching theCube, from Boomi World 19. Thanks for watching. (upbeat music)

Published Date : Oct 2 2019


Dom Wilde and Glenn Sullivan, SnapRoute | CUBEConversation, July 2019


 

(upbeat jazz music) >> Narrator: From our studios in the heart of Silicon Valley, Palo Alto, California, This is a Cube Conversation. >> Everyone welcome to this Cube Conversation here in Palo Alto, California. I'm John Furrier host of the Cube, here in the Cube Studios. We have Dom Wilde the CEO of SnapRoute, and Glenn Sullivan co-founder of SnapRoute hot startup. You guy are out there. Great to see you again, thanks of coming on. >> Good to see you. >> Appreciate it. >> Thanks. >> Your famous you got done at Apple, we talked about last time. You guys were in buildup mode, bringing your product to market. What is the update? You guys are now out there with traction. Dom give us the update. What's going on with the company? Quick update. >> Yeah, so if you remember we've built the sort of new generation of networking, targeted at the next generation of cloud around distributed compute networking. We have built Cloud Native microservices architecture from the ground up to reinvent networking. We now have the product out. We released the product back at the end of February of this year, 2019. So we're out with our initial POCs, we've got a couple of initial deals already done. And a couple customers of record and we deployed up and running with a lot of interest coming in. I think it's kind of one of the topics we want to talk about here is where is the interest coming from and where is this sort of new build out of networking, new build out of cloud happening. >> Yeah I want to get the detail on that traction but real quick what is the main motivator for some of these interest points? Obviously you got traction. What is the main traction points? >> So a couple things, number one, people need to be able to deploy apps faster. The network has always traditionally got in the way. It's been a inhibitor to the speed of business. So, number one, we enable people to deploy applications much faster because we're sort of integrating networking with the rest of the infrastructure operational model. We're also solving some of the problems around, or in fact, all of the problems around how do you keep your network compliant and security patched. And make it easier for operations teams to do those things and get security updates done really really quickly. So there's a whole bunch of operational problems that we're solving and then we're also looking at some of the issues around how do we have both a technology revolution in networking but also a economic revolution. Networking is just too expensive and always has been. So we've got quite a works of revolutionary model there in terms of bringing the cost of networking down significantly. >> Glenn, as the co-founder, as the baby starts to get out there and grow up, what's your perspective? Are you happy with things right now or how are things going on your end? >> Absolutely, the thing that I'm proudest of is the innovation that the team has been able to drive based on having folks that are real experts in Kubernetes, DevOps, and networking, all sitting in one room solving this problem of how you manage a distributed cloud using tool sets that are Cloud Native. That's really what I'm proudest of is the technology that we've been able to build and demonstrate to folks. Because nobody else can really do what we're doing with this mix of DevOps and Kubernetes, and Cloud Native engineering. Like general network protocol and systems people. 
>> You know it's always fun to interview the founders, and being an entrepreneur myself, sometimes where you get is not always where you thought you'd end up. But you guys always had a good line of sight on this Cloud Native shift in the modern infrastructure. >> Glenn: Right. >> You did work at Apple we talked about it in our last conversation. Really with obviously leading the win, they had pressure from the marketplace selling trillion dollar valuation company. But that was a early indicator. You guys had clear line of sight on this new modern architecture, kind of the cloud 2.0 we were talking about before we came on camera. This is now developing, right? So you guys are now in the market, you're riding that wave. It's a good wave to be on because certainly app developers are talking about microservices, or you talking about Kubernetes, talking about service meshes, stateful data. All these things are now part of the conversation but it's not siloed organizations doing it. So I want to dig into this topic of what is cloud 2.0. How do you guys define this cloud 2.0 and what is cloud 1.0? And then lets talk about cloud 2.0. >> Yeah, so cloud 1.0, huge success. The growth of the hyperscale vendors. You've got the success of Amazon, or Microsoft, Azure, and all of these guys. And that was all about the hyper-centralization of data, bringing all the desperate data centers that enterprises used to run and all that infrastructure into relatively a few locations. A few geographic locations and hyper-centralizing everything to support SaaS applications. Massively successful because really what cloud 1.0 did was it made infrastructure invisible. You could be an application developer and you didn't have to manage or understand infrastructure, you could just go and deploy your applications. So, the rise of SaaS with cloud 1.0. Cloud 2.0 is actually a evolution in our mind. It's not an alternative, it's actually an evolution that compliments what those vendors did with cloud 1.0. But it's actually... It's actually distributing data. So we pulled everything to central and now what we're seeing is that the applications themselves are developing such that we have new use cases. Things like enhanced reality and retail. We have massive sensor networks that are generating enormous amounts of data. We have self-driving cars where, you know, that need rapid response for safety things. And so what happens is you have to put compute closer to the devices that are generating that data. So you have to geographically now disperse and have edge compute and obviously the network that goes with that to support that. And you have to push that out into thousands of locations geographically. And so cloud 2.0 is this move of we've got this whole new class of cloud service providers and some regional telcos and things who are reinventing themselves, and saying, "Hey we can actually provide "the colos, we can provide the smaller locations "to host these edge compute capabilities." But what that creates is a huge networking problem. Distributed networking in massively distributed cases is a really big problem. What it does is it amplifies all of the problems that we coped with in networking for many years. I mean, Glenn, you can talk about this right? When you were at Apple one of the first realtime apps was Siri. >> Yeah, and I know it. Lets get back to the huge networking problem but I want to stay on the thread of cloud 2.0. Glenn, you were talking about that before we came on camera. 
He referenced that you worked for a time at Apple. Kind of a peak into the future around what cloud 2.0 was. Can you elaborate on this notion of realtime, latency, as an extension to the success of cloud 1.0? >> Right, so we saw this when we were deploying Siri. Siri was originally just a centralized application, just like every other centralized application. You know, iTunes. You buy a song, it doesn't really have to have that much data about you when you're buying that song. You go and you download it via the CDN and it gets it to you very quickly, and you're happy and everything's great. But Siri kind of changed that because now it has to know my voice, it has to know what questions I ask, it has to know things about me that are very personal. And it's also very latency sensitive, right? The quicker that it gets me a response the more likely I am to use it, the more data it gets about me the better the answers get. Everything about it drives that the data has to be close to the edge. So that means the network has to be a lot bigger than it was before. >> And this changes the architectural view. So just to summarize what you said is, iTunes needs to know a lot about the songs that it needs to deliver to. >> Glenn: Right. >> The network delivers it, okay easy. >> Glenn: Right. >> If you're clicking. But with the voice piece that kind of changed the paradigm a little bit because it had to be optimized and peaked for realtime, low latency, accuracy. Different problem set, than say, the iTunes. >> Glenn: Exactly. >> So they've networked together. >> Language specific, right? So, where is the user, what language are they speaking, how much data do we have to have for that language? It's all very very specific to the user. >> So cloud 2.0 is if I can piece this together is cloud 1.0 we get it, Amazon showcased there. It's kind of data, it's a data problem too. It's like AI, you seen the growth of AI validate that. It's about data personalization, Siri is a great example. Edge where you have data (chuckles) that needs to integrate into another application. So if cloud 1.0 is about making the infrastructure invisible, what is cloud 2.0 about? What's the main value proposition? >> To me it's about extracting the value from the data and personalizing it. It's about being able to provide more realtime services and applications while maintaining that infrastructure invisibility paradigm. That is still the big value of any cloud, any public cloud offering, is that I don't want to own the infrastructure, I don't want to know about it, I want to be able to use it and deploy applications. But it's the types of applications now and it's the value that the applications are delivering has changed. It's not just a standard SaaS application like Workday for instance, that is still a very static application-- >> John: It's a monolithic application, yeah. >> These are realtime apps, they're operating realtime. If you take an autonomous car, right? If I'm about to crash my car and the sensors are all going off, and it needs to brake and it needs to send information back and get a response. I want all that to happen in realtime, I don't want to sort of like have-- >> In any extraction layer of any layer of innovation 1.0, 2.0, as you're implying advancement. It's still an application developer opportunity, Glenn, right? >> Absolutely. >> Because at the end of the day the user expectations changed because of the experience that they're getting-- >> Yeah and it only gets worse right? 
Because the more network that I have the more distributed the network is, the harder it is to manage it. So if you don't take that network OS, the really really boring, not very exciting thing, and treat it the same way you always have. And try to take what you learned in the data center and apply to the edge, you lose the ability to really take advantage of all the things that we've learned from the Cloud Native era a from the public cloud 1.0, right? I mean just look at containers for instance, containers have taken over. But you still see this situation where most of the applications that are infrastructure based aren't actually containerized themselves. So how can they build upon what we've learned from pubic cloud 1.0 and take it to that next level, unless you start replacing the parts of the infrastructure with things that are containerized. >> This just is a side note, just going through my head right now. It's going to be a huge conflict between who leads the innovation in the future. >> Glenn: Absolutely. >> On premises or cloud. And that's going to be an interesting dynamic because you could argue that containerization and networking is a trend in mixed tense to be Cloud Native but now you got it on premises. It's going to be a dynamic we're going to have to watch. But you mentioned, Dom, about this huge networking problem that evolves out of cloud 2.0. >> Dom: Absolutely. >> What is that networking problem? And what specifically is a directionally correct solution for that problem? >> So I think the biggest problem is an operational one. In the cloud 1.0 era and even prior to that when we were in a hosted enterprise data centers, we've always built data centers and the applications running with them, with the assumption that there are physically expert resources there. That if something goes wrong, they can hands-on do something about it. With cloud 2.0 because it's so distributed, you can't have people everywhere. And one of the challenges that has always existed with networking technology and architecture is it is a very static thing. We set it, we forge it, we walk away, and try not to touch it again because it's pretty brittle. 'Cause we know that if we do touch it, it probably breaks and something goes wrong. And we see today a ton of outages, we were talking about a survey the other day that says the second biggest cause of outages in the cloud age is still the network. It's an operational problem whereby I want to be able to go and now touch these thousands of devices for... Usually I'm fixing a bug or I want to add a feature but more and more it's about security. It's more about security compliance, and I want to make sure that all my security updates are done. With a traditional network operating system, we call it The Monolith, all of the features are in big blob. You can turn them off but you can't remove them. So it's a big blob and all of those features are interdependent. When you have to do a security patch in a traditional model, what happens is that you actually are going to replace the blob. And so you're going to remove that and put a new blob in place. It's a rip and replace. >> And that's a hard enough operational problem all on it's own because when you do that you sort of down things and up things. So consequently-- >> And anyone who's done any location shifting on hardware knows it's a multi-day/week operation. >> It is but, ya know, and what people do is they overbuild the network, so they have two of everything. 
So it's when they down one, the other one stays up. When you're in thousands of geographic locations, that's really expensive to have two of everything. >> So the problem statement is essentially how do you have a functional robust network that can handle the kind of apps and IOT. Is that-- >> Yeah it is absolutely but as I said it's important to understand that you have this Monolith that is getting in the way of this robust network. What we've done is we've said, 'We'll apply Cloud Native technology in thinking.' Containerize the actual network operating system itself, not just the protocols, but the actually infrastructure services to the operating system. So if you have to security patch something or you have to fix something, you can replace an individual container and you don't touch anything else. So you maintain a known state for your network that devices is probably going to be way more reliable, and you don't have to interrupt any kind of service. So rather than downing and uping the thing you're just replacing a container. >> You guys built a service on top of the networks to make it manageable, make it more functional, is that-- >> We actually didn't build it. This is the beautiful part. If we built it then I would just be another network vendor that says, "Hey trust my propietary not-open solution. "I can do it better than everyone else." That would be what traditional vendors did with stuff like ISSU and things like that. We've actually just used Kubernetes to do that. So you've already trust Kubernetes, it came out of Google, everybody's adding to it, it's the best community project ever for distributed systems. So you don't have to trust that we've built the solution, you just trust in Kubernetes. So what we've done is we made the network native to that and then used that paradigm to do these updates and keep everything current. >> And the reason why you're getting traction is you're attractive to a network environment because you're not there to sell them more networking (laughs). >> Right. >> You're there to give them more network capability with Kubernetes. >> Yeah, well I mean-- Yeah we're attractive to a business for two reasons. We're attractive to the business because we enable you to move your business faster. You can deploy applications faster, more reliably, you can keep them up and running. So from a business perspective, we've taken away the pain of the network interrupting the business. From an operations perspective, from an IT operations network operations perspective, what we've done is we've made the network manageable. We've now, as you said, we've taken this paradigm and said what would've taken months of pretesting, and planning, and troubleshooting at two o'clock in the morning has now become a matter of seconds in order to replace a container. And has eased the burden operationally. And now those operational teams can do worthwhile work that is more meaningful than just testing a bunch of vendor fixes. >> Yeah, even though cloud 1.0 had networking in their computed storage, I think cloud 1.0 data would be about computing storage. cloud 2.0 is really about the network and all the data that's going around to help the app developers scale up their capability. >> Dom: Yeah, that's a great way to think about it. >> I was talking about the use cases. I think the next track that I'd love to dig in with you guys on is as you guys are pioneering this new modern approach, some of the use cases that you touch are probably also pretty modern. 
What specific use cases are you guys getting into or your customers are talking about. What are some of these cloud 2.0 use cases that you're seeing? >> Yeah, so one we already touched on was this sort of horizontally and generally was the security one. I mean security is everybody's business today. And it's a very very difficult networking problem, ya know, keeping things compliant. If you take for instance, recently Cisco announced that there was faulty vulnerabilities in their mainstream Nexus products. And that's not a terrible thing, it's normal course of business. And they put out the patches and the fixes and said, "Hey, here it is." But now when you think about the burden on any IT team. That comes out of the blue, they hadn't planned for it. Now they have to take the time to take a step back and what they have to do is say, well I've got this new code. I don't know what else was fixed or changed in there. So I now have to retest everything and retest all of my use cases, and I have to spend considerable time to do that to understand what else has changed. And then I have to have a plan to go out and deploy this. That's a hard enough problem in a centralized data center. Doing that across hundreds, if not thousands of geographically dispersed sights is a nightmare. But it's just, ya know, the new world we live in, this is going to happen more and more and more. And so being able to change that operational model to say actually this is trivial. And actually what you should be doing is doing these updates everyday to keep yourself compliant. >> Do the use cases Glenn, have certain characteristics? I mean, we're talking about latency and bandwidth that's a traditional networking kind of philosophy. Is there certain characteristics that these new use cases have? Is it latency and bandwidth, is there anything else? >> No it's mostly about bringing properties like CI/CD to networking, right? So the biggest thing we're seeing now is as people start to investigate disaggregated networking and new ways of doing things. They're not getting this free pass that they used to get for the network because the network isn't just an appliance anymore. When you had something that was from one of the three vendors you'd say, "Okay, that thing runs some version of Linux on it. "I don't know what it is. "Maybe it runs free SD in Juniper's case. "I don't understand what kernel it is, "I don't care just keep that thing up to date." But now it's like, "Oh I'm starting to "add more services to my network devices." Say in the remote sites I want to kickstart some servers with these network devices I install first, well that means that I have to start treating this thing like it's another server in my environment for my provisioning. That means that everything on that box has to be compliant just like it is in everything else. Lets not even get into personal credit card information, personal identifying information. Everything is becoming more and more heightened from a non-exemplary status. >> It's a surface area device, I mean it's part of the surface area. >> And if it's not inside a data center than it's even worse because you can't guarantee the physical security of that device as much as you could if it was inside a regular data center. >> So this is a new dynamic that's going on with the advent of security, regulatory issues, and also obviously the parameter being dismantled because of cloud. >> Glenn: Absolutely. >> Yeah, you also got specific use cases. 
There are multiple verticals and industries that are having these challenges. Retail is a good example, point-of-sale. Anywhere where you have the sort of a branch problem or mentality where you're running sophisticated applications, and by the way, people think of point-of-sale is not terribly sophisticated. It's incredibly sophisticated these days. Incredibly sophisticated. And there are thousands of these devices, hundreds of stores, thousands of devices, similar with healthcare. You know, again, distributed hospitals, medical centers, doctor's offices, etcetera. You have all running private mission critical data. I think one of the ones that we see coming is this kind of autonomous car thing. As we get IOT sensor networks, large amounts of data being aggregated from those. So there's lots of different use cases. We add on a lot of interest. And to be quite frankly, the challenge for us as a startup is keeping focused on just a few things today. But the number of things we're being asked to look at is just enormous. >> Well those tailwinds for you guys in terms of momentum, you have this cloud 2.0 trend. Which we talked about. But hybrid cloud and multi-cloud is essentially distributed cloud on edge? If you think about it. >> Yeah, yeah. >> And that's what most companies are going to do, they're going to keep there own premises and their going to treat it as either on their platform or an external remote location that's going to be everywhere, big surface area. So with that, what are some of the under the hood benefits of the OS? Can you go into more detail on that because I find that to be much more interesting to say the network architect or someone who's saying, "Hey you know what? "I got hybrid cloud right now. "I got Amazon, I know the future's coming on "to my front door step really fast. "I got to start architecting, I got to start hiring, "I got to start planning for distributed cloud "and distributed edge deployments." If not already doing it. So technical depth becomes an huge issue. I might try some things with my old gear or old stuff. They're in this mold, you know, a lot of people are in that mode. I'll do a little technical depth to learn but ultimately I got to build out this capability. What do you guys do for that? >> So the critical thing for us is that you have to standardize on an open non-proprietary orchestration layer, right? You can talk about containers and microservices all day long. We hear those terms all the time but what people really need to make sure that they focus on is that their orchestrator that managing those containers is open and non-proprietary. If you pull that from one of the current vendors it's going to be something that is network centric and it's going to be something that was developed by them for their use. Their basically saying here's another silo, keep feeding into it. Sure we give you API, sure we give you a way to programmatically configure the network but you're still doing it specifically to me. One of the smartest decisions we made besides just using Kubernetes as core infrastructure. We've also completely adapted their API structure. So if you already speak Kubernetes, if you understand how to configure network paradigms in Kubernetes, we just extend that. So now you can take somebody, who off the street might be a Cloud Native Kubernetes expert and say here's a little bit of networking, go to play the network, right? 
You just have to take down the barrier of what you have to teach them, instead of this CLI and this API structure that's specific to this vendor, and then that CLI and that API structure. But the cool thing about what we're doing is we also don't leave the network engineers out in the cold. We give them a fully Cloud Native network CLI that is just like everything else they're used to, but it's doing all this Cloud Native Kubernetes, microservices, containers stuff underneath to hide all that from them. So they don't have to learn it, and that's powerful, because we recognize from our Ops experience that there are a lot of different people touching these boxes. Whether you put it in an ivory tower or not, you've got NOCs that have to log in and check them, you've got junior network admins, senior network engineers, architects. You've got Cloud Native folks, Kubernetes folks. Everybody has to look at these boxes, so they all have to have a view of the switch and the routers that's native to what they understand. So it's critical to present data that makes sense to the audience. >> And also give them comfort with what they're used to, like you said before. If they've got whatever's running Linux on there, as long as it's operationally running, water's flowing through the pipes, packets are moving through, they're happy. >> Glenn: Right. >> But they've got to have this new capability to please the people who need to touch the boxes and work with the network, and it gives them some more capabilities. >> Right, it prevents you from building those silos, which is really critical in the Cloud Native world. And that's what public cloud 1.0 taught us, right? Stop building these silos, these infrastructure silos. Look at AWS right now. There are AWS certified engineers; they're not network experts, they're not storage experts, they're not compute experts, they're AWS experts. And you're going to see the same thing happen with Cloud Native. >> Cloud 3.0 is decimating the silos, basically, 'cause if this goes to that next level, that's why horizontally scalable networks are the way to go, right? That's kind of what you were talking about with the use cases. >> Yeah, I think revolutionary ideas are actually more transformational. Revolutions begin by taking something that is familiar, presenting it in a new way, and enabling somebody to do something different. So I think it's important as we approach this to not just come in and go, "Oh, what you're doing is stupid, we have to replace it." The answer is, what you're doing is obviously the right thing, but you've not been given the tools that enable you to take full advantage and achieve the full potential of the network as it relates to your business. >> And you guys know as well as we do that for the networking folks, it's a high bar, because you mentioned the security and the lockdown nature of networking. It's always been, you don't F with it, because anyone who touches it needs to be reviewed. So they're a hard customer to sell to. You've got to align with their Ops mindset. >> I think the network operators have been, and Glenn and our other co-founder have waxed theoretical about this. (laughter) But network operators have been forced to live in a world of no. Anytime the business comes to them and says, "Hey, we need you to do X."
The answer is no, because I know that if I touch my stuff it's going to break, or I'm limited in what I can do, or I can't achieve the timeframe that you're looking for. So the network has always been an inhibitor, but the heroes of the moment are actually the network operations team, because nobody knows the network was an inhibitor. >> Well, this is an interesting agile conversation we've been having here in our Cube Studios yesterday amongst our own team, because we love agile content. Agile's different, agile is about getting to yes, because iteration in a sense is about learning, right? So you have to say no, but you have to say no with the idea of getting to yes. Because the whole microservices thing is about figuring out, through iteration and ultimately automation, what to tear down and what to keep. So I see a trend where it's not the "no Ops" kind of guys, as they say, "No, no, no." It's no, don't mess with the current operational plumbing. >> Glenn: Right. >> But we've got to get to yes for the new capabilities. So there's a shift in the Cloud Native world. Your thoughts and reaction to that, Glenn. >> Yeah, so it's basically like I set myself up so that I'm doing a whole forklift upgrade with everything in there, like a crated replacement. Networking has always been this way. I'm not saying no to you, I'm just saying not right now. I do my maintenance three times a year, on the third Sunday of the second month when the moon's in the right place, and I make sure that I've got 50 or 60 changes. I've got 20 engineers on call, we do everything in order. We've got a rollback plan if something breaks. This is the problem. Network engineers don't do enough changes to build the muscle that agile developers or CI/CD developers have built, where it's: I do a little bit of change every day, and if something breaks, I roll it back. I do a little bit of change every day, and if something breaks, I roll it back. That's what we enable, because you can do things without breaking the entire system; you can just replace a container and move on. In classic networking, you're stacking up so many changes and so many new things that everything has to be a greenfield deployment. How many times have you heard that? Like, "Oh, this thing would be perfect for our greenfield data center. We're going to do everything differently in this greenfield data center." And that doesn't work. >> You don't get a mulligan in networking, and you realize, look, this is a good point, great conversation. I think that's a very good follow-up topic, because developing those muscles is an operational practice as well as understanding what you're building. You've got to know what the outcome looks like. This is where we're starting to get into more of these agile apps, and you guys are at the front end of it, and I think this is a sea change, cloud 2.0. >> Yeah, it is. >> Quick plug for the company. Take the last minute to explain what you guys are up to, hiring, funding. What are you guys looking for? Give a quick plug for the company. >> Yeah, I mean, we're doing great. Always hiring, everybody always is if you're a cutting-edge startup. We're always looking for great new talent. We're moving forward with our next round of funding plans. We're looking at expanding the growth of the company and our go-to-market, doubling down on our engineering. We're just now delivering our Kubernetes fabric capabilities, so that's the next big functional release, and we've actually already delivered the beta of it.
So taking Kubernetes and actually using it as a distributed fabric. A lot of exciting things are happening technology-wise, a lot of customer engagements are happening. So yeah, it's great. >> Glenn, what are you excited about now? Obviously Kubernetes, we know you're excited about that. >> Oh yeah. >> But what's getting you excited? >> So, the dual process that we have, where we're doing stuff in Kubernetes that nobody else is doing, because we have a version that runs on the switch. It manages all the containers locally, and then it also talks to a big controller. It's fixing that SDN issue, right? Where you have this SDN controller that manages everything in the data plane, and it controls my devices, and it uses OpenFlow to do this. And it has headless operation in case the controllers go away. Oh, and if I need another controller, here's another one, so now I've got two controllers. It gets really messy, and you've got to buy a lot of gear to manage it. Now we're saying, okay, you've got Kubernetes running locally. You don't want to have a Kubernetes cluster? Don't bother, it just runs autonomously. You want to manage it as a fabric, like Dom says? Now you can use the Kubernetes fabric that you've already built, the Kubernetes masters that you've already built for the applications. And now we can start to really embed some really neat operational stuff in there. Things that, as a network engineer, took me years of breaking stuff and then fixing it to learn, we can start putting that operational intelligence in the operating system itself, so it reacts to problems in the network and solves things before waking people up at three a.m. >> This takes policy to a whole other level. >> Absolutely. >> It's a whole other intelligence layer. >> Yeah, if this is broken, do this; cut off the arm to save the rest of the animal. And don't wake people up to troubleshoot stuff; troubleshoot during the day when everybody's there and happy and awake. >> Guys, congratulations. SnapRoute, hot startup. Networking is the real area for cloud 2.0. You've got real time, you've got data, you've got to move packets from A to B, you've got to store them, you've got to move compute around, you need to (laughs) move stuff around the cloud to distributed networks. Thanks for coming in. >> Thanks. >> Thank you. >> Appreciate it. >> Thanks for having us. >> I'm John Furrier for Cube Conversation here in Palo Alto with SnapRoute, thanks for watching. (upbeat jazz music)
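Glenn's closing point about embedding operational intelligence in the OS ("if this is broken, do this") comes down to rules that watch the network's state and take a first response without a human. A toy sketch of that idea follows; the checks and remediations are hypothetical placeholders, and a real system would be far more careful.

import time


def bgp_session_down(interface: str) -> bool:
    # Placeholder for a real check against the switch's local state (hypothetical).
    return False


def too_many_crc_errors(interface: str) -> bool:
    # Placeholder for reading interface error counters (hypothetical).
    return False


def restart_bgp(interface: str) -> None:
    print(f"{interface}: restarting BGP session")          # hypothetical remediation


def drain_port(interface: str) -> None:
    print(f"{interface}: draining traffic off the port")   # the "cut off the arm" move


# Each rule pairs a symptom with a first-response action.
RULES = [
    (bgp_session_down, restart_bgp),
    (too_many_crc_errors, drain_port),
]


def watch(interfaces, cycles: int = 3) -> None:
    # The loop a network engineer would otherwise be running in their head at 3 a.m.
    for _ in range(cycles):
        for interface in interfaces:
            for symptom, action in RULES:
                if symptom(interface):
                    action(interface)
        time.sleep(10)


if __name__ == "__main__":
    watch(["swp1", "swp2", "swp3"])

The value is that the first response is encoded once and runs locally on the switch, so the problems that can be handled automatically are handled before anyone gets paged.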

Published Date : Jul 25 2019

Lew Cirne, New Relic | AWS re:Invent 2017


 

(upbeat instrumental music) >> Narrator: Live from Las Vegas, it's the Cube. Covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone. This is the Cube, live here in Las Vegas for AWS re:Invent 2017. I'm John Furrier, the cohost of the Cube. My cohost, Keith Townsend, here for our fifth year in a row, covering the thunderous growth of Amazon Web Services as they continue to not only nail the developers and the startups, but continue to win the enterprise. Our next guest, Lew Cirne, is the founder and CEO of publicly held New Relic, a very successful startup, one of the most admired places to work in the Bay Area, and in tech. Lew, great to have you on the Cube, welcome. >> Hi. >> John: Hi, first time. >> I know, so great to be here. I can't believe it's the first time. I've been such a fan for a long time. >> Now you're an alumni, the benefits. >> Here I am. >> All the benefits of being an alumni, all those season tickets to all of our games. I want to just share something with the audience out there. You're the only public CEO that I know who's been on the Cube that writes software, has a GitHub account, and manages a publicly held company. That's a unique thing, and I want to just say it's awesome. >> It's a full plate, that's for sure, but I'm the luckiest guy in the world because I've always loved building software, since my first computer I got in the Christmas of '82, what's that, 35 years ago now. So what an exciting time to be someone who's passionate about software and technology. Look what's going on in the cloud, and I've been fortunate enough to start this company that's participating in this revolution in technology, so it's great. >> You guys are always on the cutting edge. I've noticed you guys get your hands dirty, you get in there, you're coding away, but you guys are very successful in a very important area right now, which is instrumentation of data. >> Lew: Absolutely. >> In applications, so I really want to get your thoughts on the landscape. We were talking in our intro analysis about how we're seeing a renaissance in software development, where with open source growing exponentially, new software methodologies are coming out, and there's just so much going on. Multiple databases within one app, IoT, so a new kind of thinking is evolving. What's your take on that? >> Well, I think it's really important to understand why all of this is happening. Why are there 40,000 people here in Las Vegas for re:Invent? Why are people consuming the cloud at just a dizzying pace? It's not just for the sake of cloud computing, it's because there's this business imperative to compete on software. If you look at where software was 15, 20 years ago, software was a tool to reduce costs and automate things in the back end. Now your software is your business. If you are a large global bank, your app has more to do with your customers' experience and satisfaction than the branch, because nobody walks through a branch anymore, so now the best software-developing bank is going to be the winner. If you think about it, that's what's going on, and that's why they're adopting new technologies to move faster. So where do we fit in? If you're going to compete on your software, and by competing you have to build the best stuff as fast as possible, you have to get to market quickly, and that means you've got to change a lot. Anytime you're changing something rapidly, that introduces risk.
New Relic de-risks all of that rapid movement through instrumentation, by measuring everything in the software. Those measurements help you move faster with confidence. >> And also, I would say, not only does that create risks, but new software creates risks. So I'm doing serverless, I want to try the new service because it could add value, AKA Lambda or whatever, so maybe a new timeout is needed; all kinds of new things or elements are going on inside the software stacks. >> Yes, and more complex than ever before, right? So you introduce things like Lambda, serverless, function computing, call it what you will, and you integrate it with, you know, microservice architecture, and so instead of one monolith, you might have hundreds, or even, for some of our customers, thousands of independent services, all supposed to be working in flawless concert in order to deliver a great customer experience. How on earth do you make sense of whether that's all working? Well, it involves collecting an enormous amount of data about everything that's going on in real time, and then applying intelligence to that data, using what we call at New Relic applied intelligence, to tell our customers in real time, here's what's working well, and more importantly, here's what's going to be a problem if you don't take immediate action. And that's, you know, that's a hard problem to solve. We think we're the best at doing it. >> And that's critical too, because like you said, if it crashes, or there's some sort of breach or hole that comes out there, all the stuff is at risk. >> And customers have just incredibly high expectations that only get higher and higher every day. Like, you know, one of our customers is Domino's, and it's an amazing thing where you pre-order your pizza and you can see, second by second, how your order is doing, right? They put your pizza in the oven, then they took the pizza out of the oven, and I see that on my phone, and that's feedback that's valuable to me, right? So long as it's working, right? >> John: I'm hungry now. >> So we've ravaged this phrase, digital transformation, all the time. >> Oh yeah, it's a little overused, but. >> It is a little overused. But melding that physical world with code. I love it that you're a developer. First off, what's your favorite language? >> Oh geez, it really depends on the project. I love React right now on the front end. I'll still do Java when it needs some heavy lifting, Ruby for rapid prototyping. It really depends on the task at hand. >> So, about the value of reducing friction, from a developer seeing a problem, needing to solve that problem, and getting the resources needed to solve it: AWS does a wonderful job of saying, you know what, developer, give me your credit card, we'll give you all the tools you need. Where is the first stumbling block, because this is new capability, net new over the past few years? Where's the first set of stumbling blocks when developers reduce friction, get to that first-level contact with the branch manager of the pizza store? Where does it fall apart, and where does New Relic come in to help? >> Look, how many times have you ever had a developer or a tech or someone say, it works on my machine, right? >> Exactly, it worked on my laptop. I don't know why it didn't deploy well in production, it worked perfectly fine on my laptop. >> I really, I started thinking about and solving this problem 20 years ago now.
The notion of, let's instrument Java code, because I was frustrated with the stuff that worked on my laptop. I couldn't understand why it didn't work when a customer used it, and everything prior to the customer using the software is nothing but sunk cost. There is no value in the software you're building until it runs in production. How well it runs in production is what determines the fate of the application. And that's where New Relic comes in. We feel like, alright, let me take you back to the ancient days, like the turn of the century, 2000: nothing went to production without QA. Now nothing goes to production without instrumentation. >> Yeah, but now Agile's here, whereas the old days were a crapshoot. You built a software product, but you didn't know if it was going to work until it went into production after QA. Now you're shipping stuff fast. You've got that DevOps mindset, but it's in QA. >> One of our customers, Airbnb, deploys more than a thousand times a day. And this is not a small, low-load site. Every deploy has to work, otherwise millions of people are impacted, and it's the whole business, and it's a big business, so you're talking about a pace of innovation and change that cannot be managed with a traditional QA cycle. Of course testing's important, but instrumentation's more important than that. >> Lew, I want to ask you an important question, because I asked Andy Jassy this last Monday when I had a one-on-one with him. A lot of people that are entering the ecosystem with Amazon are new, or, considering Amazon's the big player, they're fearful; it's always going to be that way. He highlighted your company, New Relic, and said they're an amazing partner, they do extremely well, even though Amazon introduced CloudWatch, because some customers just wanted it, so they have monitoring, but you guys are so much better. I said that, but he implied it; obviously you're doing well. So the successful participation in the ecosystem is there. You can be successful in the Amazon ecosystem. >> Absolutely, it's a great partnership. >> So what's the formula for a new entry coming in, or someone who's here that needs to find some white space? How do you read the tea leaves to know where not to play and where to play? >> You know, it just comes down to the fundamental good thought process you use when you're thinking about approaching your customer, too. Don't think about what's in it for me, the Amazon partner. What's in it for Amazon? How do you make them more successful? So when I imagine myself as Andy, who has done an incredible job, what's top of mind for Andy is, how do I get more customers consuming more of Amazon faster, right? All of Amazon, all of Amazon's web services. And so we solve a problem for Andy and his team. We help our customers consume Amazon faster, because we give them the confidence to consume more and move faster, and there's data to prove it. When Amazon asks their customers that aren't yet New Relic customers how much they're consuming and how fast, they see a slower rate of adoption than they do for the cohort that uses New Relic, and so it's in our mutual interest to go to market together, because we help them consume more, and so I. >> John: Build a good product. >> Build a good product. >> John: Customer value. >> Think about how you help your partner be successful. Talk in their language, don't talk in your own language. >> Alright, so personal question.
So you and I, pretend we're sitting here, having a beer, you're playing the guitar. >> A little light. >> I'm singing some tunes, and Keith's our friend. He says, I'm in trouble, I'm a CIO. I've got a transformation project. I don't know what to do. Which cloud do I use? How do I become data-driven? Guys, help me out. Lew, what do you say? >> I say, first of all, have an instrumentation strategy. If you're a CIO in a large organization, you don't have one, two, three, or four projects. You have dozens, if not hundreds, sometimes thousands of applications and services that are all running, and I haven't met a CIO that doesn't say they've got too many monitoring tools. So you need an instrumentation strategy. Nothing should run in production without instrumentation. That's not just the server-side stuff that runs on EC2, it's also every click that happens. You know, Dunkin Donuts, which has been a longtime customer of ours, runs in the Amazon cloud, and when you pre-order that doughnut, we track the tap, how long it takes from the phone all the way through the cloud services; all of that's fully instrumented. So if you're a CIO, you say, I can't be tactical with instrumentation. If I'm going to move fast and compete on my software, nothing should run in production without instrumentation. >> John: That's native. >> That's right. >> Foundational. >> Foundational. It's a core requirement to run in production if you're going to move at any level of speed, so establish that strategy. And then we think we offer the best instrumentation, certainly the best value, the most ubiquitous, the easiest to use, the most comprehensive, and then we make the most sense of it, but you could pick another strategy. Some people do the heavy lifting of manually instrumenting all their code. We just don't think that's a good use of your developer time, so we automatically do that for you, but have a strategy and then execute to it. >> Awesome. Lew, congratulations on a blowout quarter. I won't even get you to comment on it, just say that you guys had a great quarter, stock's at an all-time high, all because you guys are building a great product. Congratulations, and great to have you on the Cube. >> We're delighted to be here. Honestly, I've been a longtime fan. It means a lot that you could have me on, and we really enjoy partnering with Amazon, and what a great show. >> Yeah, super successful ecosystem partner, one of the best, New Relic, based out of San Francisco, here with the founder and CEO, also a musician, writes code, gets down and dirty, runs a publicly held company. He's Superman. Lew, thanks for coming on the Cube. More live data and action here on the Cube after this short break, stay with us. (upbeat instrumental music)
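To make the "nothing runs in production without instrumentation" idea concrete, here is a generic sketch of measuring an operation end to end, in the spirit of the doughnut-order example: every call gets a timing and an outcome recorded. This is a plain illustration of the pattern, not New Relic's agent API; in practice an agent does this automatically and ships the data off the box.

import time
from collections import defaultdict
from contextlib import contextmanager

metrics = defaultdict(list)   # stand-in for an agent that ships data to a backend


@contextmanager
def measure(operation: str):
    # Record latency and outcome for every call, success or failure.
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:
        error = type(exc).__name__
        raise
    finally:
        metrics[operation].append({
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            "error": error,
        })


def preorder_doughnut(order_id: str) -> str:
    with measure("orders.preorder"):
        # ... call the store service, payment, and the pickup queue (hypothetical) ...
        return f"order {order_id} accepted"


print(preorder_doughnut("1234"))
print(metrics["orders.preorder"])

The same wrapper around every service hop is what turns "it worked on my laptop" into a measurable statement about how the software actually behaves in production.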

Published Date : Nov 28 2017
