Micah Coletti & Venkat Ramakrishnan | KubeCon + CloudNativeCon NA 2021
>> Welcome back to Los Angeles. theCUBE is live, I can't say that enough: theCUBE is live. We're at KubeCon + CloudNativeCon '21. We've been here all day yesterday, today, and tomorrow, talking with lots of guests and really uncovering what's going on in the world of Kubernetes. Lisa Martin here with Dave Nicholson. Next we're going to be talking about a customer use case, which is always one of my favorite things to talk about. Please welcome Micah Coletti, principal platform engineer at CHG Healthcare, and Venkat Ramakrishnan, VP of Products at Portworx by Pure Storage. Guys, welcome to the program.
>> Thank you. Happy to be here.
>> So Micah, first of all, let's go ahead and start with you. Give the audience an overview of CHG Healthcare.
>> Yeah, so CHG Healthcare, we're a staffing company. We do locum tenens, so our clients are doctors and hospitals; we help staff hospitals with temporary doctors or even permanent placements. So we deal with a lot of doctors and a lot of nursing, and we're a combination of multiple companies, with CHG as the parent. We're known in the industry as one of the leaders in this field, providing hospitals with high-quality doctors and nurses, and our customer service is, like, number one. One of the things our CEO is really focused on now is how we make that more digital: how do we provide that same level of quality of service, but with a digital experience that's just as rich?
>> I can imagine there was a massive need for that in the last 18 months alone.
>> Covid definitely raised that awareness for us, the importance of that digital experience and the fact that we need to be out there in the digital market.
>> Absolutely. So you're a customer of Portworx by Pure Storage, and we're going to get into that. But Venkat, talk to us about what's going on. The acquisition of Portworx by Pure Storage was about a year ago; as VP of Products, what's going on?
>> Yeah, first of all, I can't say enough what a great fit it is for Portworx to be part of Pure Storage. Pure itself is a very fast-moving, large startup that's a dominant leader in the flash and data center space. And Pure recognizes the fact that Kubernetes is the new operating system of the cloud; it's kind of virtualizing the cloud itself, and there is a big, burgeoning need for data management in Kubernetes and for how you orchestrate workloads between your on-prem data centers and the cloud and back. So Portworx fits right into Pure's complete vision of data management for our customers. It's been phenomenal. Our business has grown as part of being part of Pure, we're looking at launching some new products as well, and it's all exciting times.
>> So you must have been pretty delighted to be acquired as a startup by, essentially, a startup, because although Pure has reached significant milestones in the storage business and is still a leader in flash storage, that startup mindset is there. That's unique; it's not the same as being acquired by a company that's been around for 100 years seeking to revitalize itself. Can you talk a little bit about that aspect?
>> So I think Pure's culture is highly innovation-driven, and it's a very open, flat culture. Right?
I mean, everybody in Pure is accessible, you can easily have a conversation with folks, and everybody has a learning mindset, and Portworx is, and has always been, the same way. So when you put these teams together, we can create wonders. Right after the acquisition, just within a few months, we announced an integrated solution where Portworx orchestrates volumes and file shares on Pure's flash products and delivers it as an integrated solution for our customers. And Pure has a phenomenal cloud-based monitoring and management system called Pure1 that we've integrated into as well. Now we're bringing the power of all the observability that Pure's customers are used to, to all of Portworx's customers, and we're super happy delivering that capability. Our customers are delighted; now they can have a complete view all the way from Kubernetes and the app down to the flash, and I don't think any one company on the planet can even claim they can do that.
>> I think it's fair to acknowledge that Pure1 was observability before observability was a word that was used regularly. So that's very interesting.
>> Micah, obviously you are a customer, CHG is a customer of Portworx, now Portworx by Pure Storage. Talk to us about the use case. Was there a compelling event, from a storage perspective, that led you to Portworx in the first place?
>> So our CEO basically began this with the vision that we need to have a digital presence, and this was even before Covid. So they brought me on board, and my manager and I basically had this task of figuring out how we were going to get out into the cloud and how we were going to make that happen. We chose to follow very much a cloud native strategy, and the platform of choice just made sense: Kubernetes. When we were looking at Kubernetes and starting to figure out how we'd do it, we knew that data was going to be a big factor. We're very much focused on event-driven; we're really pushing toward an event-driven architecture. So we leverage Kafka on top of Kubernetes, but at the time we were actually running Kafka with MSK out in AWS, and that was just a huge cost to us. So I came on board, I had experience with Portworx at a prior company, and I basically said we need to figure out a great storage overlay. And the only way to do that is we've got to have high-performance storage, it's got to be secure, and we've got to be able to back up and recover that storage. Portworx was the right match, and it gave us a very smooth transition off of MSK onto Kubernetes, saving us a significant amount of money per month and letting us leverage the hardware we already had, our existing compute and memory, and move right to Portworx.
>> Leveraging your existing investments.
>> Exactly, which is key. Very, very key.
>> So Venkat, how common are the challenges that CHG ran into? When you guys came together, how common were those challenges?
>> That's a great question. The challenges that Micah and his team were running into are what we see a lot in the industry, where people pay a ton of money to other vendors, or in some cases use some cloud native services, but they want to have control over the data.
They want to control the cost, they want higher performance, and there are also governance and regulatory things that they need to control better. So they want to bring these services in and have more control over them. Right? Now, we work very well with all of our partners, including the cloud providers as well as several other vendors, but different customers have different kinds of needs, and Portworx gives them the flexibility. If you are a customer who wants a lot of control over your applications, the performance and the latency, and wants to control costs by leveraging existing investments, Portworx can deliver that for you in your data center right now; you can integrate it with Pure flash and you get a complete solution. Or if you want to run it in the cloud and leverage the agility and scale of the cloud, Portworx delivers a solution for you there as well. So it not only protects their investment, it future-proofs their architecture. If you want to tier to the cloud or burst to the cloud, you have a great solution that you can continue to leverage.
>> When I hear "future proof," and I'm a marketer, so I always love to know what it means to different people, what does that mean to you in your environment?
>> In my environment, future proof means, well, one of the things we've been addressing lately that's just a real big challenge, and I'm sure it's a challenge across the industry, especially with Kubernetes, is upgrading our clusters and the ability to actually keep a consistent pace with how fast Kubernetes is moving. We leverage EKS, so it's like 1.21 or 1.22 now, and that effort to upgrade a cluster can be a daunting one. With Portworx, we were able to get to where we could spin up a brand new cluster, shift all our application services over, and migrate the data completely; Portworx handles all that for us, and we stand up that new cluster in less than a day. That effort would have taken us a week or two weeks to do before. And it's not even the man hours and the time spent, but the reliability of being able to do that, and the cost. Instead of standing up a new cluster, configuring it, and spending all that time, we've moved to what we call a blue-green cutover strategy, and Portworx is an essential piece of that.
>> So is it fair to say that there are a variety of ways people approach Portworx from a value perspective? I know that one area you're particularly good in is backups in this environment, but then you get data management, and there's a third kind of vector there. What is the third vector?
>> Yeah, it's all of the data services. Data services like, for example, database as a service on any Kubernetes cluster, be it in your cloud or your on-prem data centers, which...
>> What kind of databases are you talking about?
>> Anything from Redis, Kafka, Postgres, MySQL, Cassandra, we're supporting all of them. We just announced an offering called Portworx Data Services that essentially delivers all of these databases as a service on any Kubernetes cluster that a customer can point it to, and they essentially get automated management of the database from day one through day three, the entire life cycle.
Through a regular Kubernetes kubectl experience, through APIs and SDKs, and through a nice, slick UI, with role-based access control and all of that, so they can completely control their data and their applications through it. And that's the third vector of Portworx's offerings.
>> Micah, a question for you. Portworx has been a part of Pure Storage for a year now. You've known it for several years, obviously, from before you were at CHG; you brought it up at CHG, and now you know it a year into its being acquired by a fast-paced startup. Talk to me about the relationship and some of the benefits that you're getting with Portworx as a part of Pure Storage.
>> Well, one of the things, when I heard about the acquisition, my first thought was that I was a little bit concerned: is that relationship going to change? When we were looking at adopting Portworx, one thing I would tell my management is that Portworx is not just a vendor that wants to throw a solution at you and provide some capability; they're a partner. They want to partner with you on your success in this whole cloud native journey, to provide that rich digital experience not only for our platform engineering team and our dev teams, but also to really accelerate the development of our services so we can provide that digital portal for our end users. And that didn't change. If anything, it accelerated; that relationship did not change. I came to Venkat with an issue we were dealing with, and he immediately got someone on a phone call with me, so that has not changed. It's really exciting to see that now that they've been acquired, they are still very much invested in the success of their customers and in making sure we're successful. I was worried I was going to have to go through a whole different support process and things would go into a black hole. That didn't happen. They are still very much involved with their customers.
>> That sounds kind of similar to what you talked about with the cultural alignment. I've known Pure for a long time and they're very customer-centric. Sounds like that's one of the areas where there was very strong alignment with Portworx.
>> Absolutely. Portworx has always taken pride in being a customer-first company. Our founders are heavily customer-focused; they have always aligned the Portworx business to our customers' needs. And Pure is a company that's maniacally focused on customers. That's what Pure's founders and everybody there care about. So bringing these companies together and being part of the Pure team, I can see how synergistic it is, and that has enabled us to serve our customers even better than before.
>> So I'm curious about the two of you personally, in terms of your histories. I'm going to assume that you didn't both just bounce out of high school into the world of Kubernetes, right? So, like Lisa and me, you're spanning the generations between the world of, say, virtualization based on x86 architecture, where you have a full-blown operating system you're working with, and a world where you can have microservices. Micah, with you first: talk about what it's been like navigating that change. Do you have advice for others who are navigating it?
>> Don't be afraid of it. A lot of people, I call it, we're moving from a world of pets, where we still have cats and dogs that have a name, the VMs, whether they're physical boxes or VMs, to where it's more like cattle. We don't own the OS anymore, and you shouldn't be afraid of that, because change is really good. The ability for me to not have to worry about patching an operating system is huge. I can rely on someone like EKS to own the version, and if a CVE comes out, they let me know, and I go and use their tools to upgrade. So I don't literally have to worry about owning that OS, and containers are the same thing. It's all about being fault tolerant, right? And being able to handle change, where you can roll out a new version of a container or a base image without having to go and patch a bunch of servers. I mean, patch night was hell, I'm sorry if I can say that, but it was a nightmare. This whole world has just been a game changer.
>> So Venkat, from your perspective, you were coming at it going into a startup, looking at the landscape and the future and seeing opportunity. What's that been like for you? I guess the question for you is more something Lisa and I talk about, this concept of peak Kubernetes. Where are we in the wave? Is this just the beginning, or are we in the thick of it?
>> Yeah, I would say we're kind of transitioning from the early adopter to the early majority phase, in the whole crossing-the-chasm analogy. So I would say we're still in the early stages of this big wave that's going to transform how infrastructure is built and how apps are built, managed, and run in production. I think some of the key pieces are falling into place and maturing; there are some other pieces, like observability, security, and the edge use cases, that are going to get a lot more mature. And you'll see that the cloud as we know it today and the apps as we know them today are going to be radically different, and if you're not building your apps and your business on this modern platform, on this modern infrastructure, you're going to be left behind. You know, my wife's birthday was a couple of days ago, and I was telling this story to a couple of friends: I used another flower delivery website, and they missed delivering the flowers on the same day and gave me all kinds of excuses. So I just went and looked at DoorDash, which delivers your food, but there's also flower delivery in DoorDash, and I DoorDashed flowers to her and could track the flowers all the way. She did not eat them, but my kids loved the chocolates, though. The case in point is that you cannot build a modern business without leveraging the modern toolchain, and how business gets delivered is going to change dramatically. If you don't deliver that kind of customer experience, you're not going to be successful in business, and Kubernetes is the fundamental technology that enables these containers.
It's a fundamental piece of technology that enables building new businesses and modernizing existing businesses, and with 5G there are going to be new innovations that get unleashed. Again, Kubernetes and containers enable us to leverage those. So we're still scratching the surface on this. It's big now, and it's going to be much, much bigger over the next couple of years.
>> Speaking of scratching the surface, Micah, take us out in the last 30 seconds or so with where CHG Healthcare is on its digital transformation. How is Portworx facilitating that?
>> So we're right in the thick of it. We still have what we call the legacy, and we're working on getting those workloads moved, but we're really moving forward to provide that rich experience, especially with event-driven platforms like Kafka and Kubernetes, and partnering with Portworx is one of the key things for us there, along with AWS. I remember I heard a talk, Kelsey Hightower, that's who it was, where he talked about how Kubernetes is sort of like the 56K modem: you're hearing it and seeing it, but it's got to get to the point where it's just there, like high-speed internet. And I really like that, because it's true, and that's the transition we're in. We're still early; it's still that 56K stage, where you still want to hear the noise, you still want to do kubectl, you want to learn it the hard way and do all that fun stuff. But eventually it's going to be where it's just there and it's running everything, like 5G, stripped down, with things like MicroK8s. We're going to see it in a lot of other areas, and it's just going to proliferate and really accelerate the industry, and compute and memory and storage along with it.
>> Yeah, a lot of acceleration. Guys, thank you. This has been a really interesting session. I always love digging into customer use cases and how CHG is really driving its evolution with Portworx. Venkat, thanks for sharing with us what's going on with Portworx a year after the acquisition. It sounds like all good stuff.
>> Thank you. Thanks for having us. It's been fun.
>> Our pleasure. All right, for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Los Angeles. This is our coverage of KubeCon + CloudNativeCon '21.
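To make the storage overlay idea from the MSK-to-Kubernetes move a bit more concrete, here is a minimal sketch of the kind of Kubernetes objects a Kafka broker might request on a Portworx-backed cluster: a StorageClass and a PersistentVolumeClaim, written as plain TypeScript objects. The provisioner string, the `repl` parameter, and all names are assumptions about a typical Portworx setup, not details taken from the conversation.

```typescript
// Sketch only: the provisioner name, the "repl" parameter, and all resource names
// are assumptions about a typical Portworx setup, not taken from the interview.

// A StorageClass asking the Portworx provisioner for 3-way replicated volumes.
const portworxStorageClass = {
  apiVersion: "storage.k8s.io/v1",
  kind: "StorageClass",
  metadata: { name: "px-kafka-sc" },            // hypothetical name
  provisioner: "kubernetes.io/portworx-volume", // assumed in-tree Portworx provisioner
  parameters: { repl: "3" },                    // assumed replication-factor parameter
};

// A PersistentVolumeClaim a Kafka broker pod could mount for its log directory.
const kafkaBrokerPvc = {
  apiVersion: "v1",
  kind: "PersistentVolumeClaim",
  metadata: { name: "kafka-broker-0-data", namespace: "kafka" },
  spec: {
    accessModes: ["ReadWriteOnce"],
    storageClassName: "px-kafka-sc",
    resources: { requests: { storage: "100Gi" } },
  },
};

// Wrap both objects in a v1 List so the JSON output can be piped to `kubectl apply -f -`.
const manifest = {
  apiVersion: "v1",
  kind: "List",
  items: [portworxStorageClass, kafkaBrokerPvc],
};
console.log(JSON.stringify(manifest, null, 2));
```

In a blue-green cutover like the one Micah describes, it is volumes like these, and the data behind them, that the storage layer has to carry over to the new cluster.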
Victor Chang, ThoughtSpot | AWS Startup Showcase
(bright music) >> Hello everyone, welcome today's session for the "AWS Startup Showcase" presented by theCUBE, featuring ThoughtSpot for this track and data and analytics. I'm John Furrier, your host. Today, we're joined by Victor Chang, VP of ThoughtSpot Everywhere and Corporate Development for ThoughtSpot. Victor, thanks for coming on and thanks for presenting. Talking about this building interactive data apps through ThoughtSpot Everywhere. Thanks for coming on. >> Thank you, it's my pleasure to be here. >> So digital transformation is reality. We're seeing it large-scale. More and more reports are being told fast. People are moving with modern application development and if you don't have AI, you don't have automation, you don't have the analytics, you're going to get slowed down by other forces and even inside companies. So data is driving everything, data is everywhere. What's the pitch to customers that you guys are doing as everyone realizes, "I got to go faster, I got to be more secure," (laughs) "And I don't want to get slowed down." What's the- >> Yeah, thank you John. No, it's true. I think with digital transformation, what we're seeing basically is everything is done in the cloud, everything gets done in applications, and everything has a lot of data. So basically what we're seeing is if you look at companies today, whether you are a SaaS emerging growth startup, or if you're a traditional company, the way you engage with your customers, first impression is usually through some kind of an application, right? And the application collects a lot of data from the users and the users have to engage with that. So for most of the companies out there, one of the key things that really have to do is find a way to make sense and get value for the users out of their data and create a delightful and engaging experience. And usually, that's pretty difficult these days. You know, if you are an application company, whether it doesn't really matter what you do, if you're hotel management, you're productivity application, analytics is not typically your strong suit, and where ThoughtSpot Everywhere comes in is instead of you having to build your own analytics and interactivity experience with a data, ThoughtSpot Everywhere helps deliver a really self-service interactive experience and transform your application into a data application. And with digital transformation these days, all applications have to engage, all applications have to delight, and all applications have to be self-service. And with analytics, ThoughtSpot Everywhere brings that for you to your customers and your users. >> So a lot of the mainstream enterprises and even businesses from SMB, small businesses that are in the cloud are scaling up, they're seeing the benefits. What's the problem that you guys are targeting? What's the use case? When does a potential customer or customer know they get that ThoughtSpot is needed to be called in and to work with? Is it that they want low code, no code? Is it more democratization? What's the problem statement and how do you guys turn that problem being solved into an opportunity and benefit? >> I think the key problem we're trying to solve is that most applications today, when they try to deliver analytics, really when they're delivering, is usually a static representation of some data, some answers, and some insights that are created by someone else. 
So usually the company would present, you know, if you think about it, if you go to your banking application, they usually show some pretty charts for you, and then it sparks your curiosity about your credit card transactions or your banking transactions over the last month. Naturally, for me, I would then want to click in and ask the next question: which transactions fall into this category, at what time, you know, change the categories a bit. Usually you're stuck. So what happens with most applications? The challenge is that because someone else is asking the questions and the user is just consuming static insights, you whet their appetite and you don't satisfy it. So application users typically get stunted, they're not satisfied, and then they leave the application. Where ThoughtSpot comes in, ThoughtSpot's true differentiation, is our ability to create an interactive curiosity journey with the user. So ThoughtSpot in general, if you buy it standalone, that's the experience that we really stand by, and now you can deliver it in your application, where any user, a business user, untrained, without the help of an analyst, can ask their own questions. So going back to my example, if it's in your banking app and you see some kind of visualization around expense transactions, you can dig in. What about last month? What about last week? Which transactions? Which merchant? All those things, you can continue your curiosity journey, so that the business user and the app user ask their own questions instead of an analyst who's sitting in the company behind a desk asking your questions for you. >> And that's the outcome that everyone wants. I totally see that and everyone kind of acknowledges that, but I've got to then ask you, okay, how do you make that happen? Because you've got the developers who essentially have to make that happen, and the cloud is essentially SaaS, right? So you've got a SaaS kind of marketplace here. The apps can be deployed very quickly, but in order to do that, you kind of need self-service and you've got to have good analytics, right? So self-service, you guys have that. Now on the analytics side, most people have to build their own or use an existing tool, and tools become specialists, you know what I'm saying? So you're in this kind of weird cycle of, "Okay, I've got to deploy and spend resources to build my own, which could be long and tiresome." >> Yeah. >> "And/or rely on other tools that could be good, but then I have too many tools, and that creates specialism, kind of silos." These seem to be trends. Do you agree with that? And if customers have this situation and you guys come in, can you help there? >> Absolutely, absolutely. So if you think about the two options that you just laid out, you could either roll your own, kind of build your own, and that's really hard. If you think about the analytics industry, it's a $20 to $30 billion industry with a lot of companies that specialize in building analytics, so it's a really tough thing to do. It doesn't really matter how big of a company you are; even if you're a Microsoft or an Amazon, it's really hard to actually build analytics internally. So for a company that tries to do it on their own, hire the talent, and also come up with that interactive experience, most companies fail. What ends up happening is you go over budget, the time to market ends up taking much longer, and the experience isn't engaging for the users, so they still end up leaving your app having a bad impression.
Now you can also buy something. There are competitors of ours who offer embedded analytics options as well, but the mainstream paradigm today with analytics is delivering, as we talked about earlier, static visualizations of insights that are created by someone else. So that certainly is an option. Where ThoughtSpot Everywhere really stands out above everything else is that our technology is fundamentally built for search, for interactivity, and for a cloud-scale data experience that static visualizations today can't really deliver. So you could deliver a static dashboard purchased from one of our competitors, or, if you really want to engage your users, today is all about self-service, it's all about interactivity, and only ThoughtSpot's architecture can deliver that embedded in a data app for you. >> You know, one of the things I'm really impressed with you guys at ThoughtSpot about is that you see data, as I do, as a strategic advantage for companies. People say that like it's kind of a cliche, or a punchline, some sort of business statement. But when you start getting into new kinds of workflows, that's the intellectual property. If you can enable people, with very little low-code or no-code, to roll their own analysis and insights from a platform, you're then creating intellectual property for the company. So this is kind of a new paradigm. And a lot of CIOs that I talk to, or even CISOs on the security side, they kind of want this but maybe can't get there overnight. So if I'm a CIO, Victor, who do I point to on my team to engage with you guys? Like, okay, you sold me on it, I love the vision, this is definitely where we want to go. Who do I bring into the meeting? >> I think that in any application, in any company actually, there are usually product leaders and developers that create applications. So if you are a SaaS company, obviously your core product team would be the right team we want to talk to. If you're a traditional enterprise, you'd be surprised, actually, how many traditional enterprises that have been around for 50 or 100 years, you might think of them as selling a different product, but actually they have a lot of digital applications and product teams within their company as well. For example, we have a customer that's a big tractor company; you can probably imagine who they might be. They actually have digital applications, using ThoughtSpot, that they offer to their dealers so the dealers can look at their tractor businesses. We also have a big telecom company, for example; you would think about telecom as a whole service, but they have a billing application that they offer to their merchants to track their billing. So what I'm saying is really, whether you're a software company where that's your core product, or you're a traditional enterprise that has digital applications underneath to support your core product, there are usually product teams, product leaders, and developers. Those are the ones we want to talk to, and we can help them realize a better vision for the product that they're responsible for. >> I mean, the reality is all applications need analytics, right, at some level. >> Yes. >> Full instrumentation, at a minimum log everything, and then the ability to roll that up, that's where people always tell me the challenge seems to be. Okay, I can log everything, but now how do I have a...
And then after the fact they say, "Give me a report, what's happening?" >> That's right. >> They get stuck. >> They get stuck 'cause you get that report and, you know, someone else asked that question for you, and you're probably a curious person. I'm a curious person. You always have that next question, and usually if you're in a company, let's just say you're a CIO, you're probably used to having a team of analysts at your fingertips, so at least if you have a question or you don't like the report, you can find two people, five people, who'll respond to your request. But if you're a business application user, you're sitting there, and I don't know about you, but I don't remember the last time I actually went through and filed a support ticket in my application, or really read detailed documentation describing the features in an application. Users like to be self-taught and self-service, and they like to explore on their own. And there's no analyst there, there's no IT guy that they can lean on, so if they get a static report of the data, they'll naturally always want to ask more questions, and then they're stuck. So it's that kind of unsatisfying experience of, "I have some curiosity, you sparked my questions, and I can't answer them." That's what I think a lot of companies struggle with. That's why a lot of applications are data intensive but don't deliver any insights. >> It's interesting, and I like this anywhere idea, because you think about what you guys do: applications always start small, right? I mean, applications have got to be built. So your solution really fits for small startups and businesses all the way up to large enterprises, and in a large enterprise they could have hundreds and thousands of applications which look like small startups. >> Absolutely, absolutely. You know, that's a great thing about ThoughtSpot Everywhere, which takes the engine around ThoughtSpot that we built over the last eight or nine years and can deliver it in any kind of context. 'Cause nowadays, as opposed to 10, 15, 20 years ago, everything does run in applications these days. We talked about digital transformation at the beginning of the call; that's really what it means. Today, the workflows of business are conducted in applications no matter who you're interacting with. And so we have all these applications. A lot of times, yes, if you have big analytical problems, you can take the data and put it into a different context like ThoughtSpot's own UI and do a lot of analytics, but we also understand that a lot of times customers and users like to analyze in the context of the workflow of the application they're actually working in. And in that situation, having the analytics embedded right next to their workflow is something that a lot of users, especially business users who are less trained, would like to do, right in the context of their business productivity workflow. And so that's where ThoughtSpot Everywhere, I know the terminology is a little self-serving, but with ThoughtSpot Everywhere we think ThoughtSpot could actually be everywhere in your business workflow. >> That's a great value proposition. I'm going to put my skeptic hat on and challenge you and say, okay, I don't want to... Prove it to me, what's in it for me? And how much is it going to cost me, how do I engage? So, you know... >> Yeah. >> What's in it for me as the buyer?
If people want to buy this, I want to use it, I'm going to get engaged with ThoughtSpot and how much does it cost and what's the engagements look like? >> So, what's in it for you is easy. So if you have data in the cloud and you have an application, you should use ThoughtSpot Everywhere to deliver a much more valuable, interactive experience for your user's data. So that's clear. How do you engage? So we have a very flexible pricing models. If your data's in the cloud, we can either, you can purchase with us, we'll land small and then grow with your consumption. You know, that's always the kind of thing, "Hey, allow us to prove it to you, right?" We start, and then if a user starts to consume, you don't really have to pay a big bill until we see the consumption increase. So we have consumption and data capacity-based types of pricing models. And you know, one of the real advantages that we have for cloud applications is if you're a developer, often, even in the past for ThoughtSpot, we haven't always made that development experience very easy. You have to embed a relatively heavy product but the beauty for ThoughtSpot is from the beginning, we were designed with a modern API-based kind of architecture. Now, a lot of our BI competitors were designed and developed in the desktop server kind of era where everything you embed is very monolithic. But because we have an API driven architecture, we invest a lot of time now to wrap a seamless developer SDK, plus very easy to use REST APIs, plus an interactive kind of a portal to make that development experience also really simple. So if you're a developer, now you really can get from zero to an easy app for ThoughtSpot embedded in your data app in just often in less than 60 minutes. >> John: Yeah. >> So that's also a very great proposition where modern leaders is your data's in the cloud, you've got developers with an SDK, it can get you into an app very quickly. >> All right so bottom line, if you're in the cloud, you got to get the data embed in the apps, data everywhere with ThoughtSpot. >> Yes. >> All right, so let's unpack it a little bit because I think you just highlighted I think what I think is the critical factor for companies as they evaluate their plethora of tools that they have and figuring out how to streamline and be cloud native in scale. You mentioned static and old BI competitors to the cloud. They also have a team of analysts as well that just can make the executives feel like the all of the reports are dynamic but they're not, they're just static. But look at, I know you guys have a relation with Snowflake, and not to kind of bring them into this but to highlight this, Snowflake disrupted the data warehouse. >> Yes. >> Because they're in the cloud and then they refactored leveraging cloud scale to provide a really easy, fast type of value for their product and then the rest is history. They're public, they're worth a lot of money. That's kind of an example of what's coming for every category of companies. There's going to be that. In fact, Jerry Chen, who was just given the keynote here at the event, had just had a big talk called "Castles In The Cloud", you can build a moat in the cloud with your application if you have the right architecture. >> Absolutely. >> So this is kind of a new, this is a new thing and it's almost like beachfront property, whoever gets there first wins the category. >> Exactly, exactly. And we think the timing is right now. 
You know, Snowflake, and even earlier, obviously we had the best conference with Redshift, which really started the whole cloud data warehouse wave, and now you're seeing Databricks even with their Delta Lake and trying to get into that kind of swim lane as well. Right now, all of a sudden, all these things that have been brewing in the background in the data architecture has to becoming mainstream. We're now seeing even large financial institutions starting to always have to test and think about moving their data into cloud data warehouse. But once you're in the cloud data warehouse, all the benefits of its elasticity, performance, that can really get realized at the analytics layer. And what ThoughtSpot really can bring to the table is we've always, because we're a search-based paradigm and when you think about search. Search is all about, doesn't really matter what kind of search you're doing, it's about digging really deep into a lot of data and delivering interactive performance. Those things have always... Doesn't really matter what data architecture we sit on, I've always been really fundamental to how we build our product. And that translates extremely well when you have your data in a Snowflake or Redshift have billions of rows in the cloud. We're the only company, we think, that can deliver interactive performance on all the data you have in a cloud data warehouse. >> Well, I want to congratulate you, guys. I'm really a big fan of the company. I think a lot of companies are misunderstood until they become big and there was, "Why didn't everyone else do that search? Well, I thought they were a search engine?" Being search centric is an architectural philosophy. I know as a North Star for your company but that creates value, right? So if you look at like say, Snowflake, Redshift and Databricks, you mentioned a few of those, you have kind of a couple of things going on. You have multiple personas kind of living well together and the developers like the data people. Normally, they hated each other, right? (giggles) Or maybe they didn't hate each other but there's conflict, there's always cultural tension between the data people and the developers. Now, you have developers who are becoming data native, if you will, just by embedding that in. So what Snowflake, these guys, are doing is interesting. You can be a developer and program and get great results and have great performance. The developers love Snowflake, they love Databricks, they love Redshift. >> Absolutely. >> And it's not that hard and the results are powerful. This is a new dynamic. What's your reaction to that? >> Yeah, no, I absolutely believe that. I think, part of the beauty of the cloud is I like your kind of analogy of bringing people together. So being in the cloud, first of all, the data is accessible by everyone, everywhere. You just need a browser and the right permissions, you can get your data, and also different kind of roles. They all kind of come together. Things best of breed tools get blended together through APIs. Everything just becomes a lot more accessible and collaborative and I know that sounds kind of little kumbaya, but the great thing about the cloud is it does blur the lines between goals. Everyone can do a little bit of everything and everyone can access a little bit more of their data and get more value out of it. >> Yeah. >> So all of that, I think that's the... If you talk about digital transformation, you know, that's really at the crux of it. 
>> Yeah, and I think at the end of the day, speed and high quality applications is a result and I think, the speed game if automation being built in on data plays a big role in that, it's super valuable and people will get slowed down. People get kind of angry. Like I don't want to get, I want to go faster, because automations and AI is going to make things go faster on the dev side, certainly with DevOps, clouds proven that. But if you're like an old school IT department (giggles) or data department, you're talking to weeks not minutes for results. >> Yes. >> I mean, that's the powerful scale we're talking about here. >> Absolutely. And you know, if you think about it, you know, if it's days to minutes, it sounds like a lot but if you think about like also each question, 'cause usually when you're thinking about questions, they come in minutes. Every minute you have a new question and if each one then adds days to your journey, that over time is just amplified, it's just not sustainable. >> Okay- >> So now in the cloud world, you need to have things delivered on demand as you think about it. >> Yeah, and of course you need the data from a security standpoint as well and build that in. Chances is people shift left. I got to ask you if I'm a customer, I want to just run this by you. You mentioned you have an SDK and obviously talking to developers. So I'm working with ThoughtSpot, I'm the leader of the organization. I'm like, "Okay, what's the headroom? What's going to happen as a bridge, the future gets built so I'm going to ride with ThoughtSpot." You mentioned SDK, how much more can I do to build and wrap around ThoughtSpot? Because obviously, this kind of value proposition is enabling value. >> Yes. >> So I want to build around it. How do I get started and where does it go? >> Yeah, well, you can get started as easy as starting with our free trial and just play around with it. And you know, the beauty of SDK and when I talk about how ThoughtSpot is built with API-driven architecture is, hey, there's a lot of magic and features built into ThoughtSpot core pod. You could embed all of that into an application if you would like or you could also use our SDK and our APIs to say, "I just want to embed a couple of visualizations," start with that and allow the users to take into that. You could also embed the whole search feature and allow users to ask repetitive questions, or you can have different role-based kind of experiences. So all of that is very flexible and very dynamic and with SDK, it's low-code in the sense where it creates a JavaScript portal for you and even for me who's haven't coded in a long time. I can just copy and paste some JavaScript code and I can see my applications reflecting in real time. So it's really kind of a modern experience that developers in today's world appreciate, and because all the data's in the cloud and in the cloud, applications are built as services connected through APIs, we really think that this is the modern way that developers would get started. And analysts, even analysts who don't have strong developer training can get started with our developer portal. So really, it's a very easy experience and you can customize it in whichever way you want that suits your application's needs. >> Yeah, I think it's, you don't have to be a developer to really understand the basic value of reuse and discovery of services. I think that's one of these we hear from developers all the time, "I had no idea that Victor did that code. Why do I have to rewrite that?" 
So you see, reuse come up a lot around automation where code is building with code, right? So you have this new vibe and you need data to discover that search paradigm mindset. How prevalent is that on the minds of customers? Are they just trying to like hold on and survive through the pandemic? (giggles) >> Well, customers are definitely thinking about it. You know, the challenge is change is always hard, you know? So it takes time for people to see the possibilities and then have to go through especially in larger organizations, but even in smaller organizations, people think about, "Well, how do I change my workflow?" and then, "How do I change my data pipeline?" You know, those are the kinds of things where, you know, it takes time, and that's why Redshift has been around since 2012 or I believe, but it took years before enterprises really are now saying, "The benefits are so profound that we really have to change the workflows, change the data pipelines to make it work because we can't hold on to the old ways." So it takes time but when the benefits are so clear, it's really kind of a snowball effect, you know? Once you change a data warehouse, you got to think about, "Do I need to change my application architecture?" Then, "Do I need to change the analytics layer?" And then, "Do I need to change the workflow?" And then you start seeing new possibilities because it's all more flexible that you can add more features to your application and it's just kind of a virtuous cycle, but it starts with taking that first step to your point of considering migrating your data into the cloud and we're seeing that across all kinds of industries now. I think nobody's holding back anymore. It just takes time, sometimes some are slower and some are faster. >> Well, all apps or data apps and it's interesting, I wrote a blog post in 2017 called, "Data Is The New Developer Kit" meaning it was just like a vision statement around data will be part of how apps, like software, it'll be data as code. And you guys are doing that. You're allowing data to be a key ingredient for interactivity with analytics. This is really important. Can you just give us a use case example of how someone builds an interactive data app with ThoughtSpot Everywhere? >> Yeah, absolutely. So I think there are certain applications that when naturally things relates to data, you know, I talk about bending or those kinds of things. Like when you use it, you just kind of inherently know, "Hey, there's tons of data and then can I get some?" But a lot of times we're seeing, you know, for example, one of our customers is a very small company that provides software for personal trainers and small fitness studios. You know, you would think like, "Oh well, these are small businesses. They don't have a ton of data. A lot of them would probably just run on QuickBooks or Excel and all of that." But they could see the value is kind of, once a personal trainer conducts his business on a cloud software, then he'll realize, "Oh, I don't need to download any more data. I don't need to run Excel anymore, the data is already there in a software." And hey, on top of that, wouldn't it be great if you have an analytics layer that can analyze how your clients paid you, where your appointments are, and so forth? And that's even just for, again like I said, no disrespect to personal trainers, but even for one or two personal trainers, hey, they can be an analytics and they could be an analyst on their business data. >> Yeah, why not? 
Everyone's got a Fitbits and watches and they could have that built into their studio APIs for the trainers. They can get collaboration. >> That's right. So there's no application you can think that's too simple or you might think too traditional or whatnot for analytics. Every application now can become a very engaging data application. >> Well Victor, it's great to have you on. Obviously, great conversation around ThoughtSpot anywhere. And as someone who runs corp dev for ThoughtSpot, for the folks watching that aren't customers yet for ThoughtSpot, what should they know about you guys as a company that they might not know about or they should know about? And what are people talking about ThoughtsSpot, what are they saying about it? So what should they know that know that's not being talked about or they may not understand? And what are other people saying about ThoughtSpot? >> So a couple of things. One is there's a lot of fun out there. I think about search in general, search is generally a very broad term but I think it, you know, I go back to what I was saying earlier is really what differentiates ThoughtSpot is not just that we have a search bar that's put on some kind of analytics UI. Really, it's the fundamental technical architecture underlying that is from the ground up built for search large data, granular, and detailed exploration of your data. That makes us truly unique and nobody else can really do search if you're not built with a technical foundation. The second thing is, we're very much a cloud first company now, and a ton of our over the past few years because of the growth of these highly performing data warehouses like Snowflake and Redshift, we're able to really focus on what we do best which is the search and the query processing performance on the front end and we're fully engaged with cloud platforms now. So if you have data in the cloud, we are the best analytics front end for that. >> Awesome, well, thanks for coming on. Great the feature you guys here in the "Startup Showcase", great conversation, ThoughtSpot leading company, hot startup. We did their event with them with theCUBE a couple of months ago. Congratulations on all your success. Victor Chang, VP of ThoughtSpot Everywhere and Corporate Development here on theCUBE and "AWS Startup Showcase". Go to awsstartups.com and be part of the community, we're doing these quarterly featuring the hottest startups in the cloud. I'm John Furrier, thanks for watching. >> Victor: Thank you so much. (bright music)
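To make the embedding workflow Victor describes a little more concrete, here is a minimal sketch of what dropping a ThoughtSpot search experience into a web app might look like with the Visual Embed SDK. The package name, function names, and option fields below are assumptions based on ThoughtSpot's publicly documented SDK rather than anything stated in the interview, so treat it as illustrative, not authoritative.

```typescript
// Sketch only: package, API names, and option fields are assumptions about the
// ThoughtSpot Visual Embed SDK and may differ from the current SDK version.
import { init, AuthType, SearchEmbed } from "@thoughtspot/visual-embed-sdk";

// Point the SDK at a ThoughtSpot cloud instance (hypothetical host).
init({
  thoughtSpotHost: "https://mycompany.thoughtspot.cloud",
  authType: AuthType.None, // a real app would use SSO or trusted authentication
});

// Render a self-service search experience inside an existing container in the app.
const search = new SearchEmbed("#analytics-container", {
  dataSources: ["<worksheet-or-table-guid>"], // placeholder data source id
  collapseDataSources: true,
});

search.render();
```

The pattern is the one Victor outlines: initialize once, then embed a search box, a single visualization, or a full dashboard wherever it belongs in the application's workflow, so users can ask their own follow-up questions without the application team building a query and visualization engine of their own.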
Ed Naim & Anthony Lye | AWS Storage Day 2021
(upbeat music) >> Welcome back to AWS Storage Day. This is the Cube's continuous coverage. My name is Dave Vellante, and we're going to talk about file storage. 80% of the world's data is in unstructured storage. And most of that is in file format. Devs want infrastructure as code. They want to be able to provision and manage storage through an API, and they want that cloud agility. They want to be able to scale up, scale down, pay by the drink. And the big news of Storage Day was really the partnership, deep partnership between AWS and NetApp. And with me to talk about that are Ed Naim, who's the general manager of Amazon FSx, and Anthony Lye, executive vice president and GM of public cloud at NetApp. Two Cube alums. Great to see you guys again. Thanks for coming on. >> Thanks for having us. >> So Ed, let me start with you. You launched FSx in 2018 at re:Invent. How is it being used today? >> Well, we've talked about FSx on the Cube before, Dave, but let me start by recapping that FSx makes it easy to launch and run fully managed, feature rich, high performance file storage in the cloud. And we built FSx from the ground up really to have the reliability, the scalability you were talking about, the simplicity, to support a really wide range of workloads and applications. And with FSx, customers choose the file system that powers their file storage, with full access to the file system's feature sets, the performance profiles and the data management capabilities. And so since re:Invent 2018, when we launched this service, we've offered two file system choices for customers. So the first was Windows File Server, and that's really storage built on top of Windows Server, designed as a really simple solution for Windows applications that require shared storage. And then Lustre, which is an open source file system that's the world's most popular high-performance file system. And the Amazon FSx model has really resonated strongly with customers for a few reasons. So first, for customers who currently manage network attached storage or NAS on premises, it's such an easy path to move their applications and their application data to the cloud. FSx works and feels like the NAS appliances that they're used to, but added to all of that are the benefits of a fully managed cloud service. And second, for builders developing modern new apps, it helps them deliver fast, consistent experiences for Windows and Linux in a simple and an agile way. And then third, for research scientists, its storage performance and its capabilities for dealing with data at scale really make it a no-brainer storage solution. And so as a result, the service is being used for a pretty wide spectrum of applications and workloads across industries. So I'll give you a couple of examples. So there's this class of what we call common enterprise IT use cases. So think of things like end user file shares, corporate IT applications, content management systems, highly available database deployments. And then there's a variety of common line of business and vertical workloads that are running on FSx as well. So financial services, there's a lot of modeling and analytics workloads; life sciences, a lot of genomics analysis; media and entertainment, rendering and transcoding and visual effects; automotive, we have a lot of electronic control unit simulations and object detection; semiconductor, a lot of EDA, electronic design automation. And then oil and gas, seismic data processing, a pretty common workload on FSx.
And then there's a class of really ultra-high-performance workloads that are running on FSx as well. Think of things like big data analytics. So SAS Grid is a common application. A lot of machine learning model training, and then a lot of what people would consider traditional or classic high performance computing or HPC. >> Great. Thank you for that. Just quick follow-up if I may, and I want to bring Anthony into the conversation. So why NetApp? This is not a Barney deal — there was real elbow grease going into this, it was not a Barney deal. You know, I love you. You love me. We do a press release. But, but why NetApp? Why ONTAP? Why now? (momentary silence) Ed, that was to you. >> Was that a question for Anthony? >> No, for you Ed. And then I want to bring Anthony in. >> Oh, sure. Sorry. Okay. Sure. Yeah, I mean, Dave, it really stemmed from both companies realizing a combined offering would be highly valuable to and impactful for customers. In reality, we started collaborating, Amazon and NetApp, on the service probably about two years ago. And we really had a joint vision that we wanted to provide AWS customers with the full power of ONTAP. The complete ONTAP, with every capability and with ONTAP's full performance, but fully managed and offered as a full-blown AWS native service. So what that would mean is that customers get all of ONTAP's benefits along with the simplicity and the agility, the scalability, the security, and the reliability of an AWS service. >> Great. Thank you. So Anthony, I have watched NetApp reinvent itself — it started in workstations, I saw you go into the enterprise, I saw you lean into virtualization — and you told me, at least two years ago, it might've been three years ago, "Dave, we are going all in on the cloud. We're going to lead this next chapter." And so, I want you to bring in your perspective. You're re-inventing NetApp yet again, you know, what are your thoughts? >> Well, you know, NetApp and AWS have had a very long relationship. I think it probably dates now about nine years. And what we really wanted to do in NetApp was give the most important constituent of all an experience that helped them progress their business. So ONTAP, you know, the industry's leading shared storage platform — we wanted to make sure that in AWS, it was as good as it was on premise. We love the idea of giving customers this wonderful concept of symmetry. You know, ONTAP runs the biggest applications in the largest enterprises on the planet. And we wanted to give not just those customers an opportunity to embrace the Amazon cloud, but we wanted to also extend the capabilities of ONTAP through FSx to a new customer audience. Maybe those smaller companies that didn't really purchase on premise infrastructure, people that were born in the cloud. And of course, this gives us a great opportunity to present a fully managed ONTAP, within the FSx platform, to a lot of non NetApp customers, to our competitors' customers, Dave, that frankly haven't done the same as we've done. And I think we are the benefactors of it, and we're in turn passing that innovation, that transformation, on to the customers and the partners. >> You know, one of the key aspects here is that it's a managed service. I don't think that can be, you know, overstated. And the other is the cloud nativeness of this. Anthony, you mentioned it here — you know, the marketplace is great, but this is some serious engineering going on here. So Ed, maybe start with the perspective of a managed service.
I mean, what does that mean? The whole ball of wax? >> Yeah. I mean, what it means to a customer is they go into the AWS console, or they go to the AWS SDK or the AWS CLI, and they are easily able to provision a resource — a file system — and it automatically will get built for them. And there's nothing else that they need to do; at that point, they get an endpoint that they have access to the file system from, and that's it. We handle patching, we handle all of the provisioning, we handle any hardware replacements that might need to happen along the way. Everything is fully managed. So the customer really can focus not on managing their file system, but on doing all of the other things that they want to do and that they need to do. >> So Anthony, in a way you're disrupting yourself, which is kind of what you told me a couple of years ago. You're not afraid to do that, because if we don't do it, somebody else is going to do it. Because in the old days, you're selling a box and you say, we'll see you next time in, you know, three or four years. So from your customers' standpoint, what's their reaction to this notion of a managed service, and what does it mean to NetApp? >> Well, so I think the most important thing it does is it gives them investment protection. The wonderful thing about what we've built with Amazon in the FSx platform is it's a complete ONTAP. And so one ONTAP cluster on premise can immediately see and connect to an ONTAP environment under FSx. We can then establish various different connectivities. We can use SnapMirror technologies for disaster recovery. We can use efficient data transfer for things like dev/test and backup. Of course, the wonderful thing that we've done, where we've gone above and beyond what anybody else has done, is we want to make sure that the actual primary application itself — one that was sort of built using NAS in an on-premise environment, SAP and Oracle, et cetera, as Ed said — that we can move those over and have the confidence to run the application with no changes on an Amazon environment. So what we've really done, I think, for customers — the NetApp customers, the non NetApp customers — is we've given them an enterprise grade shared storage platform that's as good in an Amazon cloud as it was in an on-premise data center. And that's something that's very unique to us. >> Can we talk a little bit more about those use cases? You know, both of you. What are you seeing as some of the more interesting ones that you can share? Ed, maybe you can start. >> Yeah, happy to. The customer discussions that we've been in have really highlighted four use cases the customers are telling us they'll use the service for. So maybe I'll cover two and maybe Anthony can cover the other two. So, the first is application migrations. And customers are increasingly looking to move their applications to AWS. And a lot of those applications work with file storage today. And so we're talking about applications like SAP. We're talking about relational databases like SQL Server and Oracle. We're talking about vertical applications like Epic in the healthcare space. As another example, lots of media and entertainment rendering, transcoding, and visual effects workloads; those workflows require Windows, Linux, and macOS access to the same set of data. And what application administrators really want is they want the easy button.
They want fully featured file storage that has the same capabilities, the same performance that their applications are used to, has extremely high availability and durability, and can easily enable them to meet compliance and security needs with a robust set of data protection and security capabilities. And I'll give you an example. Accenture, for example, has told us that a key obstacle their clients face when migrating to the cloud is potentially re-architecting their applications to adopt new technologies. And they expect that Amazon FSx for NetApp ONTAP will significantly accelerate their customers' migrations to the cloud. Then a second one is storage migrations. So storage admins are increasingly looking to extend their on-premise storage to the cloud. And why they want to do that is they want to be more agile and they want to be responsive to growing data sets and growing workload needs. They want elastic capacity. They want the ability to spin up and spin down. They want easy disaster recovery across geographically isolated regions. They want the ability to change performance levels at any time. So all of this goodness that they get from the cloud is what they want. And more and more of them also are looking to make their company's data accessible to cloud services for analytics and processing. So services like ECS and EKS and WorkSpaces and AppStream and VMware Cloud and SageMaker, and orchestration services like ParallelCluster and AWS Batch. But at the same time, while they want all these cloud benefits, they have established data management workflows, and they've built processes and automation leveraging APIs and capabilities of on-prem NAS appliances. It's really tough for them to just start from scratch with that stuff. So this offering provides them the best of both worlds. They get the benefits of the cloud with the NAS data management capabilities that they're used to. >> Right. >> Ed: So Anthony, maybe, do you want to talk about the other two? >> Well, so, you know, first and foremost, you heard from Ed earlier on the FSx sort of construct and how successful it's been. And one of the real reasons it's been so successful is, it takes advantage of all of the latest storage technologies, compute technologies, networking technologies. What's great is all of that's hidden from the user. What FSx does is it delivers a service. And what that means for an ONTAP customer is you're going to have ONTAP with an SLA and an SLM. You're going to have hundreds of thousands of IOPS available to you and sub-millisecond latencies. What's also really important is the design for FSx for NetApp ONTAP was really to provide consistency with the NetApp API and to provide full access to ONTAP from the Amazon console, the Amazon SDK, or the Amazon CLI. So in this case, you've got this wonderful benefit of all of the, sort of, the 29 years of innovation of NetApp, combined with all the innovation of AWS, all presented consistently to a customer. What Ed said, which I'm particularly excited about, is customers will see this just as they see any other AWS service. So if they want to use ONTAP in combination with some incremental compute resources, maybe with their own encryption keys, maybe with directory services, they may want to use it with other services like SageMaker — all of those things are immediately exposed to Amazon FSx for NetApp ONTAP. We do some really intelligent things just in the storage layer. So, for example, we do intelligent tiering.
So the customer is constantly getting, sort of, the best TCO. So what that means is we're using Amazon's S3 storage as a tiered service, so that we can move cold data off of the primary file system to give the customer the optimal capacity, the optimal throughput, while maintaining the integrity of the file system. It's the same with backup. It's the same with disaster recovery, whether we're operating in a hybrid AWS cloud, or we're operating in an AWS region or across regions. >> Well, thank you. I think this announcement is a big deal for a number of reasons. First of all, it's the largest market. Like you said, you're the gold standard. I'll give you that, Anthony, because you guys earned it. And so it's a large market, but previously you always had to make trade-offs. Either I could do file in the cloud, but I didn't get the rich functionality that, you know, NetApp's mature stack brings, or, you know, you could have wrapped your stack in a Kubernetes container and thrown it into the cloud and hosted it there. But now that it's a managed service — and presumably, underneath, you're taking advantage... As I say, my inference is there's some serious engineering going on here. You're taking advantage of some of the cloud native capabilities. Yeah, maybe it's the different, you know, EC2 instance types, but also being able to bring in — we're entering a new data era — machine intelligence and other capabilities that we really didn't have access to last decade. So I want to close with, you know, giving you guys the last word. Maybe each of you could give me your thoughts on how you see this partnership in the future, particularly from a customer standpoint. Ed, maybe you could start. And then Anthony, you can bring us home. >> Yeah, well, Anthony and I and our teams have gotten to know each other really well in ideating around what this experience will be and then building the product. And we have this common vision that it is something that's going to really move the needle for customers. Providing the full ONTAP experience with the power of a native AWS service. So we're really excited. We're in this for the long haul together. We've partnered on everything from engineering, to product management, to support — like, the full thing. This is a co-owned effort, a joint effort backed by both companies. And we have, I think, a pretty remarkable product on day one, one that I think is going to delight customers. And we have a really rich roadmap that we're going to be building together over the years. So I'm excited about getting this in customers' hands. >> Great, thank you. Anthony, bring us home. >> Well, you know, it's one of those sorts of rare chances where you get to do something with Amazon that no one's ever done. You know, we're sort of sitting on the inside, we are a peer of theirs, and we're able to develop at very high speeds in combination with them to release continuously to the customer base. So what you're going to see here is rapid innovation. You're going to see a whole host of new services. Services that NetApp develops, services that Amazon develops. And then the whole ecosystem is going to have access to this, whether they're historically built on the NetApp APIs or increasingly built on the AWS APIs. I think you're going to see orchestrations. I think you're going to see the capabilities expand the overall opportunity for AWS to bring enterprise applications over.
For me personally, Dave, you know, I've demonstrated yet again to the NetApp customer base, how much we care about them and their future. Selfishly, you know, I'm looking forward to telling the story to my competitors, customer base, because they haven't done it. So, you know, I think we've been bold. I think we've been committed as you said, three and a half years ago, I promised you that we were going to do everything we possibly could. You know, people always say, you know, what's, what's the real benefit of this. And at the end of the day, customers and partners will be the real winners. This, this innovation, this sort of, as a service I think is going to expand our market, allow our customers to do more with Amazon than they could before. It's one of those rare cases, Dave, where I think one plus one equals about seven, really. >> I love the vision and excited to see the execution Ed and Anthony, thanks so much for coming back in the Cube. Congratulations on getting to this point and good luck. >> Anthony and Ed: Thank you. >> All right. And thank you for watching everybody. This is Dave Vellante for the Cube's continuous coverage of AWS storage day. Keep it right there. (upbeat music)
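In the conversation above, Ed describes the managed-service experience — provision a file system from the console, SDK, or CLI and get back an endpoint — and Anthony describes tiering cold data off the primary file system. As a concrete illustration only, here is a minimal, hedged sketch of that workflow using the AWS SDK for Python (boto3). The region, subnet and security group IDs, names, and sizes are placeholders, and the deployment type, throughput, and tiering policy are example values, not recommendations from the interview.

```python
# Hedged sketch: provisioning Amazon FSx for NetApp ONTAP with boto3.
# All IDs, names, and sizes below are placeholders.
import time
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# 1. Create the file system (the fully managed resource Ed describes).
fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                                # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],    # placeholders
    SecurityGroupIds=["sg-cccc3333"],                    # placeholder
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 512,                       # MB/s
    },
)["FileSystem"]

# 2. Wait until the file system is available (simple polling; production code
#    would add timeouts and error handling).
while fs["Lifecycle"] != "AVAILABLE":
    time.sleep(60)
    fs = fsx.describe_file_systems(
        FileSystemIds=[fs["FileSystemId"]]
    )["FileSystems"][0]

# 3. Create a storage virtual machine (SVM) that serves NFS/SMB clients.
#    (Production code would also wait for the SVM to become available.)
svm = fsx.create_storage_virtual_machine(
    FileSystemId=fs["FileSystemId"], Name="svm1"
)["StorageVirtualMachine"]

# 4. Create a volume with a tiering policy so cold data moves to the cheaper
#    capacity tier -- the intelligent tiering Anthony describes.
vol = fsx.create_volume(
    VolumeType="ONTAP",
    Name="vol1",
    OntapConfiguration={
        "StorageVirtualMachineId": svm["StorageVirtualMachineId"],
        "JunctionPath": "/vol1",
        "SizeInMegabytes": 102400,                       # ~100 GiB
        "StorageEfficiencyEnabled": True,
        "TieringPolicy": {"Name": "AUTO", "CoolingPeriod": 31},
    },
)["Volume"]

print("SVM:", svm["StorageVirtualMachineId"], "Volume:", vol["VolumeId"])
```

The storage virtual machine created here exposes the NFS/SMB endpoints that clients mount, so from the application's point of view it behaves like the on-premises ONTAP systems discussed above.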
Craig Nunes & Tobias Flitsch, Nebulon | CUBEconversations
(upbeat intro music) >> More than a decade ago, the team at Wikibon coined the term Server SAN. We saw the opportunity to dramatically change the storage infrastructure layer and predicted a major change in technologies that would hit the market. Server SAN had three fundamental attributes. First of all, it was software led. So all the traditionally expensive controller functions like snapshots and clones and de-dupe and replication, compression, encryption, et cetera — they were done in software, directly challenging a two-to-three-decade-long storage controller paradigm. The second principle was it leveraged and shared storage inside of servers. And the third, it enabled any-to-any topology between servers and storage. Now, at the time we defined this coming trend in a relatively narrow sense, inside of a data center location, for example, but in the past decade, two additional major trends have emerged. First, the software defined data center became the dominant model, thanks to VMware and others. And while this eliminated a lot of overhead, it also exposed another problem. Specifically, data centers today allocate — we estimate — around 35% of CPU cores and cycles to managing things like storage and network and security, offloading those functions. These are wasted cores, and doing this with traditional general purpose x86 processors is expensive and it's not efficient. This is why we've been reporting so aggressively on ARM's ascendancy into the enterprise. It's not only coming, it's here, and we're going to talk about that today. The second mega trend is cloud computing. Hyperscale infrastructure has allowed technology companies to put a management and control plane into the cloud and expand beyond our narrow Server SAN scope within a single data center and support the management of distributed data at massive scale. And today we're on the cusp of a new era of infrastructure. And one of the startups in this space is Nebulon. Hello everybody, this is Dave Vellante, and welcome to this Cube Conversation where we welcome in two great guests: Craig Nunes, Cube alum, co-founder and COO at Nebulon, and Tobias Flitsch, who's director of product management at Nebulon. Guys, welcome. Great to see you. >> So good to be here Dave. Feels awesome. >> Soon, face to face. Craig, I'm heading your way. >> I can't wait. >> Craig, you heard my narrative upfront and I'm wondering, are those the trends that you guys saw when you started the company? What are the major shifts in the world today that caused you and your co-founders to launch Nebulon? >> Yeah, I'll give you sort of the way we think about the world, which I think aligns super well with what you're talking about. You know, over the last several years, organizations have had a great deal of experience with public cloud data centers. And I think, like any platform or technology that, you know, gets its use in a variety of ways, a bit of savvy is being developed by organizations on, you know, what do I put where, how do I manage things in the most efficient way possible? And in terms of the types of folks we're focused on in Nebulon's business, we see now kind of three groups of people emerging, and we sort of simply coined three terms: the returners, the removers, and the remainers.
I'll explain what I mean by each of those, the returners are folks who maybe early on, you know, hit the gas on cloud, moved, you know, everything in, a lot in, and realize that while it's awesome for some things, for other things, it was less optimal. Maybe cost became a factor or visibility into what was going on with their data was a factor, security, service levels, whatever. And they've decided to move some of those workloads back. Returners. There are what I call the removers that are taking workloads from, you know, born in the cloud. On-prem, you know, and this was talked a lot about in Martine's blog that, you know, talked about a lot of the growth companies that built up such a large footprint in the public cloud, that economics were kind of working against them. You can, depending on the knobs you turn, you know, you're probably spending two and a half X, two X, what you might spend if you own your own factory. And you can argue about, you know, where your leverage is in negotiating your pricing with the cloud vendors, but there's a big gap. The last one is, and I think probably the most significant in terms of who we've engaged with is the remainers. And the remainers are, you know, hybrid IT organizations. They've got assets in the cloud and on-prem, they aspire to an operational model that is consistent across everything and, you know, leveraging all the best stuff that they observed in their cloud-based assets. And it's kind of our view that frankly we take from, from this constituency that, when people talk about cloud or cloud first, they're moving to something that is really more an operating model versus a destination or data center choice. And so, we get people on the phone every day, talking about cloud first. And when you kind of dig into what they're after, it's operating model characteristics, not which data center do I put it in, and those, those decisions are separating. And so that, you know, it's really that focus for us is where, we believe we're doing something unique for that group of customers. >> Yeah. Cloud first doesn't doesn't mean cloud only. And of course followers of this program know, we talk a lot about this, this definition of cloud is changing, it's evolving, It's moving to the edge, it's moving to data centers, data centers are moving to the cloud. Cross-cloud, it's that big layer that's expanding. And so I think the definition of cloud, even particularly in customer's minds is evolving. There's no question about it. People, they'll look at what VMware is doing in AWS and say, okay, that's cloud, but they'll also look at things like VMware cloud foundation and say oh yeah, that's cloud too. So to me, the beauty of cloud is in the eye of the customer beholder. So I buy that. Tobias. I wonder if you could talk about how this all translates into product, because you guys start up, you got to sell stuff, you use this term smart infrastructure, what is that? How does this all turn into stuff you can sell? >> Right. Yeah. So let me back up a little bit and talk a little bit about, you know, what we at Nebulon do. So we are a cloud based software company, and we're delivering sort of a new category of smart infrastructure. And if you think about things that you would know from your everyday surroundings, smart infrastructure is really all around us. Think smart home technology like Google Nest as an example. And what this all has in common is that there's a cloud control plane that is managing some IOT end points and smart devices in various locations. 
And by doing that, customers gain benefits like easy remote management, right? You can manage your thermostat, your temperature, from anywhere in the world, basically. You don't have to worry about automated software updates anymore, and you can easily automate your home, your infrastructure, through this cloud control plane. Now translate this idea to the data center, right? This idea is not necessarily new, right? If you look into the networking space, with Meraki Networks, now Cisco, or Mist Systems, now Juniper, they've really pioneered efforts in cloud management — so, smart network infrastructure. And the key problem that they solved there is, you know, managing this vast amount of access points and switches that are scattered across data centers, across campuses, and, you know, the data center. Now, if you translate that to what Nebulon does, it's really applying this smart infrastructure idea, this methodology, to application infrastructure, specifically to compute and storage infrastructure. And that's essentially what we're doing. So with smart infrastructure — basically our offering at Nebulon, the product — you get the benefits of this cloud experience, this public cloud operating model. As we've talked about, some of our customers look at the cloud as an operating model rather than a destination, a physical location. And with that, we bring this model, this experience, as an operating model to on-premises application infrastructure. And really, the benefits of this broad offering from Nebulon circle around, you know, four areas. First of all, rapid time to value, right? So application owners — think people that are not specialists or experts when it comes to IT infrastructure, but more generalists — they can provision on-premise application infrastructure in less than 10 minutes, right? It can go from just bare metal physical racks to the full application stack in less than 10 minutes, so they're up and running a lot quicker, and they can immediately deliver services to their end customers. Second, cloud-like operations — this notion of zero touch remote management, which now, with the last couple of months and this strange time that we're in with COVID, is, you know, turning out to be more and more relevant: remotely administering and managing infrastructure that scales from just hundreds of nodes to thousands of nodes — it doesn't really matter — with behind-the-scenes software updates, with global AI analytics and insights, and basically, overall, combined, reducing the operational overhead when it comes to on-premises infrastructure by up to 75%, right? The other thing is support for any application, whether it's containerized, virtualized, or even bare metal applications. And the idea here is really consistently leveraging server-based storage that doesn't require any Nebulon-specific software on the server. So you get the full power of your application servers for your applications — again, as the servers were intended. And then the fourth benefit when it comes to smart infrastructure is, of course, doing this all at a lower cost and with better data center density. And that is really comparing it to three-tier architectures, where you have your server, your SAN fabric, and then you have an external storage array, but also when you compare it with hyper-converged infrastructure software, right, that is consuming resources of the application servers — think CPU, think memory and networking.
So basically you get a lot more density with that approach compared to those architectures. >> Okay, I want to dig into some of that differentiation too, but what exactly do I buy from you? Do I buy a software subscription? Is that right? Can you explain that a little bit? >> Right. So basically the way we do this is it's really leveraging two key new innovations, right? And you see why I made the bridge to smart home technology, because the approach is similar, right? The one is, you know, the introduction of a cloud control plane that basically manages this on-premise application infrastructure — of course, that is delivered to customers as a service. The second one is, you know, a new infrastructure model that uses IoT endpoint technology, and that is embedded into standard application servers and the storage within those application servers. Let me add a couple of words to that to explain a little bit more. So really at the heart of smart infrastructure, in order to deliver this public cloud experience for any on-prem application, is this cloud-based control plane, right? So we've built this the way we recommend our customers use a public cloud — that is, you know, building your software on modern technologies that are vendor-agnostic. So it could essentially run anywhere, whether it is, you know, any public cloud vendor, or, if we wanted to, run it in our own data centers when regulatory requirements change. It's massively scalable and responsive, no matter how large the managed infrastructure is. But really the interesting part here, Dave, is that the customer doesn't really have to worry about any of that — it's delivered as a service. So what a customer gets from this cloud control plane is a single API endpoint, just how they'd get it with a public cloud. They get a web user interface from which they can manage all of their infrastructure, no matter how many devices, no matter where it is — it can be in the data center, it can be in an edge location anywhere in the world. They get template-based provisioning, much like a marketplace in a public cloud. They get analytics, predictive support services, and super easy automation capabilities. Now the second thing that I mentioned is this server embedded software — the server embedded infrastructure software — and that is running on a PCIe-based offload engine. And that is really acting as this managed IoT endpoint within the application server that I mentioned earlier. And that approach really further converges modern application infrastructure. And it really replaces the software defined storage approach that you'll find in hyper-converged infrastructure software, and that is really by embedding the data services, the storage data services, into silicon within the server. Now this offload engine, we call that a services processing unit, or SPU in short. And that is really what differentiates us from hyper-converged infrastructure. And it's quite different than a regular accelerator card that you get with some of the hyper-converged infrastructure offerings. And it's different in the sense that the SPU runs basically all of the shared and local data services — it's not just accelerating individual algorithms, individual functions. And it basically provides all of these services alongside the CPU, with the boot drive, with data drives. And in essence it provides you with a separate fault domain from the server, so for example, if you reboot your server, the data plane remains intact. You know, it's not impacted by that.
So I want to stay on that for just a second, Craig, if I could. It's very clear how you're different from, as Tobias said, the three-tier server, SAN fabric, external array approach. The HCI thing's interesting, because in some respects — you know, guys, take Nutanix — they talk about cloud and becoming more friendly with developers and the API piece. But what's your point of view, Craig, on how you position relative to, say, HCI? >> Yeah, absolutely. So everyone gets what three-tier architecture is and was, and HCI software, you know, emerged as an alternative to the three-tier architectures. Everyone I think today understands that the data services are, you know, SDS — software hosted in the operating system of each HCI device — and consume some amount of CPU, memory, network, whatever. And it's typically constrained to a hypervisor environment, kind of where most of that stuff is done. And over time, these platforms have added some monitoring capabilities, predictive analytics, typically provided by the vendor's cloud, right? And as Tobias mentioned, some HCI vendors have augmented this approach by adding an accelerator to make things like compression and dedupe go faster, right? Think SimpliVity or something like that. The difference that we're talking about here is, the infrastructure software that we deliver as a service is embedded right into server silicon. So it's not sitting in the operating system of choice. And what that means is you get the full power of the server you bought for your workloads. It's not constrained to a hypervisor-only environment, it's OS agnostic. And, you know, it's entirely controlled and administered by the cloud, versus, you know, most HCI is an on-prem console that manages a cluster or two on-prem. And, you know, think of it from an automation perspective. When you automate something, you've got to set up your playbook kind of cluster by cluster. And depending what versions they're on, APIs are changing, behaviors are changing. So a very different approach at scale. And so again, for us, we're talking about something that gives you a much more efficient infrastructure that is then managed by the cloud and gives you this full kind of operational model you would expect for any kind of cloud-based deployment. >> You know, I got to go back — you guys obviously have some 3PAR DNA hanging around, and you know, of course you remember well, the 3PAR ASIC, it was kind of famous at the time and it was unique. And I bring that up only because you've mentioned a couple of times the silicon, and a lot of people go, yeah, whatever, but we have been on this, especially, particularly with ARM. And I want to share with the audience — if you follow my breaking analysis, you know this. If you look at the historical curve of Moore's law with x86, it's the doubling of performance every two years, roughly; that comes out to about 40% a year. That's moderated down to about 30% a year now. If you look at the ARM ecosystem and take, for instance, Apple A15 and the previous series, for example, over the last five years, the performance — when you combine the CPU, GPU, NPU, the accelerators, the DSPs, which by the way are all customizable — that's growing at 110% a year, and the SoC costs 50 bucks. So my point is that you guys are riding — a perfect example of doing offloads with a way more efficient architecture — you're now on that curve that's growing at 100% plus per year, whereas a lot of the legacy storage is still on that 30% a year curve. And so cheaper, lower power.
That's why I love that you were bringing in the IoT and the smart infrastructure — this is the future of storage and infrastructure. >> Absolutely. And the thing I would emphasize is it's not limited to storage. Storage is a big issue, but we're talking about your application infrastructure, and you brought up something interesting on the GPU, the SmartNIC kind of things. And just to kind of level set with everybody there: there's the HCI world, and then there's this SmartNIC, DPU world, whatever you want to call it, where it's effectively a network card, it's got that specialized processing onboard and firmware to provide some network, security, storage services — and think of it as a PCIe card in your server. It connects to an external storage system, so think NVIDIA BlueField-2 connecting to an external NVMe storage device. And the interesting thing about that is, you know, storage processing is offloaded from the server. So as we said earlier, good, right, you get the server back to your application, but storage moves out of the server. And it starts to look a little bit like an external storage approach versus a server based approach. And infrastructure management is done by, you know, the server SmartNIC, with some monitoring and analytics coming from, you know, your supplier's cloud support service. So complexity creeps back in if you start to lose that, you know, heavily converged approach. Again, we are taking advantage of storage within the server and, you know, keeping this a real server based approach, but distinguishing ourselves from the HCI approach. 'Cause there's a real ROI there. And when we talk to folks who are looking at new and different ways, we talk a lot about the cloud, and I think we've done a bit of that already, but then at the end of the day, folks are trying to figure out, well, okay, but then what do I buy to enable this? And what you buy is your standard server recipe. So think your favorite HPE, Lenovo, Supermicro, whatever your brand, and it's going to come enabled with this IoT endpoint within it — so it's really a smart server, if you will, that can then be controlled by our cloud. And so you're effectively buying, you know, from your favorite server vendor, a server option that is this endpoint, and a subscription. You don't buy any of this from us, by the way, it's all coming from them. And that's the way we deliver this. >> You know, sorry to get into the plumbing, but this is something we've been on, and a facet of it: is that silicon custom designed, or is it pretty much off the shelf? Do you guys add any value to it? >> No, there are off the shelf options that can deliver tremendous horsepower on that form factor. And so we take advantage of that to, you know, do what we do in terms of, you know, creating these sort of smart servers with our endpoint. And so that's where we're at. >> Yeah. Awesome. So guys, what's your sweet spot? You know, why are customers — you know, what are you seeing customers adopting? Maybe some examples you guys can share? >> Yeah, absolutely. So I think Tobias mentioned that because of the architectural approach, there's a lot of flexibility there — you can run virtualized, containerized, bare metal applications. The question is where are folks choosing to get started? And those use cases with our existing customers revolved heavily around virtualization modernization. So they're going back into their virtualized environment, whether their existing infrastructure is array-based or HCI-based.
And they're looking to streamline that, save money, automate more, the usual things. The second area is the distributed edge. You know, the edge is going through tremendous transformation with IOT devices, 5g, and trying to get processing closer to where customers are doing work. And so that distributed edge is a real opportunity because again, it's a more cost-effective, more dense infrastructure. The cloud effectively can manage across all of these sites through a single API. And then the third area is cloud service provider transformation. We do a fair bit of business with, you know, cloud service providers, CTOs, who are looking at trying to build top line growth, trying to create new services and, and drive better bottom line. And so this is really, you know, as much as a revenue opportunity for them as cost saving opportunity. And then the last one is this notion of, you know, bringing the cloud on-prem, we've done a cloud repatriation deal. And I know you've seen a little of that, but maybe not a lot of it. And, you know, I can tell you in our first deals, we've already seen it, so it's out there. Those are the places where people are getting started with us today. >> It's just interesting, you're right. I don't see a ton of it, but if I'm going to repatriate, I don't want to go backwards. I don't want to repatriate to legacy. So it actually does kind of make sense that I repatriate to essentially a component of on-prem cloud that's managed in the cloud, that makes sense to me to buy. But today you're managing from the cloud, you're managing on-prem infrastructure. Maybe you could show us a little leg, share a little roadmap, I mean, where are you guys headed from a product standpoint? >> Right, so I'm not going to go too far on the limb there, but obviously, right. So one of the key benefits of a cloud managed platform is this notion of a single API, right. We talked about the distributed edge where, you know, think of retailer that has, you know, thousands of stores, each store having local infrastructure. And, you know, if you think about the challenges that come with, you know, just administrating those systems, rolling out firmware updates, rolling out updates in general, monitoring those systems, et cetera. So having a single console, a cloud console to administrate all of that infrastructure, obviously, you know, the benefits are easy now. If you think about, if you're thinking about that and spin it further, right? So from the use cases and the types of users that we've see, and Craig talked about them at the very beginning, you can think about this as this is a hybrid world, right. Customers will have data that they'll have in the public cloud. They will have data and applications in their data centers and at the edge, obviously it is our objective to deliver the same experience that they gained from the public cloud on-prem, and eventually, you know, those two things can come closer together. Apart from that, we're constantly improving the data services. And as you mentioned, ARM is, is on a path that is becoming stronger and faster. So obviously we're going to leverage on that and build out our data storage services and become faster. But really the key thing that I'd like to, to mention all the time, and this is related to roadmap, but rather feature delivery, right? So the majority of what we do is in the cloud, our business logic in the cloud, the capabilities, the things that make infrastructure work are delivered in the cloud. 
And, you know, it's provided as a service. So compared with your Gmail, you know, your cloud services, one day, you don't have a feature, the next day you have a feature, so we're continuously rolling out new capabilities through our cloud. >> And that's about feature acceleration as opposed to technical debt, which is what you get with legacy features, feature creep. >> Absolutely. The other thing I would say too, is a big focus for us now is to help our customers more easily consume this new concept. And we've already got, you know, SDKs for things like Python and PowerShell and some of those things, but we've got, I think, nearly ready, an Ansible SDK. We're trying to help folks better kind of use case by use case, spin this stuff up within their organization, their infrastructure. Because again, part of our objective, we know that IT professionals have, you know, a lot of inertia when they're, you know, moving stuff around in their own data center. And we're aiming to make this, you know, a much simpler, more agile experience to deploy and grow over time. >> We've got to go, but Craig, quick company stats. Am I correct, you've raised just under 20 million. Where are you on funding? What's your head count today? >> I am going to plead the fifth on all of that. >> Oh, okay. Keep it stealth. Staying a little stealthy, I love it. Really excited for you. I love what you're doing. It's really starting to come into focus. And so congratulations. You know, you got a ways to go, but Tobias and Craig, appreciate you coming on The Cube today. And thank you for watching this Cube Conversation. This is Dave Vellante. We'll see you next time. (upbeat outro music)
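Tobias and Craig describe the operating model above: a single cloud API endpoint, a web console, template-based provisioning, and SDKs for Python, PowerShell, and (nearly ready) Ansible. The actual Nebulon API is not shown in this conversation, so the sketch below is a generic, hypothetical illustration of what automation against such a cloud control plane tends to look like — the base URL, paths, payload fields, and token handling are assumptions made for illustration, not Nebulon's real interface.

```python
# Hypothetical sketch only: provisioning through a cloud control plane's REST API.
# The endpoint, paths, and fields below are illustrative assumptions, not a
# vendor's actual API.
import requests

BASE = "https://cloud-control-plane.example.com/api/v1"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <API-TOKEN>"}          # placeholder token

# 1. Pick a template, akin to the marketplace-style, template-based
#    provisioning described above.
templates = requests.get(f"{BASE}/templates", headers=HEADERS).json()
template_id = next(t["id"] for t in templates if t["name"] == "vmware-cluster")

# 2. Ask the control plane to configure a group of smart servers (the managed
#    IoT-style endpoints) from that template, wherever they physically sit.
job = requests.post(
    f"{BASE}/provision",
    headers=HEADERS,
    json={
        "templateId": template_id,
        "serverSerials": ["SRV001", "SRV002", "SRV003", "SRV004"],  # placeholders
        "location": "edge-site-42",
    },
).json()

# 3. The same single API is used to watch the job -- data center or edge site,
#    the automation does not change.
status = requests.get(f"{BASE}/jobs/{job['id']}", headers=HEADERS).json()
print(status["state"])
```

Whether through raw REST calls like these or through the SDKs Craig mentions, the point of the architecture is the same: one API and one playbook for every cluster and every site, rather than per-cluster consoles and per-version automation.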
Kyle Hines, Presidio & Chuck Hoskin, Cherokee Nation | AWS Global Public Sector Partner Awards 2021
(upbeat music) >> Hello, and welcome to today's session of the 2021 AWS Global Public Sector Partner Awards. I'm delighted to present our special guests for today's program, and they are Kyle Hines, VP Strategic Accounts at Presidio, as well as Chief Chuck Hoskin Jr., chief of the Cherokee Nation. Welcome to the program, gentlemen. >> Thank you. >> Terrific, well, delighted to have you here. We're going to discuss the key award of best partner transformation, most impactful nonprofit partner, of course highlighting some of the technologies now being leveraged to help preserve the Cherokee language as well as its culture. Now, Chuck, I'd like to start with you, and if you could describe some of the challenges that the Cherokee Nation is now faced with in terms of preserving the language and its culture, and how you see technology being able to really help preserve it. >> Well, thank you, Natalie. It was really good to be with you all today. The Cherokee language and culture is what makes us unique as a people. It's the link that links us back to time immemorial, through generations. And over those generations, there've been many threats to our language and culture. There's been disease after European contact, there's been dispossession, there's been our forced removal on the Trail of Tears. Other pressures in more modern times have continued to erode our language and culture, including boarding schools and the public school system through most of the 20th century. As Cherokee Nation has gotten back on its feet — that is to say, when the government of the United States has allowed Cherokee Nation to do what we've always done well, which is to govern ourselves, chart our own destiny, and preserve our life ways — we've been able to make preservation efforts, but those generations of eroding our language and culture have come at steep costs. We're the largest tribe in the country, 392,000 citizens, and by the way, we're mostly in northeast Oklahoma, but we have Cherokees living all over the country, even all over the world. And we only have 2,000 fluent speakers left. So it's a great challenge to save a language that's truly endangered. And if we don't save it, generations from now we may do a number of things exceedingly well, as we do today — business, providing education and housing, creating a great healthcare system — but we will have lost that thing that makes us a unique people, that thing that links us back to our past. And so what we're doing today, working with great partners like Presidio, is just indispensable to what's really our most important mission. >> Yeah, terrific. Well, thank you so much for those insights. I'd like to switch it over to Kyle and hear about the technologies now being utilized to preserve the Cherokee language and culture. >> Sure, happy to, Natalie, and thanks for having us this morning. So yeah, when we started to work with the Cherokee Nation, it was very clear to us that there's obviously a higher power or a higher mission here. And so it's really been an honor to work with the chief and the nation, and what we've been able to do is take what the Cherokee Nation is trying to do in terms of language and cultural preservation and build solutions in really a very modern way. So between Inage’i, the 3D mobile open-world game, and the virtual classroom platform, it's entirely a cloud native serverless solution in AWS, using a lot of the most modern tools and technologies in the marketplace.
For example, in the mobile game, it's built around unity and the virtual classroom platform is built around the Amazon chime SDK, which allows us to really build something that is very clean and light and focused on what the nation is trying to achieve and really cut out a lot of the baggage and the other sort of plumbing and various other technologies that this would have, this type of solution would have taken just a few short years ago. >> Yeah, terrific. Well, Kyle, staying with you, what do you think were some of the factors behind the development of this solution? >> Yeah, so I think flexibility was key. Was maybe the biggest design goal in building these solutions because you learn a lot when you originally set out to build something and it starts to impact real users, and in this case, speakers of the Cherokee Nation, you learn a tremendous amount about the language and how it's used and how people communicate with each other. And so the main design goal of the solutions was to allow a sort of flexibility that lets us adapt. And every time we learn something and every time we find something that works or perhaps doesn't work quite as well as was imagined, we have the flexibility to change that and kind of stay nimble and on our toes. >> Terrific, well, Chuck, now switching over to you, why do you think that some of these, platforms like the virtual classroom are so effective with Cherokee speakers? >> Well, a couple of reasons, one pandemic related, during COVID the worst public health crisis the world seen in living memory, we have had to adapt quickly to continue on our mission to save this language. We couldn't afford a year off in terms of pairing speakers, by the way, most of our fluent speakers are over the age of 70, with young people who need to learn the language and be the new generation of speakers. So it's been really important that during those difficult times we could connect virtually and the technology we've been using has worked so effectively, but the other is really irrespective of what's going on in terms of having to isolate, and social distance and things of that nature during COVID, and that is just making sure we can make this language accessible, particularly to young people in a manner in which they are becoming accustomed to learning things throughout the rest of the world. And so using platforms that they're familiar with is very important but it also has to be something that an older generation of these fluent speakers, as I say most of them are over 70, can use. And that's what really has been so effective about this platform. It's so usable. Once you introduce it to people whether it's a young person who can adapt pretty quickly 'cause they're growing up immersed in it, or it's someone who has not been familiar with that technology, with just a little bit of showing them how to use it, suddenly this classroom becomes just like you're in person. And that makes all the difference in the world in terms of connecting these young people with their elders. As the other thing is Cherokees are by nature very much part of a big extended family. And so that personal connection that you can maintain through this platform is really important. 
I think it's going to be the key to how we save this language, because, as I say, we have Cherokees all over the country, even all over the world, and if we're going to harness our numbers, the large population we have, and find those with the interest and aptitude to learn the language, we must use this technology, and so far it's worked well. >> Yeah, terrific. And now switching over to Kyle, we'd love to hear from you how your team developed this technology, how they really thought out what kinds of methods are really going to drive the interaction and the immersion and engagement among these disparate demographics of elderly Cherokees and also the young generation. So, how did your team go about developing that? >> Yeah, it's a very good question, because in a situation like this, there is no shortage of different ways that you could have built a solution like this. There are a lot of different ways that it could have been done. So the tack that we took was a rigorous focus on the user experience and on the experience of the speaker. And that allowed us to detach ourselves to a large degree from what were the exact technology choices that were implemented in terms of AWS services or other open source packages that run on AWS. It's being able to focus completely on what the nation was trying to achieve with their speakers, both through the game and the virtual classroom platform. It let us take a lot of other design decisions and technology choices sort of into the background and behind a level of abstraction. And so there's always quite a bit of rigorous testing and really making sure you understand how something's going to perform in the wild, but the reality of the situation was, the whole reason for doing it was the experience of the speakers, both in the game and in the classroom platform. So we stayed very focused on that and made technology decisions sort of second fiddle or lower priority. >> Terrific. Well, Chuck, how do you think that these kinds of innovations could be applied to other areas of the Cherokee school system? >> Well, our greatest challenge is preserving language and culture, but we also have as part of our mission to educate this new generation of Cherokees coming up. For years and years, really generations, Cherokees who were able to get a good education, many of them left our tribal lands for new opportunities. And so we lost a great deal because of the economic pressures here in Northeast Oklahoma, particularly on our Cherokee lands. So the task now is to generate opportunity for a new generation coming up. Education is key to that, and so if we want to create a pipeline of young Cherokees who want to get into the healthcare fields, want to get into aerospace, want to get into other professions, we've got to create an education system that is steady and modern. We have a school that is K through 12th grade, kindergarten through the senior year, and so we have an opportunity really to do that. And I think for the first time in our history, in this era, I'm talking the last few decades, we are able to really craft education in a way that works for us, and using technology and making choices about what that technology is, is important to us. It's a bygone era in which the federal government or the state is sort of imposing on us what choices we make. Now we can reach out to great partners all over the world like Presidio and say, what solution can work for our classroom? We can identify what the great demands are on the reservation in terms of jobs.
And one of the great demands we have is healthcare. So how can we use technology to inspire little Cherokee boys and girls to grow up and be doctors and nurses here in just a few decades when we're building this great health system? Well, we're going to use technology to do it. So the possibilities are really unlimited, and they need to be, because we think our potential here in Cherokee Nation is unlimited. >> Yeah, I mean, that's terrific to hear how technology is really encouraging younger generations to study, learn and really push themselves further. Kyle, I'd like to switch over to you and hear a little bit about the benefits of launching this kind of platform on AWS. >> Yeah, there are a lot of benefits to building this on AWS. And I think that it spans a couple of categories, even. I mean, from a technological perspective, there was every tool and every service that we needed to build both of the solutions that we built right there in AWS. And when there was a time where we needed to jump out and use a project outside of AWS, running on AWS, such as the Unity engine, AWS makes that very easy. So I would say that the choice was easy because of the technological realities and the breadth and the depth of the technological portfolio in AWS, combined with the partnership that we get from them. It's really, you know, there's a lot of support when it comes to, hey, we're working with the Cherokee Nation on something that's extremely important, we need your help, we need you to help us figure this out. It's never been hard to get that partnership. >> Terrific. And also following up on that, I'd love to hear how AWS really helped with flexibility and also the cost-effectiveness of this kind of platform. >> Yeah, I would take those questions in reverse order, because the cost-effectiveness of the solution is really something to make note of. When we build something in the way that we built these platforms, they're serverless and event driven, meaning that the Cherokee Nation is not paying for a solution constantly, as we would have in lives past, running things in data centers and such. The services in AWS allow us to say, hey, let's spin up certain pieces of functionality when they're needed, as they're being used. And the meter is running during that time, and the cost is incurred during the time it's being used and not all of the time. So that really has a dramatic impact on cost-effectiveness. And then from a flexibility standpoint, as we learn new things, as we evolve the platform, as we grow this out to more and more speakers and to more and more impact for the Cherokee Nation, we have all kinds of different technology choices that we can make, and it's all contained within AWS. >> Yeah, and I'd like to open this now to both of you, starting with Chuck. How do you think this kind of technology could be applied to other cultures or languages that are seeking to preserve themselves? There are so many languages in the world that are now dying out because most of us are only speaking just a few, like English, Spanish, just a few others. What steps can be taken so that humanity can preserve these important languages? >> Well, you're right. There are so many endangered languages around the world, and indigenous languages are unfortunately dying all over the world all the time; even as we speak, they're slipping away. The United Nations has dedicated the next decade to the preservation of indigenous languages.
That's gotten many leaders around the world thinking about how we can save languages here in this era. And I would encourage any tribal leader, in particular in the United States, but I think it certainly applies around the world, to seek out this technology. I mean, Cherokee Nation's in a position now where we can seek out the best in the world in terms of partnerships. And we've found that in Presidio. And of course they're using AWS, which means they're using the best in the world, and so the technology exists, and the willingness to work together exists. And I think generations ago that would not have been something we could have connected well on, in terms of partnering with companies that were doing cutting-edge things. So if you're looking to connect generations in terms of learning and sharing the language, which is just, I cannot stress enough, indispensable to language preservation, this type of technology will do it. There are some, I think, that may think, and I don't have a technology background, that if you're using this cutting-edge technology, I mean the best in the world, you're going to speak only to this young generation coming up, and maybe it's inaccessible to an older generation. It's just not the case. This is so user-friendly that we've been able to connect elders with young people. And if anyone in the world interested in preserving languages could see this in action, could see a young person sitting next to an elder talking about the technology or connecting virtually, it would change their whole perspective on what technology means for language preservation, because I promise you, all over the world the great challenge is that you have this group of older generations of people who know the language. They have it in their hearts, they have it in their minds, and they're slipping away just from the passage of time. Connecting them with the generation coming up is just what we need to do. This technology allows us to do it. >> Yeah, Chuck, following up on that, when I hear about elderly people being able to connect with the younger generations in this way and share their history and their culture, I'm sure that also it must have a positive mental effect for them. Right, so the elderly are often isolated. Do you have any insight on that, any sense of what you've heard from people using this? >> Yeah, absolutely. And I think the last year has proven how valuable it is. I mean, we lost over 50 fluent Cherokee speakers, and I mentioned earlier in the program that we only have 2,000 left. Fifty to COVID, and more to just the passage of time and old age. But we have many that are active and engaged in language preservation, and they have said to me how valuable it's been to be able to be at home and yet still feel like they're part of this great mission that we have at the Cherokee Nation. Understand that this mission that we have is on par with what any nation in history has set as a goal to shoot for, whether it's the United States wanting to land a man on the moon; we're trying to save the language. This is that level of importance. And so for an elder to feel like they're connected and still contributing during this past year's difficult times, that makes all the difference in the world. And even, as I say, as the pandemic recedes, and we hope it continues to recede, there is still a need for elders to stay connected.
And in many cases they cannot, due to poor health, due to the lack of transportation. This knocks down those barriers, and so there's a great deal of joy that has been gained from using this technology. And honestly, just talking to elders about young people getting the opportunity to play this video game, even some elders that were voice actors in this game that Presidio helped us develop. I mean, I can't tell you how important that is, for somebody to use their language to make a living. And that's part of how you preserve a language. Presidio has shown us a way that we can do just that. So we're not only training new speakers, we're giving this opportunity in many cases to elders to do something that is very productive with the wonderful gift they have, which is the Cherokee language. >> Terrific, well, that is really inspiring, because potentially this technology could be utilized by generations to come. The current young people that are using this will one day be the elderly. So, Kyle, how do you see this technology, on this platform, potentially being evolved? What's the next step to keep it really up to date for future generations as it's evolving? >> Yeah, there are a lot of plans on where to take this, I can tell you, honestly. From the perspective of the mobile game, building on a platform of an open-world game means that the imagination is the limit, quite honestly. So there are a lot of new characters and new levels and new adventures that are planned to further immerse the speakers in the platform. And I think that will help with reach, and it will help with the amount of connection that's built, to the chief's point about bridging the older generations into the younger generations over that common bond of the language and the culture that keeps those connections alive. And so we want to expand the mobile game, Inage’i, to be as accessible and as wide reaching and immersive as it possibly can be, and there are a lot of plans in the works for that. And then with the virtual classroom platform, we started with a very focused constituency within the nation, the language immersion school. And there are many other educational services, and even healthcare, to the chief's earlier point again, where I think there's a lot of potential for that one as well. >> All right, well, terrific, gentlemen. Thank you so much for your insights, really fantastic hearing how this platform is really making a difference in the lives of people in the Cherokee Nation. Of course, those were our guests, Kyle Hines, VP of Strategic Accounts at Presidio, as well as Chuck Hoskin Jr., the Chief of the Cherokee Nation. And that's all for today's session at the 2021 AWS Global Public Sector Partner Awards. I'm your host for "theCUBE", Natalie Erlich. Thanks so much for watching. (upbeat music)
Jeff Boudreau, Dell Technologies | Dell Technologies World 2020
>> From around the globe, it's theCUBE, with digital coverage of Dell Technologies World, a digital experience brought to you by Dell Technologies. Hello, everyone, and welcome back to theCUBE's coverage of Dell Technologies World 2020. With me is Jeff Boudreau, the president and general manager of the Infrastructure Solutions Group at Dell Technologies. Jeff, always good to see you, my friend. How you doing? >> Good. Good to see you. >> I wish we were hanging out at a Sox game or a Pats game, but, uh, I guess this will do. But, you know, it was about a year ago when you took over leadership of ISG. We actually had that sort of brief conversation; you were in the room with Jeff Clarke. I thought it was a great, great choice. How you doing? How are you feeling? Any sort of key moments from the past 12 months that you feel like sharing? >> Sure. So first I want to say, I do remember that, about a year ago, so thank you for reminding me. Yeah, it's been a very interesting year, right? It's been one year; September was one year since I took over ISG. But I'm feeling great, so thank you for asking, I hope you're doing the same. And I'm really optimistic about where we are and where we're heading. As you know, it's been an extremely challenging year and a very unpredictable year, as we've all experienced. And I'd say for the first part of the year, especially starting in March, I've been really focused on the health and safety of, you know, the families, our customers and our team members, and a lot of it's been shifting, you know, in regards to helping our customers around, you know, work from home or educate and learn from home. And, you know, during all this time, though, I'll tell you, as a team we've accomplished a lot. There's a handful of things that I'm very proud of. First and foremost, it's around the customer experience: we have delivered our best quality and our best product NPS scores in our entire history, so that's something I'm extremely proud of during this time. Around our innovation engine, we refreshed the entire portfolio, which you're well aware of; we had nine launches in nine weeks back in that May and June timeframe, so that's something I'm really proud of the team on. Then last, I'd say it's around the team, right; we shifted about 90% of our workforce from the office to home, you know, for an engineering team. You know, 85% of my team is engineers writing code, and so, you know, people were concerned about that, but we didn't skip a beat, so, you know, I'm pretty impressed by the team and what they've done there. So, you know, the strategy remains unchanged. Uh, you know, we're focused on our customers, integrating across the entire portfolio and the businesses like VMware, and really focused on gaining share. So despite all the uncertainty in the market, I'm pretty pleased with the team and everything that's been going on. So, yeah, it's been an interesting year, but it's really great. I'm really optimistic about what we have in front of us.
You got product cycles now kicking in. So that could be, you know, a buffer. What are you seeing with Power Store and what's the uptake look like? They're >>sure. Well, specifically, let me take a step back and the regards the portfolio. So first, you know, the portfolio itself is a direct reflection in the feedback from all our partners and our customers over the last couple of years on Day two, ramp up that innovation. I spent a lot of time in the last few years simplifying under the power brands, which you're well aware of, right? So we had a lot of for a legacy EMC and Legacy dollars. Really? How do we simplify under a set of brands really over delivering innovation on a fewer set of products that really accelerating in exceeding customer needs? And we did that across the board. So from power edge servers, you know, power Max, the high end storage, the Powerball, all that we didn't hear one. And just most recently. And, you know, it's part of the big launches. We had power scale. We have power flex for software to find. And, of course, the new flagship offer for the mid range, which is power store. Um, Specifically, the policy of the momentum has been building since our launch back in May. And the feedback from our partners and our customers has been fantastic. And we've had a lot of big wins against, you know, a lot of a lot of our core competitors. A couple examples one is Arrow Electronics SAA, Fortune 500 Global Elektronik supplier. They leverage power Store to provide, you know, basically both, you know, enterprise computing and storage needs for their for their broader bases around the world on there, really taking advantage of the 41 data reduction, really helping them simplify their capacity planning and really improve operational efficiencies specifically without impacting performance. So it's it's one. We're given the data reductions, but there's no impact on performance, which is a huge value proffer for arrow another big customers tickets and write a global law firm on their reporting to us that over 90 they've had a 90% reduction in their rack space, and they've had over five times two performance over a core competitors storage systems azi. They've deployed power store around the world, really, and it's really been helping them. Thio easily migrate workloads across, so the feedback from the customers and partners has been extremely positive. Um, there really citing benefits around the architecture, the flexibility architecture around the micro services, the containers they're loving, the D M or integration. They're loving the height of the predictable data reduction capabilities in line with in line performance or no performance penalties with data efficiencies, the workload support, I'd say the other big things around the anytime upgrades is another big thing that customers we're really talking about so very excited and optimistic in regards as we continue to re empower store the second half of the year into next year really is the full full year for power store. >>So can I ask you about that? That in line data reduction with no performance hit is that new ipe? I mean, you're not doing some kind of batch data reduction, right? >>No, it's It's new, I p. It's all patented. We've actually done a lot of work in regards to our technologies. There's some of the things we talk about GPS and deep use and smart Knicks and things like that. We've used some offload engines to help with that. So between the software and the hardware, we've had leverage new I. P. 
So we can actually provide that predictable data reduction. But right with the performance customers need, So we're not gonna have a trade off in regards. You get more efficiencies and less performance or more performance and less efficiency. >>That's interesting. Yeah, when I talked to the chip guys, they talk about this sort of the storage offloads and other offloads we're seeing. These alternative processors really start to hit the market videos. The obvious one. But you're seeing others. Aziz. Well, you're really it sounds like you're taking advantage of that. >>Yeah, it's a huge benefit. I mean, we should, you know, with our partners, if it's Intel's and in videos and folks like that broad comes, it's really leveraging the great innovation that they do, plus our innovation. So if you know the sum of the parts, can you know equal Mauritz a benefit to our customers in the other day? That's what it's all about. >>So it sounds like Cove. It hasn't changed your strategy. I was talking toe Dennis Hoffman and he was saying, Look, you know, fundamentally, we're executing on the same strategy. You know, tactically, there's things that we do differently. But what's your summarize your strategy coming in tow 2021. You know, we're still early in this decade. What are you seeing is the trends that you're trying to take advantage of? What do you excited about? Maybe some things that keep you up at night? >>Yeah, so I'd say, you know, I'll stay with what Dennis said. You know, it's our strategy is not changing its a company. You probably got that from Michael and from job, obviously, Dennis just recently. But for me, it's a two pronged approach. One's all about winning the consolidation in the core infrastructure markets that we could just paid in today. So I think Service Storage Network, we're already clear leader across all those segments that we serve in our you know, we'll continue to innovate within our existing product categories. And you saw that with the nine launches in the nine weeks in my point on that one is we're gonna always make sure that we have best debris offers. If it's a three tier, two tier or converge or hyper converged offer, we wanna make sure that we serve that and have the best innovation possible. In addition to that, though, the secondary piece of the strategy really is around. How do we differentiate value across or innovating across I S G? You know, Dell Technologies and even the broader ecosystems and some of the examples I'll give you right now that we're doing is if you think about innovating across icy, that's all about providing improved customer experience, a set of solutions and offers that really helped simplify customer operations, right? And really give them better T CEOs or better. S L. A. An example of something like that's cloud like it's a SAS based off of that we have. That really helps provide great insights and telemetry to our customers. That helps them simplify their I T operations, and it's a major step forward towards, you know, autonomous infrastructure which is really what they're asking for. Customers of a very happy with the work we've done around Day one, you know, faster, time to value. But now it's like Day two and beyond. How do you really helped me Kinda accelerate the operations and really take that away from a three other big pieces innovating across all technologies. And you know, we do this with VM Ware now live today, and that's just writing. 
So things like VX rail is an example where we work together and where the clear leader in H C I. Things like Delta Cloud Uh, when we built in V M V C F A, B, M or cloud foundation in Tan Xue delivering an industry leading hybrid cloud platform just recently a VM world. I'm sure you heard about it, but Project Monterey was just announced, and that's an effort we're doing with VM Ware and some other partners. They're really about the next generation of infrastructure. Um, you know, I guess taking it up a notch out of the infrastructure and I've g phase, you know, some of the areas that we're gonna be looking at the end to end solutions to help our customers around six key areas. I'm sure John Rose talking about the past, but things like cloud Edge five g A i m l data management security. So those will be the big things. You'll see us lean into a Z strategies consistent. Some big themes that you'll see us lean into going into next year. >>Yeah, I mean, it is consistent, right? You guys have always tried to ride the waves, vector your portfolio into those waves and add value. I'm particularly impressed with your focus on customer experience, and I think that's a huge deal. You know, in the past, a lot of companies yours included your predecessor. You see, Hey, throwing so many products at me, I can't I don't understand the portfolio. So I mean, focusing on that I think is huge right now because people want that experience, you know, to be mawr cloudlike. And that's that's what you got to deliver. What about any news from from Dell Tech world? Any any announcements that you you wanna highlight that we could talk about? >>Sure. And actually, just touching back on the point you had no about the simplification that is a major 10 of my in regards the organization. So there's three key components that I drive once around customer focus, and that's keeping customers first and foremost. And everything we do to is around axillary that innovation. Engine three is really bringing everything together as one team. So we provide a better outcome to our customers. You know, in that simplification after that you talk about is court toe what we're driving. So I want to do less things, I guess better in the notion of how we do that. What that means to me is, as I make decisions that want to move away from other technologies and really leverage our best of breed type shared type, that's technology. I p people I p I can, you know, e can exceed customer needs in those markets that were serving. So it's actually allows me to x Sorry, my innovation engine, because I shift more and more resource is onto the newer stock now for Del Tech world. Yes, We got some cool stuff coming. You probably heard about a few of them. Uh, we're gonna be announcing a project project Apex. Hopefully you've been briefed on that already. This isn't new news or I'll be in trouble. But that's really around. Our strategy about delivering, simple, consistent as a service experiences for our customers bringing together are dealt technology as a service offering and our cloud strategy together. Onda also our technology offerings in our go to market all under a single unified effort, which Ellison do would be leading. Um, you know, on behalf of our executive leadership team s, that's one big area. And there is also another big one that I'll talk about a sui expand our as a service offers. And we think there's a big power to that in regards to our Dell Technologies. 
Cloud console solving will be launching a new cloud console that will provide uniformed experience across all the resources and give users and ability toe instantly managed every aspect of their cloud journey with just a few clicks. So going back to your broader point, it's all about simplicity. >>Yeah, we definitely all over Apex. That's something I wanted to ask you about this notion of as a service, really requiring it could have a new mindset, certainly from a pricing and how you talk about the customer experience that it's a whole new customer experience. Your you're basically giving them access. Thio What I would consider more of a platform on giving them some greater flexibility. Yeah, there's some constraints in there, but of course, you know the physical only put so much capacity and before him. But the idea of being ableto dial up, dial down within certain commitments is, I think, a powerful one. How does it change the way in which you you think about how you go about developing products just in terms of you know, this AP economy Infrastructure is code. How how you converse about those products internally and externally. How would you see that shaking >>out Dave? That's an awesome question. And it's actually for its front center. For everything we do, obviously, customers one choice and flexibility what they do. And to your point as we evolved warm or as a service, no specific product and product brands and logos on probably the way of the future. It's the services. It's the experience that you provide in regards to how we do that. So if you think about me, you know, in in infrastructure making infrastructure as a service, you really want to define what that customer experiences. That s L. A. That they're trying toe realize. And then how do we make sure that we build the right solutions? Products feature functions to enable that a law that goes back to the core engineering stuff that we need to dio right now, a lot of that stuff is about making sure that we have the right things around. If it's around developer community. If it's around AP rich, it's around. SdK is it's all about how do we leverage if it's internal source or external open source, if you will. It's regards to How do we do that? No. A thing that I think we all you know what you're well aware but we ought to keep in mind is that the cloud native applications are really relevant. Toe both the on premises, wealthy off premise. So think about things around portability reusability. You know, those are some great examples of just kind of how we think about this as we go forward. But those modern applications were required modern infrastructure, and regardless of how that infrastructure is abstracted now, just think about things like this. Aggregation or compose ability or Internet based computing. It's just it's a huge trend that we have to make sure we're thinking of. So is we. We just aggregate between the physical layers to the software layers and how we provide that to a service that could be think of a modern container based asset that could be repurposed. Either could be on a purpose built thing. It could be deployed in a converge or hyper converged. Or it could be two points a software feature in a cloud. Now, that's really how we're thinking about that, regards that we go forward. So we're talking about building modern assets or components That could be you right once we used many type model, and we can deploy that wherever you want because of some of the abstraction of desegregation that we're gonna do. 
>>E could see customers in the in the near term saying, I don't care so much about the product. I want the fast one all right with the cheaper one e. >>It's kind of what you talking about, that I talked about the ways. If you think about that regards, you know, maybe it's on a specific brand or portfolio. You look into and you say, Hey, what's the service level that I'd wanted to your point like Hey, for compute or for storage, it's really gonna end up being the specific S l A. And that's around performance or Leighton see, or cost or resiliency they want. They want that experience in that that you know, And that's why they're gonna be looking for the end of the end state. That's what we have to deliver is an engineering. >>So there's an opportunity here for you guys that I wonder if you could comment on. And that's the storage admin E. M. C essentially created. You know, you get this army of people that you know pretty good of provisioning lungs, although that's not really that's a great career path for folks. But program ability is, and this notion of infrastructure is code as you as you make your systems more programmable. Is there a skill set opportunity to take that army of constituents that you guys helped train and grow and over their careers and bring them along into sort of the next decade? This new era? >>I think the the easy answer is yes, I obviously that's a hard thing to do and you go forward. But I think embracing the change in the evolution of change, I think is a great opportunity. And I think there is e mean if you look step back and you think about data management, right? And you think about all the you know all data is not created equal and you know, and it has a life cycle, if you will. And so if it's on edge to Korda, Cloward, depending think about data vaults and data mobility and all that stuff. There's gonna be a bunch of different personas and people touching data along the way. I think the I T advance and the storage admin. They're just one of those personas that we have to help serve and way talk about How do we make them heroes, if you will, in regards to their broader environment. So if they're providing, if they evolve and really helped provide a modern infrastructure that really enables, you know infrastructure is a code or infrastructure as a service, they become a nightie hero, if you will for the rest of team. So I think there's a huge opportunity for them to evolve as the technology evolves. >>Yeah, you talked about you know, your families, your employees, your team s o. You obviously focused on them. You got your products going hitting all the marks. How are you spending your time these days? >>Thes days right now? Well, we're in. We're in our cycle for fiscal 22 planning. Right? And right now, a lot of that's above the specific markets were serving. It's gonna be about the strategy and making sure that we have people focused on those things. So it really comes back to some of the strategy tents were driving for next year. Now, as I said, our focus big time. Well, I guess for the for this year is one is consolidation of the core markets. Major focus for May 2 is going to be around winning in storage, and I want to be very specific. It's winning midrange storage. And that was one of the big reasons why Power Store came. That's gonna be a big focus on Bennett's really making sure that we're delivering on the as a service stuff that we just talked about in regards to all the technology innovation that's required to really provide the customer experience. 
And then, lastly, it's making sure that we take advantage of some of these growth factors. So you're going to see a dentist. Probably talked a lot about Telco, but telco on edge and as a service and cloud those things, they're just gonna be key to everything I do. So if you think about from poor infrastructure to some of these emerging opportunities Z, I'm spending all my time. >>Well, it's a It's a big business and a really important one for Fidel. Jeff Boudreau. Thanks so much for coming back in the Cube. Really a pleasure seeing you. I hope we can see each other face to face soon. >>You too. Thank you for having me. >>You're very welcome. And thank you for watching everybody keep it right there. This is Dave Volonte for the Cube. Our continuing coverage of Del Tech World 2020. We'll be right back right after this short break
Jill Rouleau, Brad Thornton & Adam Miller, Red Hat | AnsibleFest 2020
>> (soft upbeat music) >> Announcer: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello, welcome to theCUBE's coverage of AnsibleFest 2020. We're not in person, we're virtual. I'm John Furrier, your host of theCUBE. We've got a great power panel here of Red Hat engineers. We have Brad Thornton, Senior Principal Software Engineer for Ansible networking; Adam Miller, Senior Principal Software Engineer for Security; and Jill Rouleau, who's the Senior Software Engineer for Ansible Cloud. Thanks for joining me today. Appreciate it. Thanks for coming on. >> Thanks. >> Good to be here. >> We're not in person this year because of COVID, a lot going on, but still a lot of great news coming out of AnsibleFest this year. You guys have launched a lot since last year. It's been awesome. Launched the new Automation Platform, grown the collections, certified collections community from five supported platforms to over 50, launched the automation services catalog. Brad, let's start with you. Why are customers successful with Ansible in networking? >> Why are customers successful with Ansible in networking? Well, let's take a step back to a bit of classic network engineering, right? Lots of CLI interaction with the terminal, a real opportunity for human error there. Managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space, and why a lot of network engineers find it very easy to use, is because you can still see it at the CLI. But what we have the ability to do is pull information from the same CLI that you were using manually and show that as structured data, and then let you take that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It brings consistency and stability, and speed, really, to network configuration management.
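As a rough, hedged sketch of the CLI-to-structured-data flow Brad describes (assuming the ansible.utils and ansible.netcommon collections are installed; the inventory group and command are placeholders, not taken from the panel), a playbook might look like this:

```yaml
---
- name: Turn CLI output into structured data on network devices
  hosts: ios_switches          # hypothetical inventory group
  gather_facts: false
  tasks:
    - name: Run "show interfaces" and parse the output
      ansible.utils.cli_parse:
        command: show interfaces
        parser:
          name: ansible.netcommon.native   # assumes a native parser template exists for this platform
        set_fact: interface_facts

    - name: Inspect the structured result
      ansible.builtin.debug:
        var: interface_facts
```

The same structured facts could then feed templated configuration pushed back to the device, which is the round trip described above.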
But if we step back Cloud just means any sort of distributed applications, whether it's on prem in your own data center, on the edge, in a public hosted environment, and automation is critical for making those things work, when you have these complex applications that are distributed across, whether it's a rack, a data center or globally. You need a tool that can help you make sense of all of that. You've got to... We can't manage things just with, Oh, everything is on one box anymore. Cloud really just means that things have been exploded out and broken up into a bunch of different pieces. And there's now a lot more architectural complexity, no matter where you're running that. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up without having to reinvent the wheel of how you think about these applications. Just because you're putting them in a new environment, like at the edge or in a public Cloud or on a new, private on premise solution. >> It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID, we're seeing a lot of more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform and how does Ansible compare to that? Because you guys are very popular in the cloud configuration, you look at cloud native, Jill, your thoughts. >> Yeah. So Terraform and tools like that. Things like cloud formation or heat in the OpenStack world, they do really, really great at things like deploying your apps and setting up your stack and getting them out there. And they're really focused on that problem space, which is a hard problem space that they do a fantastic job with where Ansible tends to come in and a tool like Ansible is what do you do on day two with that application? How do you run an update? How do you manage it in the longterm of something like 60% of the workloads or cloud spend at least on AWS is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them, once they're in a stack, whether you're managing it, whatever tool you're managing it with, Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update or add a package or reconfigure a service that's running on there. And I think you can glue these things together and use Ansible with these other stack deployment based tools really, really effectively. >> Real quick, just a quick followup on that. what's the big pain point for developers right now when they're looking at these tools? Because they see the path, what are some of the pain points that they're living right now that they're trying to overcome? >> I think one of the problems kind of coincidentally is we have so many tools. We're in kind of a tool explosion in the cloud space, right now. You could piece together as as many tools to manage your stack, as you have components in your stack and just making sense of what that landscape looks like right now and figuring out what are the right tools for the job I'm trying to do, that can be flexible and that are not going to box me into having to spend half of my engineering time, just managing my tools and making sense of all of that is a significant effort and job on its own. 
>> Yes, too many may add, would choke in years ago in the big data search, the tools, the tool train, one we call the tool shed, after a while, you don't know what's in the back, what you're using every day. People get comfortable with the right tools, but the platform becomes a big part of that thinking holistically as a system. And Adam, this comes back to security. There's more tools in the security space than ever before. Talking about tool challenges, security is the biggest tool shed everyone's got tools they'd buy everything, but you got to look at, what a platform looks like and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What's your thoughts on the source of truth when it comes into play for these security appliances? >> So these are... Source of truth piece is kind of an interesting one because this is going to be very dependent on the organization. What type of brownfield environment they've developed, what type of things that they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tagged information on a series of cloud instances or series about resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privilege access management system, such as, CyberArk or hashivault. Like those are the things and because of Ansible flexibility and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a preexisting infrastructure, as well as new decisions that are being made for the enterprise as I move forward. And, and we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint loosely coupled, or complete disc couple pieces, together. And that's kind of part of that, that security posture, remediation various levels of introspection into your environment, these types of things, as we go forward, and that's kind of what we're focusing on doing with this. >> What kind of data is stored in the source of truth? >> I mean... So what type of data? This could be credential. It could be single use credential access. This could be your inventory data for your systems, what target systems you're trying to do. It could be, various attributes of different systems to be able to classify them ,and codify them in different ways. It's kind of kind of depending, be configuration data. You know, we have the ability with some of the work that Brad and his team are doing to actually take unstructured data, make it structured, bullet into whatever your chosen source of truth is, store it, and then utilize that to, kind of decompose it into different vendors, specific syntax representations and those types of things. So we have a lot of different capability there as well. >> Brad, you were mentioned, do you have a talk on parsing, can you elaborate on that? And why should network operators care about that? >> Yeah, welcome to 2020. We're still parsing network configuration and operational state. This is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing with Ansible network configurations? I would have said, "Well, I certainly hope not. 
"I hope programmability of network devices and the vendors "really have their API's in order." But I think what we're seeing is network containers are still comfortable with the command line. They're still very familiar with the command line and when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author brand new parking and parsing engine ourselves, but really leverage a lot of the open source tools that are already out there bringing them into Ansible, so network engineers can now harvest the critical information from usher operational state commands on their network devices. And then once they've gotten to the structure data, things get really interesting because now you can do entrance criteria checks prior to doing configuration changes, right? So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are, for example before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line. Use that within a decision tree in your Ansible playbook, and only move forward when the configuration changes. If the box is healthy. And then once the configuration changes are made at the end, you run those same health checks to ensure that you're in a speck can do a steady state and are production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. >> If I had to ask you real quick, just while it's on my mind. You know, people want to know about automation. It's top of mind use case. What are some of these things around automation and configuration parsing, whether it's parsing to other configuration manager, what are the big challenges around automation? Because it's the Holy grail. Everyone wants it now. What are the couches? where's the hotspots that needs to be jumped on and managed carefully? Or the easiest low hanging fruit? >> Well, there's really two pieces to it, right? There's the technology. And then there's the culture. And, and we talk really about a culture of automation, bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place. On the technology side, low hanging fruit automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration. And I think over time, network engineers will become data managers, right? Because they become less concerned about the network, the vendors specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation because you can move forward with Ansible resource modules. You're well positioned to do NETCONF for RESTCONF or... Right once you've kind of grown to that it's the data that we need to be concerned about and it could fit (indistinct) and the operational state management piece, you're going to go through a transformation on the networking side. 
>> So you mentioned-- >> And one thing to note there, if I may, I feel like a piece of this too, is you're able to actually bridge teams because of the capability of Ansible, the breadth of technologies that we've had integrations with and our ability to actually bridge that gap between different technologies, different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful. And I like to try to preach when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. >> That's a great point. DevSecOps is totally a hop on. When you have software and hardware, it becomes interesting. There's a variety of different equipment, on the security automation. What kind of security appliances can you guys automate? >> As of today, we are able to do endpoint management systems, enterprise firewalls, security information, and event management systems. We're able to do security orchestration, automation, remediation systems, privileged access management systems. We're doing some threat intelligence platforms. And we've recently added to the I'm sorry, did I say intrusion detection? We have intrusion detection and prevention, and we recently added endpoint security management. >> Huge, huge value there. And I think everyone's wants that. Jill, I've got to ask you about the Cloud because the modules came up. What use cases do you see the Ansible modules in for the public cloud? Because you got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multicloud horizon here. What's some of the use cases where you see those Ansible modules fitting well with public level. >> The modules that we have in public cloud can work across all of those things, you know. In our public clouds, we have support for Amazon web services, Azure GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMI, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, you can now pull that back down into Ansible, build an inventory from that. And seamlessly then use Ansible to manage those instances, whether they're running Linux or windows or whatever distro you might have them running, we can go straight from having deployed all of those services and resources to managing them and going between your instances in your traditional operating system management or those instances and your cloud services. And if you've got multiple clouds or if you still have on prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer, security endpoint, we can go between all of those things and glue everything together, fairly seamlessly. You can put all of that into tower and have one kind of view of your cloud and your hardware and your on-prem and being able to move things between them. >> Just put some color commentary on what that means for the customer in terms of, is it pain reduction, time savings? How would you classify their value? >> I mean, both. Instead of having to go between a number of different tools and say, "Oh, well for my on-prem, I have to use this. 
"But as soon as I shift over to a cloud, "I have to use these tools. "And, Oh, I can't manage my Linux instances with this tool "that only knows how to speak to, the EC2 to API." You can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's, that's pretty killer. >> All right. Now I get to the fun part. I want you guys to weigh in on the Kubernetes. Adam, if you can start with you, we'll start with you go in and tell us why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meet around Kubernetes what's going on? >> I think the big thing is the modernization of the application development delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architecture, you can build a lot of the tooling that you used to have to maintain, to be able to deliver sophisticated resilient architectures in your application stack, are now baked into the actual platform, so the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and kind of achieve what the cloud native compute foundation keeps calling cloud native applications and the ability to develop and do this in a way that you are able to take yourself out of some of the components you used to have to babysit a lot. And that becomes in also with the OpenShift operator framework that came out of originally Coral S, and if you go to operator hub, you're able to see these full lifecycle management stacks of infrastructure components that you don't... You no longer have to actually, maintain a large portion of what you start to do. And so the operator SDK itself, are actually developing these operators. Ansible is one of the automation capabilities. So there's currently three supported there's Ansible, there's one that you just have full access to the Golang API and then helm charts. So Ansible's specifically obviously being where we focus. We have our collection content for the... carries that core, and then also ReHat to OpenShift certified collection's coming out in, I think, a month or so. Don't hold me to the timeline. I'm shoving in trouble for that one, but we have those things going to come out. Those will be baked into the operator's decay that we fully supported by our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to container native of the infrastructure components that you want to put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure, along with your more container native, futuristic next generation, net structure. >> Jill this brings up the question. Why don't you just use native public cloud resources versus Kubernetes and Ansible? What's the... What should people know about where you use that, those resources? >> Well, and it's kind of what Adam was saying with all of those brownfield deployments and to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud. There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. 
And with OpenShift, it's one more layer that lets you put everything into a kind of single environment instead of having to break things up and say, "Oh, well, this application has to go here. "And this application has to be in this environment.' You can do that across a public cloud and use a little of this component and a little of that component. But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of, I need to care about all of these things and look at all of these different things and keep track of these and are my tools all going to work together and are my tools secure? Anytime you can simplify that part of your infrastructure, I think is a big win. >> John: You know, I think about-- >> The one thing, if I may, Jill spoke to this, I think in the way that a architectural, infrastructure person would, but I want to try to really quick take the business analyst component of it as the hybrid component. If you're trying to address multiple footprints, both on prem, off prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint for everywhere. So I don't disagree with anything they said, I just wanted to focus specifically on... That piece is something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. >> Well, speaking of past lives-- >> Having me as an infrastructure person, thank you. >> Yeah. >> Well, speaking of past lives, OpenStack, you look at Jill with OpenStack, we've been covering the Cuba thing when OpenStack was rolling out back in the day, but you can also have private cloud. Where you used to... There's a lot of private cloud out there. How do you talk about that? How do people understand using public cloud versus the private cloud aspect of Ansible? >> Yeah, and I think there is still a lot of private cloud out there and I don't think that's a bad thing. I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud. So you still have a lot of these on-prem open shift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together. We have Ansible that can support both of those. If you're a telco, you're not going to put your network function, virtualization on USC as to one in spot instances, right? When you call nine one one, you don't want that going through the public cloud. You want that to be on dedicated infrastructure, that's reliable and well-managed and engineered for that use case. So I think we're going to see a lot of ongoing OpenStack and on-prem OpenShift, especially with edge, enabling those types of use cases for a long time. And I think that's great. >> I totally agree with you. I think private cloud is not a bad thing at all. Things that are only going to accelerate my opinion. You look at the VM world, they talked about the telco cloud and you mentioned edge when five G comes out, you're going to have basically have private clouds everywhere, I guess, in my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware module real quick? >> Yeah, so we have a new collection that we'll be debuting at Ansible Fest this year bore the VMware REST API. 
So the existing VMware modules that we have use the SOAP API for VMware, and they rely on an external Python library that VMware provides, but with vSphere 6.0 and especially in vSphere 6.5, VMware has stepped up with a REST API endpoint that we find is a lot more performant and offers a lot of options. So we built a new collection of VMware modules that will take advantage of that. That's brand new, it's lighter weight. It's much faster, we'll get better performance out of it. You know, reduced external requirements. You can install it and get started faster. And especially with vSphere 7 continuing to build on this REST API, we're going to see more and more interfaces being exposed so that we can take advantage of them. We plan to expand it as new interfaces are being exposed in that API, and it's compatible with all of the existing modules. You can go back and forth, use your existing playbooks and start introducing these. But I think especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds, where you have these private clouds in lots and lots of different places, the performance benefits of this new collection that we're trying to build are going to be really, really powerful for a lot of folks. >> Awesome. Brad, we didn't forget about you. We're going to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? >> Yeah. Resource modules, excuse me. Probably, having been a network engineer for so long, I think it's some of the most exciting work that has gone into Ansible network over the past year and a half. What the resource modules really do for you is they will reach out to network devices. They will pull back that network native, that vendor native configuration. The resource module actually does the parsing for you, so there's none of that manual work with the resource modules. And we return structured data back to the user that represents the configuration. Going back to your question about source of truth. You can take that structured data, maybe for your interface config, your OSPF config, your access list config, and you can store that data in your source of truth. And then where you are moving forward is you really spend time, as an engineer, managing the data that makes up the configuration, and you can share that data across different platforms. So if you were to look at a lot of the resource modules, the data model that they support is fairly consistent between vendors. As an example, I can pull OSPF configuration from one vendor and, with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent, because there's functionality that exists in one platform that doesn't exist in another, and that's exposed through the configuration, but where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management and not through CLI commands anymore. >> Yeah, that's a great point. It just expands the network automation vision. And one of the things that's interesting here in this panel is you're talking about cloud holistically, public multicloud, private hybrid, security, network automation as a platform, not just a tool. We're still going to have all kinds of tools out there.
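Brad's point about managing configuration as data rather than CLI commands is easier to see with a small sketch. The data model below is invented for illustration; the actual resource modules define and validate their own argument specs per platform, but the idea of one vendor-neutral structure rendered into device syntax is the same.

```python
# Toy example: one vendor-neutral description of OSPF, rendered into CLI lines.
# The schema here is made up for illustration; Ansible's resource modules define
# their own argument specs for each supported platform.
ospf_intent = {
    "process_id": 1,
    "router_id": "10.0.0.1",
    "areas": [
        {"area": "0", "networks": ["10.0.0.0/24"]},
        {"area": "1", "networks": ["10.0.1.0/24"]},
    ],
}

def render_ios(ospf: dict) -> str:
    """Render the neutral model into IOS-style lines (simplified: no wildcard masks)."""
    lines = [f"router ospf {ospf['process_id']}", f" router-id {ospf['router_id']}"]
    for area in ospf["areas"]:
        for net in area["networks"]:
            lines.append(f" network {net} area {area['area']}")
    return "\n".join(lines)

# The same intent, kept in a Git-backed source of truth, can be rendered for a
# different vendor by swapping the renderer, which is the normalization payoff.
print(render_ios(ospf_intent))
```

In practice the resource modules also go the other direction: they read the running configuration back into this kind of structure, which is what makes diffing a device against the source of truth possible.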
And then the importance of automating the edge. I mean, that's a network game, Brad. I mean, it's a data problem, right? I mean, we all know about networking, moving packets from here to there, but automating the data is critical, and if you have bad data, you don't have... If you have misinformation, it sounds like our current politics, but you know, bad information is bad automation. I mean, what's your thoughts? How do you share that concept with developers out there? What should they be thinking about in terms of the data quality? >> I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now from resource modules, you can get the data from NETCONF, from RESTCONF, you can get it from OpenConfig, you can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations and the consistency of the data that you're using to look at operational state? And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use that as your source of truth for network configuration. So network engineers are actually becoming developers in the sense that they're using GitOps workflows to manage configuration moving forward. It's just really exciting to see that transformation happen. >> Great panel. Thanks for everyone coming on, I appreciate it. We'll just end this by saying, if you guys could just quickly summarize AnsibleFest 2020 virtual, what should people walk away with? What should your customers walk away with this year? What are the key points? Jill, we'll start with you. >> Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different, interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves. By bringing those problems out into the open and working together, we get a lot done. >> Awesome, Brad? >> I'm going to go with collections, collections, collections. We introduced them last year. This year, they are real. Ansible 2.10, which just came out, is made up of collections. We've got certified collections on automation. We've got cloud collections, network collections. So they are here. They're the real thing. And I think it just gets better and deeper and more content moving forward. All right, Adam? >> Going last is difficult. Especially following these two. They covered a lot of ground and I don't really know that I have much to add beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible. It's very pluggable, which is a credit to the collections, and the solutions that we can come up with collectively, thanks to ourselves and everybody in the community, are almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips Hue light bulbs. Like this is what we're capable of. We can automate the Fortune 500 data centers and telco networks. And then we can also automate random IoT devices around your house. Like we have a lot of capability here and what we can do with the platform is very unique and something special.
And it's very much thanks to the community, the team, the open source development way. I just, yeah-- >> (Indistinct) open source, source of truth, being collaborative, that's what it all makes up, with DevOps and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. >> Thank you. I'm John Furrier, you're watching theCUBE here for AnsibleFest 2020 virtual. Thanks for watching. (soft upbeat music)
Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020
>> Announcer: From around the globe, it's the queue with digital coverage of DockerCon live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We got over 50,000 people registered so there's clearly a ton of interest in the world of Docker and Eddie's as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shawn Conley, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks and I'm broadcasting from my hometown of Philly. Our panelists include; Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia. We have Kelsey Hightower, Principal developer advocate at Google Cloud, joining us from Washington State and we have Chris Aniszczyk, CTO CIO at the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Docker netease wave since the beginning with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed from both community and a tech perspective? Just curious to get things sized up? >> Sure, when CNCF started about roughly four, over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring communities with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes. So things like Uvert and Shay runs VMs on Kubernetes, which is crazy, not just containers. You have folks that, Microsoft experimenting with a project called Kruslet which is trying to run web assembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security and different ways to run different types of workloads. And at least that's kind of what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting in projects in, is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now and Michelle could speak a little bit more to this being on the DOC, the pipeline for new projects is quite extensive and it covers all sorts of spaces from two service meshes to security projects and so on. So it's ever so expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, Let's head to you. But before we actually dive in, let's talk a little glory days. A rumor has it that you are the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are, my speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I wasn't captain 'cause I wasn't really great at anything else apart from constantly cheer on the team. >> A little better than my eighth grade Spelling Champ Award so I think I'd rather have the kickball. 
But you've definitely, spent a lot of time leading an Open Source, you've been across many projects for many years. So how does the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency. Just come in and tell people what you really need to do and clearly articulate your problem, more clearly articulate your problem and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to and clearly communicating those objectives. >> Yeah, and the various foundations, I think one of the things that I've seen, particularly Apache Software Foundation and others is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute, claiming victory in one way or the other might make for interesting marketing drama. But, I think that's a little bit of the challenge. In some of the, standards-based work you're doing I know with CNI and some other things, are they similar, are they different? How would you compare and contrast into something a little more structured like CNCF? >> Yeah, so most of what I do is in the CNCF, but there's specs and there's projects. I think what CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes are clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments. Because if you're in a project, you have to say, "Okay, I want this feature or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box and like, what framework do you want to work in? You have to think a little farther ahead in terms of is this solution or this decision we're going to make going to last for the next how many years? You have to get more of a buy in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you have this really solid, standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of so-- >> Gotcha. So I'm Kelsey, we'll head over to you as your focus is on, developer advocate, you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, Cloud Service primitives, including container services, server-less platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice? 
I think minimalism I hear you talk about that periodically, I know you're a fan of that. How do you pass that on and your developer advocacy in your day to day work? >> Yeah, I think, for most developers, most of this is not really the top of mind for them, is something you may see a post on Hacker News, and you might double click into it. Maybe someone on your team brought one of these tools in and maybe it leaks up into your workflow so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with log in, they help them run their application. But for most people, this isn't the core idea of the job for them. For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source, some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an ion like Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and maybe you study it to see if that if you have a problem. And for most people, if you don't have that problem that that tool solves, you should be happy. No one needs every project and I think that's where the foundation for confusion. So my main job is to help people not get stuck and confused in LAN and just be pragmatic and just use the tools that work for 'em. >> Yeah, and you've spent the last little while in the server-less space really diving into that area, compare and contrast, I guess, what you found there, minimalist approach, who are you speaking to from a server-less perspective versus that of the broader CNCF? >> The thing that really pushed me over, I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like geo cities from the 90s but look, she's making website. And she wanted her friends to take a look. So she copied and paste from her browser 127.0.0.1 and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down "and teach my daughter about Linux "and Docker and Kubernetes." That isn't her main goal, her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, Firebase deploy. And our site was up in a few minutes, and she sent it over to her friend and there you go, she was off and running. The whole server-less movement has that philosophy as one of the stated goal that needs to be the workflow. So, I think server-less is starting to get closer and closer, you start to see us talk about and Chris mentioned this earlier, we're moving up the stack. Where we're going to up the stack, the North Star there is feel where you get the focus on what you're doing, and not necessarily how to do it underneath. 
And I think server-less is not quite there yet for every type of workload: stateless web apps, check; event driven workflows, check; but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run, so there's still work to do there. So server-less, for me, serves as the North Star for why all these projects exist, for people that may have to roll their own platform to provide that experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's a ton of individual projects, each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the service mesh interface, standardizing interfaces so things can snap together more easily, I think, are some of the approaches, but are you doing anything specifically to encourage this cross fertilization, collaboration, and pluggability, because there's just a ton of projects, not only at the CNCF but outside the CNCF, that need to plug in? >> Yeah, I mean, a lot of this happens organically. CNCF really provides the neutral home where companies, competitors, could trust each other to build interesting technology. We don't force integration or collaboration, it happens on its own. We essentially allow the market to decide what a successful project is long term or what an integration is. We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called OpenTracing, and an effort called OpenCensus, which were basically trying to standardize how you're going to deal with metrics, tracing and so on in a cloud native world, and they were essentially competing with each other. The CNCF TOC and community came together and merged those projects into one parent effort called OpenTelemetry, and so that to me is a case study of how our committee helps bridge things. But we don't force things, we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job, writing code, delivering code. What's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the (mumbles) project was to make it easier to deploy an application on Kubernetes so that you could see what the finished product looks like, and then dig into all of the things that that application is composed of, all the resources. We've been really passionate about this kind of stuff for a while now. And I love seeing projects that come into the space that have this same goal and just iterate and make things easier. I think we have a ways to go still, I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code.
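The OpenTelemetry merge Chris describes is easier to picture with its API in view. This is a minimal tracing sketch using the console exporter; the instrumentation name and span names are placeholders, and a real deployment would export to a collector rather than the console.

```python
# Minimal tracing sketch with OpenTelemetry, the project that unified
# OpenTracing and OpenCensus: create a tracer, wrap some work in spans,
# and print the spans locally.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-service")  # placeholder instrumentation name

with tracer.start_as_current_span("handle-request"):
    with tracer.start_as_current_span("query-database"):
        pass  # the traced work would go here
```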
So one of the projects that I really like working with is Tilt gives you this dashboard in your CLI, aggregates all your logs from your applications, And it kind of watches your application changes, and reconfigures those changes in Kubernetes so you can see what's going on, it'll catch errors, anything with a dashboard I love these days. So Yali is like a metrics dashboard that's integrated with STL, a service graph of your service mesh, and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest them has been really exciting to see. And I'm seeing more and more dashboards come to light and I'm very excited about that. >> Yeah, as part of the DockerCon just as a person who will be attending some of the sessions, I'm really looking forward to see where DockerCompose is going, I know they opened up the spec to broader input. I think your point, the good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned and I'm glad you bring that up. I want to see programming language libraries, integrate with the Compose spec. I really want to see what happens with that I think is great that they open that up and made that a spec because obviously people really like using Compose. >> Excellent. So Kelsey, I'd be remiss if I didn't touch on your January post on changelog entitled, "Monoliths are the Future." Your post actually really resonated with me. My son works for a software company in Austin, Texas. So your hometown there, Chris. >> Yeah. >> Shout out to Will and the chorus team. His development work focuses on adding modern features via micro services as extensions to the core monolith that the company was founded on. So just share some thoughts on monoliths, micro services. And also, what's deliverance dopamine from your perspective more broadly, but people usually phrase as monoliths versus micro services, but I get the sense you don't believe it's either or. >> Yeah, I think most companies from the pragmatic so one of their argument is one of pragmatism. Most companies have trouble designing any app, monolith, deployable or microservices architecture. And then these things evolve over time. Unless you're really careful, it's really hard to know how to slice these things. So taking an idea or a problem and just knowing how to perfectly compartmentalize it into individual deployable component, that's hard for even the best people to do. And double down knowing the actual solution to the particular problem. A lot of problems people are solving they're solving for the first time. It's really interesting, our industry in general, a lot of people who work in it have never solved the particular problem that they're trying to solve for the first time. So that's interesting. The other part there is that most of these tools that are here to help are really only at the infrastructure layer. We're talking freeways and bridges and toll bridges, but there's nothing that happens in the actual developer space right there in memory. 
So the libraries that interface to the structure logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID? A lot of those things are still left for developers to figure out on their own. So while we have things like the brunettes and fluid D, we have all of these tools to deploy apps into those target, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe as an industry, we're over-correcting. So we had, you said you come from the JBoss world, I started a lot of my Cisco administration, there's where we focus a little bit more on the actual application needs, maybe from a router that as well. But now what we're seeing is things like Spring Boot, start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to that, it gives you this very low level framework, but you still have to understand the concepts around, what does it mean to allow someone to do something and one missed configuration, all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization, run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece, you're going to somehow find the ability to manage 1000 pieces with these tools underneath but still not solving the actual developer problems. So this is where you've seen it already with a couple of popular blog posts from other companies. They cut too deep. They're going from 2000, 3000 microservices back to maybe 100 or 200. So to my world, it's going to be not just one monolith, but end up maybe having 10 or 20 monoliths that maybe reflect the organization that you have versus the architectural pattern that you're at. >> I view it as like a constellation of stars and planets, et cetera. Where you you might have a star that has a variety of, which is a monolith, and you have a variety of sort of planetary microservices that float around it. But that's reality, that's the reality of modern applications, particularly if you're not starting from a clean slate. I mean your points, a good one is, in many respects, I think the infrastructure is code movement has helped automate a bit of the deployment of the platform. I've been personally focused on app development JBoss as well as springsSource. The Spring team I know that tech pretty well over the years 'cause I was involved with that. So I find that James Governor's discussion of progressive delivery really resonates with me, as a developer, not so much as an infrastructure Deployer. 
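Kelsey's point just above, that authorization is one of the pieces still left to application teams, is worth a tiny illustration before moving on. The sketch below is a toy policy check written as plain data and a pure function; it is not OPA or any particular framework, just the shape of the "can this user do this to that resource?" decision that a tool like Open Policy Agent lets you move out of application code.

```python
# Toy policy-as-data check: the policy is a plain data structure, the decision
# is a pure function over (role, action, resource). Real systems externalize
# this (for example with Open Policy Agent) instead of hard-coding it per app.
POLICY = {
    "admin":  {"orders": {"read", "write", "delete"}},
    "viewer": {"orders": {"read"}},
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    return action in POLICY.get(role, {}).get(resource, set())

assert is_allowed("viewer", "read", "orders")
assert not is_allowed("viewer", "delete", "orders")

# Unknown roles or resources fail closed rather than open,
# which is the property you want from a default.
print(is_allowed("admin", "delete", "orders"))   # True
print(is_allowed("intern", "read", "orders"))    # False
```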
So continuous delivery is more of infrastructure notice notion, progressive delivery, feature flags, those types of things, or app level, concepts, minimizing the blast radius of your, the new features you're deploying, that type of stuff, I think begins to speak to the pain of application delivery. So I'll guess I'll put this up. Michelle, I might aim it to you, and then we'll go around the horn, what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk opened on something that's pretty important. >> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool, you can just deploy something, and people have been asking for so many years, ever since I've been in this space, it's like, "How do I do AB deployment?" "How do I do Canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example, for example, it's a really good way to execute these deployment strategies but then, make sure that everything's happening correctly via observing metrics, rollback if you need to, so you don't just throw your whole system. I think it solves the problem and allows you to take risks but also keeps you safe in that you can be confident as you roll out your changes that it all works, it's metrics driven. So I'm just really looking forward to seeing more tools like that. And dashboards, enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling and in fact, if you remember the Eclipse IDE and that whole integrated experience, I was blown away recently by a demo from GitHub. They have something called code spaces, which a long time ago, I was trying to build development environments that essentially if you were an engineer that joined a team recently, you could basically get an environment quickly start it with everything configured, source code checked out, environment properly set up. And that was a very hard problem. This was like before container days and so on and to see something like code spaces where you'd go to a repo or project, open it up, behind the scenes they have a container that is set up for the environment that you need to build and just have a VS code ID integrated experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of problems when you're going to work with a project attribute, that whole initial bootstrap of, "Oh you need to make sure you have this library, this install," it's so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have powered by a lot of this cloud native technology behind the scenes that people may not know about. 
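Since Flagger and Argo both come up here, the loop they automate can be sketched in the simplest possible terms. Everything below is a stand-in: the success-rate check is stubbed out with random numbers, the thresholds are invented, and a real rollout would shift traffic through a service mesh or ingress and query a metrics backend such as Prometheus.

```python
# Toy version of a metrics-driven canary rollout: step traffic toward the new
# version, check a health metric at each step, and roll back if it degrades.
import random

SUCCESS_THRESHOLD = 0.99   # invented: minimum acceptable request success rate
STEP_WEIGHT = 10           # invented: percent of traffic added per step
MAX_WEIGHT = 50            # invented: weight at which the canary is promoted

def canary_success_rate() -> float:
    """Stand-in for querying Prometheus or another metrics backend."""
    return random.uniform(0.97, 1.0)

def run_canary() -> str:
    weight = 0
    while weight < MAX_WEIGHT:
        weight += STEP_WEIGHT
        print(f"shifting {weight}% of traffic to the canary")
        rate = canary_success_rate()
        if rate < SUCCESS_THRESHOLD:
            print(f"success rate {rate:.3f} below threshold, rolling back")
            return "rolled-back"
    return "promoted"

print(run_canary())
```

The value is less in any single step and more in making the whole sequence automatic and observable, so that taking a risk on a change and backing out of it are both routine.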
>> Yeah, 'cause I've been talking with the team over at Docker, the work they're doing with that desktop, enable the aim local environment, make sure it matches as closely as possible as your deployed environments that you might be targeting. These are some of the pains, that I see. It's hard for developers to get bootstrapped up, it might take him a day or two to actually just set up their local laptop and development environment, and particularly if they change teams. So that complexity really corralling that down and not necessarily being overly prescriptive as to what tool you use. So if you're visual code, great, it should feel integrated into that environment, use a different environment or if you feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started? I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating, you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera. >> Yeah, it really depends on what you're looking for and what you want to learn. So, if you're someone who's new to the space, honestly, check out the case studies on cncf.io, those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they crossed. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the cloud native computing foundations YouTube channel, so they have like playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy talking, I really enjoy watching excuse me, older talks, just because they explain why things were done, the way they were done, and that helps me build the tools I built. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find that in the CNCF Technical Oversight Committee, TOC GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a cig called app delivery, there's a cig for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working and what are the next changes we want to see in the next months. So if you want that kind of granularity and discussion on what's happening like that, then definitely join those those meetings. Check out those meeting notes and recordings. >> Gotcha. So on Kelsey, as you look at 2020 and beyond, I know, you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now? 
>> I don't think any of the raw technology pieces get me super excited anymore. Like, I've seen the circle of around three or four times, in five years, there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a B app and have their own podcasts and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal and they're hard to come by. So I think we've done a good job democratizing these ideas, distributed systems, one company got really good at packaging applications to share with each other, I think that's great, and never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, and then we're talking about another digital transformation 10 years from now because it's going to be funny but Kubernetes will be the new legacy. It's going to be the things that, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV, looking for old school Kubernetes engineers to migrate them to some new thing, that's going to happen. You got to know that. So at some point merry go round will stop. And we're going to be focused on what you do with this. So the internet is there, most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies to comprehend. You're at the point now, where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one, it's about what you do with it. So that's going to be interesting. And we're just not there yet with tech, tech is so, infrastructure stuff. We're so in the weeds, that most people almost burn out what's just getting to the point where you can start to look at what you do with this stuff. So that's what I keep in my eye on, is when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're iOS developer, Android developer, you use the SDK that they gave you, every year there's some new device that enables some new things speech to text, VR, AR and you import an STK, and it just worked. And you can put it in one place and 100 million people can download it at the same time with no DevOps team, that's amazing. When can we do that for server side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I could definitely relate. I was Hortonworks in 2011, so, Hadoop, in many respects, was sort of the precursor to the Kubernetes area, in that it was, as I like to refer to, it was a bunch of animals in the zoo, wasn't just the yellow elephant. And when things mature beyond it's basically talking about what kind of analytics are driving, what type of machine learning algorithms and applications are they delivering? You know that's when things tip over into a real solution space. So I definitely see that. 
I think the other cool thing even just outside of the container and container space, is there's just such a wealth of data related services. And I think how those two worlds come together, you brought up the fact that, in many respects, server-less is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed as these richer applications to be from a data processing and actionable insights perspective. >> I also want to be clear on one thing. So some people confuse two things here, what Michelle said earlier about, for the first time, a whole group of people get to learn about distributed systems and things that were reserved to white papers, PhDs, CF site, this stuff is now super accessible. You go to the CNCF site, all the things that you read about or we used to read about, you can actually download, see how it's implemented and actually change how it work. That is something we should never say is a waste of time. Learning is always good because someone has to build these type of systems and whether they sell it under the guise of server-less or not, this will always be important. Now the other side of this is, that there are people who are not looking to learn that stuff, the majority of the world isn't looking. And in parallel, we should also make this accessible, which should enable people that don't need to learn all of that before they can be productive. So that's two sides of the argument that can be true at the same time, a lot of people get caught up. And everything should just be server-less and everyone learning about distributed systems, and contributing and collaborating is wasting time. We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how do you see, so DockerCon is going to virtual events, inherently the Open Source community is distributed and is used to not face to face collaboration. But there's a lot of value that comes together by assembling a tent where people can meet, what's the best way? How do you see things playing out? What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place. Different themes, verticals, I've already attended a handful of virtual events the last few weeks from Red Hat summit to Open Compute summit to Cloud Native summit, you'll see more and more of these. I think, in the long term, once the world either get past COVID or there's a vaccine or something, I think the innate nature for people to want to get together and meet face to face and deal with all the serendipitous activities you would see in a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before, DockerCon, can have 50,000 people at it. I don't remember what the last physical DockerCon had but that's definitely an order of magnitude more. 
So being able to do these virtual events to augment potential of physical events in the future so you can build a more inclusive community so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of event to me is awesome and I hope something that we take away when we start all doing these virtual events when we get back to physical events, we find a way to ensure that these things are inclusive for everyone and not just folks that can physically make it there. So those are my thoughts on on the topic. And I wish you the best of luck planning of DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people and that just terrifies me from a cloud native coupon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey and Michelle, for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)
Dr. Thomas Di Giacomo & Daniel Nelson, SUSE | SUSECON '20
(upbeat music) >> From around the globe, it's theCUBE with coverage of SUSECON Digital. Brought to you by SUSE. >> Welcome back. I'm Stuart Miniman coming to you from our Boston area studio and this is theCUBE coverage of SUSECON Digital 20. Happy to welcome to the program two of the keynote present presenters. First of all, we have Dr. Thomas Giacomo. He is the President of Engineering and innovation and joining him his co presenter from Makino state, Daniel Nelson, who is the Vice President of Product Solutions, both of you with SUSE. Gentlemen, thanks so much for joining us. >> Thank you. >> Thank you for having us. >> All right. So, Dr. T, Let's start out, innovation, open source, give us a little bit of the message for our audience that you and Daniel were talking about on stage. We've been watching for decades, the growth in the proliferation of open source communities, so give us the update there. >> Yeah. And then it's not stopping, it's actually growing even more and more and more and more innovations coming from open source. The way we look at it is that our customers there, they have their business problems, they have their business reality. And so we, we have to curate, and prepare and filter all the open source innovation that they can benefit from, because that takes time to understand how that can match your needs and fix problems. So at SUSE, we've always done that, since 27 plus years. So, working in the open source projects, innovating there but with customers in mind, and what is pretty clear in 2020 is that large enterprises, more startups, everybody's doing software, everybody's is doing IT and they all have the same type of needs in a way they need to simplify their landscape, because they've been accumulating investments all the way or infrastructure or software, different solutions, different platforms from different vendors. They need to simplify that. They need to modernize, and they need to accelerate their business stay relevant and competitive in their own industries. And that's what we are focusing on. >> Yeah, it's interesting, I completely agree when you say simplify thing, you know, Daniel, I go back in the opportunities about 20 years. And in those days, we were talking about the operating Linux was helping to go past the proprietary Unix platform, Microsoft, the big enemy. And you were talking about operating system, server storage, the application that on, it was a relatively simple environment in there compared to today's multi cloud, AI, container based architecture, applications going through this radical Information broke, though, gives a little bit of insight as to the impact this is having on ecosystems and, of course SUSE now has a broad portfolio that at all? >> It's a great question and I totally get where you're coming from, like, if you look 20 years ago, the landscape is completely different, the technologies we're using are completely different, the problems we're trying to solve with technology are more and more sophisticated. At the same time, though, there's kind of nothing new under the sun. Every company, every technology, every modality goes through this expansion of capabilities and the collapse around simplification as the capabilities become more and more complex and more manageable. So there's this continuous tension between capabilities, ease of use consume ability. 
What we see with open source is that that kind of dynamic still exists, but it's more along the lines of: developers want easy-to-use technologies, but they want the cutting edge. They want the latest things. They want those things within their packages. And then if you look at operations groups, the people that are trying to consume that technology, they want that technology to be consumable, simple, to work well with others, to be able to pick and choose, and to have one pane of glass to operate within. And that's where we see this dynamic, and that's kind of what the SUSE portfolio was built upon. How do we take the thousands and thousands of developers that are working on these really critical projects, whether it's Linux like you mentioned, or Kubernetes, or Cloud Foundry, and how do we make that more consumable to the thousands of companies that are trying to adopt it, who may even be new to open source or may not contribute directly, but who want all the benefits that come with it? That's where SUSE fits, where SUSE has fit historically, and where we see us continuing to fit long term: taking all those Legos, putting them together for companies that want that, and then allowing them a lot of autonomy and choice in how those technologies are consumed. >> Right, one of the themes that I heard you both talk about in the keynote, simplify, modernize, accelerate, really reminded me of the imperatives of the CIO. There's always: run the business, help grow the business, and if they have the opportunity, transform the business. I think you said run, improve, and scale. Scale is absolutely a critical thing that we talk about these days. When I think back to the Cloud Foundry Summit, on the keynote stage, it was the old faster, better, cheaper, you could do two of them. Today we know faster, faster, faster is what you want. So give us a little bit of insight: you talked about Cloud Foundry and Kubernetes and application modernization, so what are the imperatives that you're hearing from customers, and how are we, with all of these tools out there, helping IT not just be responsive to the business but actually be a driver for that transformation of the business? >> It's a great question. And so when I talk to customers, and Dr. T, feel free to chime in, you talk to as many or more customers than I do, they do have these historically competing imperatives. But what we see with the adoption of some of these technologies is that faster is cheaper, faster is safer, and creating more opportunities to grow and to innovate betters the business. It's not risk injection when we change something, it's actually risk mitigation when we get good at changing. And so it's that modality of moving from a simplified, very manufacturing-like model of software to a much more organic, much more permissive, much more learning-within-ecosystems model. And so that's how we see companies start to change the way they're adopting this technology. What's interesting is that same level of adoption, that same thought of adoption, is also how open source is developed. Open source is developed organically, it's developed with many eyes making bugs shallow, it's developed by "let me try this and see what happens," right? And being able to do that in smaller and smaller increments, just like we look at blue-green deployments, or being able to do microservices, or canary releases, or any of those things.
It's like, let's not do one great leap the way we're used to in waterfall, because that's actually really risky. Let's take many, many, many steps forward, transform iteratively, go faster iteratively, and make that just part of what the business is good at. And so you're exactly right, those are the three imperatives of the CIO. What I see with customers is that the more they align those three imperatives together, not making them separate but saying we have to be better at being faster and being transformative, those are the companies that are really using IT as a competitive advantage in their industries. >> Yeah, because most of the time they have different starting points. They have a history. They have different business strategies and things they've done in the past. So you need to be able to accommodate all of that, and the faster, microservices-native development posture for the new apps, but they're also coming from somewhere, and if you don't take care of that together, you can only accelerate if you simplify your existing estate, because otherwise you spend your time making sure the existing is running. So you have to combine all of that together. And the two you mentioned, Cloud Foundry and Kubernetes, I love those topics because, I mean, everybody knows about Kubernetes. It's picking up in terms of adoption, in terms of innovation and technology, with people building AI and ML frameworks on top of it. Now, what's very interesting as well is that Cloud Foundry was designed for fast software development, and cloud native from the beginning, the twelve-factor apps, like four or five years ago, right? What we see now is that we can extract the value that Cloud Foundry brings to speed up and accelerate our software development cycles, and we can combine that very nicely and smoothly, in a simple way, with all the benefits you get from Kubernetes, and not just from one Kubernetes. From your Kubernetes running in the public clouds, because you have workloads there and services that you want to consume from the public clouds; we have a great SUSECON fireside chat with folks from Microsoft Azure where we're actually discussing those topics. Or you might also have Kubernetes clusters at the edge that you want to run in your factory, or close to your data and workloads in the field. So those things, and Daniel mentioned this as well, are about taking care of IT ops, simplify, modernize, and accelerate for the IT ops, and also accelerate for the developers themselves, benefiting from a combination of open source technologies. And today, there's not one open source technology that can do all of that. You need to bundle and combine them together, and make sure that they are integrated, that they are certified together, that they are stable together, and that the security aspects and all the technology around them are deeply integrated into services as well. >> Well, I'm really glad you brought up some of those Kubernetes distributions that are out there. We've been saying for a couple of years on theCUBE that Kubernetes is getting baked in everywhere. SUSE's got partnerships with all the cloud providers, and you're not fighting them over whether to use a solution that you have versus theirs. I worry a little bit about, how do I manage all those environments? Do I end up with Kubernetes sprawl just like we have with every other technology out there? Help us understand what differentiates SUSE's offerings in this space, and how you fit in with the rest of that very dynamic and diverse ecosystem.
>> So, let me start with the aspect of combining things together, and Daniel, maybe you can take the management piece. First of all, we are making sure at SUSE that we don't force our customers into a SUSE stack. Of course we have a SUSE stack, and we're very happy when people use it. But the reality is that customers have existing investments, they have different needs, they use different technologies from the past, or they want to try different technologies. So you have to make sure that for Kubernetes, like for any other part of the stack, the IT stack or the developer stack, your pieces are modular so that you can accommodate different elements. So typically, at SUSE, we support different types of hypervisors. We're not focused on just one; we can support KVM, Xen, Hyper-V, vSphere, the Nutanix hypervisor, NetApp hypervisors, and everything. Same thing with the OS, there's not only one Linux that people are running, and it's exactly the same with Kubernetes. There's probably no one I've seen in our customer base that will need just one vendor for Kubernetes, because they have hybrid cloud needs and strategies, and they will benefit from the native Kubernetes they find on AKS, EKS, GKE, Alibaba Cloud, you name them, and we have cloud vendors in Europe doing that as well. So for us, it's very important that what we bring as SUSE to our customers can be combined with what they have and what they want, even if it's from the so-called competition. And so SUSE Cloud Foundry, for example, you can find it on the marketplaces of the public clouds. It can run on any Kubernetes; it doesn't have to be SUSE's Kubernetes. But then you end up with a lot of clusters, right? So how do we deal with that? >> So it's a great question. And I'll actually broaden that out, because it's not like we're only running Kubernetes. Yes, we've got lots of clusters, lots of containers, lots of applications that are moving there. But it's not like all the VMs disappeared. It's not like all the beige boxes in the data center suddenly don't exist. We all bring the sins and decisions of the past forward with us wherever we go. So for us, it's not just the lens of how do we manage the most modern, the most cutting edge? That's definitely a part of it. But how do you do that within the context of all the other things you have to do within your business? How do I manage virtual machines? How do I manage bare metal? How do I manage all of those? And so for us, it's about creating a presentation layer on top of that, where you can look at your clusters, look at your VMs, look at all your deployments, and be able to understand what's actually happening within your environment. We don't take a prescriptive approach. We don't say you have to use this technology or that technology. What we want to do is be adaptive to the customer's needs, and say, you've got these things, here are some of our offerings, you've got some legacy pieces too; let's show you how to bring those together, how you modernize your viewpoints, how you simplify your operational framework, and how you end up accelerating what you can do with the stuff that you've got in place. >> Yeah, just on the management piece, are there any recommendations from your team? Last year at Microsoft Ignite, there was the launch of Azure Arc, and we're starting to see a lot of solutions come out there.
Our concern is that any of us who lived through the multi-vendor management days don't have good memories of those. It's a different discussion if we're just talking about managing multiple Kubernetes clusters. But how do we learn from the past, and what are you recommending for people in this multi-cloud era? >> So my suggestion to customers is: always start with what your needs are, what strategic problems you're trying to solve, and then choose a vendor that is going to help you solve those strategic problems. One that isn't going to take a product-centric view, isn't going to tell you, use this technology and this technology and this technology, but is going to take the view of, this is the problem you're trying to solve, let me be your advisor within that, and choose people that you're going to trust within that. That being said, you want to have relationships with vendors that have been there for a while, that have done this, that have a breadth of experience in solving enterprise problems. Because everything that we're talking about is mostly around the new things, but keep in mind that there are nuances about the enterprise, things that are intrinsic to the enterprise, and it takes a vendor with a lot of experience to be able to meet customers where they are. I think you've seen that in some of the real growth opportunities within the hyperscalers. They've moved toward a more enterprise view of things, moving away from just an individual billing perspective to enterprise problems. You're seeing that more and more. I think vendors and customers need to choose companies that meet them where they are, that enable their decisions, not prescribe their decisions. >> Okay. Oh-- >> Let me just add to that. >> Please go ahead. >> Yeah, sorry. I also wanted to add that I would recommend people look at open source based solutions, because that will prevent them from being in a difficult situation a few years from now. There are open source solutions that can do this. And look at viable, sustainable, healthy open source solutions that are not just one vendor but multi-vendor as well, because that leaves doors and options open for you in the future. So if you need to move to another vendor, or if you need to complement with an additional technology, or you make a new investment, or you go to a new public cloud, if you base your choices on open source, you have a better chance of keeping those options open. >> I think that's a great point, Dr. T, and I would glom on to that by saying customers need to bring a new perspective on how they adjudicate these solutions. It's really important to look at the health of the open source community. Just because it's open source doesn't mean that there's a secret army of gnomes that, you know, in the middle of the night go and fix bugs; there needs to be a healthy community around it. And that is not just individual contributors. It's also, what are the companies that are invested in this? Where are they dedicating resources? That's another level of sophistication that a lot of customers need to bring into their own vendor selection process. >> Excellent. Speaking about communities and open source, I want to make sure you have time to tell us a little bit about the AI platform you discussed. >> Yeah, it's very, very interesting and something I'm super excited about at SUSE.
And we're starting to see AI applied to really interesting problems. I'll give you one example: we're working with a Formula One team around using AI to help them manage in-car mechanics and some of the things that they're doing to get super high performance out of their vehicles. And that is such an interesting problem to solve. It's such a natural artificial intelligence problem that even when you're talking about cars instead of servers, or a racing stack instead of data centers, you've still got a lot of the same problems. You need an easy-to-use AI stack, you need it to be high performance, you need it to be real time, you need to be able to get decisions made really quickly. These are the same kinds of problems, but we're starting to see them in all these really interesting real-world scenarios, which is one of the coolest things that I've seen in my career, especially in terms of IT, because IT is really everywhere. It's not just grab your sweater and go to the data center because it's 43 degrees in there; it's also get on the racetrack, go to the airfield, go to the grocery store and look at some of the problems being addressed and solved there. And that is super fascinating, one of the things that I'm super excited about in our industry as a whole. >> All right, well, really good discussion here. Daniel, Dr. T, thank you so much for sharing everything from your keynote; it's been a pleasure. >> Thank you. >> All right, we'll be back with lots more coverage from SUSECON Digital '20. I'm Stuart Miniman, and as always, thank you for watching theCUBE. (upbeat music)
SUMMARY :
Stu Miniman talks with Dr. Thomas Di Giacomo and Daniel Nelson of SUSE about curating open source innovation so customers can simplify, modernize, and accelerate their IT. They discuss combining Cloud Foundry and Kubernetes, supporting customers' existing investments and multiple Kubernetes distributions without prescribing a single stack, choosing vendors and healthy open source communities in a multi-cloud era, and emerging AI use cases such as SUSE's work with a Formula One team.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Daniel | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
Daniel Nelson | PERSON | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
43 degrees | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
SUSE | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
T | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Thomas Di Giacomo | PERSON | 0.99+ |
SUSE | TITLE | 0.99+ |
Thomas Giacomo | PERSON | 0.99+ |
Cloud Foundry | TITLE | 0.99+ |
27 plus years | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
Formula One | ORGANIZATION | 0.98+ |
Asia | LOCATION | 0.97+ |
four | DATE | 0.97+ |
one vendor | QUANTITY | 0.97+ |
Legos | ORGANIZATION | 0.96+ |
five years ago | DATE | 0.96+ |
First | QUANTITY | 0.96+ |
Azure Arc | TITLE | 0.95+ |
about 20 years | QUANTITY | 0.94+ |
three imperatives | QUANTITY | 0.94+ |
20 years ago | DATE | 0.93+ |
decades | QUANTITY | 0.9+ |
SUSECON Digital 20 | ORGANIZATION | 0.9+ |
Kubernetes | TITLE | 0.9+ |
three imperatives | QUANTITY | 0.9+ |
one pane | QUANTITY | 0.89+ |
Cloud Foundry | EVENT | 0.89+ |
Dr. | PERSON | 0.88+ |
Xen | TITLE | 0.86+ |
Unix | TITLE | 0.85+ |
Microsoft Ignite | ORGANIZATION | 0.84+ |
vSphere | TITLE | 0.83+ |
Kubernetes | ORGANIZATION | 0.83+ |
SUSE stack | TITLE | 0.77+ |
Red Green | ORGANIZATION | 0.77+ |
Makino | LOCATION | 0.77+ |
one vendor | QUANTITY | 0.75+ |
developers | QUANTITY | 0.73+ |
Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives
>> Sue: Hello everybody. Thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session in entitled Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives. My name is Sue LeClaire, Director of Marketing at Vertica and I'll be your host for this webinar. Joining me is Tom Wall, a member of the Vertica engineering team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. There will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer them offline. Alternatively, you can visit the Vertica forums to post you questions after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand later this week. We'll send you a notification as soon as it's ready. So let's get started. Tom, over to you. >> Tom: Hello everyone and thanks for joining us today for this talk. My name is Tom Wall and I am the leader of Vertica's ecosystem engineering team. We are the team that focuses on building out all the developer tools, third party integrations that enables the SoftMaker system that surrounds Vertica to thrive. So today, we'll be talking about some of our new open source initatives and how those can be really effective for you and make things easier for you to build and integrate Vertica with the rest of your technology stack. We've got several new libraries, integration projects and examples, all open source, to share, all being built out in the open on our GitHub page. Whether you use these open source projects or not, this is a very exciting new effort that will really help to grow the developer community and enable lots of exciting new use cases. So, every developer out there has probably had to deal with the problem like this. You have some business requirements, to maybe build some new Vertica-powered application. Maybe you have to build some new system to visualize some data that's that's managed by Vertica. The various circumstances, lots of choices will might be made for you that constrain your approach to solving a particular problem. These requirements can come from all different places. Maybe your solution has to work with a specific visualization tool, or web framework, because the business has already invested in the licensing and the tooling to use it. Maybe it has to be implemented in a specific programming language, since that's what all the developers on the team know how to write code with. While Vertica has many different integrations with lots of different programming language and systems, there's a lot of them out there, and we don't have integrations for all of them. So how do you make ends meet when you don't have all the tools you need? All you have to get creative, using tools like PyODBC, for example, to bridge between programming languages and frameworks to solve the problems you need to solve. Most languages do have an ODBC-based database interface. ODBC is our C-Library and most programming languages know how to call C code, somehow. 
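To make that "pip install and get to work" experience concrete, here is a minimal sketch using the vertica-python client; the host, credentials, database, and table names below are placeholders rather than values from this talk.

```python
# A minimal sketch, assuming a reachable Vertica at the default port.
# Host, credentials, database, and table names are placeholders.
import vertica_python

conn_info = {
    'host': 'localhost',
    'port': 5433,
    'user': 'dbadmin',
    'password': '',
    'database': 'VMart',
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()

    # Run a simple query and fetch the result as native Python types.
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])

    # Stream a small CSV payload into a table with a COPY statement.
    cur.copy("COPY public.sample_table (id, name) FROM STDIN DELIMITER ','",
             "1,alpha\n2,beta\n")
    conn.commit()
```

Installation is a single `pip install vertica-python`, with no ODBC driver manager or DSN configuration in the way.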
So that's doable, but it often requires lots of configuration and troubleshooting to make all those moving parts work well together. So that's enough to get the job done but native integrations are usually a lot smoother and easier. So rather than, for example, in Python trying to fight with PyODBC, to configure things and get Unicode working, and to compile all the different pieces, the right way is to make it all work smoothly. It would be much better if you could just PIP install library and get to work. And with Vertica-Python, a new Python client library, you can actually do that. So that story, I assume, probably sounds pretty familiar to you. Sounds probably familiar to a lot of the audience here because we're all using Vertica. And our challenge, as Big Data practitioners is to make sense of all this stuff, despite those technical and non-technical hurdles. Vertica powers lots of different businesses and use cases across all kinds of different industries and verticals. While there's a lot different about us, we're all here together right now for this talk because we do have some things in common. We're all using Vertica, and we're probably also using Vertica with other systems and tools too, because it's important to use the right tool for the right job. That's a founding principle of Vertica and it's true today too. In this constantly changing technology landscape, we need lots of good tools and well established patterns, approaches, and advice on how to combine them so that we can be successful doing our jobs. Luckily for us, Vertica has been designed to be easy to build with and extended in this fashion. Databases as a whole had had this goal from the very beginning. They solve the hard problems of managing data so that you don't have to worry about it. Instead of worrying about those hard problems, you can focus on what matters most to you and your domain. So implementing that business logic, solving that problem, without having to worry about all of these intense, sometimes details about what it takes to manage a database at scale. With the declarative syntax of SQL, you tell Vertica what the answer is that you want. You don't tell Vertica how to get it. Vertica will figure out the right way to do it for you so that you don't have to worry about it. So this SQL abstraction is very nice because it's a well defined boundary where lots of developers know SQL, and it allows you to express what you need without having to worry about those details. So we can be the experts in data management while you worry about your problems. This goes beyond though, what's accessible through SQL to Vertica. We've got well defined extension and integration points across the product that allow you to customize this experience even further. So if you want to do things write your own SQL functions, or extend database softwares with UDXs, you can do so. If you have a custom data format that might be a proprietary format, or some source system that Vertica doesn't natively support, we have extension points that allow you to use those. To make it very easy to do passive, parallel, massive data movement, loading into Vertica but also to export Vertica to send data to other systems. And with these new features in time, we also could do the same kinds of things with Machine Learning models, importing and exporting to tools like TensorFlow. 
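As a rough sketch of what those extension points look like in practice, the parallel export path and the newer model import path are both reachable as plain SQL from any client session. The paths, table, and model names below are hypothetical, and the exact function availability and syntax should be checked against the Vertica version in use.

```python
# Hedged sketch: driving Vertica's export and model-import extension points
# through SQL from a client session. Paths and names are hypothetical.
import vertica_python

with vertica_python.connect(host='localhost', port=5433, user='dbadmin',
                            password='', database='VMart') as conn:
    cur = conn.cursor()

    # Massive, parallel export of a query result to Parquet files.
    cur.execute("""
        EXPORT TO PARQUET (directory = '/data/exports/sales')
        AS SELECT * FROM public.sales WHERE sale_date >= '2020-01-01'
    """)

    # Importing an externally trained model (for example TensorFlow) so it
    # can be used for in-database scoring; available in newer releases.
    cur.execute("""
        SELECT IMPORT_MODELS('/models/tf_sales_model'
                             USING PARAMETERS category = 'TENSORFLOW')
    """)
```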
And it's these integration points that have enabled Vertica to build out this open architecture and a rich ecosystem of tools, both open source and closed source, of different varieties that solve all different problems that are common in this big data processing world. Whether it's open source, streaming systems like Kafka or Spark, or more traditional ETL tools on the loading side, but also, BI tools and visualizers and things like that to view and use the data that you keep in your database on the right side. And then of course, Vertica needs to be flexible enough to be able to run anywhere. So you can really take Vertica and use it the way you want it to solve the problems that you need to solve. So Vertica has always employed open standards, and integrated it with all kinds of different open source systems. What we're really excited to talk about now is that we are taking our new integration projects and making those open source too. In particular, we've got two new open source client libraries that allow you to build Vertica applications for Python and Go. These libraries act as a foundation for all kinds of interesting applications and tools. Upon those libraries, we've also built some integrations ourselves. And we're using these new libraries to power some new integrations with some third party products. Finally, we've got lots of new examples and reference implementations out on our GitHub page that can show you how to combine all these moving parts and exciting ways to solve new problems. And the code for all these things is available now on our GitHub page. And so you can use it however you like, and even help us make it better too. So the first such project that we have is called Vertica-Python. Vertica-Python began at our customer, Uber. And then in late 2018, we collaborated with them and we took it over and made Vertica-Python the first official open source client for Vertica You can use this to build your own Python applications, or you can use it via tools that were written in Python. Python has grown a lot in recent years and it's very common language to solve lots of different problems and use cases in the Big Data space from things like DevOps admission and Data Science or Machine Learning, or just homegrown applications. We use Python a lot internally for our own QA testing and automation needs. And with the Python 2 End Of Life, that happened at the end of 2019, it was important that we had a robust Python solution to help migrate our internal stuff off of Python 2. And also to provide a nice migration path for all of you our users that might be worried about the same problems with their own Python code. So Vertica-Python is used already for lots of different tools, including Vertica's admintools now starting with 9.3.1. It was also used by DataDog to build a Vertica-DataDog integration that allows you to monitor your Vertica infrastructure within DataDog. So here's a little example of how you might use the Python Client to do some some work. So here we open in connection, we run a query to find out what node we've connected to, and then we do a little DataLoad by running a COPY statement. And this is designed to have a familiar look and feel if you've ever used a Python Database Client before. So we implement the DB API 2.0 standard and it feels like a Python package. So that includes things like, it's part of the centralized package manager, so you can just PIP install this right now and go start using it. We also have our client for Go length. 
So this is called vertica-sql-go. And this is a very similar story, just in a different context, a different programming language. So vertica-sql-go began as a collaboration with the Micro Focus SecOps group, who build Micro Focus's security products, some of which use Vertica internally to provide some of those analytics. So you can use this to build your own apps in the Go programming language, but you can also use it via tools that are written in Go. Most notably, we have our Grafana integration, which we'll talk a little bit more about later, that leverages this new client to provide Grafana visualizations for Vertica data. And Go is another programming language rising in popularity, because it offers an interesting balance of different programming design trade-offs. It's got good performance, good concurrency, and memory safety. We liked all those things and we're using it to power some internal monitoring stuff of our own. And here's an example of the code you can write with this client. So this is Go code that does a similar thing. It opens a connection, it runs a little test query, and then it iterates over those rows, processing them using Go data types. You get that native look and feel just like you do in Python, except this time in the Go language. And you can go get it the way you usually package things with Go, by running that command there to acquire this package. And it's important to note here, for these projects, we're really doing open source development. We're not just putting code out on our GitHub page. So if you go out there and look, you can see that you can ask questions, you can report bugs, you can submit pull requests yourselves, and you can collaborate directly with our engineering team and the other Vertica users out on our GitHub page. Because it's out on our GitHub page, it allows us to be a little bit faster with the way we ship and deliver functionality compared to the core Vertica release cycle. So in 2019, for example, as we were building features to prepare for the Python 3 migration, we shipped 11 different releases with 40 customer-reported issues filed on GitHub. That was done over 78 different pull requests, with lots of community engagement as we did so. So lots of people are using this already; as our GitHub badges show, there are about 5000 downloads of this a day by people using it in their software. And again, we want to make this easy, not just to use but also to contribute to, understand, and collaborate with us on. So all these projects are built using the Apache 2.0 license. The master branch is always available and stable with the latest functionality. And you can always build it and test it the way we do, so that it's easy for you to understand how it works and to submit contributions or bug fixes or even features. It uses automated testing, both locally and with pull requests. And for vertica-python, it's fully automated with Travis CI. So we're really excited about doing this and we're really excited about where it can go in the future, 'cause this offers some exciting opportunities for us to collaborate with you more directly than we ever have before. You can contribute improvements and help us guide the direction of these projects, but you can also work with each other to share knowledge and implementation details and various best practices. And so maybe you think, "Well, I don't use Python, I don't use Go, so maybe it doesn't matter to me." But I would argue it really does matter.
Because even if you don't use these tools and languages, there's lots of amazing vertica developers out there who do. And these clients do act as low level building blocks for all kinds of different interesting tools, both in these Python and Go worlds, but also well beyond that. Because these implementations and examples really generalize to lots of different use cases. And we're going to do a deeper dive now into some of these to understand exactly how that's the case and what you can do with these things. So let's take a deeper look at some of the details of what it takes to build one of these open source client libraries. So these database client interfaces, what are they exactly? Well, we all know SQL, but if you look at what SQL specifies, it really only talks about how to manipulate the data within the database. So once you're connected and in, you can run commands with SQL. But these database client interfaces address the rest of those needs. So what does the programmer need to do to actually process those SQL queries? So these interfaces are specific to a particular language or a technology stack. But the use cases and the architectures and design patterns are largely the same between different languages. They all have a need to do some networking and connect and authenticate and create a session. They all need to be able to run queries and load some data and deal with problems and errors. And then they also have a lot of metadata and Type Mapping because you want to use these clients the way you use those programming languages. Which might be different than the way that vertica's data types and vertica's semantics work. So some of this client interfaces are truly standards. And they are robust enough in terms of what they design and call for to support a truly pluggable driver model. Where you might write an application that codes directly against the standard interface, and you can then plug in a different database driver, like a JDBC driver, to have that application work with any database that has a JDBC driver. So most of these interfaces aren't as robust as a JDBC or ODBC but that's okay. 'Cause it's good as a standard is, every database is unique for a reason. And so you can't really expose all of those unique properties of a database through these standard interfaces. So vertica's unique in that it can scale to the petabytes and beyond. And you can run it anywhere in any environment, whether it's on-prem or on clouds. So surely there's something about vertica that's unique, and we want to be able to take advantage of that fact in our solutions. So even though these standards might not cover everything, there's often a need and common patterns that arise to solve these problems in similar ways. When there isn't enough of a standard to define those comments, semantics that different databases might have in common, what you often see is tools will invent plug in layers or glue code to compensate by defining application wide standard to cover some of these same semantics. Later on, we'll get into some of those details and show off what exactly that means. So if you connect to a vertica database, what's actually happening under the covers? You have an application, you have a need to run some queries, so what does that actually look like? Well, probably as you would imagine, your application is going to invoke some API calls and some client library or tool. 
This library takes those API calls and implements them, usually by issuing some networking protocol operations, communicating over the network to ask vertica to do the heavy lifting required for that particular API call. And so these API's usually do the same kinds of things although some of the details might differ between these different interfaces. But you do things like establish a connection, run a query, iterate over your rows, manage your transactions, that sort of thing. Here's an example from vertica-python, which just goes into some of the details of what actually happens during the Connect API call. And you can see all these details in our GitHub implementation of this. There's actually a lot of moving parts in what happens during a connection. So let's walk through some of that and see what actually goes on. I might have my API call like this where I say Connect and I give it a DNS name, which is my entire cluster. And I give you my connection details, my username and password. And I tell the Python Client to get me a session, give me a connection so I can start doing some work. Well, in order to implement this, what needs to happen? First, we need to do some TCP networking to establish our connection. So we need to understand what the request is, where you're going to connect to and why, by pressing the connection string. and vertica being a distributed system, we want to provide high availability, so we might need to do some DNS look-ups to resolve that DNS name which might be an entire cluster and not just a single machine. So that you don't have to change your connection string every time you add or remove nodes to the database. So we do some high availability and DNS lookup stuff. And then once we connect, we might do Load Balancing too, to balance the connections across the different initiator nodes in the cluster, or in a sub cluster, as needed. Once we land on the node we want to be at, we might do some TLS to secure our connections. And vertica supports the industry standard TLS protocols, so this looks pretty familiar for everyone who've used TLS anywhere before. So you're going to do a certificate exchange and the client might send the server certificate too, and then you going to verify that the server is who it says it is, so that you can know that you trust it. Once you've established that connection, and secured it, then you can start actually beginning to request a session within vertica. So you going to send over your user information like, "Here's my username, "here's the database I want to connect to." You might send some information about your application like a session label, so that you can differentiate on the database with monitoring queries, what the different connections are and what their purpose is. And then you might also send over some session settings to do things like auto commit, to change the state of your session for the duration of this connection. So that you don't have to remember to do that with every query that you have. Once you've asked vertica for a session, before vertica will give you one, it has to authenticate you. and vertica has lots of different authentication mechanisms. So there's a negotiation that happens there to decide how to authenticate you. Vertica decides based on who you are, where you're coming from on the network. And then you'll do an auth-specific exchange depending on what the auth mechanism calls for until you are authenticated. 
Finally, vertica trusts you and lets you in, so you're going to establish a session in vertica, and you might do some note keeping on the client side just to know what happened. So you might log some information, you might record what the version of the database is, you might do some protocol feature negotiation. So if you connect to a version of the database that doesn't support all these protocols, you might decide to turn some functionality off, and that sort of thing. But finally, after all that, you can return from this API call and then your connection is good to go. So that connection is just one example of many different APIs. And we're excited here because with vertica-python we're really opening up the vertica client wire protocol for the first time. And so if you're a low-level vertica developer and you might have used Postgres before, you might know that some of vertica's client protocol is derived from Postgres. But they do differ in many significant ways. And this is the first time we've ever revealed those details about how it works and why. So not all Postgres protocol features work with vertica, because vertica doesn't support all the features that Postgres does. Postgres, for example, has a large object interface that allows you to stream very wide data values over. Whereas vertica doesn't really have very wide data values; you have long varchars, but that's about as wide as you can get. Similarly, the vertica protocol supports lots of features not present in Postgres. So load balancing, for example, which we just went through an example of: Postgres is a single-node system, so it doesn't really make sense for Postgres to have load balancing. But load balancing is really important for vertica because it is a distributed system. Vertica-python serves as an open reference implementation of this protocol, with all kinds of new details and extension points that we haven't revealed before. So if you look at these boxes below, all these different things are new protocol features that we've implemented since August 2019, out in the open on our GitHub page for Python. Now, the vertica-sql-go implementation of these things is still in progress, but the core protocols are there for basic query operations. There's more to do there, but we'll get there soon. So this is really cool, 'cause not only do you now have a Python client implementation and a Go client implementation of this, but you can use this protocol reference to do lots of other things too. The obvious thing you could do is build more clients for other languages. So if you have a need for a client in some other language that vertica doesn't support yet, now you have everything available to solve that problem and to go about doing so if you need to. But beyond clients, it's also used for other things. So you might use it for mocking and testing things. So rather than connecting to a real vertica database, you can simulate some of that. You can also use it to do things like query routing and proxies. So Uber, for example, the blog in this link tells a great story of how they route different queries to different vertica clusters by intercepting these protocol messages, parsing the queries in them, and deciding which clusters to send them to. So a lot of these things are just ideas today, but now that you have the source code, there's no limit in sight to what you can do with this thing.
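Pulling the connection walkthrough above together, many of those steps (load balancing, failover nodes, TLS, session labels, and session settings) surface as ordinary options on the Python client's connect call. A hedged sketch, with host names and credentials as placeholders:

```python
import vertica_python

conn_info = {
    'host': 'vertica-node1.example.com',     # initial contact point
    'port': 5433,
    'user': 'dbadmin',
    'password': '...',
    'database': 'VMart',
    'session_label': 'reporting-app',         # visible in monitoring queries
    'autocommit': True,                        # session setting applied at connect time
    'connection_load_balance': True,           # allow the server to redirect to another node
    'backup_server_node': ['vertica-node2.example.com',
                           'vertica-node3.example.com'],
    'ssl': True,                               # request TLS for the connection
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Which node did load balancing land us on?
    cur.execute("SELECT node_name FROM v_monitor.current_session")
    print("connected to", cur.fetchone()[0])
```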
And so we're very interested in hearing your ideas and requests and we're happy to offer advice and collaborate on building some of these things together. So let's take a look now at some of the things we've already built that do these things. So here's a picture of vertica's Grafana connector with some data powered from an example that we have in this blog link here. So this has an internet of things use case to it, where we have lots of different sensors recording flight data, feeding into Kafka which then gets loaded into vertica. And then finally, it gets visualized nicely here with Grafana. And Grafana's visualizations make it really easy to analyze the data with your eyes and see when something something happens. So in these highlighted sections here, you notice a drop in some of the activity, that's probably a problem worth looking into. It might be a lot harder to see that just by staring at a large table yourself. So how does a picture like that get generated with a tool like Grafana? Well, Grafana specializes in visualizing time series data. And time can be really tricky for computers to do correctly. You got time zones, daylight savings, leap seconds, negative infinity timestamps, please don't ever use those. In every system, if it wasn't hard enough, just with those problems, what makes it harder is that every system does it slightly differently. So if you're querying some time data, how do we deal with these semantic differences as we cross these domain boundaries from Vertica to Grafana's back end architecture, which is implemented in Go on it's front end, which is implemented with JavaScript? Well, you read this from bottom up in terms of the processing. First, you select the timestamp and Vertica is timestamp has to be converted to a Go time object. And we have to reconcile the differences that there might be as we translate it. So Go time has a different time zone specifier format, and it also supports nanosecond precision, while Vertica only supports microsecond precision. So that's not too big of a deal when you're querying data because you just see some extra zeros, not fractional seconds. But on the way in, if we're loading data, we have to find a way to resolve those things. Once it's into the Go process, it has to be converted further to render in the JavaScript UI. So that there, the Go time object has to be converted to a JavaScript Angular JS Date object. And there too, we have to reconcile those differences. So a lot of these differences might just be presentation, and not so much the actual data changing, but you might want to choose to render the date into a more human readable format, like we've done in this example here. Here's another picture. This is another picture of some time series data, and this one shows you can actually write your own queries with Grafana to provide answers. So if you look closely here you can see there's actually some functions that might not look too familiar with you if you know vertica's functions. Vertica doesn't have a dollar underscore underscore time function or a time filter function. So what's actually happening there? How does this actually provide an answer if it's not really real vertica syntax? Well, it's not sufficient to just know how to manipulate data, it's also really important that you know how to operate with metadata. So information about how the data works in the data source, Vertica in this case. 
So Grafana needs to know how time works in detail for each data source beyond doing that basic I/O that we just saw in the previous example. So it needs to know, how do you connect to the data source to get some time data? How do you know what time data types and functions there are and how they behave? How do you generate a query that references a time literal? And finally, once you've figured out how to do all that, how do you find the time in the database? How do you do know which tables have time columns and then they might be worth rendering in this kind of UI. So Go's database standard doesn't actually really offer many metadata interfaces. Nevertheless, Grafana needs to know those answers. And so it has its own plugin layer that provides a standardizing layer whereby every data source can implement hints and metadata customization needed to have an extensible data source back end. So we have another open source project, the Vertica-Grafana data source, which is a plugin that uses Grafana's extension points with JavaScript and the front end plugins and also with Go in the back end plugins to provide vertica connectivity inside Grafana. So the way this works, is that the plugin frameworks defines those standardizing functions like time and time filter, and it's our plugin that's going to rewrite them in terms of vertica syntax. So in this example, time gets rewritten to a vertica cast. And time filter becomes a BETWEEN predicate. So that's one example of how you can use Grafana, but also how you might build any arbitrary visualization tool that works with data in Vertica. So let's now look at some other examples and reference architectures that we have out in our GitHub page. For some advanced integrations, there's clearly a need to go beyond these standards. So SQL and these surrounding standards, like JDBC, and ODBC, were really critical in the early days of Vertica, because they really enabled a lot of generic database tools. And those will always continue to play a really important role, but the Big Data technology space moves a lot faster than these old database data can keep up with. So there's all kinds of new advanced analytics and query pushdown logic that were never possible 10 or 20 years ago, that Vertica can do natively. There's also all kinds of data-oriented application workflows doing things like streaming data, or Parallel Loading or Machine Learning. And all of these things, we need to build software with, but we don't really have standards to go by. So what do we do there? Well, open source implementations make for easier integrations, and applications all over the place. So even if you're not using Grafana for example, other tools have similar challenges that you need to overcome. And it helps to have an example there to show you how to do it. Take Machine Learning, for example. There's been many excellent Machine Learning tools that have arisen over the years to make data science and the task of Machine Learning lot easier. And a lot of those have basic database connectivity, but they generally only treat the database as a source of data. So they do lots of data I/O to extract data from a database like Vertica for processing in some other engine. We all know that's not the most efficient way to do it. It's much better if you can leverage Vertica scale and bring the processing to the data. So a lot of these tools don't take full advantage of Vertica because there's not really a uniform way to go do so with these standards. 
So instead, we have a project called vertica-ml-python. And this serves as a reference architecture of how you can do scalable machine learning with Vertica. So this project establishes a familiar machine learning workflow that scales with vertica. So it feels similar to like a scickit-learn project except all the processing and aggregation and heavy lifting and data processing happens in vertica. So this makes for a much more lightweight, scalable approach than you might otherwise be used to. So with vertica-ml-python, you can probably use this yourself. But you could also see how it works. So if it doesn't meet all your needs, you could still see the code and customize it to build your own approach. We've also got lots of examples of our UDX framework. And so this is an older GitHub project. We've actually had this for a couple of years, but it is really useful and important so I wanted to plug it here. With our User Defined eXtensions framework or UDXs, this allows you to extend the operators that vertica executes when it does a database load or a database query. So with UDXs, you can write your own domain logic in a C++, Java or Python or R. And you can call them within the context of a SQL query. And vertica brings your logic to that data, and makes it fast and scalable and fault tolerant and correct for you. So you don't have to worry about all those hard problems. So our UDX examples, demonstrate how you can use our SDK to solve interesting problems. And some of these examples might be complete, total usable packages or libraries. So for example, we have a curl source that allows you to extract data from any curlable endpoint and load into vertica. We've got things like an ODBC connector that allows you to access data in an external database via an ODBC driver within the context of a vertica query, all kinds of parsers and string processors and things like that. We also have more exciting and interesting things where you might not really think of vertica being able to do that, like a heat map generator, which takes some XY coordinates and renders it on top of an image to show you the hotspots in it. So the image on the right was actually generated from one of our intern gaming sessions a few years back. So all these things are great examples that show you not just how you can solve problems, but also how you can use this SDK to solve neat things that maybe no one else has to solve, or maybe that are unique to your business and your needs. Another exciting benefit is with testing. So the test automation strategy that we have in vertica-python these clients, really generalizes well beyond the needs of a database client. Anyone that's ever built a vertica integration or an application, probably has a need to write some integration tests. And that could be hard to do with all the moving parts, in the big data solution. But with our code being open source, you can see in vertica-python, in particular, how we've structured our tests to facilitate smooth testing that's fast, deterministic and easy to use. So we've automated the download process, the installation deployment process, of a Vertica Community Edition. And with a single click, you can run through the tests locally and part of the PR workflow via Travis CI. We also do this for multiple different python environments. So for all python versions from 2.7 up to 3.8 for different Python interpreters, and for different Linux distros, we're running through all of them very quickly with ease, thanks to all this automation. 
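Circling back to those UDx examples, here is a rough sketch of the shape a scalar Python UDx takes. It is modeled on the add-two-integers style example from the SDK documentation; the exact class and method names should be checked against the vertica_sdk version that ships with your server, and the code runs inside Vertica's embedded Python rather than as a standalone script.

```python
# Sketch of a scalar Python UDx; vertica_sdk is provided by the server.
import vertica_sdk

class AddTwoInts(vertica_sdk.ScalarFunction):
    """Adds two integer arguments, processing one block of rows at a time."""
    def processBlock(self, server_interface, arg_reader, res_writer):
        while True:
            a = arg_reader.getInt(0)
            b = arg_reader.getInt(1)
            res_writer.setInt(a + b)
            res_writer.next()
            if not arg_reader.next():
                break

class AddTwoIntsFactory(vertica_sdk.ScalarFunctionFactory):
    def createScalarFunction(self, srv):
        return AddTwoInts()

    def getPrototype(self, srv_interface, arg_types, return_type):
        arg_types.addInt()
        arg_types.addInt()
        return_type.addInt()

    def getReturnType(self, srv_interface, arg_types, return_type):
        return_type.addInt()
```

Examples like this benefit from the same kind of automated testing just described, since they have to be exercised against a running database.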
So today, you can see how we do it in vertica-python, in the future, we might want to spin that out into its own stand-alone testbed starter projects so that if you're starting any new vertica integration, this might be a good starting point for you to get going quickly. So that brings us to some of the future work we want to do here in the open source space . Well, there's a lot of it. So in terms of the the client stuff, for Python, we are marching towards our 1.0 release, which is when we aim to be protocol complete to support all of vertica's unique protocols, including COPY LOCAL and some new protocols invented to support complex types, which is our new feature in vertica 10. We have some cursor enhancements to do things like better streaming and improved performance. Beyond that we want to take it where you want to bring it. So send us your requests in the Go client fronts, just about a year behind Python in terms of its protocol implementation, but the basic operations are there. But we still have more work to do to implement things like load balancing, some of the advanced auths and other things. But they're two, we want to work with you and we want to focus on what's important to you so that we can continue to grow and be more useful and more powerful over time. Finally, this question of, "Well, what about beyond database clients? "What else might we want to do with open source?" If you're building a very deep or a robust vertica integration, you probably need to do a lot more exciting things than just run SQL queries and process the answers. Especially if you're an OEM or you're a vendor that resells vertica packaged as a black box piece of a larger solution, you might to have managed the whole operational lifecycle of vertica. There's even fewer standards for doing all these different things compared to the SQL clients. So we started with the SQL clients 'cause that's a well established pattern, there's lots of downstream work that that can enable. But there's also clearly a need for lots of other open source protocols, architectures and examples to show you how to do these things and do have real standards. So we talked a little bit about how you could do UDXs or testing or Machine Learning, but there's all sorts of other use cases too. That's why we're excited to announce here our awesome vertica, which is a new collection of open source resources available on our GitHub page. So if you haven't heard of this awesome manifesto before, I highly recommend you check out this GitHub page on the right. We're not unique here but there's lots of awesome projects for all kinds of different tools and systems out there. And it's a great way to establish a community and share different resources, whether they're open source projects, blogs, examples, references, community resources, and all that. And this tool is an open source project. So it's an open source wiki. And you can contribute to it by submitting yourself to PR. So we've seeded it with some of our favorite tools and projects out there but there's plenty more out there and we hope to see more grow over time. So definitely check this out and help us make it better. So with that, I'm going to wrap up. I wanted to thank you all. Special thanks to Siting Ren and Roger Huebner, who are the project leads for the Python and Go clients respectively. And also, thanks to all the customers out there who've already been contributing stuff. 
This has already been going on for a long time and we hope to keep it going and keep it growing with your help. So if you want to talk to us, you can find us at this email address here. But of course, you can also find us on the Vertica forums, or you could talk to us on GitHub too. And there you can find links to all the different projects I talked about today. And so with that, I think we're going to wrap up and now we're going to hand it off for some Q&A.
SUMMARY :
A look at Vertica's open source client libraries: vertica-python is marching toward a protocol-complete 1.0 release with COPY LOCAL and complex-type support, the Go client is about a year behind it, and the new awesome-vertica collection on GitHub gathers community projects and resources, with an open invitation to send requests, issues, and pull requests.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tom Wall | PERSON | 0.99+ |
Sue LeClaire | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Roger Huebner | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Tom | PERSON | 0.99+ |
Python 2 | TITLE | 0.99+ |
August 2019 | DATE | 0.99+ |
2019 | DATE | 0.99+ |
Python 3 | TITLE | 0.99+ |
two | QUANTITY | 0.99+ |
Sue | PERSON | 0.99+ |
Python | TITLE | 0.99+ |
python | TITLE | 0.99+ |
SQL | TITLE | 0.99+ |
late 2018 | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
end of 2019 | DATE | 0.99+ |
Vertica | TITLE | 0.99+ |
today | DATE | 0.99+ |
Java | TITLE | 0.99+ |
Spark | TITLE | 0.99+ |
C++ | TITLE | 0.99+ |
JavaScript | TITLE | 0.99+ |
vertica-python | TITLE | 0.99+ |
Today | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
11 different releases | QUANTITY | 0.99+ |
UDXs | TITLE | 0.99+ |
Kafka | TITLE | 0.99+ |
Extending Vertica with the Latest Vertica Ecosystem and Open Source Initiatives | TITLE | 0.98+ |
Grafana | ORGANIZATION | 0.98+ |
PyODBC | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
UDX | TITLE | 0.98+ |
vertica 10 | TITLE | 0.98+ |
ODBC | TITLE | 0.98+ |
10 | DATE | 0.98+ |
Postgres | TITLE | 0.98+ |
DataDog | ORGANIZATION | 0.98+ |
40 customer reported issues | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
Gou Rao, Portworx & Julio Tapia, Red Hat | KubeCon + CloudNativeCon 2019
>> Announcer: Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome back to theCUBE here in San Diego for KubeCon CloudNativeCon, with John Troyer, I'm Stu Miniman, and happy to welcome to the program two guests, first time guests, I believe. Julio Tapia, who's the director of Cloud BU partner and community with Red Hat and Gou Rao, who's the founder and CEO at Portworx. Gentlemen, thanks so much for joining us. >> Thank you, happy to be here. >> Thanks for having us. >> Alright, let's start with community, ecosystem, it's a big theme we have here at the show. Tell us your main focus, what the team's doing here. >> Sure, so I'm part of a product team, we're responsible for OpenShift, OpenStack and Red Hat virtualization. And my responsibility is to build a partner ecosystem and to do our community development. On the partner front, we work with a lot of different partners. We work with ISVs, we work with OEMs, SIs, cloud providers, TelCo partners. And my role is to help evangelize, to help on integrations, a lot of joint solutions, and then do a little bit of go to market as well. And the community side, it's to evangelize with upstream projects or customers with developers, and so forth. >> Alright, so, Gou, actually, it's not luck, but I had a chance to catch up with the Red Hat storage team. Back when I was on the vendor side I partnered with them. Red Hat doesn't sell gear, they're a software company. Everything open-source, and when it comes to data and storage, obviously they're working with partners. So put Portworx into the mix and tell us about the relationship and what you both do together. >> Sure, yeah, we're a Red Hat OpenShift partner. We've been working with them for quite some time now, partner with IBM as well. But yeah, Portworx, we focus on enabling cloud native storage, right? So we complement the OpenShift ecosystem. Essentially we enable people to run stateful services in OpenShift with a lot of agility and we bring DR backup functionality to OpenShift. I'm sure you're familiar with this, but, people, when they deploy OpenShift, they're running fleets of OpenShift clusters. So, multi-cluster management and data accessibility across clusters is a big topic. >> Yeah, if you could, I hear the term cloud native storage, what does that really mean? You know, back a few years ago, containers were stateless, I didn't have my persistent storage, it was super challenging as to how we deal with this. And now we have some options, but what is the goal of what we're doing here? >> There really is no notion of a stateless application, right? Especially when it comes to enterprise applications. What cloud native storage means is, to us at least, it signifies a couple of things. First of all, the consumer of storage is not a machine anymore, right? Typical storage systems are designed to provide storage to either a virtual machine or a hardware server. The consumer of storage is now a container that's running inside of a machine. And in fact, an application is never just one container, it's many containers running on different systems so it's a distributed problem. So what cloud native storage means is the following things.
Providing container granular data services, being application aware, meaning that you're providing services to many containers that are running on different systems, and facilitating the data life cycle management of those applications in a Kubernetes way, right? The user experience is now driven through Kubernetes as opposed to a storage admin driving that functionality, so it's these three things that make a platform cloud native. >> I want to dig into the operator concept for a little bit here, as it applies to storage. So, first, Operators. I first heard of this a couple years back with the CoreOS folks, who are now part of Red Hat and it's a piece of technology that came into the Kubernetes ecosystem, seems to be very well adopted, they talked about it today on the keynote. And I'd love to hear a little bit more about the ecosystem. But first I want to figure out what it is and in my head, I didn't quite understand it and I'm like, well, okay, automation and life cycle, I get it. There's a bunch of things, Puppet and Chef and Ansible and all sorts of things there. There's also things that know about cloud like Terraform, or CloudFormation, or Pulumi, all these sort of things here. But this seems like this is a framework around life cycle, it might be a little higher in the semantic level or knows a little bit more about what's going on inside Kubernetes. >> I'll just touch on this, so Operators, it's a way to codify business logic into the application, so how to manage, how to install, how to manage the life cycle of the application on top of the Kubernetes cluster. So it's a way of automating. >> Right, but-- >> And just to add to that, you mentioned Ansible, Salt, right? So, as engineers, we're always trying to make our lives easier. And so, infrastructure automation certainly is a concept here. What Operators do is elevate those same needs to more of an application construct level, right? So it's a piece of intelligent software that is watching the entire run-time of an application as opposed to provisioning infrastructure and stepping out of the way. Think of it as a living being, it is constantly running and reacting to what the application is doing and what its needs are. So, on one hand you have automation that sets things up and then the job is done. Here the job is never done, you're sort of, right there as a side car along with the application. >> Nice, but for any sort of life cycle or for any sort of project like this, you have to have code sharing and contributing, right? And so, Julio, can you tell us a little about that? >> What we do is we're obviously all in on Operators. And so we've invested a great deal in terms of documentation and training and workshops. We have certification programs, we're really helping create the ecosystem and facilitate the whole process. You may be familiar, we announced Operator Framework a year ago, it includes Operator SDKs. So we have an Operator SDK for Helm, for Ansible, for Go. We also have announced Operator Life Cycle Manager which does the install, the maintenance and the whole life cycle management process. And then earlier this year we did introduce also, Operatorhub.io which is a community of our Operators, we have about 150 Operators as part of that. >> How does the Operator Framework relate to OpenShift versus upstream Kubernetes? Is it an OpenShift and Red Hat specific thing, or? >> Yes, so, Operatorhub.io is a listing of Operators that includes community Operators. And then we also have certified Operators.
And the community Operators run on any Kubernetes instance. The certified Operators make sure that we run on OpenShift specifically. So that's kind of the distinction between those two. >> I remember a Red Hat summit where you talked about some bits. So, give us a little walk around the show, some of the highlights from Operators, the ecosystem, obviously, we've got Portworx here but there's a broad ecosystem. >> Yeah, so we have a huge huge ecosystem. The ISVs play a big part of this. So we've got Operators database partners, security partners, app monitoring partners, storage partners. Yesterday we had an OpenShift commons event, we showcased five of our big Operator partnerships with Couchbase, with MongoDB, with Portworx obviously, with StorageOS and with Dynatrace. But we have a lot of partners in a lot of different areas that are creating these Operators, are certifying them, and they're starting to get a lot of use with customers so it's pretty exciting stuff. >> Gou, I'd love your viewpoint on this because of course, Portworx, good Red Hat partner but you need to work with all the Kubernetes opt-ins out there so, what's the importance of Operators to your business? >> Yeah, you know. OpenShift, obviously, it's one of the leading platforms for Kubernetes out there and so, the reason that is, it's because it's the expectations that it sets to an enterprise customer. It's that Red Hat experience behind it and so the notion of having an Operator that's certified by Red Hat and Red Hat going through the vetting process and making sure that all of the components that it is recommending from its ecosystem that you're putting onto OpenShift, that whole process gives a whole new level of enterprise experience, so, for us, that's been really good, right? Working with Red Hat, going through the process with them and making sure that they are actually double clicking on everything we submit, and there's a real, we iterate with them. So the quality of the product that's put out there within OpenShift is very high. So, we've deployed these Operators now, the Operator that Portworx just announced, right? We have it running in customers' hands so these are real end users, you'll be talking to Ford later on today. Harvard, for example, and so the level of automation that it has provided to them in their platform, it's quite high. >> I was kind of curious to shift maybe to the conference here that you all have a long history. With organizations and both of you personally in the Kubernetes world and cloud native world. We're here at KubeCon CloudNativeCon, North America, 2019. It's pretty big. And I see a lot of folks here, a lot of vendors, a lot of engineers, huge conference, 12,000 people. I mean, any perspective? >> So I've been at Red Hat a little over six years and I was at the very first KubeCon many years ago in San Francisco, I think we had about 200 people there. So this show has really grown over the years. And we're obviously big supporters, we've participated in KubeCon in Shanghai and Barcelona, we're obviously here. We're just super excited about seeing the ecosystem and the whole community grow and expand, so, very exciting. >> Gou? >> Yeah, I mean, like Julio mentioned, right? So, all the way from DockerCon to where we are today and I think last year was 8000 people in Seattle and I think there're probably I've heard numbers like 12? So it's also equally interesting to see the maturity of the products around Kubernetes. And that level of consistency and lack of fracture, right? 
From mainstream Kubernetes to how it's being adopted in OpenShift, there's consistency across the different Kubernetes platforms. Also, it's very interesting to see how on-prem and public cloud Kubernetes are coexisting. Four years ago we were kind of worried on how that would turn out, but I think it's enabling those hybrid-cloud workloads and I think today in this KubeCon we see a lot of people talking about that and having interest around it. >> That's a really great point there. Julio, want to give you the final word, for people that aren't yet engaged in the ecosystem of Operators, how can they learn more and get involved? >> Yeah, so we're excited to work with everybody, our ecosystem includes customers, partners, contributors, so as long as you're all in on Operators, we're ready to help. We've got tools, we've documentation, we have workshops, we have training, we have certification programs. And we also can help you with go to market. We're very fortunate to have a huge customer footprint, and so for those partners that have solutions, databases, storage solutions, there's a lot of joint opportunities out there that we can participate in. So, really excited to do that. >> Julio, Gou, thank you so much, you have a final word, Gou? >> I was just going to say, so, to follow up on the Operator comment on the certification that Julio mentioned earlier, so the Operator that we have, we were able to achieve level five certification. The level five signifies just the amount of automation that's built into it, so the concept of having Operators help people deploy these complex applications, that's a very important concept in Kubernetes itself. So, glad to be a Red Hat partner. >> That's actually a really good point, we have an Operator maturity model, level one, two, three, four, five. Level one and two are more your installations and upgrades. But the really highly capable ones, the fours and fives, are really to be commended. And Portworx is one of those partners. So we're excited to be here with them. >> That is a powerful statement, we talk about the complexity and how many pieces are in there. Everybody's looking to really help cross that chasm, get the vast majority of people. We need to allow environments to have more automation, more simplicity, a story I heard loud and clear at AnsibleFest earlier this year and through the partner ecosystem. It's good to see progress, so congratulations and thank you both for joining us. >> Thank you, thank you. >> Thank you. >> All right, for John Troyer, I'm Stu Miniman, back with lots more here from KubeCon CloudNativeCon 2019, thanks for watching theCUBE. (electronic music)
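As a rough illustration of Gou's point earlier in the conversation, that the consumer of storage is now a container and the user experience is driven through Kubernetes, here is a small sketch using the official Kubernetes Python client to request a volume through a storage class. The storage class name "portworx-sc", the namespace, and the size are assumptions for illustration, not details from the interview.

```python
# Sketch: requesting container-granular storage through Kubernetes itself.
# The storage class name, namespace, and size below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core_v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mongo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="portworx-sc",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

A pod that mounts this claim gets its storage the same way it gets everything else, through the Kubernetes API, rather than through a ticket to a storage admin.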
SUMMARY :
Julio Tapia of Red Hat and Gou Rao of Portworx discuss Kubernetes Operators at KubeCon San Diego: how operators codify operational expertise for stateful workloads, Red Hat's Operator Framework with its SDK, Lifecycle Manager, OperatorHub.io and certification program, the distinction between community and certified operators, and how Portworx's level-five certified operator brings cloud native storage and data management to OpenShift.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Troyer | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Julio | PERSON | 0.99+ |
Julio Tapia | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
San Diego | LOCATION | 0.99+ |
five | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
Shanghai | LOCATION | 0.99+ |
Gou Rao | PERSON | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Gou | PERSON | 0.99+ |
Portworx | ORGANIZATION | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
8000 people | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
12,000 people | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
first time | QUANTITY | 0.98+ |
Yesterday | DATE | 0.98+ |
Dynatrace | ORGANIZATION | 0.98+ |
TelCo | ORGANIZATION | 0.98+ |
Couchbase | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
a year ago | DATE | 0.98+ |
OpenShift | TITLE | 0.98+ |
Four years ago | DATE | 0.98+ |
three things | QUANTITY | 0.97+ |
one container | QUANTITY | 0.97+ |
over six years | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
DockerCon | EVENT | 0.97+ |
Operatorhub.io | ORGANIZATION | 0.96+ |
CloudNativeCon | EVENT | 0.96+ |
12 | QUANTITY | 0.96+ |
about 200 people | QUANTITY | 0.96+ |
fives | QUANTITY | 0.95+ |
about 150 Operators | QUANTITY | 0.95+ |
Operator Framework | TITLE | 0.95+ |
2019 | DATE | 0.93+ |
CloudNativeCon 2019 | EVENT | 0.93+ |
earlier this year | DATE | 0.93+ |
Francesca Lazzeri, Microsoft | Microsoft Ignite 2019
>> Commentator: Live from Orlando, Florida, it's theCUBE. Covering Microsoft Ignite. Brought to you by Cohesity. >> Hello everyone and welcome back to theCUBE's live coverage of Microsoft Ignite 2019. We are theCUBE, we are here at the Cohesity booth in the middle of the show floor at the Orange County Convention Center. 26,000 people from around the globe here. It's a very exciting show. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We are joined by Francesca Lazzeri. She is a Ph.D. Machine Learning Scientist and Cloud Advocate at Microsoft. Thank you so much for coming on the show. >> Thank you for having me. I'm very excited to be here. >> Rebecca: Direct from Cambridge, so we're an all Boston table here. >> Exactly. >> I love it. I love it. >> We are in the most technology-dense cluster, I think, in the world probably. >> So two words we're hearing a lot of here at the show, machine learning, deep learning, can you describe, define them for us here, and tell us the difference between machine learning and deep learning. >> Yeah, this is a great question and I have to say a lot of my customers ask me this question very, very often. Because I think right now there are many different terms such as deep learning as you said, machine learning, AI, that have been used more or less in the same way, but they are not really the same thing. So machine learning is a portfolio, I would say, of algorithms, and when you say algorithms I mean really statistical models, that you can use to run some data analysis. So you can use these algorithms on your data, and these are going to produce what we call an output. The outputs are the results. So deep learning is just a type of machine learning, that has a different structure. We call it deep learning because there are many different layers, in a neural network, which is again a type of machine learning algorithm. And it's very interesting because it doesn't look at the linear relation within the different variables, but it looks at different ways to train itself, and learn something. So you have to think just about deep learning as a type of machine learning and then we have AI. AI is just on top of everything, AI is a way of building applications on top of machine learning models and they run on top of machine learning algorithms. So it's a way, AI, of consuming intelligent models. >> Yeah, so Francesca, I know we're going to be talking to Jeffrey Stover tomorrow about a topic, responsible AI. Can you talk a little bit about how Microsoft is making sure that unintentional biases or challenges with data don't lead the machine learning to do things, or have biases, that we wouldn't want otherwise. >> Yes, I think that Microsoft is actually investing a lot in responsible AI. Because I have to say, as a data scientist, as a machine learning scientist, I think that it's very important to understand what the model is doing and why it's giving me a specific result. So, in my team, we have a tool kit, which is called, interpretability toolkit, and it's really a way to unpack machine learning models, so it's a way of opening machine learning models and understanding what are the different relations between the different variables, the different data points, so it's an easy way, through these different types of relations, that you can understand why your model is giving you specific results. So that you get that visibility, as a data scientist, but also as a final consumer, final users of these AI applications.
And I think that visibility is the most important thing to prevent biased applications, and to make sure that our results are fair, for everybody. So there are some technical tools that we can use for sure. I can tell you, as a data scientist, that bias and unfairness starts with the data. You have to make sure that the data is representative enough of the population that you are targeting with your AI applications. But this sometimes is not possible. That's why it's important to create some services, some toolkits, that are going to allow you, again, as a data scientist, as a user, to understand what the AI application, or the machine learning model is doing. >> So what's the solution? If the problem, if the root of the problem is the data in the first place, how do we fix this? Because this is such an important issue in technology today. >> Yes, and so there are a few ways that you can use... So first of all I want to say that it's not an issue that you can really fix. I would say that, again, as a data scientist, there are a few things that you can do, in order to check that your AI application is doing a good job, in terms of fairness, again. And so these few steps are, as you said, the data. So most of the time, people, or customers, they just use their own data. Something that is very helpful is also looking at external types of data, and also making sure that, again, as I said, the data is representative enough of the entire population. So for example, if you are collecting data from a specific category of people, of a specific age, from a specific geography, you have to make sure that you understand that their results are not general results, they are results that the machine learning algorithm learned from that target population. And so it's important again, to look at different types of data, different types of data sets, and use, if you can, also external data. And then, of course, this is just the first step. Then there's a second step: you can always make sure that you check your model with a business expert, with a data expert. So sometimes we have data scientists that work in silos, they do not really communicate what they're doing. And I think that this is something that you need to change within your company, within your organization, you have to, always to make sure, that data scientists, machine learning scientists are working closely with data experts, business experts, and everybody's talking. Again, to make sure that we understand what we are doing. >> Okay, there were so many things announced at the show this week. In your space, what are some of the highlights of the things that people should be taking away from Microsoft Ignite? >> So I think that the Azure Machine Learning platform has been announcing a lot of updates, and I love the product because I think it's a very dynamic product. There is, what we now call, the designer, which is a new version of the old Azure Machine Learning Studio. It's a drag and drop tool so it's a tool that is great for people who do not want to code too much, or who are just getting started with machine learning. And you can really create end-to-end machine learning pipelines with these tools, in just a matter of a few minutes. The nice thing is that you can also deploy your machine learning models and this is going to create an API for you, and this API can be used by you, or by other developers in your company, to just call the model that you deployed.
As I mentioned before, this is really the part where AI is arriving, and it's the part where you create applications on top of your models. So this is a great announcement and we also created an algorithm cheat sheet, that is a really nice map that you can use to understand, based on your question, based on your data, what's the best machine learning algorithm, what's the best designer module that you can use to build your end-to-end machine learning solution. So this, I would say, is my highlight. And then of course, in terms of Azure Machine Learning, there are other updates. We have the Azure Machine Learning python SDK, which is more for pro data scientists, who want to create customized models, so models that they have to build from scratch. And for them it's very easy, because it's a python-based environment, where they can just build their models, train them, test them, deploy them. So when I say it's a very dynamic and flexible tool, it's because it's really a tool on the pla- on the Cloud, that is targeting more business people, data analysts, but also pro data scientists and AI developers, so this is great to see and I'm very, very excited for that. >> So in addition to your work as a Cloud advocate at Microsoft, you are also a mentor to research and post-doc students at the Massachusetts Institute of Technology, MIT, so tell us a little more about that work in terms of what kind of mentorship do you provide and what your impressions are of this young generation, a young generation of scientists that's now coming up. >> Yes. So that's another wonderful question because one of the main goals of my team is actually working with an academic type of audience, and we started this about a year ago. So we are, again, a team of Cloud advocates, developers, data scientists, and we do not want to work only with big enterprises, but we want to work with academic types of institutions. So when I say academics, of course I mean, some of the best universities, like I've been working a lot with MIT in Cambridge, Massachusetts Institute of Technology, Harvard, and also now I've been working with Columbia University, in New York. And with all of them, I work with both the PhD and post-doc students, and most of the time, what I try to help them with is changing their mindset. Because these are all brilliant students, that need just to understand how they can translate what they have learned doing their years of study, and also their technical skillset, into the real world. And when I say the real world, I mean more like, building applications. So there is this sort of skill transfer that needs to be done and again, working with these brilliant people, I have to say, is something that is easy to do, because sometimes they just need to work on a specific project that I create for them, so I give data to them and then we work together in a sort of lab environment, and we build end-to-end solutions. But from a knowledge perspective, from a, I would say, technical perspective, these are all excellent students, so it's really, I find myself in a position in which I'm mentoring them, I prepare them for the industry, because most of them, they want to become data scientists, machine learning scientists, but I have to say that I also learn a lot from them, because at the end of the day, when we build these solutions, it's really a way to build something, a project, an app together, and then we also see, the beauty of this is also that we also see how other people are using that to build something even better.
So it's an amazing experience, and I feel very lucky that I'm in Cambridge, where, as you know, we have the best schools. >> Francesca, you've dug into some really interesting things, I'd love to get just a little bit, if you can share, about how machine learning is helping drive competitiveness and innovation in companies today, and any tips you have for companies, and how they can get involved even more. >> Yeah, absolutely. So I think that everything really starts with the business problem because I think that, as we started this conversation, we were mentioning words such as deep learning, machine learning, AI, so it's, a lot of companies, they just want to do this because they think that they're missing something. So my first suggestion for them is really trying to understand what's the business question that they have, if there is a business problem that they can solve, if there is an operation that they can improve, so these are all interesting questions that they can ask themselves and their teams. And then as soon as they have this question in mind, the second step is to understand if they have the data, the right data, that are needed to support this process, that is going to help them with the business question. So after that, you understand that the data, I mean, if you have the right data, that is the next stepping stone; of course you have to understand if you have also external data, and if you have enough data, as we were saying, because this is very, very important as a first step, in your machine learning journey. And you know, it's important also, to be able to translate the business question into a machine learning question. Like, for example, in supervised learning, which is an area of machine learning, we have what is called regression. Regression is a great type of model, that is great to answer questions such as, how many, how much? So if you are a retailer and you wanted to predict how much, how many sales of a specific product you're going to have in the next two weeks, so for example, the regression model is going to be a good first step for you to start your machine learning journey. So the translation of the business problem into a machine learning question, and as a consequence into a machine learning algorithm, is also very important. And then finally, I would say that you always have to make sure that you are able to deploy this machine learning model, so that your environment is ready for the deployment and what we call the operationalization part. Because this is really the moment in which we are going to allow the other people, meaning internal stakeholders, other teams in your company, to consume the machine learning model. That's the moment really in which you are going to add business value to your machine learning solution. So yeah, my suggestion for companies who want to start this journey is really to make sure that they have cleared these steps, because I think that if they have cleared these steps, then their team, their developers, their data scientists, are going to work together to build these end-to-end solutions. >> Francesca Lazzeri, thank you so much for coming on theCUBE, it was a pleasure having you. >> Thank you. Thank you. >> I'm Rebecca Knight, Stu Miniman. Stay tuned for more of theCUBE's live coverage of Microsoft Ignite. (upbeat music)
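As a toy illustration of the "how many, how much" translation Francesca describes, here is a hedged sketch of a regression model forecasting product sales; the features and figures are invented purely for illustration and are not from the interview.

```python
# Toy sketch: the "how much / how many" business question as a regression problem.
# Feature columns and numbers are made up for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: units sold last period, average price, promotional spend
X = np.array([
    [120, 9.99, 500],
    [135, 9.49, 800],
    [150, 8.99, 1200],
    [110, 10.49, 300],
])
y = np.array([128, 142, 160, 115])  # units sold the following period

model = LinearRegression().fit(X, y)

forecast = model.predict([[140, 9.29, 1000]])
print(f"Forecasted units for next period: {forecast[0]:.0f}")
```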
SUMMARY :
Francesca Lazzeri of Microsoft explains the difference between machine learning, deep learning, and AI, how interpretability tooling and representative data help guard against bias, the Azure Machine Learning designer and Python SDK updates announced at Ignite, her mentoring work with students at MIT, Harvard, and Columbia, and her advice for companies starting a machine learning journey: begin with the business question, the right data, and a plan for deployment.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Francesca Lenzetti | PERSON | 0.99+ |
Francesca Lazzeri | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Francesca | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
Massachusetts Institute of Technology | ORGANIZATION | 0.99+ |
Jeffrey Stover | PERSON | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
New York | LOCATION | 0.99+ |
26,000 people | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
Cambridge | LOCATION | 0.99+ |
Columbia University | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
second step | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
two words | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Azure Machine Learning | TITLE | 0.99+ |
Orange County Convention Center | LOCATION | 0.99+ |
Cohesity | ORGANIZATION | 0.99+ |
Harvard | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
first suggestion | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
python | TITLE | 0.98+ |
today | DATE | 0.95+ |
Azure Machine Learning Studio | TITLE | 0.95+ |
one | QUANTITY | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
idge | ORGANIZATION | 0.92+ |
Cambr | LOCATION | 0.92+ |
Azure Machine Learning python SDK | TITLE | 0.87+ |
first place | QUANTITY | 0.87+ |
Cloud | TITLE | 0.87+ |
about | DATE | 0.85+ |
a year ago | DATE | 0.8+ |
next two weeks | DATE | 0.79+ |
2019 | DATE | 0.68+ |
Ignite | TITLE | 0.62+ |
Ignite 2019 | TITLE | 0.46+ |
Ignite | COMMERCIAL_ITEM | 0.44+ |
Ignite | EVENT | 0.31+ |
Parag Dave, Red Hat | AnsibleFest 2019
>> Narrator: Live from Atlanta, Georgia, it's theCUBE, covering Ansible Fest 2019. Brought to you by Red Hat. >> Welcome back, this is theCUBE's live coverage of Ansible Fest 2019, here in Atlanta, Georgia. I'm Stu Miniman, my co-host is John Furrier and we're going to dig in and talk a bit about developers. Our guest on the program, Parag Dave, who is senior principal product manager with Red Hat. Thank you so much for joining us. >> Glad to be here, thanks for having me. >> Alright, so configuration management, really maturing into an entire automation journey for customers today, let's get into it. Tell us a little bit about your role and what brings you to the event. >> Yeah, so I actually have a very deep background in automation. I started by doing workflow automation. Which is basically about how to help businesses do their processing. So, from processing an invoice, how do I create the flows to do that? And we saw the same thing, like automation was just kind of like an operational thing and was brought on just to fulfill the business, make it faster and next thing you know it grew like, I don't know, like wildfire. I mean it was amazing and we saw the growth, and people saw the value, people saw how easy it was to use. Now, I think that combination is kicking in. So, now I'm focusing more on developers and the dev tools used at Red Hat and it's the same thing. >> You know, Parag, you know when you look in IT, you know automation is not a new term. It's like we've been talking about this for decades. Talk to us a little bit about how it's different today and you know, you talked about some of the roles that are involved here, how does Ansible end up being a developer tool? >> Yeah, you know you see, it's very interesting, because Ansible was never really targeted for developers, right? And in fact, automation was always considered like an operational thing. Well, now what has happened is, the entire landscape of IT in a company is available to be executed programmatically. Before, interfaces were only available for a few programs. Everything else you had to kind of write your own programs to do, but now with the advent of APIs, you know, with really rich CLIs, it's very easy to interact with anything and not just like in software, you can interact with the other network devices, with your infrastructure, with your storage devices. So, all of the sudden when everything became available, developers who were trying to create applications and needed environments to test, to integrate, saw that automation is a great way to create something that can be replicated and be consistent every time you run it. So, the need for consistency and replication drove developers to adopt Ansible. And we were, you know, cause they had the Ansible, we never marketed to developers and then we see that wow, they are really pulling it down, it's great. The whole infrastructure as code idea, which is one of the key pillars of DevOps, has become one of the key drivers for it, because now what you are seeing is the ability for developers to say that I can now, when I'm done with my coding and my application is ready for say a test environment or a staging environment, I can now provision everything I need right from configuring my network devices, getting the infrastructure ready for it, run my test, bring it down, and I can do all of that through code, right? So, that really drives the adoption for Ansible.
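One hedged way to picture "provision everything through code" is driving a playbook programmatically, for example with the ansible-runner library; the project directory, playbook name, and variables below are hypothetical, not something Parag names.

```python
# Hypothetical sketch: kicking off an Ansible playbook from code with ansible-runner.
# The private_data_dir layout, playbook name, and extra vars are assumptions.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/tmp/test-env",      # expects project/provision.yml inside
    playbook="provision.yml",
    extravars={"env_name": "feature-123"},
)

print("Status:", result.status)   # e.g. "successful" or "failed"
print("Return code:", result.rc)
```

A developer could call something like this from a CI job to stand up a test environment, run the tests, and then invoke a teardown playbook the same way.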
>> And the cloud scale has shown customers at scale, whether it's on-premises or cloud or Edge, is really going to be a big factor in their architecture. The other thing that's interesting, and Stu and I were talking about this on our opening yesterday, is that you have the networking at the bottom of that stack moving up the stack and you have the applications kind of wanting to move down the stack. So, they're kind of meeting in the middle in this programmability in between them. You know, Containers, Kubernetes, Microservices, is developing as a nice middle layer between those two worlds. So, the networks have to telegraph up data and also be programmable, this is causing a lot of disruption and innovation. >> Parag: Absolutely. >> Your thought on this, 'cause it's DevSecOps meets DevOps, that's DevOps. This is now all that's coming together. >> Exactly, and what's happening is, what we are seeing with developers is that there's a lot more empowerment going on. You know, before there were like a lot of silos, there were like a lot of checks and balances in place that kind of made it hard to do things. It was okay, this is what you do: developers, you write code, we will worry about all this. And now, this whole blending has happened and developers are being empowered to do it. And now, the empowerment is great and with great power comes great responsibility. So, can you please make sure that you know, what you're using is enterprise grade, that it's going to be, you know, you're not just doing things that break the environment. So, once everybody becomes comfortable that yes, by merging these things together, we're actually not breaking things. You're actually increasing speed, 'cause what's the number one driver right now for organizations? It's speed with security, right? Can I achieve that business agility, so that from the time I need a feature developed to the time I need that feature delivered in production, and my tooling supports it, I need to close that gap. I cannot have a long gap between that. So, we are seeing a lot of that happening. >> People love automation, they love AI. These are two areas that, it's a no-brainer. When you have automation, you talk AI, yeah bring it on, right? What does that mean? So, when you think about automation, the infrastructure is in the hands of the operators, but also they want to enable applications to do it themselves as well, hence the DevOps. Where is the automation focus? Because that's the number one question. How do I land, get the adoption, and then expand out across? This seems to be the form that Ansible's kind of cracked the code on. The organic growth has been there, but now as a large enterprise comes in, I got to get the developers using it and it's got to be operator friendly. This seems to be the key, >> The balance has to be there >> the key to the kingdom. >> Yeah, no you're absolutely right. And so, when you look at it, like what do developers want? So, something that is frictionless to use, very quick, very easy, and so that I don't have to spend a lot of time learning it and doing it, right? And so we saw that with Ansible. It's the fact that it's so easy to use, and most of everything is in YAML. Which is very needed for developers, right? So, we see that from their perspective, they're very eager now, and they've been adopting it, if you look at the download stats it tells you. Like there's a lot of volume happening in terms of developers adopting it.
What companies are now noticing is that, wait that's great, but now we have a lot of developers doing their own thing. So, there is now a need for a way of bringing all this together, right? So, it's like if I have 20 teams in one line of business and each team tries to do things their own way, what I'm going to end up with is a lot of repeatable, you know like a lot of work that gets repeated, I'd say it's duplicated. So, that's what we are seeing with collections, for example. What Ansible is trying to bring to the table is okay, how do I help you kind of bring things into one umbrella? And how can I help you as a developer decide that, wow I got like 100-plus NGINX roles I can use in Ansible. Well, which one do I pick? And you pick one, somebody else picks something else, somebody creates a playbook with like one separate, you know, one different thing in it, versus yours. How do we get our hands around it? And I think that's where we are seeing that happen. >> Right, from an open source standpoint. I see Red Hat, Ansible doing great stuff and for the folks in the ivory tower, the executive CXOs. They hear Ansible, glue layer, integration layer, and they go, wait a minute isn't that Kubernetes? Isn't Kubernetes supposed to provide all this stuff? So, talk about where Ansible fits in the wave that's coming with Kubernetes. Pat Gelsinger at VMware thinks Kubernetes is going to be the dial-tone, it's going to be like the TCP/IP like protocol, to use his words, but there's a relationship that Ansible has with those Microservices that are coming. Can you explain that fit? >> You hit the nail on the head. Like, Kubernetes is like, we call it the new operating system. It's like that's what everything runs on now, right? And it's very easy for us, you know from a development perspective to say, great I have my Containers, I have my applications built, I can bring them up on demand, I don't have to worry about you know having the whole stack of an operating system delivered every time. So, Kubernetes has become like the de facto standard upon which things run. So, one of the concepts that has really caught a lot of momentum, is the operator framework, right? Which was introduced with Kubernetes in the later releases, the 3.x timeframe. With that operator framework, it's very easy now for application teams. I mean, it's seen a great uptake from software vendors themselves. How do I give you my product, that you can very easily deliver on Kubernetes as a Container, but I'll give you enough configuration options, you can make it work the way you want to. So, we saw a lot of software vendors creating and delivering their products as operators. Now we are seeing that a lot of software application developers themselves, for their own applications, want to create operators. It's a very easy way of actually getting your application deployed onto Kubernetes. So, Ansible operator is one of the easiest ways of creating an operator. Now, there are other options. You can do a Golang operator, you can do Helm, but Ansible operators have become extremely easy to get going. It doesn't require additional tools on top of it. Because with the operator SDK, you know, you're going to use playbooks. Which you're used to already and you're going to use playbooks to execute your application workflows. So, we feel that developers are really going to use Ansible operators as a way to create their own operators, get it out there, and this is true for any Kubernetes world.
So, there's nothing different about, you know, an Ansible operator versus any other operator. >> With no changes to Kubernetes, but Kubernetes obviously has the concept of the microservices, which is literally non-user intervention. The apps take care of all provisioning of services. This is an automation requirement, this feeds into the automation theme, right? >> Exactly, and what this does for you is it helps you, like if you look at the operator framework, it goes all the way from basic deployments, everybody's used to, like okay, I want instantaneous deployment, automatically just does it. Automatically recognize changes that I give you in reconfiguration and go redeploy a new instance the way it should. So, how do I automate that? Like how do I ensure that my operator that is actually running my application can set up its own private environment in Kubernetes and then it can actually do it automatically when I say okay now go make one change to it. Ansible operator allows you to do that and it goes all the way into the life cycle, the full five phases of life cycle that we have in the operator framework. The last one's about autopilot. So, Autoscale, AutoRemedy itself. Your application now on Kubernetes through Ansible can do all that and you don't have to worry about coding at all. It's all provided to you because of the Ansible operator. >> Parag, in the demo this morning, I think the audience really, it resonated with the audience, it talked about some of the roles and how they worked together and it was kind of, okay the developers on this side and the developers' expectation is, oh the infrastructure's not going to be ready, I'm not going to have what I need. Leave me alone, I'm going to play my video games until I can actually do my work and then okay, I'll get it done and do my magic. Speak a little bit to how Ansible is helping to break through those silos and having developers be able to fully collaborate and communicate with all their other team members, not just be off on their own. >> Oh yeah, that's a good point, you know. And what is happening is the developers, like what Ansible is bringing to the table is giving you a very prescriptive set of rules that you can actually incorporate into your developer flows. So, what developers are now doing is that I can't create an infrastructure configuration without actually having discussions with the infrastructure folks, and the network team will have to share with me what is the ideal configuration I should be using. So, the empowerment that Ansible brings to the table has enabled cross-team communications to happen. So, there is a prescriptive way of doing things and you can create this all into an automation and then just set it up so that it gets triggered every time a developer makes a change to it. So, internally they do that. Now other teams come and say, hey how are you doing this? Right, 'cause they need the same thing. Maybe your destinations are going to be different obviously, but in the end the mechanism is the same, because you are under the same enterprise, right? So, you're going to have the same layer of network tools, same infrastructure tools. So, then teams start talking to each other. I was talking to a customer and they were telling me that they started with four teams working independently, building their own Ansible playbooks and then talking to the admins and next thing they know everybody had the full automation done and nobody knew about it.
And now they're finding out and they were saying, wow, I got like hundreds of these teams doing this. So, A, I'm very happy, but B, now I would like these teams to talk to each other more and come up with a standard way of doing it. And going back to that collections concept. That's what's really going to help them. And we feel that with the collections it's very similar to what we did with Operator Hub for OpenShift. It's where we have a certified set of collections, so that they're supported by Red Hat. We have partners who contribute theirs and then they're supported by them, but we become a single source. So, as an enterprise you kind of have this way of saying, okay now I can feel confident about what I'm going to let you deploy in my environment and everybody's going to follow the same script and so now I can open up the floodgates in my entire organization and go for it. >> Yeah, what about how are people in the community getting to learn from everyone else? When you talk about a platform it should be if I do something not only can my organization learn from it, but potentially others can learn from it. That's kind of the value proposition of SaaS. >> Yes, yes, and having the Galaxy offering out there, where we see so many users contributing, like we have close to a hundred thousand roles out there now, and that really brought the Ansible community together. It was already a strong community of contributors and everything. By giving them a platform where they can have these discussions, where they can see what everybody else is doing, you will now see a lot more happening. Like today, I think Ansible is one of the top five GitHub projects in terms of the progress that is happening out there. I mean the community is so wide-ranging, it's incredible. Like they're driving this change and it's a community made up of developers, a lot of them. And that's what's creating this amazing synergy between all the different organizations. So, we feel that Ansible is actually bringing a lot of us together. Especially, as more and more automation becomes prevalent in the organizations. >> Alright, Parag, want to give you a final word, Ansible Fest 2019, final takeaways. >> No, this is great, this is my first one and I'd never been to one before and just the energy, and just seeing what all the other partners are also sharing, it's incredible. And like I said with my background in automation, I love this, anything automation for me, I think that's just the way to go. >> John: Alright, well that's it. >> Stu: Thank you so much for sharing the developer angle with us >> Thank you very much. >> For John Furrier, I'm Stu Miniman. Back to wrap-up from theCUBE's coverage of Ansible Fest 2019. Thanks for watching theCUBE. (intense music)
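To sketch the idea behind the Ansible operator discussion above, a piece of software that keeps watching your application's desired state and reacts to changes, here is a minimal, hand-rolled version using the Kubernetes Python client. The group, version, and plural describe a hypothetical custom resource, and a real Ansible operator would map such events to playbooks rather than print statements.

```python
# Minimal sketch of the "always watching" idea behind an operator.
# The group/version/plural describe a hypothetical CRD, not a real product.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "example.com", "v1alpha1", "appstacks"

w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
    obj = event["object"]
    name = obj["metadata"]["name"]
    desired = obj.get("spec", {})
    # A real operator would compare this desired state to the actual cluster
    # state (deployments, services, volumes) and reconcile the difference.
    print(f"{event['type']} {name}: desired spec = {desired}")
```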
SUMMARY :
Parag Dave of Red Hat explains how Ansible has grown from an operations tool into a developer tool: infrastructure as code for self-service test and staging environments, collections and certified content for standardizing automation across teams, Ansible operators built with the Operator SDK for Kubernetes workloads, and a Galaxy community contributing close to a hundred thousand roles.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Pat Gelsinger | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
20 teams | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Parag Dave | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
each team | QUANTITY | 0.99+ |
Atlanta, Georgia | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
two areas | QUANTITY | 0.99+ |
one line | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
five phases | QUANTITY | 0.98+ |
two worlds | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
first one | QUANTITY | 0.97+ |
four teams | QUANTITY | 0.97+ |
Ansible Fest 2019 | EVENT | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
100 plus | TITLE | 0.95+ |
AnsibleFest | EVENT | 0.95+ |
today | DATE | 0.95+ |
single source | QUANTITY | 0.92+ |
Atlanta, Gerogia | LOCATION | 0.91+ |
DevSecOps | TITLE | 0.91+ |
Razor 3.x. | TITLE | 0.91+ |
Operator Hub | ORGANIZATION | 0.88+ |
playbooks | TITLE | 0.86+ |
one change | QUANTITY | 0.83+ |
hundred thousand | QUANTITY | 0.83+ |
one question | QUANTITY | 0.8+ |
this morning | DATE | 0.8+ |
SDK | TITLE | 0.8+ |
theCUBe | ORGANIZATION | 0.8+ |
DevOps | TITLE | 0.79+ |
Kubernetes | ORGANIZATION | 0.79+ |
top five | QUANTITY | 0.73+ |
decades | QUANTITY | 0.7+ |
Parag | ORGANIZATION | 0.62+ |
pillars | QUANTITY | 0.61+ |
devOps | TITLE | 0.6+ |
Rob Szumski, Red Hat OpenShift | KubeCon + CloudNativeCon EU 2019
>> Live from Barcelona, Spain. It's theCUBE! Covering KubeCon, CloudNativeCon, Europe 2019. Brought to you by Red Hat, the Cloud Native Computing Foundation, and Ecosystem Partners. >> Hi, and welcome back. This is KubeCon, CloudNativeCon 2019 here in Barcelona. 7700 in attendance according to the CNCF foundation. I'm Stu Miniman and my co-host for this week is Corey Quinn. And happy to welcome back to the program, a CUBE alum, Rob Szumski, who's the Product Manager for Red Hat OpenShift. Rob, thanks so much for joining us >> Happy to be here. >> All right, so a couple of weeks ago, we had theCUBE in Boston. You know, short drive for me, didn't have to take a flight as opposed to... I'm doing okay with the jet lag here, but Red Hat Summit was there. And it was a big crowd there, and the topic we're going to talk about with you is operators. And it was something we talked about a lot, something about the ecosystem. But let's start there. For our audience that doesn't know, what is an operator? How does it fit into this whole cloud-native space in this ecosystem? >> (Corey) And where can you hire one? >> (laughs) So there's software programs first of all. And the idea of an operator is everything it takes to orchestrate one of these complex distributed applications, databases, messaging queues, machine learning services. They all are distinct components that all need to be life-cycled. And so there's operational expertise around that, and this is something that might have been in a bash script before, you have a Wiki page. It's just in your head, and so it's putting that into software so that you can stamp out mini copies of that. So the operational expertise from the experts, so you want to go to the folks that make MongoDB for Mongo, for Redis, for CouchBase, for TensorFlow, whatever it is. Those organizations can embed that expertise, and then take your user configuration and turn that into Kubernetes. >> Okay, and is there automation in that? When I hear the description, it reminds me a little bit of robotic process automation, or RPA, which you talk about, how can I hire them? RPA is, well there's certain jobs that are rather repetitive and we can allow software to do that, so maybe that's not where it is. But help me to put it into the >> No, I think it is. >> Okay, awesome. >> When you think about it, there's a certain amount of toil involved in operating anything and then there's just mistakes that are made by humans when you're doing this. And so you would rather just automate away that toil so you can spend your human capital on higher level tasks. So that's what operators are all about. >> (Stu) All right. Great. >> Do you find that operators are a decent approach to taking things that historically would not have been well-suited for autoscaling, for example, because there's manual work that has to happen whenever a node joins or leaves a swarm. Is that something operators tend to address more effectively? Or am I thinking about this slightly in the wrong direction? >> Yeah, so you can do kind of any Kubernetes event you can hook into, so if your application cares about nodes coming and leaving, for example, this is helpful for operators that are operating the infrastructure itself, which OpenShift has under the hood. But you might care about when new name spaces are created or this pod goes away or whatever it is. You can kind of hook into everything there. >> So, effectively it becomes a story around running stateful things in what was originally designed for stateless containers.
Yeah, that can help you because you care about nodes going away because your storage was on it, for example. Or, now I need to re-balance that. Whatever that type of thing is, it's really critical for running stateful workloads. >> Okay, maybe give us a little bit of context as to the scope of operators and any customer examples you have that could help us add a little bit of concreteness to it. >> Yeah, they're designed to run almost anything. Every common workload that you can think about on an OpenShift cluster, you've got your messaging queues. We have a product that uses an operator, AMQ Streams. It's Kafka. And we've got folks that heavily use a Prometheus operator. I think there's a quote that's been shared around about one of our customers, Ticketmaster. Everybody needed some container native monitoring and everybody could figure out Prometheus on their own. Or they could use the operator. So, they were running, I think 300-some instances of Prometheus in dev and staging and this team, that team, this person just screwing around with something over here. So, instead of being experts in Prometheus, they just use the operator and then they can scale out very quickly. >> That's great because one of the challenges in this ecosystem, there's so many pieces of it. We always ask, how many companies need to be expert on not just Kubernetes, but any of these pieces. How does this tie into the CNCF, all the various projects that are available? >> I think you nailed it. You have to integrate all this stuff all together and that's where the value of something like OpenShift comes at the infrastructure layer. You got to pick all your networking and storage and your DNS that you're going to use and wire all that together and upgrade that. Lifecycle it. The same thing happens at a higher level, too. You've got all these components, getting your Fluentd pods down to operating things like Istio and service meshes, serverless workloads. All this stuff needs to be configured and it's all pretty complex. It's moving so fast, nobody can be an expert. The operator's actually the expert, embedded from those teams, which is really awesome. >> You said something before we got started. A little bit about a certification program for operators. What is that about? >> We think of it as the super set of our community operators. We've got the TensorFlow community, for example, that curates an operator. But, for companies that want to go to market jointly with Red Hat, we have a certification program that takes any of their community content, or some of their enterprise distributions, and makes sure that it's well-tested on OpenShift and can be jointly supported by OpenShift and that partner. If you come to Red Hat with a problem with a MongoDB operator, for example, we can jointly solve that problem with MongoDB and ultimately keep your workload up and keep it running. We've got that times a bunch of databases and all kinds of servers like that. You can access those directly from OpenShift which is really exciting. One-click install of a production-ready Mongo cluster. You don't need to dig through a bunch of documentation for how that works. >> All right, so Rob, are all of these specific only to OpenShift, or will they work with flavors of Kubernetes? >> Most of the operators work just against the generic Kubernetes cluster. Some of them also do hook into OpenShift to use some of our specialized security primitives and things like that.
That's where you get a little bit more value on OpenShift, but you're just targeting Kubernetes at the end of the day. >> What are you seeing customers doing with this specifically? I guess, what user stories are you seeing that are validating that this is the right direction to go in? >> It's a number of different buckets. The first one is seeing folks running services internally. You traditionally have a DBA team that maybe runs the shared database tier, and folks are bringing that to the container-native world from the VMs that they're used to, using operators to help with that, and so now it's self-service. You have a dedicated cluster infrastructure team that runs clusters and gives out quota. Then, you're just eating into that quota to run whatever workloads you want in an operator format. That's kind of one bucket of it. Then, you see folks that are building operators for internal operations. They've got deep expertise on one team, but if you're running any enterprise today, especially like a large-scale e-commerce shop, there's a number of different services. You've got caching tiers and load balancing tiers. You've got front-ends, you've got back-ends, you've got queues. You can build operators around each one of those, so that for those teams, even when they're sharing internally, you know, hey, where's the latest version of your stack? Here's the operator, go to town. Run it in staging, QA, all that type of stuff. Then, lastly, you see these open source communities building operators, which is really cool. Something like TensorFlow, that community curates an operator to get you one consistent install, so everyone's not innovating on 30 different ways to install it and you're actually using it. You're using the high level stuff with TensorFlow. >> It's interesting to lay it out. Some of these, okay, well, a company is doing that because it's behind something. Others, you're saying, it's a community. It reminds me of Red Hat's long history of helping to give, if you will, adult supervision for all of these changes that are happening in the world out there. >> It's a fast-moving landscape, and some tools that we have, like our Operator SDK, are helping to tame some of that. So, you can get quickly up and running building an operator, whether you are one of those communities, you are a commercial vendor, you're one of our partners, you're one of our customers. We've got tools for everybody. >> Anything specific in the database world? Is that something we're seeing, that Cambrian explosion in the database world? >> Yeah, I think that folks are finally wrapping their heads around the idea that Kubernetes is for all workloads. And to make people feel really good about that, you need something like an operator that's got this extremely well-tested code path for what happens when these databases do fail, how do I fail it over? It wasn't just some person that went in and made this. It's the experts, the folks that are committing to MongoDB, to CouchBase, to MySQL, to Postgres. That's the really exciting thing. You're getting that expertise kind of as an extension of your operations team. >> For people here at the show, are there sessions about operators? What's the general discussion here at the show for your team? >> There's a ton. Even too many to mention. There are sessions from a bunch of different partners and communities that are curating operators, talking about best practices for managing upgrades of them, users, all that kind of stuff.
I'm going to be giving a keynote, kind of an update about some of stuff we've been talking about here later on this evening. It's all over the place. >> What do you think right now in the ecosystem is being most misunderstood about operators, if anything? >> I think that nothing is quite misunderstood, it's just wrapping your head around what it means to operate applications in this manner. Just like Kubernetes components, there's this desired state loop that's in there and you need to wrap your head around exactly what needs to be in that. You're declarative state is just the Kubernetes API, so you can look at desired and actual and make that happen, just like all the Kub components. So, just looking at a different way of thinking. We had a panel yesterday at the OpenShift Commons about operators and one of the questions that had some really interesting answers was, What did you understand about your software by building an operator? Cause sometimes you need to tease apart some of these things. Oh, I had hard coded configuration here, one group shared that their leader election was not actually working correctly in every single incidences and their operator forced them to dig into that and figure out why. So, I think it's a give and take that's pretty interesting when you're building one of these things. >> Do you find that customers are starting to rely on operators to effectively run their own? For example, MongoDB inside of their Kubernetes clusters, rather than depending upon a managed service offering provided by their public cloud vendor, for example. Are you starting to see people effectively reducing public cloud to baseline primitives at a place to run containers, rather than the higher level services that are starting to move up the stack? >> A number of different reasons for that too. You see this for services if you find a bug in that service, for example, you're just out of luck. You can't go introspect the versions, you can't see how those components are interacting. With an operator you have an open source stack, it's running on your cluster in your infrastructure. You can go introspect exactly what's going on. The operator has that expertise built in, so it's not like you can screw around with everything. But, you have much more insight into what's going on. Another thing you can't get with a cloud service is you can't run it locally. So, if you've got developers that are doing development on an airplane, or just want to have something local so it's running fast, you can put your whole operator stack right on your laptop. Not something you can do with a hosted service which is really cool. Most of these are opens source too, so you can go see exactly how the operator's built. It's very transparent, especially if you're going to trust this for a core part of the infrastructure. You really want to know what's going on under the hood. >> Just to double check, all this can run on OpenShift? It is agnostic to where it lives, whether public cloud or data center? >> Exactly. These are truly hybrid services, so if you're migrating your database to here, for example, over now you have a truly hybrid just targeting Kubernetes environment. You can move that in any infrastructure that you like. This is one of the things that we see OpenShift customers do. Some of them want to be cloud-to-cloud, cloud-to-on-prem, different environments on prem only, because you've got database workloads that might not be leaving or a mainframe you need to tie into, a lot of our FSI customers. 
Operators can help you there where you can't move some of those workloads. >> Cloud-on-prem makes a fair bit of sense to me. One thing I'm not seeing as much of in the ecosystem is cloud-to-cloud. What are you seeing that's driving that? >> I think everybody has their own cloud that they prefer for whatever reasons. I think it's typically not even cost. It's tooling and cultural change. And, so you kind of invest in one of those. I think people are investing in technologies that might allow them to leave in the future, and operators and Kubernetes being one of those important things. But, that doesn't meant that they're not perfectly happy running on one cloud versus the other, running Kubernetes on top of that. >> Rob, really appreciate all the updates on operators. Thanks so much for joining us again. >> Absolutely. It's been fun. >> Good luck on the keynote. >> Thank you. >> For Corey Quinn, I'm Stu Miniman, back with more coverage two days live from wall to wall here at KubeCon CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching.
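Circling back to the desired-state loop Rob describes in this segment, the heart of an operator is a reconcile step that compares what a custom resource declares with what is actually running and closes the gap. The toy sketch below only shows that shape: the two dictionaries stand in for reads and writes against the real Kubernetes API, and none of this is a real operator framework.

import time

# In-memory stand-ins for what a real operator would read from the Kubernetes API.
desired_state = {"replicas": 3, "version": "4.0"}   # what the custom resource asks for
actual_state = {"replicas": 1, "version": "3.6"}    # what is actually running

def reconcile(desired, actual):
    """One pass of the desired-versus-actual comparison."""
    if actual["replicas"] != desired["replicas"]:
        print(f"scaling {actual['replicas']} -> {desired['replicas']} replicas")
        actual["replicas"] = desired["replicas"]    # a real operator would create or delete pods
    if actual["version"] != desired["version"]:
        print(f"upgrading {actual['version']} -> {desired['version']}")
        actual["version"] = desired["version"]      # a real operator would run a rolling upgrade

def control_loop(iterations=3):
    for _ in range(iterations):
        reconcile(desired_state, actual_state)
        time.sleep(1)   # real operators react to watch events rather than polling on a timer

if __name__ == "__main__":
    control_loop()

The hard-won expertise mentioned above, failover behavior, leader election, upgrade ordering, lives inside those reconcile branches in a production operator.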
Jason Woosley, Adobe | Adobe Summit 2019
>> Narrator: Live from Las Vegas It's The Cube covering Adobe Summit 2019 brought to you by Adobe. >> Hello everyone, welcome back to The Cube's live coverage here in Las Vegas for Adobe Summit 2019. I'm John Furrier with Jeff Frick. Our next guest is Jason Woosley, Vice President of Commerce Product and Platform for Adobe, part of the big keynote display this morning and news on the announcement of the Commerce Cloud, formerly Magento. Congratulations. Welcome to The Cube. >> Hey, thanks so much for having me. It's great to be here. >> Love the commerce angle because now that's a big part of a journey, people buy stuff. >> Absolutely. >> That's the most important, one of the most important parts. >> So when you think about an experience end to end, right it culminates hopefully in a transaction, and that's one of the pieces that makes the Magento acquisition fit so well into the Adobe family. We actually kind of finished that last mile of the transition getting to actual ownership. >> You know, I love this event because it feels a little like Woodstock, as Steve Lucas said on stage because you've got the best of big data all the intoxicating conversations and discussions. You get the best of the cloud, all the geek stuff under the hood. >> Oh, yeah. >> Then you've got the applications which are super relevant. So, it's really kind of, I love the content, love that you guys are in the middle of, I think, a great wave of innovation coming. But if you look at the big picture, you're seeing the same kind of themes, latency, relevance. I mean, these are tech terms used on your product in commerce a lot different than other things. So, you start to see these geek terms kind of weaving into this new cloud. >> I think you're really starting to see a convergence of some of the terminology and what really matters and that's the customer experience, right. It's really about answering what the customer wants and getting that is, that's the magic. >> It's accepting the fact that it's a disjointed journey. I love the journey conversation but it's not the straight pipe like it used to be. You're in and out, you're looking on a website, you're jumping over from a tweet, you know, there's so many kind of in's and out's, in's and out's, in's and outs before you get to that buy. >> And consumers are so sophisticated now, right. I mean they absolutely take advantage of all of those channels and that's why it's so important for merchants who are trying to be relevant. You've got to be present at every point where your customers are and it's a tough thing to do because there's just a proliferation of channels, I mean, you know, we've got digital kiosks, we've got buy online pick up in store, all these omnichannels operations coming together now. So it becomes even more important for merchants to make that investment and make sure that not only are they at the place where their customers are but they're there with a relevant and personalized message. >> Jason, I've got to ask you a question. I bring this up in a lot of these kind of user experience conversations. When you have new things coming on the market that are hard to operationalize out of the gate. It takes some time. We're starting to see that with you guys that built the platform. People are starting to operationalize new capabilities. But on the consumer side, the user side, expectations become the new experience. It's kind of a cliche in the tech world. What are some of those experiences that you're seeing that's becoming the new expectations. 
To your point about, the old way, I can smell a marketing funnel a mile away. I'm trying to buy something and all this other distractions that are not relevant to me are there. So you start to see some frustration but now users expect something new. What is that expectation that's converting it to experience? >> It's across the board and expectation are sky high, right. And it seems like every time we see something innovative you think about Amazon Prime, right, two day shipping. That was crazy back in the day and now, two day shipping is considered standard shipping, right. If you wanna be fast, you're doing same day. And that kind of, it's so hard to keep up with that pace of innovation and it happens all over the place. It's not just in logistics. People are expecting to be able to take advantage of omnichannel operations, right. Millennials especially. 60% of them really prefer to be able to have a tangible interaction with the product before they buy it. But they still want to buy online. So now they do buy online pick up in store or click and collect, they call it in Europe. And it's just become a huge fad. We've seen a 250% increase of the largest retailers of buy online pick up in store in the last year. Absolutely crazy. >> It's pretty wild when Best Buy gets on stage and says, we're not a brick and mortar retailer. (laughing) >> It actually changes the game, right. What else is interesting though is these brick and mortars that have an online presence, they actually have a distinct advantage because of that tangibility, right. You've got the opportunity to do all of your shopping online but you've also got a place to go do showcasing and actually interact with some of those especially more high tech tools. >> Right. >> You guys have been out front on the Magento side. We covered your event last year for the acquisition. And a couple things popped out at me that I want to get your reaction to now. One is obviously the role of the community. But as you started getting into the cloud kind of play the economics are changing, too, right. So you have community, economics and then large scale. These are new table stakes. So what's your reaction to that? How is Adobe and how are your customers adjusting to this new normal? Your thoughts on this shift? >> Yeah, I think that they adjust faster than we expect them to. It's really interesting because as you see these demands for things like cloud operations. Really, that's taking a whole set of responsibilities away from the merchant and allowing a single vendor to provide that as a service and we're seeing that again and again, right. This service based economy that's just becoming much, much more prevalent. What it means for our community and I'm glad you brought that up because our commerce community is the largest in the world, it's highly engaged. We have a tremendous amount of participation from those guys. And they're actually helping lead the way. They help merchants feel good about adopting new technologies. They're also incredibly innovative and they take our product and do things that we would never have thought of. >> They provide product feedback, too, the developers, that creates a nice fly wheel. >> It is a great fly wheel. >> It's a great use case. Congratulations, you guys done some nice work there. >> Oh, thanks, thanks. >> And Adobe's certainly gonna get the benefits of that. 
The other question I wanna ask you is something I noticed on digital over the years is that, it's gotten more prevalent now that everyone's connected. You know, the old days of buying tech. Let's buy this great project, we'll build it out and multiple year payback and everyone nerds out. It's like a project and they have fun doing it. And then, like, what was the value. When the value today is about money. When people lose money, the friction, all those other kinds of coolness, the shiny new toy, it goes away. >> Yeah, it falls away. >> You're in the middle of that. You see more of that now. People whose businesses are on the line. Security breach or revenue. >> Jason: Yeah. >> I mean, the optimization around the new way just goes right to the problem right there. >> The very best way to tackle that is an iterative experimental way. You've go to be able to make small bets. Learn from those bets and then pivot. This concept that we can take an idea, go into our back rooms and code it for three years and come back out with something that meets the market, it's a fallacy. It's never gonna work, right? So you've gotta start delivering shippable increments much faster, smaller pieces and then make sure that you've got that feedback loop closed so that you can actually respond to your customers. >> Jeff: Right, the other piece which you just talked on briefly but I wanna unpack it in reference to what you just said, two big words. Open source and ecosystem. >> Jason: Yeah. >> And as you said, you can't just go in the back room. Even if you knew the product, you can't necessarily go in the back room and build it yourself. >> Jason: Yeah. >> Fundamentally, believe that not all the experts are in your four walls and that there's, by rule, a lot more outside and leveraging that capability is really a game-changer. >> Yeah, absolutely, I mean, we have three hundred thousand developers that call themselves Magento engineers and don't take a paycheck from Adobe. It's phenomenal what they're able to do and they help us move very, very quickly. We saw last year when the Amazon patent expired for one-click checkout on the day that it expired one of our community members created a pool request that made every Magento store able to take advantage of it. >> John: They were probably waiting right there on that clock. >> Oh no, they were waiting. (John laughing) Because the licensing fees were extortion. >> That's innovation. >> It is. >> That's our example of community driven innovation. >> And that's a great place to go get that, right. Within your four walls, you've got lots of expertise but you always end up with some blinders on. We've got profit margins to go chase. We've got all kinds of good business things to go do. The community, however, completely unfettered. They've got the ability to go try all kinds of cool stuff. >> Two questions on that thread. One is community. A lot of people try the buzzword. Hey, let's get a community. You can't buy a community. You've got to earn it. Talk about that dynamic and then talk about how Adobe's reacted to Magento's community because Adobe's pretty open. >> Yeah. >> They're creatives. I don't think they'd be anti-community. They have developers. They got a bunch of community themselves. So, community, buying a community versus earning it, and then the impact of Magento's community to Adobe. >> You cannot buy it. 100% you cannot buy a community. And you have to deserve it. 
And really, you have to think about yourselves as custodians of a community rather than, I mean, we're members. We used to have this saying, we are Magento. Everybody inside Magento, in the ecosystem, our partners, our developers. Everybody is part of that solution so trying to own it, trying to exert control over it, it's a recipe for not having it at all, right. So you have to be very cautious and it really is a custodianship. It's an honor and it's a privilege and you have to kind of take it seriously. >> If you get it right, the benefits are multi-fold. >> That's exactly it. >> Now, Adobe, obviously they have, we heard and we see that they're open to that and working with it. >> Adobe has been terrific and it was, I think, one of the biggest fears from our community as acquisition unfolded was hey, Adobe, big corporate company not a lot of open source projects. They've got some but their core isn't about open source and what was gonna happen to our community as we came in. It's been absolutely terrific because Adobe has been absolutely investing and making sure that we continue to be terrific custodians of this community and in fact, they're trying now to expand that community to the rest of their products. They would love to have our community members that are able to go out and innovate so rapidly, do so across the entire Adobe portfolio. >> Well, it's interesting, too. If you have a platform play in the cloud scale and some of these cross functional connection tissue points that's recipe for robust ecosystem development. >> Exactly. >> Because they means there's white space, there's opportunities to build on top of. That's a platform. >> Right, and you will see innovation and ingenuity from that you'll never expect. It's just phenomenal. >> So I'm curious to get your take on a specific feature I wanna dive into which is dynamic pricing. Right, hotels have been doing dynamic pricing forever. You give the authorization to the kid working at the front counter if it's 11 o'clock, you got a open room take whatever walks in the door. >> Jason: Yeah. >> To the airline, it's got very sophisticated but most companies haven't really be able to excuse dynamic pricing. Just curious, when you bring in capabilities that you get now with the Adobe suite and the data now that you have around the customer and the data that you now have around the context, I mean, are we gonna see much better execution of things like dynamic pricing. >> We're gonna see democratization of a lot of those things that were typically reserved to the very, very big industries, right. I think you're looking at airlines, they did a great job. But they invested hundreds of millions of dollars into systems to go do that. Now, with things like Sensei and artificial intelligence our machine learning capabilities, we can actually bring those capabilities to small merchants and everyday folks to go out and do those experiments with your pricing and understand where you have elasticity and where you don't. Once you have that information, you're making much better decisions across the board for your business. >> And that's actually the benefits of the Magento platform and scale that you have. So the question is, as you guys continue to get this cloud scale going, what are some of the platforms priorities for you guys? What product areas you looking at? What white spaces are gonna leap for the ecosystem? Can you share a little insight into what you guys are thinking? >> Yeah, I mean, one, we try to open everything to the ecosystem. 
There's really not a lot of advantage for us to have anything that's super closed off and secret sauce. We try to make sure that everything is available and so what you'll see is investments in things like SDK's. An SDK is software development kit basically lets you use any language, any tool that you're comfortable with to go ahead and integrate, extend and contribute to our core capabilities. You'll see us continue to invest in making sure that everybody that wants to participate has a very, very easy path to do so. >> And in terms of the developer program, you mention SDK, what's your impression of that? Can you give an update? We're not really familiar with that much, we're learning Adobe. What do you guys have for developer programs within Adobe? >> Well, it is terrific. We have a project called Adobe I/O that actually does a terrific job at sort of standardizing the API and interfaces between all of the different components within the digital experience suite. So, you'll continue to see us investing in that. Certainly, commerce is gonna start participating in that Adobe I/O model and that's going to make it even more broadly available to these great folks. >> Even one of the things we had on The Cube today was a historic moment. We been doing this for 10 years, hundreds of shows a year. We had our first guest on, one of your customers from Metlite. His title was Marketing CIO and I'm like, okay. He's part of the global technology operations team of Metlite. But I think the bigger story there is that we think we'll be a bigger trend than just one-off. We think, we're seeing the connection between the IT world, data, developers, applications coming together where marketing is like a CIO. >> And it's exactly right. We look at the CMO and the CIO as two sides of the same coin. And more often than not they have the same objectives. They're coming at it from a slightly different perspective and so you really do end up having to marry the message so that it resonates not only with the IT folks and usually that's about cloud processes, ease of use, ease of deployment, low cost operation and then on the marketing side it's really about feature availability and visual merchandising and being able to bring their great products to life. >> And an interesting quote, he said, what's it like, to be a marketing CIO, share to others who might to be that. He goes, well, I'm kind of a matchmaker and a translator. (laughing) >> I think that's pretty good a way to put it. Yeah, that makes good sense. >> He puts projects together, translating jargon to business benefits. Emphasis was on the business. You got to know the business. We had Dollar Shape Club on earlier, another one of your Adobe's customers. They were like, no, we need to know the business. It's about the data, data processing, the data systems, business. It has to be blended. It's the art and science of business and technology. >> Yeah, the only get that right when you put the customer right in the middle. You have to build all of those business processes and all of those systems around what that customer's looking for. >> So I'm just curious, Jason, what's changed over the last couple of years, 'cos we've been talking about the 360 view of the customer since, I don't when, but a while. >> A while, yeah. >> And we've been talking about omnichannel marketing and touching the customer for a while but it seems like we've hit a tipping point. 
Maybe I'm misreading the tealeaves but you know, what are the kind of critical factors that are making that much more a reality than just talk it was a couple years back? >> Well, on omnichannel, we're certainly seeing a maturity, an understanding of what it takes to do omnichannel. It's not just a commerce operation. omnichannel actually stretches back into your supply chain. To be able to really think about the way you deliver to customers as a single channel. Your supply chain has to be highly flexible. Your logistic capabilities have to be extremely flexible and they have to be able to tuned for the things that are important to your customers. Either speed of delivery or cost of delivery. All of those kinds of things. In the omnichannel space, I think we're finally starting to see the maturity of, okay, how do we make these things real. And that's critically important. And the other one. >> 360, 360 view of the customer. >> 360 view of the customer. Almost the same thing there, right. We're finally seeing the technology start to catch up and the big challenge there was we always had one view or the other. You either had a behavioral view of your customer, how they interact with your content. Or you had this great transactional view, the dollar and cents behind a relationship. Now, we're starting to see companies especially like Adobe, that have made these incredible investments to bring those two houses of data together, and that really starts to tell the full story. Again, going back to that customer journey, you need to be able to observe that entire journey in order to make those kinds of decisions. >> Jason, I wish we had more time. I wanna get one more question. I know we might wanna break here. Maybe we can follow up as a separate conversation in Palo Alto. You know, having a digital footprint you hear that buzzword, I'll get a digital footprint out there. It makes a lot of sense but a world that has been dominated by silos, it's hard to have footprint when you have siloed entities. So, in your mind, your reaction between something that's foundational and then data silos. Maybe silos could be okay at the app level but what's the foundational footprint? I mean, foundation's everything. >> Jason: It is. >> Without a foundation, you clearly can't build on. >> Yeah, and we talked a little bit about the Adobe experience platform this morning. Eric Shantenu and Anje will come on and talk about, we've got this amazing capability now to really take that data, standardize it and make it available for all kinds of systems and processes. And I think that's where you're going to see the real foundation for all of these siloed efforts. It's gonna be in this kind of common data understanding, what they call a XDM. >> And customers got silos, too. They've got agencies. All kinds of things out there. >> Absolutely. >> Data everywhere. Jason, thanks for coming on. We really appreciate it. >> Hey, guys, I really appreciate it. Thanks so much. >> Jason Woosley on The Cube here at Adobe Summit 2019. I'm John Furrier. Day one of two days of wall-to-wall live coverage. Stay with us for more coverage after this short break. (electronic music)
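On the dynamic-pricing thread earlier in this segment, the core of understanding where you have elasticity and where you don't can be sketched very simply. The numbers below are invented, and real tooling, whether built on Sensei or anything else, would be far richer; this only shows how a price-elasticity estimate falls out of a small pricing experiment.

import math

observations = [        # (price, units_sold) pairs from small price tests; made-up data
    (10.00, 520),
    (11.00, 470),
    (12.00, 405),
    (13.00, 360),
]

# Ordinary least squares on log(price) versus log(quantity); the slope is the elasticity.
xs = [math.log(price) for price, _ in observations]
ys = [math.log(units) for _, units in observations]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
denominator = sum((x - mean_x) ** 2 for x in xs)
elasticity = numerator / denominator

print(f"estimated price elasticity: {elasticity:.2f}")

An estimate below -1 means demand in that range is elastic, so a price increase would likely cost more in volume than it gains in margin; the merchant experiments described above are about finding where that line sits.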
Jason Edelman, Network to Code | Cisco Live EU 2019
>> Live, from Barcelona Spain, it's theCUBE, covering Cisco Live! Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to theCUBE, here at Cisco Live! 2019 in Barcelona, Spain, I'm Stu Miniman, happy to welcome to the program a first-time guest, but someone I've known for many years, Jason Edelman, who is the founder of Network to Code. Jason, great to see you, and thanks for joining us. >> Thank you for having me, Stu. >> Alright, Jason, let's first, for our audiences, this is your first time on the program, give us a little bit about your background, and what led to you being the founder of Network to Code. >> Right, so my background is that of a traditional network engineer. I've spent 10+ years managing networks, deploying networks, and really, acting in a pre-sales capacity, supporting Cisco infrastructure. And it was probably around 2012 or 13, working for a large Cisco VAR, that we had access to something called Cisco onePK, and we kind of dove into that as the first SDK to control network devices. We have today iPhone SDKs, SDKs for Android, to program for phone apps, this was one of the first SDKs to program against a router and a switch. And that, for me, was just eye-opening, this is kind of back in 2013 or so, to see what could be done to write code in Python, Seer, Java, against network devices. Now, when this was going on, I didn't know how to code, so I kind of used that as the entrance to ramp up, but that was, for me, the pivot point. And then, the same six-week period, I had a demo of Puppet and Ansible automated networking devices, and so that was the pivot point where it was like, wow, realizing I've spent a career architecture and designing networks, and realizing there's a challenge in operating networks day to day. >> Yeah, Jason, dial back. You've some Cisco certifications in your background? >> Sure, yes, CCIE, yeah. >> Yeah, so I think back, when this all, OpenFlow, and before we even called it Software-Defined Networking, you were blogging about this type of stuff. But, as you said, you weren't a coder. It wasn't your background, you were a network guy, and I think the Network to Code, a lot of the things we've been looking at, career-wise, it's like, does everyone need to become coders? How will the tools mature? Give us a little bit about that journey, as how you got into coding and let's go from there. >> Yeah, it was interesting. In 2010, I started blogging OpenFlow-related, I thought it was going to change the world, saw what NICRO was doing at the time, and then Big Switch at the time, and I just speculated and blogged and really just envisioned this world where networks were different in some capacity. And it took a couple years to really shed light on management and operations of networking, and I made some career shifts. And I remember going back to onePK, at the time, my manager then, who is now our CEO at Network to Code, he actually asked, well, why don't you do it? And it was just like, me? Me, automate our program? What do you mean? And so it was kind of like a moment for me to kind of reflect on what I can do. Now, I will say I don't believe every network engineer should know how to code. That was my on-ramp because of partnership with Cisco at the time, and learning onePK and programming languages, but that was for me, I guess, what I needed as that kick in the butt to say, you know what? I am going to do this. 
I do believe in the shift that's going to happen in the next couple years, and that was where I kind of just jumped in feet first, and now we are where we are. >> Yeah, Jason, some great points there. I know for myself, I look at, Cisco's gone through so much change. A year ago, up on stage, Cisco was talking about its future as a software company. You might not even think of us as networking first, you will talk to us about software first. So that initial shift that you saw back in 2010, it's happening. It's a different form than we might have thought originally, and it's not necessarily a product, but we're going through that shift. And I like what you said about how not everybody needs to code, but it's this change in paradigms, and what we need to do is different. You've got some connections, we're here in the DevNet Zone. I saw, at the US show in Orlando last year, Network to Code had a small booth, there were a whole bunch of startups in that space. Tell us how you got involved in DevNet, really since the earliest days. >> Yes, since the early days, it was really pre-DevNet. So with the emergence of DevNet, I've seen it grow over the last couple years of Cisco Live! And for us, given what we do at Network to Code as a network-automation-focused company, we see DevNet in use by our clients, through DevNet solutions and products, things like, as mentioned yesterday on a panel, DevNet's always-on sandboxes. One of the biggest barriers we've seen with our clients is getting access to the right lab gear to get started automating. So DevNet has these sandboxes always on to hit a Nexus API or Catalyst API, right? Things like that. And there's really a very good, structured learning path to get started through DevNet, which is usually where we intersect in our client engagements, so it's kind of like post-DevNet, you're really showing what's possible, and then we'll get in and craft some solutions for our clients. >> Yeah, take us inside some of your clients, if you can. Are most of them hitting the API instead of the CLI now when they're engaging? >> Yeah, it's actually a good question. Not usually talked about, but the reality is, APIs are still very new. And so we actively test a lot of the newer APIs from Cisco, as an example. IOS XE has some of the best APIs that exist around RESTCONF, NETCONF, modeled from the same YANG models, and great APIs. But the truth is that a lot of our clients, large enterprises that have been around for 20+ years, the install base is still largely not API-enabled. So a lot of the automation that we do is definitely SSH-based. And when you look at what's possible with platforms, if it is something custom in Python, or even Ansible off the shelf, a lot of the integrations are hidden from the user, so as long as we're able to accomplish the goal, that's the most important thing right now. And our clients' leadership sometimes cares, and it's true, right? You want the outcome. And initially, it's okay if we're not using the API, but once we do flip that switch, it does provide a bit more structure and safety for automating. But the install base is so large right now that, to automate, you have to use SSH, and we don't believe in waiting 'til every device is API-enabled, because it'll just take a while to turn that base. >> Alright, Jason, a major focus of the conference this year has been around multi-cloud. How's that impacting your business and your customers? >> So, it's in our path as a company.
Right now, there's a lot of focus around multi-cloud and data center, and the truth is, we're doing a lot of automation in the Campus networking space. Right, automating networks to get deployed in wiring closets and firewalls and load balancers and things like that. So from our standpoint, as we start planning with our clients, we see the services that we offer really port over to multi-cloud and making sure that with whatever automation is being deployed today, regardless of toolset, and look at a tool chain to deploy, if it's a CI/CD Pipeline for networking, be able to do that if you're managing a network in the Campus, a data center network, or multi-cloud network, to make sure we have a uniform-looking field to operations, and doing that. >> Alright, so Jason, you're not only founder of your company, you're also an author. Maybe tell us about the, I believe it's an update, or is it a new book, that recently got out. >> Yes, I'm a co-author of a book with Matt Oswalt and Scott Lowe, and it's an O'Reilly book that was published last year. And look, I'm a believer in education, and to really make a change and change an industry, we have to educate, and I think the book, the goal was to play a small part in really bringing concepts to light. As a network engineer by trade, there's fundamental concepts that network engineers should be aware of, and it could be basics and a lot of these, it could be Python or Jinja templating in YAML and Git and Linux, for that matter. It's just kind of providing that baseline of skills as an entrance into automation. And once you have the baseline, it kind of really uncovers what's possible. So writing the book was great. Great opportunity, and thank you to Matt and Scott for getting involved there. It really took a lot of the work effort and collaborated with them on it. >> Want to get your perception on the show, also. Education, always a key feature of what happens at the show. Not far from us is the Cisco bookshop. I see people getting a lot of the big Cisco books, but I think ten years ago, it was like, everybody, get my CCIE, all my different certifications updated, here. Here in the DevNet Zone, a lot of people, they're building stuff, they're building new pieces, they're playing in the labs, and they're doing some of these environments. What's your experience here at the show? Anything in particular that catches your eye? >> So, I do believe in education. I think to do anything well, you have to be educated on it. And I've read Cisco Press books over the years, probably a dozen of them, for the CCIE and beyond. I think when we look at what's in DevNet, when we look at what's in the bookstore, people have to immerse themselves into the technology, and reading books, like the learning labs that are here in the DevNet Zone, the design sessions that are right behind us. Just amazing for me to have seen the DevNet Zone grow to be what it is today. And really the goal of educating the market of what's possible. See, even from the start, Network to Code, we started as doing a lot of training, because you really can't change the methodology of network operations without being aware of what's possible, and it really does kind of come back to training. Whatever it is, on-demand, streaming, instructor-led, reading a book. Just glad to see this happen here, and a lot more to do around the industry, in the space around community involvement and development, but training, a huge part of it. 
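To make the workflow described above concrete, Python and Jinja templating on one side, an install base that is still mostly SSH and CLI rather than RESTCONF or NETCONF on the other, here is a minimal sketch. The device details and the template are invented for illustration, and it assumes the jinja2 and netmiko packages rather than any particular Network to Code tooling.

from jinja2 import Template
from netmiko import ConnectHandler

# A tiny interface-config template; in practice templates and their YAML data live in Git.
TEMPLATE = """\
interface {{ interface }}
 description {{ description }}
 ip address {{ ip }} {{ mask }}
 no shutdown
"""

rendered = Template(TEMPLATE).render(
    interface="GigabitEthernet0/1",
    description="uplink to core",
    ip="10.1.1.2",
    mask="255.255.255.0",
)

device = {                      # hypothetical lab device, e.g. an always-on sandbox router
    "device_type": "cisco_ios",
    "host": "sandbox-router.example.com",
    "username": "admin",
    "password": "admin",
}

conn = ConnectHandler(**device)                 # plain SSH/CLI, no modern API required
conn.send_config_set(rendered.splitlines())     # push the rendered lines into config mode
print(conn.send_command("show ip interface brief"))
conn.disconnect()

Once a platform's RESTCONF or NETCONF interface is available, the same rendered intent can be pushed over the API instead, which is the flip-the-switch point made above.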
>> Alright, Jason, want to give you the final word, love the story of network engineer gone entrepreneurial, out of your comfort zone, coding, helping to build a business. So tell us what you see, going forward. >> So, we've grown quite a bit in the past couple years. Right now, we're over 20 engineers strong, and starting from essentially just one a couple years ago, was a huge transformation, and seeing this happen. I believe in bringing on A-players to help make that happen. I think for us as a business, we're continuing to grow and accelerating what we do in this network automation space, but I just think, one thought to throw out there is, oftentimes we talk about lower-level tools, Python, Git, YAML, a lot of new acronyms and buzzwords for network engineers, but also, the flip side is true, too. As our client base evolves, and a lot of them are in the Fortune 100, so large clients, looking at consumption models of technology's super-important, meaning is there ITSM tools deployed today, like a ServiceNow, or Webex teams, or Slack for chat integration. To really think through early on how the internal customers of automation will consume automation, 'cause it really does us no good, Cisco, vendors, or clients no good, if we deploy a great network automation platform, and no one uses it, because it doesn't fit the culture of the brand of the organization. So it's just, as we continue to grow, that's really what's top of mind for us right now. >> Alright, well Jason, congratulations on everything that you've done so far, wish you the best of luck going forward, and thank you so much, of course, for watching. We'll have more coverage, three day, wall-to-wall, here at Cisco Live! 2019 in Barcelona. I'm Stu Miniman, and thanks for watching theCUBE. (electronic music)
Diane Mueller & Rob Szumski, Red Hat | KubeCon 2018
>> Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone, live here in Seattle for theCUBE's coverage of KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman, breaking down all the action. Three days of coverage, we're in day two. A lot of open-source action. 8,000 attendees, up from 4,000 in North America; they were in China, they were all over Europe. The community's growing in a massive way. We've got two great guests from Red Hat, all making it happen, part of the community. We've got Diane Mueller, theCUBE alumni, director of community development, many times on theCUBE, good to see you, and Rob Szumski, principal product manager, both at Red Hat. Guys, thanks for coming on. Great to see you again. >> Yeah, glad to be here. - Great to be here. >> So the world's changing a lot, and there was some news recently around Red Hat. I can't remember what it was. Recently, some big news. But you guys have been big players in open source for years. We always cover it, we always wax on about its origination and evolution, but the cloud-native piece has gotten so real, and your role in it particularly. We've had many conversations, going maybe back to the OpenStack days of how OpenShift was developing, then the bet on Kubernetes that you made, the CoreOS acquisition. Those two things I think, to me, at least from my perspective, really catalyzed a lot of things at the right time, right? So, from there, a lot of things have just been happening, really, in a good way. Big tailwind for you guys: cloud-native app developers are using open source, CI/CD pipelines, and then also policy-based under the hood, a completely big shift, moving the game down the field. So big congratulations first of all. But what's new? What's the update? >> The update is Operators. I think the next big thing that we are really focusing on, and that's a game changer for all the day-two-operations type things, and we'll make Rob talk about it in detail, is the rise of Kubernetes Operators. It's not a scary thing, it's not like terminator day, or anything like that, but it is really the thing that helps us make the service catalogs, the Kubernetes marketplaces, really accessible to all of the databases as a service, and all of the other things, and takes out some of the complexity of delivering applications and databases as a service to anybody running Kubernetes anywhere. >> Take a minute to explain Operator, real quick, and then we can jump into it, because I think this is a fundamental trend that we're seeing. The developer trend is pretty obvious, it's been that word for a while, cloud scale, ML, machine learning, and all the goodness around application development, but the Operator side of it has been an IT thing. But now you guys have a different, a new approach that's winning. What is it? What is an Operator? >> Well, it's Kubernetes that has the approach, and I'll let you-- >> Yeah, so it's basically like, the rise of containers was great, because you could take a single container and package an application and give it to somebody, and know that they can run it successfully. And an Operator does that for a distributed system in the exact same way. So you're using all the Kubernetes primitives, so you're not reinventing service discovery, and secret management, and all that.
And you can give somebody an entire Kafka stack, or a machine learning stack, or whatever it is, these very complex distributed systems, and have them run it without having to be an expert. They need to know Kafka at a high level, but not exactly all the underpinnings of it, because that's all baked into the software. >> And the benefit and the impact to the organization is what? >> And just to clarify, so this was added in, I believe, Kubernetes 1.7 or so; it's something that's in there, it's not something Red Hat-specific-- >> Yeah, it's like-- >> So you're extending Kubernetes so that you have a custom resource definition, which is an extensible mechanism for saying, hey, I've got a Deployment or a StatefulSet, but what if I want to have a new object called a MongoDB? That knows how to deploy, and manage, and upgrade MongoDB. So that's the extension mechanism that we're using. >> Yeah, so you've got to think there are certain applications that this is going to make just a lot easier to manage, deploy, things like that. Any specific examples you want to share as to-- >> All the clustered databases. >> A lot of folks on the application side of this model have been very excited about this. >> So it's all the vendors and partners that want a hybrid cloud story, just targeting Kubernetes, and we're using Kubernetes under the hood, and then everybody wants to run like a stateful database tier, whether that's Mongo, and Couchbase, and Cassandra, whatever. And these are all distributed systems. >> Alright, so I want you to just unpack that: you said a hybrid cloud. Explain that model, because there's just something in general discussion that hybrid or multi means I'm running in multiple places. I'm not necessarily stretching an application, but I have instances there. Just want to make sure we're on the same page. >> So this would be more that the compatibility you're programming against when you're building an Operator is Kubernetes. It's not a cloud offering, it's not OpenShift, so you're just targeting Kubernetes, and so you can run MongoDB on prem, in the cloud, and have it function the exact same, by standing up one of these Operators. And then if that Operator has higher level constructs for how to do multi-cluster aware data rebalancing, you can take advantage of that too. >> And the open-source status of this product is what? >> It's all open source, it's all in the GitHub repos, and there's a Google group for the Operator framework that anyone can come and participate in. We hold SIG meetings on the third Friday of every month, 9 a.m. Pacific Time, and it's a completely open-source project. There's a whole framework around it, so there's the Operator SDK, the Operator Lifecycle Management, and Operator metering, all the tooling there to help people build and manage these Operators, and it's all being built out there in the open with the community's support and feedback loops. >> What's the feedback? What's the top feedback you guys are getting right now? Seeing right now? >> I have to say, this is really, like, I've been hanging out with you guys for the past three, four months on this topic, trying to get my head around it and everything, and we came here and we had two sessions, an intro session and a deep dive session, intro yesterday, deep dive today. Today's deep dive, the room was about 250 people, and there were people outside of it-- >> Security guards blocking people from coming in. >> Nobody could come in, and it's like, it's insane.
It's like, everybody needs these things, and everybody wants to figure out that, and when you ask people in the room whose building one, half the room raises their hands. It's just crazy. This thing crept up on us really, maybe not on Core OS, okay, it crept up on me very quickly, and it's very rapid adoption. We have a Kubernetes Operators workshop on Friday, so not only do we have pre-conference days of like OpenShift Cons that are huge now, but now we're starting to book end, CNCF events and put on other things, just because, and that, we had 100 seats that we were hoping we would fill, and it sold out in like minutes once it got in there, and there's a waiting list of like 300 people. It is like one of, aside from Knative, and all the other wonderful hot things too, it is one of the most interesting developments I think right now. >> Thirst for the content. Would it impact? >> Yeah, and you can get all of the documentation is out there now, and people are already building them. We have a list of 50 community Operators. It's just, it's phenomenal how quickly it's growing. >> You know, Diane and Rob, it's funny because you know, we do so many of these theCUBE interviews, and this is our 10th year doing theCUBE coming up, and I remember the conversations going back in the OpenStack days, we would ask questions like, if you had a magic wand, what would you like, hope to have happened, right? And you know, those are parts of the evolution, where it's like, it's aspirational, things are being built. It seems now with Kubernetes, it's almost like, wait a minute, it's actually, this is like the goodness is so compelling, above and below Kubernetes that it's almost like uncomprehendible. You think about, oh this is actually happening. Finally the kinds of steady state kind of operational things that have been a pain in the butt for years-- >> Yeah, the toil, it's gone, for the most part. >> Yeah. >> So Rob, I've been having a lot of just thinking back to, you're employee number two at Core OS, when I first talked to Core OS, it was, we're going to build all of these individual tools, and we're going to Open-source them, and it's going to be good. We watched this just rising ecosystem and the CNCF, and it feels like what's nice and what's different that I see, compared to some previous things, is it's not one product or even a small group of companies. It's, I have this tool kit, and some of them work together, but many of them are independently used. We've talked to your peers earlier about it, etCD. etCD is totally stand alone, doesn't need to be Kubernetes. What have you seen, if you go back to that original vision, would Core OS just been, part of this whole ecosystem, and done it, if this was available, and has this delivering on a promise that your team had hoped to work on? >> Yeah, so we've always filled in where we see gaps, and so something like etCD, the concept is not new, and it comes from Google, and they have a system internally, and as Brandon got up on stage and said, we needed that coordinate, reboot, to grow out, to cluster of machines. It didn't exist so we had to build it. Same thing with how we wanted to manage Linux. There was no distro that even resembled what we were doing. Wanted to do automatic upgrades, people thought that was crazy, so we had to go build it. And so, but we always adopted the best of breed technology, when it existed. In our early bet Kubernetes, we just saw, this is the thing, and went for it. 
I don't even remember what version, but it was months and months before it was zero point oh, or one point oh, so it was, we've been doing it forever. And you just see the right thing, and it's the little nugget that you need, and if you don't see it, then you build it. >> What are you surprised about Rob, in terms of the ecosystem now, you mentioned some goodness is happening, still a lot more to do, visibility around value creation, you're starting to see spots where value can be created in the ecosystem, which is great. Still more work areas, but what's surprising you? What do you see as opportunities, challenges? Your thoughts, because this vision of ease of use and programmability, is happening, right? So there's still more work to do. What's your vision there? What's your thoughts? >> I mean, I think self service is key, so this is like the rise of the Cloud comes from self service for developers, and Kubernetes gives you the right abstraction, where self service for VM's, like OpenStack, which is not quite at the level of what you want. You don't want a VM, you actually wanted a place to deploy an application, you wanted load balancing, you wanted service discovery, you didn't want like a bare Ubuntu VM, and so Kubernetes raises you up to where you're productive, and then it's about building stuff on top. But what's interesting, in the space is, we're still kind of competing on Kubernetes installers, and stuff like that, so we're not even really into like the phase where people are being super productive on the platform, other than these leading companies. So I think we'll democratize that, and we'll have a whole new landscape. >> And so 2019 you see as what being a key theme for Kubernetes? >> I think it'll be Core stuff built on top, like all the serverless frameworks, a bunch of container natives storage solutions, solving some of these problems that folks are reaching out to external machine learning, but bringing that onto the cluster, GPU support, that type of stuff. It's all about the workloads. >> And tradition end users, you have a huge install base, with Red Hat, well documented, as the end users start coming in and looking at CloudNative, and doing a reimagine of their environment, whether it's IT span, IT investments, to have a run their coding and the deployments. It's going to change. 2019's going to have an impact on what I call mainstream enterprise, for lack of a better description. What's the impact of those guys, 'cause now, they now have head room, they can do more, what's the main stream enterprise look like right now with the impact of Kubernetes? >> I think they're going to start deploying applications and get like lower the time to business value, much, much lower. And I was just talking to a customer, and they ordered bare metal machines like a year ago, and they're still not racked and in the data center. And so people are still getting over that type of stuff, but once you have like a shared Kubernetes layer, you can onboard teams like crazy. I mean, name spaces are free, quote, unquote, and you can get 35 engineering teams on a Kubernetes cluster super easy. >> So they can ramp up in development teams basically, as they bring value in-house, versus outsourcing everything. They start getting development teams, this is where the action is. >> I think you're also going to see the rise of those end users contributing back things, to the Kubernetes community and as Lyft, and Uber, and everybody are great examples of that. 
Uber with Jaeger, and Lyft is, we were just in the Operators thing, and they raised their hand that they are about to Open-source it, a few Operators that they're building and stuff, and you're just going to see people that you didn't normally see. Often these large foundation driven things are vendor driven, but I think what you see here, is the end user community is now embracing the Open-source, is getting the legal teams there, allowing them to share their things, because one, they get more people to maintain them, and more people working on them, but it's really I think the rise of the end user we'll see, as they start participating more and more in here. And that's the promise of Open-source. >> And that's where CNCF really made it's bones. It wasn't really vendor led per se, it was really end users, the guys building out their stuff for the first time. You see Lyft for instance, great example, you guys did a Core OS, this is like the new generational model. Final question before we break. I want to get this out there. Get a plug in for Red Hat. What are you guys, what's the focus for the show? What's the news? What's the big story for Red Hat here at KubeCon this year? >> I think it's Operators, that's what we're here talking about. It's a really big push to once again get smarter workloads onto the cluster. We've got a really great hybrid story, we've got a really great over the air upgrade story that we're bringing from some of the Core OS technology, and then the next thing is, once it's easy to run 35 clusters, we need a bunch of workloads to put on there. And so we want to save folks from the toil of running all those workloads as well, just like we did at the cluster level. >> Awesome. >> Well put. I couldn't add more. One of the things that Core OS did, you hit the nail on the head earlier, is when there was something missing, they helped us build it, and with the Operator SDK, and the Lifecycle Management, and the metering, and whatever else the tooling is, they have really been inspirational inside of Red Hat. And so they filled a number of gaps, and it's just been all Operators all the time right now. >> It's great when a plan comes together. You guys got a great tail wind. Congratulations on all the success, and it's just the beginning of the wave. It's theCUBE, covering the wave of innovation here at KubeCon CloudNativeCon 2018, we'll be back with more live coverage. Day two of Three days of Kube Coverage. We'll be right back. (upbeat music)
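For readers who want to see the controller side of the Operator pattern discussed in this segment, here is a minimal sketch using the kopf framework, one of several ways to write an Operator in Python (the Operator SDK mentioned above is another). The resource group and fields reuse the same hypothetical MongoDB example from earlier.

```python
import kopf

# React whenever someone creates a MongoDB custom resource (hypothetical CRD).
@kopf.on.create("mongodb.example.com", "v1alpha1", "mongodbs")
def create_mongodb(spec, name, namespace, logger, **_):
    members = spec.get("members", 3)
    logger.info(f"Provisioning a {members}-member MongoDB replica set for {name} in {namespace}")
    # Here a real Operator would create the StatefulSet, Services, backups, and so on.
    return {"phase": "Provisioning"}  # stored under the custom object's status
```

Running `kopf run handlers.py` against a cluster where the CRD exists starts the reconcile loop; everything operational beyond this stub is left to the specific Operator.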
Susan St. Ledger, Splunk | Splunk .conf18
>> Live from Orlando, Florida, it's theCUBE, covering .conf18. Brought to you by Splunk. >> Welcome back to Orlando, everybody. I'm Dave Vellante with my co-host Stu Miniman, and you're watching theCUBE, the leader in live tech coverage. We're brought here by Splunk to its .conf18, hashtag splunkconf18. Susan St. Ledger is here, she's the President of Worldwide Field Operations at Splunk. Susan, thanks for coming on theCUBE. >> Thanks so much for having me today. >> You're welcome. So we've been reporting, actually this is our seventh year, we've been watching the evolution of Splunk, going from sort of hardcore IT ops and sec ops, now really evolving and doing some of the things that, when everybody talked about big data back in the day, Splunk really didn't. They talked about doing all these things that actually they're using Splunk for now, so it's really interesting to see that this has been a big tailwind for you guys. But anyway, big week for you guys, how do you feel? >> I feel incredible. We had, you know, we've announced more innovations today, just today, than we have probably in the last three years combined. We have another big set of innovations to announce tomorrow, and you know, just as an indicator of that, I think you heard Tim, our CTO, say on stage today, we to date have 282 patents, and we are one of the world leaders in terms of the number of patents that we have, and we have 500 pending, right? So if you think about 282 since the inception of the company and 500 pending, it's a pretty exciting time for Splunk. >> People talk about that flywheel. Stu and I were talking earlier about some of the financial metrics, and you know, you have a lot of large deals, seven-figure deals, which you guys pointed out on your call. That's the outcome of having happy customers, it's not like you can engineer that, you just serve customers, and that's what they do. Talk about how Splunk Next is really bringing you into new areas. >> Yeah, so Splunk Next is so exciting. There's really three major pillars, if you will, design principles to Splunk Next. One is to help our customers access data wherever it lives, another one is to get actionable outcomes from the data, and the third one is to unleash the power of Splunk to more users. So those are really the three pillars. And if you think about maybe how we got there, we have all of these people within IT and security that are the experts on Splunk, the Splunk ninjas if you will, and they see the power of Splunk and how it can help all these other departments, and so they're being pulled in to help those other departments, and they're basically saying, Splunk, help us help our business partners, make it easier to get there, to help them unleash the power of Splunk for them, so they don't necessarily need us for all of their needs. And so that's really what Splunk Next is all about, it's about making it, again, access to data easier, actionable outcomes, and then more users, and so we're really excited about it. >> So talk about those new users. I mean, obviously the IT ops folks, they're your peeps, so are they sort of advocating you into the line of business, or are you being dragged into the line of business? What's that dynamic like? >> Yeah, definitely, we're customer success first, and we're listening to our customers, and they're asking us to take them there, to go there with them, right? We're being pulled. They know, and what we say with our customers, what our deepest customers understand about us, is everybody needs Splunk, it's just not everyone knows it yet. And as I said, they're teaching their business why they need it, and so it's really a powerful thing. And so we're partnering with them to say, how do we help them create business applications, more of which you'll see tomorrow in our announcements, to help their business users. >> You know, one of the things that strikes us, we were talking, it was the DevOps gentleman, when you look at the companies that are successful with so-called digital transformation, they have data at the core, and they have sort of, I guess I don't want to say a single data model, but it's not a data model of stovepipes, and that's what he described. And essentially, if I understand the power of Splunk, just in talking to some of your customers, it's really that singular data model that everybody can collaborate on, and get advice from each other, across the organization, so not this sort of stovepipe model. It seems like a fundamental linchpin of digital transformation, even though you guys haven't been using, or overusing, that term. >> Thank you. >> Sort of a sign of Splunk, you didn't use the big data term when big data was all hot, now you use it, same thing with digital transformation. You're fundamental, it would seem to me, to a lot of companies' digital transformation. >> That's exactly it. If you think about it, we started in IT and security, but the reason for that is they were the first ones to truly do digital transformation, right? Those are just the two organizations that started, but exactly the way that they did it, now all the other business units are trying to do it. And that same exact platform that we use, there's no reason we can't use it for those other areas, those other functions, but if we want to go there faster, we have to make it easier to use Splunk, and that's what you're seeing with Splunk Next. >> You know, I look at my career, the last couple of decades, we've been talking about, oh well, we're going to leverage data, and we want to be predictive on the models, but the latest wave of kind of AI, ML, and deep learning, what I heard in what you're talking about and in Splunk Next, maybe you could talk a little bit about why it's real now, and why we're actually going to be able to do more with our data, to be able to extract the value out of it and really enable businesses. >> Sure. So I think machine learning is at the heart of it, and you know, we actually do two things from a machine learning perspective. Number one is, within each of our market groups, so IT, security, IT operations, we have data scientists that work to build models within our applications. So we build our own models, and then we're hugely transparent with our customers about what those models are, so they can tweak them if they like, but we pre-build those so that they have them in each of those applications. So that's number one, and that's part of the actionable outcomes, right? ML helps drive actionable outcomes so much faster. The second aspect is the ML TK, right, which is, we give our customers an ML toolkit so they can, you know, build their own algorithms and leverage all of the models that are out there as well. So I think that two-fold approach really helps us accelerate the insights that we give to our customers. >> Susan, how are you evolving your go-to-market model as you think about Splunk Next, and just think about more line of business interactions? What are you doing on the go-to-market side? >> Yeah, so the go-to-market, when you think about reaching all of those other verticals, if you will, it's very much going to be about the ecosystem, alright? So it's going to be about the solution provider ecosystem, about the ISV ecosystem, about the SIs, both boutique and the global SIs, to help us really drive Splunk into all the verticals and meet their needs. And so that will be one of the big things that you see. We will obviously still have our horizontal focus across IT and security, but we are really understanding what are the use cases within financial services, what are the use cases within healthcare, that can be repeated thousands of times. And if you saw some of the announcements today, in particular the Data Stream Processor, which allows you to act on data in motion with millisecond response, that now puts you as close to real time as anything we've ever seen in the data landscape, and that's going to open up just a series of use cases that nobody ever thought of using Splunk for. >> So I wonder what you're hearing from customers when they talk about how they manage that pace of change out there. I walked around the show floor, and I've been hearing lots of people talking about, you know, containers, and we had one of your customers talking about how Kubernetes fits into what they're doing. It seems like it really is a sweet spot for Splunk, that you can deal with all of these different types of information, and it makes it even more important for customers to come to you. >> Yeah, as you heard from Doug, our CEO, in the keynote today, it is a messy world, right? And part of the message is, it's a digital explosion, and it's not going to get any slower, it's just going to continue to get faster. And I know you met with some of our customers earlier today, NIF and Carnival. If you think about the landscape of NIF, right, I mean their mission is to protect the arsenal of nuclear weapons for the country, right, to make them more efficient, to make them safer. And if you think about all of it, they not only have traditional IT operations and security they have to worry about, but they have this landscape of lasers and all these sensors everywhere, and when you look at that, that's the messy data landscape. And I think that's where Splunk is so uniquely positioned, because of our approach you can operate on data in motion or at rest, and because there is no structuring upfront. >> I want to come back to what you said about real time, because, you know, I've said this now for a couple of years, you never used to use the term when big data was at the peak of, what does Gartner call it, the hype cycle, you guys didn't use that term. And so when you think about the use cases in the big data world, you've been hearing about real time forever, now you're talking about it. Enterprise data warehouse, you know, cheaper EDW, fraud detection, better analytics for the line of business, obviously security and IT ops, these are some of the use cases that we used to hear about in big data. You're doing, like, all these now, and sort of your platform can be used in all of these sort of traditional big data use cases, am I understanding that properly? >> 100%, you're understanding it properly. You know, Splunk has, again, really evolved, and if you think about, again, some of the announcements today, think about Data Fabric Search, right? Rather than saying you have to put everything into one instance or everything into one place, we're saying we will let you operate across your entire landscape and do your searches at scale. And you know, Splunk was already the fastest at searching across your global enterprise to start with, and we were two to three times faster than anybody who competed with us, and now we improved that today by fourteen hundred percent. I don't even know where, like, you just look at it, again, it ties back to the innovations and what's being done in our developer community and within our engineering team. >> In those traditional use cases that I talked about in big data, it was kind of an open source mess, really complex, ZooKeeper is the big joke, right, and you know, Hive and Pig and, you know, HBase and blah blah blah, and we're practitioners of a lot of that stuff, it's very complex. Essentially you've got a platform that now can be used, the same platform that you're using in your traditional base, that you're bringing to the line of business, correct? >> Correct, right, it's the same exact platform. We are definitely putting the power of Splunk in the users' hands, so by doing things like mobile, use on mobile and AR today, and again, I wish I could talk about what's coming tomorrow, but let's just say our business users are going to be pretty blown away by what they're going to see tomorrow in our announcements. >> Yeah, so I'm presuming these are modern, it's modern software, microservices, API-based, so if I want to bring in those open source tools, I can. >> In fact, what you'll actually see when you understand more about the architecture is we're actually leveraging a lot of open source in what we do, so you know, capabilities like Spark and Flink, but what we're doing is we're masking the complexity of those from the user. So instead of you having to do your own Spark environment, your own Flink environment, and you know, having to figure out Kafka on your own and how you subscribe to it, we're giving you all that, we're masking all that for you and giving you the power of leveraging those tools. >> So this becomes increasingly important, in my opinion, especially as you start bringing in things like AI and machine learning and deep learning, because that's going to be adopted both within a platform like yours, but outside as well. So you have to be able to bring in innovations from others, but at the same time, to simplify it and reduce that complexity, you've got to infuse AI into your own platform, and that's exactly what you're doing. >> It's exactly what we're doing. It's in our platform, it's in our applications, and then we provide the toolkit, the SDK if you will, so users can take it to another level. >> Alright, so you've got 16,000 customers today. If I understand the vision of Splunk Next, you're looking to get an order of magnitude more customers that you view as addressable market. Talk to us about the changes that need to happen in the field. Is it just that you're hitting an inflection point, you've got those, you know, evangelists out there, and I see the capes and the fezzes all over the show, so how does your field get ready to reach that broader audience? >> Yeah, I think that's a great question. Once again, I'll tell you what we're doing internally, but it's also about the ecosystem, right? In order to go broader, it has to be about this Splunk ecosystem, and on the technology side, we're opening the aperture, right, it's microservices, it's APIs, it's cloud, there's so much available for that ecosystem. And then from a go-to-market perspective, it's really about understanding where the use cases are that can be repeated thousands of times, right, the big problems that each of those verticals are trying to solve, as opposed to the one corner use case that, you know, you could solve for one customer. And that was actually one of the things we found, is when we did the analysis, we used to do case studies on big data, and the number one use case that always came back was custom, because nothing was repeatable. >> And that's how we're seeing, you know, a little bit more industry-specific issues. I was at Microsoft Ignite last week, and you know, Microsoft is going deep on verticals to get specific as to, you know, for IoT and AI, how they can get specific in those environments. >> Agreed. I think, again, one of the things that's so unique about the Splunk platform is, because it is the same platform at the underlying layer that serves all of those use cases, we have the ability, in my opinion, to do it in a way that's far less custom than anybody else. And so we've seen the ecosystem evolve as well. Again, six, seven years ago it was kind of a tiny technology ecosystem, and last year in DC we saw it really starting to expand. Now you walk around here, you see, you know, some big booths from some of the SI partners. >> That's critical, because that's global scale, deep, deep industry expertise, but also board-level relationships. >> Absolutely, that's another part of the go-to-market. >> Splunk becomes more strategic. This is a massive TAM expansion that we're potentially witnessing with Splunk. How do you see those conversations changing? Are you personally involved in more of those boardroom discussions? >> Definitely personally involved, and you're spot on to say that that's what's happening. And I think a perfect example is, you talked to Carnival today, right? We didn't typically have a lot of CEOs at the Splunk conference. Now we have CEOs coming to the Splunk conference, right, because it is at that level of strategic to our customers. And so when you think about Carnival, yes, they're using it for the traditional IT ops and security use cases, but they're also using it for their customer experience, and who would ever think, you know, ten years ago or even five years ago, of Splunk as a customer experience platform? But really, what's at the heart of customer experience? It's data. >> So speaking of the CEO of Carnival, Arnold Donald, it's kind of an interesting name, and he stood up on stage today talking about diversity, doubling down on diversity. As an African-American, you know, frankly in our industry you don't see a lot of African-American CEOs, you don't see a ton of women CEOs, you don't see a ton of women with president in their title. So he made a really interesting statement, where he said something to the effect of, forty years ago when I started in the business, I didn't work with a lot of people like me, and I thought that was a very powerful statement. And he also said, essentially, look, if we're diverse, we're gonna beat you every time. Your thoughts, as an executive in tech and a woman in tech? >> So first of all, I 100% agree with him, and I can actually go back to my start, I was a computer scientist at NSA, so I didn't see a lot of people who looked like me, and from that perspective I know exactly where he's coming from. And I'll tell you, at Splunk we have a huge investment in diversity, and not because it's a checkbox, but because we believe in exactly what he says, it's a competitive edge. When you get people who think differently, because you came from a different background, because you're a different ethnicity, because you were educated differently, whatever it is, whether it's gender, whether it's ethnicity, whether it's just a different approach to thinking, all of that differentiation puts a different lens on things, and that way you don't have stovepipe thinking. And what I love about our culture at Splunk is we call it a high growth mindset, and if you're not intellectually curious and you don't want to think beyond the boundaries, then it's probably not a good fit for you, and a big part of that is having a diverse environment. We do a lot at Splunk to drive that. We actually posted our gender diversity statistics last year, because we believe if you don't measure it, you're never going to improve it, and it was a big step, right, to say we want to publish it, we want to hold ourselves accountable. And we've done a really nice job of moving it a little over 1% in one year, which for our population is pretty big. But we're doing really unique things, like all job descriptions are now analyzed, there's actually a scientific analysis that can be done to make sure the job description does not bias toward men or women, whether it leans one way or whether it's, you know, gender neutral, so that's exciting. Obviously we have a big women in technology program, and we have a high-potential focus on our top women as well. >> What's interesting about your story, Susan, and we spend a lot of time on theCUBE talking about diversity generally and women in tech specifically, we support a lot of WiT events, and frequently we're talking about women in engineering roles or computer science roles, and how oftentimes, even when they graduate with that degree, they don't come into tech. And what strikes me about your path is you're technical, and yet now you've become this business executive, and I would imagine that having that technical background only helped, especially in this industry. So there are paths beyond just the technical role. >> One hundred percent. First of all, it's a huge advantage, and I believe it's the core reason why I am where I am today, because I have the technical aptitude. And while I enjoyed the business side of it as much, and I love the sales side and the marketing side and all of the above, the truth of the matter is, at my core, I think it's that intellectual curiosity that came out of my technical background that kept me going, and I took risks, right? If you look at my career, it's much more of a jungle gym than a ladder. And you know, I always give advice to young people, generally it's young women who ask, but sometimes it's the young men as well, which is like, how did you get to where you are, how do I plan that? And the truth of the matter is, you can't. If you try and plan it, it's probably not going to work out exactly the way you plan. And so my advice is to make sure that every time you're going to make a move, you ask yourself, what am I going to learn, who am I going to learn from, and what is it going to add to my experience that I can materially say is going to help me on a path to where I ultimately want to be? I also think that when you try and do a ladder, you don't have what I call pivots, which is looking at things from different lenses, right? So me having been on the engineering side, on the sales side, on the services side of things, it gives me a different lens in understanding the entire experience of our customers, as well as the internals of an organization. And I think that people who pivot generally are people who are intellectually curious and have the intellectual capacity to learn new things, and that's what I look for when I hire people. >> I love that you took a nonlinear progression to the path that you're on now. And speaking of the technical side, I think if you're in this business you'd better like tech, or what are you doing in this business? But the more you understand technology, the more you can connect the dots between how technology is impacting business and how it can be applied in new ways. So congratulations on your career, you've got a long way to go, and thanks so much for coming on theCUBE. >> Thanks so much, David, I really appreciate it, thank you. >> Okay, keep it right there everybody, Stu and I will be back with our next guest. We're live from Splunk .conf18, you're watching theCUBE. [Music]
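As a small aside to the point above about opening Splunk up to more users through microservices, APIs, and SDKs, here is a minimal sketch of what a programmatic search looks like with the splunk-sdk Python package. The host and credentials are placeholders, and the query is purely illustrative; nothing here is specific to Splunk Next or the Data Stream Processor.

```python
import splunklib.client as client
import splunklib.results as results

# Placeholder connection details for a reachable Splunk instance.
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Run a one-shot (blocking) search and print the returned events.
response = service.jobs.oneshot("search index=_internal | head 5")
for event in results.ResultsReader(response):
    print(event)
```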
Farah Papaioannou and Kilton Hopkins, Edgeworx.io | CUBEConversation, 2018
(intense orchestral music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE, we're at our Palo Alto studios for a CUBEConversation, and we're talking about startups today, which we don't often get to do but it's really one of the more exciting things that we get to do, because that's what really, what keeps Silicon Valley Silicon Valley; and this next new company is playing on a very hot space which is edge, you're all about cloud then the next big move is edge, especially with internet things and industrial internet things. So we're really happy to welcome Edgeworx here, fresh off the announcement of the new company and their funding. We got the, both Founders, we have Farah Papaioannou, and she is the President, and Kilton Hopkins, the CEO, both of Edgeworx, welcome. >> Thank you, >> Thanks. >> thanks for having us. >> So for those of us that aren't familiar, give us kind of the quick 101 on Edgeworx. >> So I've been looking at the space as a venture capitalist before I've joined up with Kilton, and I've been looking at edge computing for a long time because it just made intuitive sense to me. You're looking at all these devices that are now not just devices but they're compute platforms, or you know generating all this data; well how are we going to address all that data? If you think about sending all that back to the cloud, latency, bandwidth, and cost, you talk about breaking the internet, this is what's going to break the internet not Kim Kardashian's you know butt photo right? (guys laugh) So, how do you solve that problem? You know if you think about autonomous vehicles for example these are now computers on wheels, they're not just a transportation mechanism. If they're generating all this data, and they need to interact with each other, and make decisions in near realtime; how are they going to do that if they have to send all that data back to the cloud? >> Right, great. >> So that's where I came across Kilton's company, or actually the technology that he'd built, and we formed a company together. I looked at everything, and the technology that he'd developed, was far, leaps and bounds beyond anything anyone else had come to to date, so. >> So, Kilton, how did you start on that project? >> Yeah, so this actually goes way back, this goes way back to like about 2010. Back in Chicago I was looking at what architecture is going to allow us to do the types of processing that's really expensive, and do it close to where the data is? This architecture was in the back of my mind. When I came to the bay area, I jumped in with the city of San Francisco as an IOT Advisor; and everywhere I looked I saw the same problems. Nobody was doing secure processing at the edge in any kind of way that was manageable, so I started to solve it. Then, years later after doing, you know I did some deployments myself, and after seeing how was this stuff working, it finally arrived at an architecture that I thought: okay, this thing's passing all these trials, and now I think we've got this pretty well nailed, so. I basically got into it before the terms fog and edge computing were being thrown around, and just said this is what has to happen. And then of course, it turns out that the world catches up, and now of course there's terms for it, and everyone's talking about the edge. >> So it's an interesting problem, right, it's the same old problem we've been having forever, which is do you move the data to the compute or do you move the compute to the data? 
And then we've had these other things happening with suddenly this you know huge swell of data flow, and that's even before we start you know kind of the IOT connection on the data flow, luckily the networks are getting faster, 5G's around the corner, chips are getting faster and cheaper, memory's getting cheaper and faster. And then we had the development of the cloud and really the hyper growth of the public cloud. But that still doesn't help you with kind of these low latency applications that you have to execute on the edge. And obviously we've talked to GE a lot, and everyone wants to talk about turbines and you know harsh conditions and you know nasty weather, and it's not this pristine data center; how do you put compute, and how much compute do you put at the edge, and how do you manage kind of that data flow? What can you deal with there, what do you have to send up? And of course this pesky thing called physics and latency, which just prohibits, as you said, the ability to get stuff up to some compute and get it back in time necessarily to do something about it. So what is the approach that you guys are taking? What's a little bit different about what you've built with Edgeworx? >> Sure. >> So, in most cases, people think about the edge as like almost a lead into the cloud. They say: how can I pre-process the data, maybe curtail some of the bandwidth volume that I need in order to send data up to the cloud? But that doesn't actually solve the problem, you'll never get rid of cloud latency if you're sending just smaller packages. And in addition, you have done nothing to address the security issues of the edge, if you're just trying to package data, maybe reduce it a bit and send it to the cloud. So what's different about us is with us you can use the cloud, but you don't have to, we're completely at the edge. So you can run software with Edgeworx that stays within the four walls of a factory, if you so choose, and no data will ever leave the building; and that is a stark difference from the approaches that've been taken to date which've been tied to the cloud, but we do a little at the edge, it's like come on, this is real edge. >> Right, right. And so is it a software layer that sits on top of whatever kind of bios and firmware are on a lot of these dumb sensors, is that kind of the idea? >> Yeah, no actually it sits, exactly, it sits above the bios level, it sits above the firmware. It creates an application runtime, so it allows developers to write applications that are containerized, so we run containers at the edge, which allows our developers to run applications they've already developed for the cloud, to write new applications, but they don't have to learn an entirely new framework or an entirely new SDK, they can write using tools they already know: Java, C#, C++, Python, if you can write that language, we can run it, and at the edge. Which again allows people to use skillsets that they already know, they don't have to like learn specialized skillsets for the edge, why should they have to do that you know? >> I think, and you know good for you guys, to get Stacey Higginbotham to write a nice article about the company long before you launched, which is good. But I thought she had a really interesting breakdown on kind of edge computing, and she broke it down into four layers: the device, the sensors, as you said as dumb as it can be, right, you want a lot of these things. Then this gateway layer that collects the data. 
You know some level of compute close to the edge, not necessarily in the camera or in any of these sensors, but close, and then of course a connection back to the cloud. So you guys run in the sensor, or probably more likely in that gateway layer? Or do you see, in some of the early customers you're talkin' to, are they putting these like little micro data centers? I mean how are you actually seeing this stuff deployed in the field at scale? >> So we actually gave Stacey that four layer chart because were trying to explain people to the edge, to people who didn't understand what that was, and again, people refer to all these different layers at the edge. We actually think that the layer right above the sensors is actually the most difficult to solve for. And the reason we don't want to run on the sensor level is because sensors are becoming more and more commoditized, a customer would rather have a thousand dumb sensors where they could get more and more data, than have like 10 really smart sensors where they could run compute on them. So, unless there's special circumstances, like you know a case of a camera where we're actually working with a camera that has GPU capability, where they can actually run on the edge, we'd like to run at a level up there, and there's a couple of reasons for that. One is, if you run on the devices itself, you can't really aggregate each other's devices, you can't aggregate-- a temperature sensor cannot aggregate a pressure sensor's data, you need to set up a layer above. Also we're able to serve as a broker between low levels of you know Wi-Fi and Bluetooth, versus you know high levels of TCP/IP, right, which you also cannot do at the sensor level. If you were run at the sensor, you'd basically have to do what Amazon does, which is device-to-cloud; which doesn't really afford you the capability of running real software at the edge. >> Right. So, when you're out, let's just say the camera, we talked a little bit before we turned the cameras on about the surveillance and surveillance cameras, I mean where are those gateways, and where's the power and the connectivity to that gateway, what're you seeing in some of these early examples? >> So, you know, for cameras you've got basically two choices, either the camera is a dumb camera that puts a video feed to some kind of a compute box that's nearby, or is on a wired network, or wireless network that's private to it, so. In building cameras that are already in place, that are analog, you can put a box in the building that can take the feeds, but the better option than that even is to have smart cameras, so probably a new greenfield deploy would have smart cameras that have the ability to do the AI processing right there in the module. So the answer is: somewhere you have a feed of sensor data, whether it be video, audio, or just like a temperature, you know time series data, and then it hits a point of where you're still on the edge, but you can do compute. Sometimes they're in the same unit, sometimes they're a little spread out, sometimes they're over wireless; that first layer up is where we sit no matter how the compute is done. >> Okay. And I'm just curious on some of the early use cases. How do people see the opportunity now to have kind of a software-driven IOT device that's separate from the actual firmware that's in the in the sensor? What is that going to enable them to do that they're excited to do they couldn't do before? 
>> Yeah, so if you think about the older model, it's: how can I make this device, get it's sensor readings and somehow communicate that data, and I'm going to write low-level code, probably C code or whatever to operate that and it's how often do I pull the sensor? And you're really thinking about just jeez I need this data somewhere to make useable. And when you use us you think: okay, I have streams of data, what would I do if I wanted to run software right where the data is, I can increase my sampling frequency, I can undo everything we were going to do in the cloud, and do it right there for free once it's deployed there's no bandwidth cost. So it opens the world of, of thinking, we're now running software at the edge, instead of running firmware, so I can just move the data upstream. You stop moving the data, and you start moving the applications, and that's what's like the world changer for everybody. >> Right, right. >> Plus you can use the same skillsets you have for the cloud and up until now programming IOT devices has been a matter of saying oh, you know, if I know how to work the GPIO pins you know and you know I can write in C, maybe I can make it work. And now you say: I know Python, and I know how to do data analytics with Python, I can just move that into the sensor module, if it's smart enough, or the gateway right there, and I can pretty much push my code into the factory instead of waiting for the factory to wire the data to me. >> And we actually have a customer right now that's doing real-time surveillance at the edge, and they have smart city deployments and they're looking at an example of, border control for example. And what they want to be able to do is put these cameras out there and say: well, I've detected something on the maritime border here, is it a whale, is it debris, or is it a boat full of refugees, or is it a boat full of like pirates, or is it a boat full of migrants? Well before what they would have to do is okay well, as an edge device maybe I, at the basic level of processing I could run is to say let me compress that video data and send some of it back, right, and then do the analysis back there; well that's not really going to be that helpful, because if I have to send it back to some cloud and do some analysis, by the time I've recognized what's out there: too late. What we can do now with our software capability, because we have our platform running on these cameras is we can deploy software that says: okay well I can detect, right there, right at the edge, what we're seeing, and I can not just send back video data, which I don't really want to do, that's really you know heavy on bandwidth and latency, cost as well, is I can just send back text data and say: well, I've actually detected something, so let's take some sort of action on it, and say okay the next camera should be able to detect it or pick it up or send some notifications that we need to address it back here. If I'm sending textual data back, and say I've already done that processing right there and then, I can run thousands of cameras out there at the edge versus just 10 or you know, 10 or 12 because of the amount of cost and latency. And then the customer can decide, well you know what, I want to add another application that you know does target tracking of certain individual terrorists, right? Okay, well that's easy for me to deploy that software because our platform's already running. We can, you know, and just push it out there at the edge. 
Oh, you know what, I'm able to model train at the edge, and I can actually do better detection, I can go from 80% to 90%, well I can just push that data and do an upgrade right there at the edge as opposed to going out there and flashing that board, and you know upgrading that way, or sending out some sort of firmware upgrade; so it allows a lot of flexibility that we couldn't do before. >> Right. Well I just got to ask ya now, you got a pile of money, which is exciting, and congratulations. >> Thank you. >> I was going to say, kind of, where do you kind of focus on your go-to-market, you know within any particular vertical, or any specific horizontal application? But it sounds like, I think we've use cameras now three or four times (laughs) in the last three or four questions, so I'm guessin' that's a, that's a good-- >> That's been a strong one for us. >> You know kind of early early adopt to market for you guys. >> That one's been a strong one for us, yeah. We've had some real success with telco's, another use case that we've seen some real good traction is being able to detect quality-of-service issues on Wi-Fi routers, so, that's one that we're looking at as well that's had some adoption. Oil and gas has been pretty strong for us as well. So it seems to be kind of a horizontal play for us, and we're excited about the opportunity. >> Alright. Well thanks for comin' on and tellin' the story, and congratulations on your funding and launching the company, and, >> Thank you. >> And bringin' it to reality. >> Great, thanks. >> Alright, Kilton, Farah, I'm Jeff, you're watchin' theCUBE, thanks for watchin' we'll see ya next time. (intense orchestral music)
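To illustrate the pattern described above, where detection runs on or near the camera and only small text messages leave the edge, here is a rough sketch in Python. It assumes OpenCV and the paho-mqtt client are available on the device, the broker address is a hypothetical local gateway, and classify() stands in for whatever model the camera actually runs; none of this is Edgeworx's own API.

```python
import json
import cv2                        # assumes OpenCV is installed on the edge device
import paho.mqtt.client as mqtt   # assumes an MQTT broker on the local network

def classify(frame):
    """Placeholder for the on-device model (whale / debris / boat ...)."""
    return {"label": "boat", "confidence": 0.91}

broker = mqtt.Client()
broker.connect("edge-gateway.local", 1883)   # hypothetical local gateway address

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    detection = classify(frame)
    if detection["confidence"] > 0.8:
        # Only a small JSON message leaves the device, never the raw video.
        broker.publish("camera/alerts", json.dumps(detection))
```

The bandwidth and latency savings come from that last line: a few bytes of JSON per event instead of a continuous video stream back to a cloud.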
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] [Music] [Music] [Music] ladies and gentlemen please welcome Red Hat senior vice president of engineering Matt Hicks [Music] welcome back I hope you're enjoying your first day of summit you know for us it is a lot of work throughout the year to get ready to get here but I love the energy walking into someone on that first opening day now this morning we kick off with Paul's keynote and you saw this morning just how evolved every aspect of open hybrid cloud has become based on an open source innovation model that opens source the power and potential of open source so we really brought me to Red Hat but at the end of the day the real value comes when were able to make customers like yourself successful with open source and as much passion and pride as we put into the open source community that requires more than just Red Hat given the complexity of your various businesses the solution set you're building that requires an entire technology ecosystem from system integrators that can provide the skills your domain expertise to software vendors that are going to provide the capabilities for your solutions even to the public cloud providers whether it's on the hosting side or consuming their services you need an entire technological ecosystem to be able to support you and your goals and that is exactly what we are gonna talk about this afternoon the technology ecosystem we work with that's ready to help you on your journey now you know this year's summit we talked about earlier it is about ideas worth exploring and we want to make sure you have all of the expertise you need to make those ideas a reality so with that let's talk about our first partner we have him today and that first partner is IBM when I talk about IBM I have a little bit of a nostalgia and that's because 16 years ago I was at IBM it was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer it's actually where I did my first professional Linux development as well you and that work on Linux it really was the spark that I had that showed me the potential that open source could have for enterprise customers now iBM has always been a steadfast supporter of Linux and a great Red Hat partner in fact this year we are celebrating 20 years of partnership with IBM but even after 20 years two decades I think we're working on some of the most innovative work that we ever have before so please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on Arvind [Applause] hey my pleasure to be here thank you so two decades huh that's uh you know I think anything in this industry to going for two decades is special what would you say that that link is made right Hatton IBM so successful look I got to begin by first seeing something that I've been waiting to say for years it's a long strange trip it's been and for the San Francisco folks they'll get they'll get the connection you know what I was just thinking you said 16 it is strange because I probably met RedHat 20 years ago and so that's a little bit longer than you but that was out in Raleigh it was a much smaller company and when I think about the connection I think look IBM's had a long long investment and a long being a long fan of open source and when I think of Linux Linux really lights up our hardware and I think of the power box that you were showing this morning as well as the mainframe as well as all other hardware Linux really brings that to life and I think that's been at the root of our relationship 
yeah absolutely now I alluded to a little bit earlier we're working on some new stuff and this time it's a little bit higher in the software stack and we have before so what do you what would you say spearheaded that right so we think of software many people know about some people don't realize a lot of the words are called critical systems you know like reservation systems ATM systems retail banking a lot of the systems run on IBM software and when I say IBM software names such as WebSphere and MQ and db2 all sort of come to mind as being some of that software stack and really when I combine that with some of what you were talking about this morning along hybrid and I think this thing called containers you guys know a little about combining the two we think is going to make magic yeah and I certainly know containers and I think for myself seeing the rise of containers from just the introduction of the technology to customers consuming at mission-critical capacities it's been probably one of the fastest technology cycles I've ever seen before look we completely agree with that when you think back to what Paul talks about this morning on hybrid and we think about it we are made of firm commitment to containers all of our software will run on containers and all of our software runs Rell and you put those two together and this belief on hybrid and containers giving you their hybrid motion so that you can pick where you want to run all the software is really I think what has brought us together now even more than before yeah and the best part I think I've liked we haven't just done the product in downstream alignment we've been so tied in our technology approach we've been aligned all the way to the upstream communities absolutely look participating upstream participating in these projects really bringing all the innovation to bear you know when I hear all of you talk about you can't just be in a single company you got to tap into the world of innovation and everybody should contribute we firmly believe that instead of helping to do that is kind of why we're here yeah absolutely now the best part we're not just going to tell you about what we're doing together we're actually going to show you so how every once you tell the audience a little bit more about what we're doing I will go get the demo team ready in the back so you good okay so look we're doing a lot here together we're taking our software and we are begging to put it on top of Red Hat and openshift and really that's what I'm here to talk about for a few minutes and then we go to show it to you live and the demo guard should be with us so it'll hopefully go go well so when we look at extending our partnership it's really based on three fundamental principles and those principles are the following one it's a hybrid world every enterprise wants the ability to span across public private and their own premise world and we got to go there number two containers are strategic to both of us enterprise needs the agility you need a way to easily port things from place to place to place and containers is more than just wrapping something up containers give you all of the security the automation the deploy ability and we really firmly believe that and innovation is the path forward I mean you got to bring all the innovation to bear whether it's around security whether it's around all of the things we heard this morning around going across multiple infrastructures right the public or private and those are three firm beliefs that both of us have 
together. So then, explicitly, what we'll be doing here: number one, all the IBM middleware is going to be certified on top of OpenShift and RHEL, and delivered through IBM Cloud Private. So that's number one: all the middleware is going to run in RHEL containers on OpenShift, with all the Cloud Private automation and deployability in there. Number two, we are going to make it so that this is a complete stack: when you think about it, from hardware to hypervisor to OS to the container platform to all of the middleware, it's going to be certified up and down, all the way, so that you can get comfort that this is certified against all the cybersecurity attacks that come your way. Three, because we do the certification, that means the complete stack can be deployed wherever OpenShift runs. That way you get complete flexibility and you no longer have to worry about it; the development lifecycle is extended all the way from inception to production, and the management plane then gives you all of the delivery and operations support needed to lower that cost. And lastly, professional services through the IBM Garages as well as the Red Hat Innovation Labs. I think this combination really speaks to the power of both companies coming together, and of both of us working together to give all of you that flexibility and deployment capability. I can't help it, one architecture chart, and that's the only architecture chart, I promise you. If you look at it right from the bottom, this speaks to what I'm talking about. You begin at the bottom and you have a choice of infrastructure: the IBM Cloud as well as other infrastructure as a service, virtual machines, as well as IBM Power and IBM mainframe are the infrastructure choices underneath, so you choose what is best suited for the workload. Then the container service, with the OpenShift platform, manages all of that environment as well as giving you the orchestration that Kubernetes gives you, up to the platform services from IBM Cloud Private. It contains the catalog of all the middleware, both IBM's as well as open source; it contains all the deployment capability to go deploy that; and it contains all the operational management, so things like coming back up if things go down, worrying about auto scaling, all those features that you want come to you from there. And that is why this combination is so powerful. But rather than just hear me talk about it, I'm also going to bring up a couple of people to talk about it. And what are they going to show you? They're going to show you how you can deploy an application on this environment. You can think of that as either a cloud-native application, or you can think about it as how do you modernize an application using microservices. But you don't want to just keep your application always within its walls; you also many times want to access different cloud services from it, and how do you do that? I'm not going to tell you which ones, they're going to come and tell you. And how do you tackle the complexity of hybrid data, data that crosses from the private world to the public world, as well as target the extra workloads that you want? So that's kind of the sense of what you're going to see through the demonstrations. But with that, I'm going to invite Chris and Michael to come up. I'm not going to tell you which one's from IBM and which one's from Red Hat; hopefully you'll be able to make the right guess. So with that, Chris and Michael. [Music]
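Arvind's point that a certified stack "can be deployed wherever OpenShift runs" is easy to make concrete. Purely as an illustrative sketch, assuming the kubernetes Python client and a kubeconfig that already holds one on-premises and one cloud context (the context names, image, and namespace below are hypothetical), the same Deployment can be pushed to two OpenShift clusters unchanged:

```python
# Sketch: apply one Deployment manifest to two OpenShift clusters.
# Assumes kubeconfig contexts "onprem-openshift" and "ibmcloud-openshift" exist,
# and that the "newworld" namespace/project already exists in both clusters.
from kubernetes import client, config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="trader-portfolio", namespace="newworld"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "trader-portfolio"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "trader-portfolio"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="portfolio",
                    image="registry.example.com/trader/portfolio:1.0")  # hypothetical image
            ]),
        ),
    ),
)

for ctx in ("onprem-openshift", "ibmcloud-openshift"):
    api_client = config.new_client_from_config(context=ctx)   # one API client per cluster
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="newworld", body=deployment)
    print(f"deployed trader-portfolio to {ctx}")
```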
So, thank you, Arvind. Hopefully people can guess which one's from Red Hat based on the shoes. You know, it's some really exciting stuff that we just heard there. What I believe I'm most excited about, when I look out upon the audience and the opportunity for customers, is that with this announcement there are quite literally millions of applications that can now be modernized and made available on any cloud, anywhere, with the combination of IBM Cloud Private and OpenShift. And I'm most thrilled to have Mr. Michael Elder, a distinguished engineer from IBM, here with us today. Michael, would you maybe describe for the folks what we're actually going to go over today? Absolutely. So when you think about how do I carry forward existing applications, and how do I build new applications as well, you're creating microservices that always need a mixture of data and messaging and caching. So this example application shows Java-based microservices running on WebSphere Liberty, each of which is then leveraging things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of which is fully containerized and running on top of the Red Hat OpenShift Container Platform. And in fact, we're even going to enhance Stock Trader to help it understand how you feel. OK, hang on, I'm a little slow to the draw sometimes: you said we're going to have an application tell me how I feel? Exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. OK, well, I'd like to see that in action. All right, let's do it. So the first thing we'll do is actually take a look at the catalog, and here in the IBM Cloud Private catalog this is all of the content that's available to deploy into this hybrid solution. We see workloads for IBM, we see workloads for other open source packages, and so on. Each of these is packaged up as a Helm chart that deploys a set of images that will be certified for Red Hat Enterprise Linux. In this case we're going to go through and start with a simple example with Node.js. We'll click a few actions here, we'll give it a name. Now, do you have your console up over there? I certainly do. All right, perfect. So we'll deploy this into a new namespace, and we'll deploy Node.js. OK, anything happening? Of course, it's come right up. And you know what I really like about this: regardless of whether I'm used to using IBM Cloud Private or I'm used to working with OpenShift, the experience works well with whatever tool I'm used to dealing with on a daily basis. But I've got to tell you, we deploy Node ourselves all the time. What about, when was the last time you deployed MQ on OpenShift? Maybe never. All right, let's fix that. So MQ obviously is a critical component for messaging in lots of highly transactional systems; here we'll deploy it as a container on the platform. Now, I'm going to deploy this one again into the same namespace, I'm going to disable persistence, and for my application I'm going to need a queue manager, so I'm going to have it automatically set up my queue manager as well. Now this will deploy a couple of things. What do you see? I see IBM MQ. All right, so there's your stateful set running MQ, and of course there are a couple of other components that get stood up as needed here, including things like credentials and secrets and the service, but all of this is there out of the box.
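The catalog clicks above boil down to Helm releases under the covers. As a rough sketch only, assuming a Helm 3 client already logged in to the cluster, and using hypothetical repository, chart, and value names (the actual IBM chart names and values may differ, and the demo itself used the Cloud Private catalog UI), the same two deployments could be driven from a script:

```python
# Sketch: install a Node.js sample and an MQ queue manager as Helm releases.
# The repo URL, chart names, and values below are hypothetical stand-ins.
import subprocess

def helm(*args: str) -> None:
    """Run a helm command and fail loudly if it errors."""
    subprocess.run(["helm", *args], check=True)

helm("repo", "add", "ibm-charts", "https://charts.example.com/ibm")  # hypothetical repo
helm("repo", "update")

# 1) A simple Node.js sample application.
helm("install", "nodejs-sample", "ibm-charts/nodejs-sample",
     "--namespace", "newworld", "--create-namespace")

# 2) An MQ queue manager, persistence disabled, queue manager created automatically.
helm("install", "mq-demo", "ibm-charts/ibm-mq",
     "--namespace", "newworld",
     "--set", "persistence.enabled=false",
     "--set", "queueManager.name=QM1")
```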
OK, so impressive, right? But what I'm really looking at is maybe how well is this running, and what else does this partnership bring when I look at IBM Cloud Private? Well, that's a key reason why it's not just about IBM middleware running on OpenShift but also IBM Cloud Private, because ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is how do I get its logs, how do I manage its health, how do I manage license consumption, how do I have a common security plan, right? So Cloud Private is that enveloping wrapper around IBM middleware to provide those capabilities in a common way. And so here we'll switch over to our dashboard: this is our Grafana and Prometheus stack, also now deployed on Cloud Private running on OpenShift, and we're looking at a different namespace, the stock trader namespace. We'll go back to this app here momentarily, and we can see all the different pieces. What if you switch over to the stock trader project on OpenShift? Yeah, I think we might be able to do that here. Hey, there it is. All right, and so what you're going to see here are all the different pieces of this app: there's Db2 over here, I see the portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ. All of these are the components we saw in the architecture picture a minute ago. Yeah, so this is really great. So maybe let's take a look at the actual application. I see we have a fine stock trader app here. Now, we mentioned understanding how I feel? Exactly. Well, I feel good that this is a brand new stock trader app versus the one from ten years ago that feels like we've used forever. So the key thing is this app is actually all of those microservices, in addition to things like business rules to help understand the loyalty program. One of the things we could do here is actually enhance it with an AI service from Watson: this is Tone Analyzer. It helps me understand how that user actually feels, and we'll be able to go through and submit some feedback to understand that user. OK, well, let's see if we can take a look at that. So I tried to click on it; clearly you're not very happy right now. Here, I'll do one quick thing over here. Go for it. We'll clear a cache for our sample lab. So look, you guys don't actually know this, but Michael and I just wrote this Node.js front end backstage while Arvind was actually talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery that we have available with OpenShift. Well, the great thing is it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling, right? So if we look at it, right there, it looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. Well, that would make me angry, but I should be happy, because I have a lot of money. Well, it's more than I get today, for sure. But again, I don't want to remain angry. So does Watson actually understand southern? I know it speaks like eighty different languages, but, well, you know, I'm from South Carolina; it'll understand South Carolina southern, but I don't know about your North Carolina southern. All right, well, let's give it a go here: "y'all done a real real nice job on this here fancy demo" — no profanity now, this is live. All right, hey, it likes me now. All right, cool. And the key thing is, just a quick note, it's showing you've got a free trade.
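For anyone curious what the Tone Analyzer call behind that demo roughly looks like: here is a minimal sketch, assuming the ibm-watson Python SDK and a hypothetical API key and service URL (the Stock Trader sample itself calls the service from its Java microservices, not from Python):

```python
# Sketch: send user feedback to Watson Tone Analyzer and pick the dominant tone.
# The API key and service URL are hypothetical placeholders.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("https://api.us-south.tone-analyzer.watson.cloud.ibm.com")

feedback = "y'all done a real real nice job on this here fancy demo"
result = tone_analyzer.tone(
    tone_input={"text": feedback},
    content_type="application/json",
).get_result()

tones = result["document_tone"]["tones"]            # e.g. joy, anger, analytical...
dominant = max(tones, key=lambda t: t["score"]) if tones else None
print(dominant)  # the loyalty business rules could branch on this, e.g. extra free trades
```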
So we can integrate those business rules and then decide: do I give you one free trade, or, if you're angry, maybe give you more. It's all bringing it together into one platform, all running on OpenShift. Yeah, and I can see the possibilities, right? We've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used and whether people are really happy with what they have. Hey, listen, Michael, this was amazing; I really appreciate you joining us today. I hope you guys enjoyed this demo as well. So, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. You know, sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Wittig about what they're doing with Red Hat technology. "New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. Everybody going this way [Music] the wrong way." [Music] Please welcome Nike vice president of infrastructure engineering Mike Wittig. [Music] Hi everybody. Over the last five years at Nike we have transformed our technology landscape to allow us to connect more directly to our consumers, through our retail stores, through Nike.com and our mobile apps. The first step in doing that was redesigning our global network to allow us to have direct connectivity into both Azure and AWS, in Europe, in Asia and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic, things like our sneakers app, for example, that needs to burst out during certain hours of the week, and there are certain moments of the year when we have our high-heat product launches. For those types of workloads we write that code ourselves and we use native cloud services, but being hybrid has allowed us to not have to write everything that would go into that app, but rather just the parts that are in that consumer-facing experience. And there are other back-end systems, certain core functionality like order management, warehouse management, finance, ERP, and those are workloads that are third-party applications that we host on RHEL. Over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments, and what we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small, and that's because the tools that we use for monitoring, for security, for performance tuning didn't change even though we moved those core applications into Azure and AWS, because of RHEL under the covers. Getting to the point where we have that flexibility is a real enabler. As an infrastructure team it allows us to just be in the "yes" business, and it really doesn't matter where we want to deploy a different workload, on either cloud provider or on-prem, anywhere on the planet; it allows us to move much more quickly and stay much more directed to our consumers.
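As an aside, that "burst out during certain hours" pattern maps directly onto a Kubernetes HorizontalPodAutoscaler. A rough sketch only, using the kubernetes Python client against whichever cluster the workload lands on; the deployment name, namespace, and thresholds below are hypothetical:

```python
# Sketch: autoscale a bursty consumer-facing front end between 2 and 50 replicas.
# Names and thresholds are hypothetical; this targets an existing Deployment.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="launch-frontend", namespace="retail"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="launch-frontend"),
        min_replicas=2,
        max_replicas=50,
        target_cpu_utilization_percentage=70,  # scale out when average CPU passes 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="retail", body=hpa)
```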
And so having RHEL at the core of our strategy is a huge enabler for that flexibility, and it allows us to operate in this hybrid model. Thanks very much. [Applause] What a great example. It's really nice to hear a Nike story of using RHEL as that foundation to enable their hybrid cloud, enable their infrastructure, and there's a lot to that story: we spent over ten years making it possible for RHEL to be that foundation, and we've learned a lot in doing it. But let's circle back for a minute to the software vendors, and to what kicked off the day today with IBM. IBM has one of the largest software portfolios on the planet, but we learned through our journey on RHEL that you need thousands of vendors to be able to support you across all of your different industries and solve any challenge that you might have, and you need those vendors aligned with your technology direction. This is doubly important when the technology direction is changing, like with containers. We saw that two years ago, when Red Hat introduced our container certification program. Now, this program was focused on allowing you to identify vendors that had those shared technology goals, but identification by itself wasn't enough in this fast-paced world, so last year we introduced trusted content: we introduced our Container Health Index, publicly grading Red Hat's images that form the foundation for those vendor images. And that was great, because those of you that are familiar with containers know that you're taking software from vendors, combining it with software from companies like Red Hat, and putting those into a single container, and for you to run those in a mission-critical capacity you have to know that we can both stand by and support those deployments. But even trusted content wasn't enough, so this year I'm excited that we are extending once again, to introduce trusted operations. Now, last week we announced, at KubeCon, the Kubernetes conference, the Kubernetes Operator SDK. The goal of Kubernetes operators is to allow any software provider on Kubernetes to encode how that software should run. This is a critical part of a container ecosystem: not just being able to find the vendors that you want to work with, not just knowing that you can trust what's inside the container, but knowing that you can efficiently run that software. Now, the exciting part is that, because this is so closely aligned with the upstream technology, today we already have four partners that have functioning operators, specifically Couchbase, Dynatrace, Crunchy Data and Black Duck. So right out of the gate you have security, monitoring and data store options available to you. These partners are really leading the charge in terms of what it means to run their software on OpenShift, but behind these four we have many more; in fact, this morning we announced over 60 partners that are committed to building operators. They're taking their domain expertise and the software that they wrote, that they know, and extending that into how you are going to run it on containers in environments like OpenShift. This really brings together the power of being able to find the vendors, being able to trust what's inside, and knowing that you can run their software as efficiently as anyone else on the planet. But instead of just telling you about this, we actually want to show it to you in action, so why don't we bring back up the demo team to give you a little tour of what's possible. Guys? Thanks, Matt.
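To make "encode how that software should run" concrete: an operator is just a controller that watches a custom resource and reconciles the cluster toward what that resource declares. The Operator SDK announced at KubeCon targets Go and Ansible; purely to illustrate the shape of the idea in a few lines, here is a sketch using the Python kopf framework against a hypothetical example.com/DatabaseCluster resource (the group, kind, and "scale a StatefulSet" logic are stand-ins for real vendor knowledge):

```python
# Sketch: a toy operator that reconciles a hypothetical DatabaseCluster custom resource.
import kopf
from kubernetes import client, config

try:
    config.load_incluster_config()   # operators normally run inside the cluster
except config.ConfigException:
    config.load_kube_config()        # fall back to a local kubeconfig for development

@kopf.on.create("example.com", "v1", "databaseclusters")
def on_create(spec, name, namespace, logger, **_):
    replicas = spec.get("replicas", 3)
    logger.info(f"Creating {name} in {namespace} with {replicas} members")
    # ...a real operator would create StatefulSets, Services, secrets, and so on here.

@kopf.on.update("example.com", "v1", "databaseclusters")
def on_update(spec, name, namespace, logger, **_):
    # Reconcile: make the running StatefulSet match the declared member count.
    client.AppsV1Api().patch_namespaced_stateful_set(
        name=name, namespace=namespace,
        body={"spec": {"replicas": spec.get("replicas", 3)}})
    logger.info(f"Rescaled {name} to {spec.get('replicas', 3)} members")
```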
So Matt talked about the concept of operators, and when I think about operators and what they do, it's taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an operator for the Node.js service that I was running earlier, it would have detected the problem and fixed it itself. But when I look at what operators really do from an ecosystem perspective, for ISVs they're going to be a catalyst that allows them to make their services as manageable, as flexible and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this I've got my buddy Rob here. Rob, are we ready on the demo front? We're ready. Awesome. Now, I notice this screen looks really familiar to me, but I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then the other thing is we are going to dive a little bit more into operators and their usefulness. So, Rob? Yeah, so what we're looking at here is the service catalog that you know and love in OpenShift, and we've got a few new things in here: we've actually integrated operators into the service catalog, and I'm going to take this filter and give you a look at some of the ones that we have today. So you can see we've got a list of operators exposed, and this is the same way that your developers are already used to integrating with products, right in your catalog, and so now these are actually smarter services. But how can we maybe look at that? I mentioned that there's maybe a new view; I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. Yeah, so we've got a whole new side of the console for cluster administrators to get a look at the infrastructure, versus this dev-focused view that we're looking at today. So let's go take a look at it. The first thing you see here is that we've got a really rich set of monitoring and health status, so we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning, anything that you need to do to maintain your cluster. OK, so it's not only for the services in the cluster, doing things that I would normally, as a human operator, have to do; this console view also gives me insight into the infrastructure itself, right? Like maybe the nodes, and maybe handling the security context, is that true? Yes, so these are new capabilities that we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes for day-to-day maintenance, as well as having security constraints and things like role bindings, for example. And the exciting thing about this is that it's a view that you've never been able to see before; it's cross-cutting across namespaces. So here we've got a number of admin bindings and we can see that they're connected to a number of namespaces, and these would represent our engineering teams, all the groups that are using the cluster. We've never had this view before; this is a perfect way to audit your security. You know, it actually is pretty exciting. I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something that we've strived for, so it's really exciting to see that we can offer it now. But really, we want to get into what operators do and what they can do for us.
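Those same day-two admin chores can also be scripted against the Kubernetes API rather than clicked through the console. A small sketch, assuming the kubernetes Python client and a hypothetical node name, that cordons a node before maintenance and then audits admin role bindings across every namespace:

```python
# Sketch: day-two admin chores from code — cordon a node, then audit role bindings.
# The node name is a hypothetical placeholder.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

# 1) Cordon (mark unschedulable) a node before draining it for maintenance.
core.patch_node("worker-2.example.com", {"spec": {"unschedulable": True}})

# 2) Cross-namespace security audit: which subjects hold the admin role, and where?
bindings_by_subject = defaultdict(list)
for rb in rbac.list_role_binding_for_all_namespaces().items:
    if rb.role_ref.name == "admin":
        for subject in rb.subjects or []:
            bindings_by_subject[subject.name].append(rb.metadata.namespace)

for subject, namespaces in sorted(bindings_by_subject.items()):
    print(f"{subject}: admin in {sorted(set(namespaces))}")
```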
So maybe you can show us what the operator console looks like? Yeah, so let's jump on over and see all the operators that we have installed on the cluster. You can see that these mirror what we saw in the service catalog earlier. Now, what we care about, though, is this Couchbase operator, and we're going to jump into the demo namespace; as I said, you can share a cluster across a number of different teams, so we're going to jump into this namespace. OK, cool. So now, what we want to show you guys, when we think about operators: we're going to have a scenario here where there are multiple replicas of a Couchbase service running in the cluster, and then we're going to have a stateful set. What's interesting is that those two things are not enough if I'm really trying to run this as a true service, where it's highly available and persistent; there are things that, as a DBA, I'm normally going to have to do if there's some sort of node failure. So what we want to demonstrate to you is where operators, combined with the power that was already within OpenShift, are now coming together to keep this particular database service highly available and something that we can continue using. So, Rob, what have you got there? Yeah, as you can see we've got our Couchbase demo cluster running here, and we can see that it's up and running: we've got three members, and we've got an auth secret, which is what's controlling access to a UI that we're going to look at in a second. But what really shows the power of the operator is looking at this view of the resources that it's managing: you can see that we've got a service that's doing load balancing into the cluster, and then, like you said, we've got our pods that are actually running the software itself. OK, so that's cool. So maybe, for everyone's benefit, so we can show that this is happening live, could we bring up the Couchbase console, please, and keep up the OpenShift console, both sides? There we go. So what we see on the right-hand side is obviously the same console Rob was working in on the left-hand side, as you can see by the actual names of the pods that are there, the Couchbase services that are available. And so, Rob, maybe let's kill something; that's always fun to do on stage. Yeah, this is the power of the operator: it's going to recover it. So let's browse on over here and kill node number two. We're going to forcefully kill this and kick off the recovery, and I see right away that, because of the integration that we have with operators, the Couchbase console immediately picked up that something has changed in the environment. Now, why is that important? Normally a human being would have to get that alert, right? And so with operators we've now taken that capability and we've recognized that there has been a new event within the environment; this is not something that Kubernetes or OpenShift by itself would be able to understand. Now, I'm presuming we're going to end up doing something else; it's not just seeing that it failed. And sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available is just as important as ensuring that the disk is attached. So, Rob, thank you so much for driving this for us today and being here. And not only Couchbase, but as Matt mentioned, we also have Crunchy Data, Dynatrace and Black Duck; I would encourage you all to go visit their booths out on the floor today and understand what they have available, which are all
here with a dev preview and then talk to the many other partners that we have that are also looking at operators so again rub thank you for joining us today Matt come on out okay this is gonna make for an exciting year of just what it means to consume container base content I think containers change how customers can get that I believe operators are gonna change how much they can trust running that content let's circle back to one more partner this next partner we have has changed the landscape of computing specifically with their work on hardware design work on core Linux itself you know in fact I think they've become so ubiquitous with computing that we often overlook the technological marvels that they've been able to overcome now for myself I studied computer engineering so in the late 90s I had the chance to study processor design I actually got to build one of my own processors now in my case it was the most trivial processor that you could imagine it was an 8-bit subtractor which means it can subtract two numbers 256 or smaller but in that process I learned the sheer complexity that goes into processor design things like wire placements that are so close that electrons can cut through the insulation in short and then doing those wire placements across three dimensions to multiple layers jamming in as many logic components as you possibly can and again in my case this was to make a processor that could subtract two numbers but once I was done with this the second part of the course was studying the Pentium processor now remember that moment forever because looking at what the Pentium processor was able to accomplish it was like looking at alien technology and the incredible thing is that Intel our next partner has been able to keep up that alien like pace of innovation twenty years later so we're excited have Doug Fisher here let's hear a little bit more from Intel for business wide open skies an open mind no matter the context the idea of being open almost only suggests the potential of infinite possibilities and that's exactly the power of open source whether it's expanding what's possible in business the science and technology or for the greater good which is why-- open source requires the involvement of a truly diverse community of contributors to scale and succeed creating infinite possibilities for technology and more importantly what we do with it [Music] you know what Intel one of our core values is risk-taking and I'm gonna go just a bit off script for a second and say I was just backstage and I saw a gentleman that looked a lot like Scott Guthrie who runs all of Microsoft's cloud enterprise efforts wearing a red shirt talking to Cormier I'm just saying I don't know maybe I need some more sleep but that's what I saw as we approach Intel's 50th anniversary these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago don't be encumbered by history this is about breaking boundaries in technology and then go off and do something wonderful is about innovation and driving innovation in our industry and Intel we're constantly looking to break boundaries to advance our technology in the cloud in enterprise space that is no different so I'm going to talk a bit about some of the boundaries we've been breaking and innovations we've been driving at Intel starting with our Intel Xeon platform Orion Xeon scalable platform we launched several months ago which was the biggest and mark the most advanced movement in this technology in over a decade we were 
able to drive critical performance capabilities unmatched agility and added necessary and sufficient security to that platform I couldn't be happier with the work we do with Red Hat and ensuring that those hero features that we drive into our platform they fully expose to all of you to drive that innovation to go off and do something wonderful well there's taking advantage of the performance features or agility features like our advanced vector extensions or avx-512 or Intel quick exist those technologies are fully embraced by Red Hat Enterprise Linux or whether it's security technologies like txt or trusted execution technology are fully incorporated and we look forward to working with Red Hat on their next release to ensure that our advancements continue to be exposed and their platform and all these workloads that are driving the need for us to break boundaries and our technology are driving more and more need for flexibility and computing and that's why we're excited about Intel's family of FPGAs to help deliver that additional flexibility for you to build those capabilities in your environment we have a broad set of FPGA capabilities from our power fish at Mac's product line all the way to our performance product line on the 6/10 strat exten we have a broad set of bets FPGAs what i've been talking to customers what's really exciting is to see the combination of using our Intel Xeon scalable platform in combination with FPGAs in addition to the acceleration development capabilities we've given to software developers combining all that together to deliver better and better solutions whether it's helping to accelerate data compression well there's pattern recognition or data encryption and decryption one of the things I saw in a data center recently was taking our Intel Xeon scalable platform utilizing the capabilities of FPGA to do data encryption between servers behind the firewall all the while using the FPGA to do that they preserve those precious CPU cycles to ensure they delivered the SLA to the customer yet provided more security for their data in the data center one of the edges in cyber security is innovation and route of trust starts at the hardware we recently renewed our commitment to security with our security first pledge has really three elements to our security first pledge first is customer first urgency we have now completed the release of the micro code updates for protection on our Intel platforms nine plus years since launch to protect against things like the side channel exploits transparent and timely communication we are going to communicate timely and openly on our Intel comm website whether it's about our patches performance or other relevant information and then ongoing security assurance we drive security into every one of our products we redesigned a portion of our processor to add these partition capability which is adding additional walls between applications and user level privileges to further secure that environment from bad actors I want to pause for a second and think everyone in this room involved in helping us work through our security first pledge this isn't something we do on our own it takes everyone in this room to help us do that the partnership and collaboration was next to none it's the most amazing thing I've seen since I've been in this industry so thank you we don't stop there we continue to advance our security capabilities cross-platform solutions we recently had a conference discussion at RSA where we talked about Intel Security 
Essentials where we deliver a framework of capabilities and the end that are in our silicon available for those to innovate our customers and the security ecosystem to innovate on a platform in a consistent way delivering that assurance that those capabilities will be on that platform we also talked about things like our security threat technology threat detection technology is something that we believe in and we launched that at RSA incorporates several elements one is ability to utilize our internal graphics to accelerate some of the memory scanning capabilities we call this an accelerated memory scanning it allows you to use the integrated graphics to scan memory again preserving those precious cycles on the core processor Microsoft adopted this and are now incorporated into their defender product and are shipping it today we also launched our threat SDK which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads so we'll continue to drive differential experiences into our platform for our ecosystem to innovate and deliver more and more capabilities one of the key aspects you have to protect is data by 2020 the projection is 44 zettabytes of data will be available 44 zettabytes of data by 2025 they project that will grow to a hundred and eighty s data bytes of data massive amount of data and what all you want to do is you want to drive value from that data drive and value from that data is absolutely critical and to do that you need to have that data closer and closer to your computation this is why we've been working Intel to break the boundaries in memory technology with our investment in 3d NAND we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing we're also innovating on form factors we have here what we call our ruler form factor this ruler form factor is designed to drive as much dense as you can in a 1u rack we're going to continue to advance the capabilities to drive one petabyte of data at low power consumption into this ruler form factor SSD form factor so our innovation continues the biggest breakthrough and memory technology in the last 25 years in memory media technology was done by Intel we call this our 3d crosspoint technology and our 3d crosspoint technology is now going to be driven into SSDs as well as in a persistent memory form factor to be on the memory bus giving you the speed of memory characteristics of memory as well as the characteristics of storage given a new tier of memory for developers to take full advantage of and as you can see Red Hat is fully committed to integrating this capability into their platform to take full advantage of that new capability so I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on and so we're breaking boundaries and technology across a broad set of elements that we deliver that's what we're about we're going to continue to do that not be encumbered by the past your role is to go off and doing something wonderful with that technology all ecosystems are embracing this and driving it including open source technology open source is a hub of innovation it's been that way for many many years that innovation that's being driven an open source is starting to transform many many businesses it's driving business transformation we're seeing this coming to light in the transformation of 5g driving 5g into the networked environment is a transformational 
moment, and open source is playing a pivotal role in that: with OpenStack, ONAP and OPNFV and the other open source projects we're contributing to and participating in, we're helping drive that transformation in 5G as you build software-defined networks on our barrier-breaking technology. We're also seeing this transformation rapidly occurring in the cloud and the enterprise: cloud and enterprise are growing rapidly and innovation continues. Our work with virtualization and KVM continues to be aggressive, adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms, so that we can have them managed in an open platform environment, and KubeVirt provides that. So between Intel, Red Hat and the community, we're investing resources to make certain that comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid pace. One of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed and the footprint of a container. With our Clear Containers effort, along with Hyper's runV, we were able to combine the two and create what we call Kata Containers. We launched this at the end of last year; Kata Containers is designed to keep that container element available while adding elements like isolation. Both of these environments need an orchestration and management capability, and Red Hat OpenShift provides that capability for these workloads, whether they are containerized or, with KubeVirt, virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads: as we see growth in a workload, we'll work with a partner to optimize a solution on Intel technology to deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat. So now we're going to be delivering an Intel Select Solution designed and optimized around Red Hat OpenShift, and we expect the industry to start deploying this capability very rapidly. I'm excited to announce today that Lenovo is committed to be the first platform company to deliver this solution to market; the Intel Select Solution will be delivered to market by Lenovo. Now, I've talked about what we're doing in industry and how we're transforming businesses; our technology is also utilized for the greater good, and there's no better example of this than the work done by Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and bringing those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow, which is the highest technical achievement you can reach at Intel, got to spend 10 years with Dr. Hawking looking at innovative things they could do together with our technology and his breakthrough innovative thinking. So I thought it'd be great to bring up our Intel Fellow, Lama Nachman, to talk about her work with Dr.
Hawking and what she learned in that experience come on up Elina [Music] great to see you Thanks something going on about the breakthrough breaking boundaries and Intel technology talk about how you use that in your work with dr. Hawking absolutely so the most important part was to really make that technology contextually aware because for people with disability every single interaction takes a long time so whether it was adapting for example the language model of his work predictor to understand whether he's gonna talk to people or whether he's writing a book on black holes or to even understand what specific application he might be using and then making sure that we're surfacing only enough actions that were relevant to reduce that amount of interaction so the tricky part is really to make all of that contextual awareness happen without totally confusing the user because it's constantly changing underneath it so how is that your work involving any open source so you know the problem with assistive technology in general is that it needs to be tailored to the specific disability which really makes it very hard and very expensive because it can't utilize the economies of scale so basically with the system that we built what we wanted to do is really enable unleashing innovation in the world right so you could take that framework you could tailor to a specific sensor for example a brain computer interface or something like that where you could actually then support a different set of users so that makes open-source a perfect fit because you could actually build and tailor and we you spoke with dr. Hawking what was this view of open source is it relevant to him so yeah so Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself so he spent a lot of time with us to actually build this system and he was adamant from day one that he would only engage with us if we were commit to actually open sourcing the technology that's fantastic and you had the privilege of working with them in 10 years I know you have some amazing stories to share so thank you so much for being here thank you so much in order for us to scale and that's what we're about at Intel is really scaling our capabilities it takes this community it takes this community of diverse capabilities it takes two births thought diverse thought of dr. 
Hawking couldn't be more relevant but we also are proud at Intel about leading efforts of diverse thought like women and Linux women in big data other areas like that where Intel feels that that diversity of thinking and engagement is critical for our success so as we look at Intel not to be encumbered by the past but break boundaries to deliver the technology that you all will go off and do something wonderful with we're going to remain committed to that and I look forward to continue working with you thank you and have a great conference [Applause] thank God now we have one more customer story for you today when you think about customers challenges in the technology landscape it is hard to ignore the public cloud these days public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry and our next customer they actually had that same challenge they wanted to tap into that innovation but they were also making bets for the long term they wanted flexibility and providers and they had to integrate to the systems that they already have and they have done a phenomenal job in executing to this so please give a warm welcome to Kerry Pierce from Cathay Pacific Kerry come on thanks very much Matt hi everyone thank you for giving me the opportunity to share a little bit about our our cloud journey let me start by telling you a little bit about Cathay Pacific we're an international airline based in Hong Kong and we serve a passenger and a cargo network to over 200 destinations in 52 countries and territories in the last seventy years and years seventy years we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs we invest in what matters most to our customers to you focusing on our exemplary service and our great product and it's both on the ground and in the air we're also investing and expanding our network beyond our multiple frequencies to the financial districts such as Tokyo New York and London and we're connecting Asia and Hong Kong with key tech hubs like San Francisco where we have multiple flights daily we're also connecting Asia in Hong Kong to places like Tel Aviv and our upcoming destination of Dublin in fact 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth and we will be launching in September our longest flight from Hong Kong direct to Washington DC and that'll be using a state-of-the-art Airbus a350 1000 aircraft so that's a little bit about Cathay Pacific let me tell you about our journey through the cloud I'm not going to go into technical details there's far smarter people out in the audience who will be able to do that for you just focus a little bit about what we were trying to achieve and the people side of it that helped us get there we had a couple of years ago no doubt the same issues that many of you do I don't think we're unique we had a traditional on-premise non-standardized fragile infrastructure it didn't meet our infrastructure needs and it didn't meet our development needs it was costly to maintain it was costly to grow and it really inhibited innovation most importantly it slowed the delivery of value to our customers at the same time you had the hype of cloud over the last few years cloud this cloud that clouds going to fix the world we were really keen on making sure we didn't get wound up and that so we focused on what we needed we started bottom up with a strategy we knew we wanted to be clouded 
Gnostic we wanted to have active active on-premise data centers with a single network and fabric and we wanted public clouds that were trusted and acted as an extension of that environment not independently we wanted to avoid single points of failure and we wanted to reduce inter dependencies by having loosely coupled designs and finally we wanted to be scalable we wanted to be able to cater for sudden surges of demand in a nutshell we kind of just wanted to make everything easier and a management level we wanted to be a broker of services so not one size fits all because that doesn't work but also not one of everything we want to standardize but a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud not against it so we started on a journey with red hat we implemented Red Hat cloud forms and ansible to manage our hybrid cloud we also met implemented Red Hat satellite to maintain a manager environment we built a Red Hat OpenStack on crimson vironment to give us an alternative and at the same time we migrated a number of customer applications to a production public cloud open shift environment but it wasn't all Red Hat you love heard today that the Red Hat fits within an overall ecosystem we looked at a number of third-party tools and services and looked at developing those into our core solution I think at last count we had tried and tested somewhere past eight different tools and at the moment we still have around 62 in our environment that help us through that journey but let me put the technical solution aside a little bit because it doesn't matter how good your technical solution is if you don't have the culture and the people to get it right as a group we needed to be aligned for delivery and we focused on three core behaviors we focused on accountability agility and collaboration now I was really lucky we've got a pretty fantastic team for whom that was actually pretty easy but but again don't underestimate the importance of getting the culture and the people right because all the technology in the world doesn't matter if you don't have that right I asked the team what did we do differently because in our situation we didn't go out and hire a bunch of new people we didn't go out and hire a bunch of consultants we had the staff that had been with us for 10 20 and in some cases 30 years so what did we do differently it was really simple we just empowered and supported our staff we knew they were the smart ones they were the ones that were dealing with a legacy environment and they had the passion to make the change so as a team we encouraged suggestions and contributions from our overall IT community from the bottom up we started small we proved the case we told the story and then we got by him and only did did we implement wider the benefits the benefit through our staff were a huge increase in staff satisfaction reduction and application and platform outage support incidents risk free and failsafe application releases work-life balance no more midnight deployments and our application and infrastructure people could really focus on delivering customer value not on firefighting and for our end customers the people that travel with us it was really really simple we could provide a stable service that allowed for faster releases which meant we could deliver value faster in terms of stats we migrated 16 production b2c applications to a public cloud OpenShift environment in 12 months we decreased provisioning time from weeks or 
occasionally months we were waiting for hardware two minutes and we had a hundred percent availability of our key customer facing systems but most importantly it was about people we'd built a culture a culture of innovation that was built on a foundation of collaboration agility and accountability and that permeated throughout the IT organization not those just those people that were involved in the project everyone with an IT could see what good looked like and to see what it worked what it looked like in terms of working together and that was a key foundation for us the future for us you will have heard today everything's changing so we're going to continue to develop our open hybrid cloud onboard more public cloud service providers continue to build more modern applications and leverage the emerging technology integrate and automate everything we possibly can and leverage more open source products with the great support from the open source community so there you have it that's our journey I think we succeeded by not being over awed and by starting with the basics the technology was key obviously it's a cool component but most importantly it was a way we approached our transition we had a clear strategy that was actually developed bottom-up by the people that were involved day to day and we empowered those people to deliver and that provided benefits to both our staff and to our customers so thank you for giving the opportunity to share and I hope you enjoy the rest of the summer [Applause] I got one thanks what a great story would a great customer story to close on and we have one more partner to come up and this is a partner that all of you know that's Microsoft Microsoft has gone through an amazing transformation they've we've built an incredibly meaningful partnership with them all the way from our open source collaboration to what we do in the business side we started with support for Red Hat Enterprise Linux on hyper-v and that was truly just the beginning today we're announcing one of the most exciting joint product offerings on the market today let's please give a warm welcome to Paul correr and Scott Scott Guthrie to tell us about it guys come on out you know Scot welcome welcome to the Red Hat summer thanks for coming really appreciate it great to be here you know many surprises a lot of people when we you know published a list of speakers and then you rock you were on it and you and I are on stage here it's really really important and exciting to us exciting new partnership we've worked together a long time from the hypervisor up to common support and now around hybrid hybrid cloud maybe from your perspective a little bit of of what led us here well you know I think the thing that's really led us here is customers and you know Microsoft we've been on kind of a transformation journey the last several years where you know we really try to put customers at the center of everything that we do and you know as part of that you quickly learned from customers in terms of I'm including everyone here just you know you've got a hybrid of state you know both in terms of what you run on premises where it has a lot of Red Hat software a lot of Microsoft software and then really is they take the journey to the cloud looking at a hybrid of state in terms of how do you run that now between on-premises and a public cloud provider and so I think the thing that both of us are recognized and certainly you know our focus here at Microsoft has been you know how do we really meet customers with 
where they're at and where they want to go and make them successful in that journey and you know it's been fantastic working with Paul and the Red Hat team over the last two years in particular we spend a lot of time together and you know really excited about the journey ahead so um maybe you can share a bit more about the announcement where we're about to make today yeah so it's it's it's a really exciting announcement it's and really kind of I think first of its kind in that we're delivering a Red Hat openshift on Azure service that we're jointly developing and jointly managing together so this is different than sort of traditional offering where it's just running inside VMs and it's sort of two vendors working this is really a jointly managed service that we're providing with full enterprise support with a full SLA where the you know single throat to choke if you will although it's collectively both are choke the throats in terms of making sure that it works well and it's really uniquely designed around this hybrid world and in that it supports will support both Windows and Linux containers and it role you know it's the same open ship that runs both in the public cloud on Azure and on-premises and you know it's something that we hear a lot from customers I know there's a lot of people here that have asked both of us for this and super excited to be able to talk about it today and we're gonna show off the first demo of it just a bit okay well I'm gonna ask you to elaborate a bit more about this how this fits into the bigger Microsoft picture and I'll get out of your way and so thanks again thank you for coming here we go thanks Paul so I thought I'd spend just a few minutes talking about wouldn't you know that some of the work that we're doing with Microsoft Asher and the overall Microsoft cloud I didn't go deeper in terms of the new offering that we're announcing today together with red hat and show demo of it actually in action in a few minutes you know the high level in terms of you know some of the work that we've been doing at Microsoft the last couple years you know it's really been around this this journey to the cloud that we see every organization going on today and specifically the Microsoft Azure we've been providing really a cloud platform that delivers the infrastructure the application and kind of the core computing needs that organizations have as they want to be able to take advantage of what the cloud has to offer and in terms of our focus with Azure you know we've really focused we deliver lots and lots of different services and features but we focused really in particular on kind of four key themes and we see these four key themes aligning very well with the journey Red Hat it's been on and it's partly why you know we think the partnership between the two companies makes so much sense and you know for us the thing that we've been really focused on has been with a or in terms of how do we deliver a really productive cloud meaning how do we enable you to take advantage of cutting-edge technology and how do we kind of accelerate the successful adoption of it whether it's around the integration of managed services that we provide both in terms of the application space in the data space the analytic and AI space but also in terms of just the end-to-end management and development tools and how all those services work together so that teams can basically adopt them and be super successful yeah we deeply believe in hybrid and believe that the world is going to be a multi cloud 
and a multi distributed world and how do we enable organizations to be able to take the existing investments that they already have and be able to easily integrate them in a public cloud and with a public cloud environment and get immediate ROI on day one without how to rip and replace tons of solutions you know we're moving very aggressively in the AI space and are looking to provide a rich set of AI services both finished AI models things like speech detection vision detection object motion etc that any developer even at non data scientists can integrate to make application smarter and then we provide a rich set of AI tooling that enables organizations to build custom models and be able to integrate them also as part of their applications and with their data and then we invest very very heavily on trust Trust is sort of at the core of a sure and we now have more compliant certifications than any other cloud provider we run in more countries than any other cloud provider and we really focus around unique promises around data residency data sovereignty and privacy that are really differentiated across the industry and terms of where Iser runs today we're in 50 regions around the world so our region for us is typically a cluster of multiple data centers that are grouped together and you can see we're pretty much on every continent with the exception of Antarctica today and the beauty is you're going to be able to take the Red Hat open shift service and run it on ashore in each of these different locations and really have a truly global footprint as you look to build and deploy solutions and you know we've seen kind of this focus on productivity hybrid intelligence and Trust really resonate in the market and about 90 percent of Fortune 500 companies today are deployed on Azure and you heard Nike talked a little bit earlier this afternoon about some of their journeys as they've moved to a dot public cloud this is a small logo of just a couple of the companies that are on ashore today and what I do is actually even before we dive into the open ship demo is actually just show a quick video you know one of the companies thing there are actually several people from that organization here today Deutsche Bank who have been working with both Microsoft and Red Hat for many years Microsoft on the other side Red Hat both on the rel side and then on the OpenShift side and it's just one of these customers that have helped bring the two companies together to deliver this managed openshift service on Azure and so I'm just going to play a quick video of some of the folks that Deutsche Bank talking about their experiences and what they're trying to get out of it so we could roll the video that'd be great technology is at the absolute heart of Deutsche Bank we've recognized that the cost of running our infrastructure was particularly high there was a enormous amount of under utilization we needed a platform which was open to polyglot architecture supporting any kind of application workload across the various business lines of the third we analyzed over 60 different vendor products and we ended up with Red Hat openshift I'm super excited Microsoft or supporting Linux so strongly to adopting a hybrid approach we chose as here because Microsoft was the ideal partner to work with on constructs around security compliance business continuity as you as in all the places geographically that we need to be we have applications now able to go from a proof of concept to production in three weeks that is already breaking 
records openshift gives us given entities and containers allows us to apply the same sets of processes automation across a wide range of our application landscape on any given day we run between seven and twelve thousand containers across three regions we start see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers open ship gives us an abstraction layer which is allows us to move our applications between providers without having to reconfigure or recode those applications what's really exciting for me about this journey is the way they're both Red Hat and Microsoft have embraced not just what we're doing but what each other are doing and have worked together to build open shift as a first-class citizen with Microsoft [Applause] in terms of what we're announcing today is a new fully managed OpenShift service on Azure and it's really the first fully managed service provided end-to-end across any of the cloud providers and it's jointly engineer operated and supported by both Microsoft and Red Hat and that means again sort of one service one SLA and both companies standing for a link firmly behind it really again focusing around how do we make customers successful and as part of that really providing the enterprise-grade not just isolates but also support and integration testing so you can also take advantage of all your rel and linux-based containers and all of your Windows server based containers and how can you run them in a joint way with a common management stack taking the advantage of one service and get maximum density get maximum code reuse and be able to take advantage of a containerized world in a better way than ever before and make this customer focus is very much at the center of what both companies are really centered around and so what if I do be fun is rather than just talk about openshift as actually kind of show off a little bit of a journey in terms of what this move to take advantage of it looks like and so I'd like to invite Brendan and Chris onstage who are actually going to show off a live demo of openshift on Azure in action and really walk through how to provision the service and basically how to start taking advantage of it using the full open ship ecosystem so please welcome Brendan and Chris we're going to join us on stage for a demo thanks God thanks man it's been a good afternoon so you know what we want to get into right now first I'd like to think Brandon burns for joining us from Microsoft build it's a busy week for you I'm sure your own stage there a few times as well you know what I like most about what we just announced is not only the business and technical aspects but it's that operational aspect the uniqueness the expertise that RedHat has for running OpenShift combined with the expertise that Microsoft has within Azure and customers are going to get this joint offering if you will with you know Red Hat OpenShift on Microsoft Azure and so you know kind of with that again Brendan I really appreciate you being here maybe talk to the folks about what we're going to show yeah so we're going to take a look at what it looks like to deploy OpenShift on to Azure via the new OpenShift service and the real selling point the really great part of this is the the deep integration with a cloud native app API so the same tooling that you would use to create virtual machines to create disks trade databases is now the tooling that you're going to use to create an open chip cluster so to show you this first we're 
going to create a resource group here so we're going to create that resource group in East us using the AZ tool that's the the azure command-line tooling a resource group is sort of a folder on Azure that holds all of your stuff so that's gonna come back into the second I've created my resource group in East us and now we're gonna use that exact same tool calling into into Azure api's to provision an open shift cluster so here we go we have AZ open shift that's our new command line tool putting it into that resource group I'm gonna get into East us alright so it's gonna take a little bit of time to deploy that open shift cluster it's doing a bunch of work behind the scenes provisioning all kinds of resources as well as credentials to access a bunch of different as your API so are we actually able to see this to you yeah so we can cut over to in just a second we can cut over to that resource group in a reload so Brendan while relating the beauty of what you know the teams have been doing together already is the fact that now open shift is a first-class citizen as it were yeah absolutely within the agent so I presume not only can I do a deployment but I can do things like scale and check my credentials and pretty much everything that I could do with any other service with that that's exactly right so we can anything that you you were used to doing via the my computer has locked up there we go the demo gods are totally with me oh there we go oh no I hit reload yeah that was that was just evil timing on the house this is another use for operators as we talked about earlier today that's right my dashboard should be coming up do I do I dare click on something that's awesome that was totally it was there there we go good job so what's really interesting about this I've also heard that it deploys you know in as little as five to six minutes which is really good for customers they want to get up and running with it but all right there we go there it is who managed to make it see that shows that it's real right you see the sweat coming off of me there but there you can see the I feel it you can see the various resources that are being created in order to create this openshift cluster virtual machines disks all of the pieces provision for you automatically via that one single command line call now of course it takes a few minutes to to create the cluster so in order to show the other side of that integration the integration between openshift and Azure I'm going to cut over to an open shipped cluster that I already have created alright so here you can see my open shift cluster that's running on Microsoft Azure I'm gonna actually log in over here and the first sign you're gonna see of the integration is it's actually using my credentials my login and going through Active Directory and any corporate policies that I may have around smart cards two-factor off anything like that authenticate myself to that open chef cluster so I'll accept that it can access my and now we're gonna load up the OpenShift web console so now this looks familiar to me oh yeah so if anybody's used OpenShift out there this is the exact same console and what we're going to show though is how this console via the open service broker and the open service broker implementation for Azure integrates natively with OpenShift all right so we can go down here and we can actually see I want to deploy a database I'm gonna deploy Mongo as my key value store that I'm going to use but you know like as we talk about management and having a 
OpenShift cluster that's managed for you I don't really want to have to manage my database either so I'm actually going to use cosmos DB it's a native Azure service it's a multilingual database that offers me the ability to access my data in a variety of different formats including MongoDB fully managed replicated around the world a pretty incredible service so I'm going to go ahead and create that so now Brendan what's interesting I think to me is you know we talked about the operational aspects and clearly it's not you and I running the clusters but you do need that way to interface with it and so when customers are able to deploy this all of this is out of the box there's no additional contemporary like this is what you get when you create when you use that tool to create that open chef cluster this is what you get with all of that integration ok great step through here and go ahead don't have any IP ranges there we go all right and we create that binding all right and so now behind the scenes openshift is integrated with the azure api's with all of my credentials to go ahead and create that distributed database once it's done provisioning actually all of the credentials necessary to access the database are going to be automatically populated into kubernetes available for me inside of OpenShift via service discovery to access from my application without any further work so I think that really shows not only the power of integrating openshift with an azure based API but actually the power of integrating a Druze API is inside of OpenShift to make a truly seamless experience for managing and deploying your containers across a variety of different platforms yeah hey you know Brendan this is great I know you've got a flight to catch because I think you're back onstage in a few hours but you know really appreciate you joining us today absolutely I look forward to seeing what else we do yeah absolutely thank you so much thanks guys Matt you want to come back on up thanks a lot guys if you have never had the opportunity to do a live demo in front of 8,000 people it'll give you a new appreciation for standing up there and doing it and that was really good you know every time I get the chance just to take a step back and think about the technology that we have at our command today I'm in awe just the progress over the last 10 or 20 years is incredible on to think about what might come in the next 10 or 20 years really is unthinkable you even forget 10 years what might come in the next five years even the next two years but this can create a lot of uncertainty in the environment of what's going to be to come but I believe I am certain about one thing and that is if ever there was a time when any idea is achievable it is now just think about what you've seen today every aspect of open hybrid cloud you have the world's infrastructure at your fingertips and it's not stopping you've heard about this the innovation of open source how fast that's evolving and improving this capability you've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey and you've heard from customer after customer that's already started their journey in the successes that they've had you're one of the neat parts about this afternoon you will aren't later this week you will actually get to put your hands on all of this technology together in our live audience demo you know this is what some it's all about for us it's a chance to bring together the technology experts that you can work 
with to help formulate how to pull off those ideas we have the chance to bring together technology experts our customers and our partners and really create an environment where everyone can experience the power of open source that same spark that I talked about when I was at IBM where I understood the but intial that open-source had for enterprise customers we want to create the environment where you can have your own spark you can have that same inspiration let's make this you know in tomorrow's keynote actually you will hear a story about how open-source is changing medicine as we know it and literally saving lives it is a great example of expanding the ideas it might be possible that we came into this event with so let's make this the best summit ever thank you very much for being here let's kick things off right head down to the Welcome Reception in the expo hall and please enjoy the summit thank you all so much [Music] [Music]
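For readers who want to retrace the cluster-provisioning flow Brendan Burns walks through in the demo above, the sketch below drives the same two Azure CLI calls from a small Python script. It is only an illustrative reconstruction, not the demo's actual code: the resource group name, cluster name, and region are made-up placeholders, and the az openshift command shown in this 2018 demo has since been superseded by az aro in later CLI releases, so treat the exact syntax as an assumption rather than current documentation.

```python
import subprocess

def az(*args: str) -> None:
    """Run one Azure CLI command, echoing it and failing loudly on error."""
    cmd = ["az", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder names -- not the values used on stage.
RESOURCE_GROUP = "openshift-demo-rg"
CLUSTER_NAME = "openshift-demo"
LOCATION = "eastus"

# Step 1: create the resource group, the "folder" on Azure that will
# hold everything the managed OpenShift cluster provisions.
az("group", "create", "--name", RESOURCE_GROUP, "--location", LOCATION)

# Step 2: ask the same API surface to provision the managed OpenShift
# cluster into that group; behind the scenes Azure creates the VMs,
# disks, and credentials that appear in the portal a few minutes later.
az("openshift", "create",
   "--resource-group", RESOURCE_GROUP,
   "--name", CLUSTER_NAME,
   "--location", LOCATION)
```

The point the demo makes, that an OpenShift cluster is created with the same tooling used for virtual machines, disks, and databases, is exactly what these two calls are meant to show.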
SUMMARY :
Red Hat and Microsoft announce a jointly engineered, jointly operated and supported Red Hat OpenShift managed service on Azure, backed by a single SLA from both companies and designed for hybrid deployments spanning Windows and Linux containers, on premises and in the public cloud. Deutsche Bank describes moving from proof of concept to production in three weeks and running seven to twelve thousand containers across three regions on OpenShift, and Brendan and Chris demo provisioning a cluster with the Azure command line and binding it to Cosmos DB through the Open Service Broker before the keynote closes with an invitation to the welcome reception.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug Fisher | PERSON | 0.99+ |
Stephen | PERSON | 0.99+ |
Brendan | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Robert Noyce | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Michael | PERSON | 0.99+ |
Arvind | PERSON | 0.99+ |
20-year | QUANTITY | 0.99+ |
March 14th | DATE | 0.99+ |
Matt | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Nike | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Antarctica | LOCATION | 0.99+ |
Scott Guthrie | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Asia | LOCATION | 0.99+ |
Washington DC | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
two minutes | QUANTITY | 0.99+ |
Arvin | PERSON | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
two numbers | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Paul correr | PERSON | 0.99+ |
September | DATE | 0.99+ |
Kerry Pierce | PERSON | 0.99+ |
30 years | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
8-bit | QUANTITY | 0.99+ |
Mike witig | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
dr. Hawking | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Dublin | LOCATION | 0.99+ |
first partner | QUANTITY | 0.99+ |
Rob | PERSON | 0.99+ |
first platform | QUANTITY | 0.99+ |
Matt Hicks | PERSON | 0.99+ |
today | DATE | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
OpenShift | TITLE | 0.99+ |
last week | DATE | 0.99+ |
Joel Horwitz, IBM | IBM CDO Summit Spring 2018
(techno music) >> Announcer: Live, from downtown San Francisco, it's theCUBE. Covering IBM Chief Data Officer Strategy Summit 2018. Brought to you by IBM. >> Welcome back to San Francisco everybody, this is theCUBE, the leader in live tech coverage. We're here at the Parc 55 in San Francisco covering the IBM CDO Strategy Summit. I'm here with Joel Horwitz who's the Vice President of Digital Partnerships & Offerings at IBM. Good to see you again Joel. >> Thanks, great to be here, thanks for having me. >> So I was just, you're very welcome- It was just, let's see, was it last month, at Think? >> Yeah, it's hard to keep track, right. >> And we were talking about your new role- >> It's been a busy year. >> the importance of partnerships. One of the things I want to, well let's talk about your role, but I really want to get into, it's innovation. And we talked about this at Think, because it's so critical, in my opinion anyway, that you can attract partnerships, innovation partnerships, startups, established companies, et cetera. >> Joel: Yeah. >> To really help drive that innovation, it takes a team of people, IBM can't do it on its own. >> Yeah, I mean look, IBM is the leader in innovation, as we all know. We're the market leader for patents, that we put out each year, and how you get that technology in the hands of the real innovators, the developers, the longtail ISVs, our partners out there, that's the challenging part at times, and so what we've been up to is really looking at how we make it easier for partners to partner with IBM. How we make it easier for developers to work with IBM. So we have a number of areas that we've been adding, so for example, we've added a whole IBM Code portal, so if you go to developer.ibm.com/code you can actually see hundreds of code patterns that we've created to help really any client, any partner, get started using IBM's technology, and to innovate. >> Yeah, and that's critical, I mean you're right, because to me innovation is a combination of invention, which is what you guys do really, and then it's adoption, which is what your customers are all about. You come from the data science world. We're here at the Chief Data Officer Summit, what's the intersection between data science and CDOs? What are you seeing there? >> Yeah, so when I was here last, it was about two years ago in 2015, actually, maybe three years ago, man, time flies when you're having fun. >> Dave: Yeah, the Spark Summit- >> Yeah Spark Technology Center and the Spark Summit, and we were here, I was here at the Chief Data Officer Summit. And it was great, and at that time, I think a lot of the conversation was really not that different than what I'm seeing today. Which is, how do you manage all of your data assets? I think a big part of doing good data science, which is my kind of background, is really having a good understanding of what your data governance is, what your data catalog is, so, you know we introduced the Watson Studio at Think, and actually, what's nice about that, is it brings a lot of this together. So if you look in the market, in the data market, today, you know we used to segment it by a few things, like data gravity, data movement, data science, and data governance. And those are kind of the four themes that I continue to see. 
And so outside of IBM, I would contend that those are relatively separate kind of tools that are disconnected, in fact Dinesh Nirmal, who's our engineer on the analytic side, Head of Development there, he wrote a great blog just recently, about how you can have some great machine learning, you have some great data, but if you can't operationalize that, then really you can't put it to use. And so it's funny to me because we've been focused on this challenge, and IBM is making the right steps, in my, I'm obviously biased, but we're making some great strides toward unifying the, this tool chain. Which is data management, to data science, to operationalizing, you know, machine learning. So that's what we're starting to see with Watson Studio. >> Well, I always push Dinesh on this and like okay, you've got a collection of tools, but are you bringing those together? And he flat-out says no, we developed this, a lot of this from scratch. Yes, we bring in the best of the knowledge that we have there, but we're not trying to just cobble together a bunch of disparate tools with a UI layer. >> Right, right. >> It's really a fundamental foundation that you're trying to build. >> Well, what's really interesting about that, that piece, is that yeah, I think a lot of folks have cobbled together a UI layer, so we formed a partnership, coming back to the partnership view, with a company called Lightbend, who's based here in San Francisco, as well as in Europe, and the reason why we did that, wasn't just because of the fact that Reactive development, if you're not familiar with Reactive, it's essentially Scala, Akka, Play, this whole framework, that basically allows developers to write once, and it kind of scales up with demand. In fact, Verizon actually used our platform with Lightbend to launch the iPhone 10. And they show dramatic improvements. Now what's exciting about Lightbend, is the fact that application developers are developing with Reactive, but if you turn around, you'll also now be able to operationalize models with Reactive as well. Because it's basically a single platform to move between these two worlds. So what we've continued to see is data science kind of separate from the application world. Really kind of, AI and cloud as different universes. The reality is that for any enterprise, or any company, to really innovate, you have to find a way to bring those two worlds together, to get the most use out of it. >> Fourier always says "Data is the new development kit". He said this I think five or six years ago, and it's barely becoming true. You guys have tried to make an attempt, and have done a pretty good job, of trying to bring those worlds together in a single platform, what do you call it? The Watson Data Platform? >> Yeah, Watson Data Platform, now Watson Studio, and I think the other, so one side of it is, us trying to, not really trying, but us actually bringing together these disparate systems. I mean we are kind of a systems company, we're IT. But not only that, but bringing our trained algorithms, and our trained models to the developers. So for example, we also did a partnership with Unity, at the end of last year, that's now just reaching some pretty good growth, in terms of bringing the Watson SDK to game developers on the Unity platform. So again, it's this idea of bringing the game developer, the application developer, in closer contact with these trained models, and these trained algorithms. And that's where you're seeing incredible things happen. 
So for example, Star Trek Bridge Crew, which I don't know how many Trekkies we have here at the CDO Summit. >> A few over here probably. >> Yeah, a couple? They're using our SDK in Unity, to basically allow a gamer to use voice commands through the headset, through a VR headset, to talk to other players in the virtual game. So we're going to see more, I can't really disclose too much what we're doing there, but there's some cool stuff coming out of that partnership. >> Real immersive experience driving a lot of data. Now you're part of the Digital Business Group. I like the term digital business, because we talk about it all the time. Digital business, what's the difference between a digital business and a business? What's the, how they use data. >> Joel: Yeah. >> You're a data person, what does that mean? That you're part of the Digital Business Group? Is that an internal facing thing? An external facing thing? Both? >> It's really both. So our Chief Digital Officer, Bob Lord, he has a presentation that he'll give, where he starts out, and he goes, when I tell people I'm the Chief Digital Officer they usually think I just manage the website. You know, if I tell people I'm a Chief Data Officer, it means I manage our data, in governance over here. The reality is that I think these Chief Digital Officer, Chief Data Officer, they're really responsible for business transformation. And so, if you actually look at what we're doing, I think on both sides is we're using data, we're using marketing technology, martech, like Optimizely, like Segment, like some of these great partners of ours, to really look at how we can quickly A/B test, get user feedback, to look at how we actually test different offerings and market. And so really what we're doing is we're setting up a testing platform, to bring not only our traditional offers to market, like DB2, Mainframe, et cetera, but also bring new offers to market, like blockchain, and quantum, and others, and actually figure out how we get better product-market fit. What actually, one thing, one story that comes to mind, is if you've seen the movie Hidden Figures- >> Oh yeah. >> There's this scene where Kevin Costner, I know this is going to look not great for IBM, but I'm going to say it anyways, which is Kevin Costner has like a sledgehammer, and he's like trying to break down the wall to get the mainframe in the room. That's what it feels like sometimes, 'cause we create the best technology, but we forget sometimes about the last mile. You know like, we got to break down the wall. >> Where am I going to put it? >> You know, to get it in the room! So, honestly I think that's a lot of what we're doing. We're bridging that last mile, between these different audiences. So between developers, between ISVs, between commercial buyers. Like how do we actually make this technology, not just accessible to large enterprise, which are our main clients, but also to the other ecosystems, and other audiences out there. >> Well so that's interesting Joel, because as a potential partner of IBM, they want, obviously your go-to-market, your massive company, and great distribution channel. But at the same time, you want more than that. You know you want to have a closer, IBM always focuses on partnerships that have intrinsic value. So you talked about offerings, you talked about quantum, blockchain, off-camera talking about cloud containers. >> Joel: Yeah. 
>> I'd say cloud and containers may be a little closer than those others, but those others are going to take a lot of market development. So what are the offerings that you guys are bringing? How do they get into the hands of your partners? >> I mean, the commonality with all of these, all the emerging offerings, if you ask me, is the distributed nature of the offering. So if you look at blockchain, it's a distributed ledger. It's a distributed transaction chain that's secure. If you look at data, really and we can hark back to say, Hadoop, right before object storage, it's distributed storage, so it's not just storing on your hard drive locally, it's storing on a distributed network of servers that are all over the world and data centers. If you look at cloud, and containers, what you're really doing is not running your application on an individual server that can go down. You're using containers because you want to distribute that application over a large network of servers, so that if one server goes down, you're not going to be hosed. And so I think the fundamental shift that you're seeing is this distributed nature, which in essence is cloud. So I think cloud is just kind of a synonym, in my opinion, for distributed nature of our business. >> That's interesting and that brings up, you're right, cloud and Big Data/Hadoop, we don't talk about Hadoop much anymore, but it kind of got it all started, with that notion of leave the data where it is. And it's the same thing with cloud. You can't just stuff your business into the public cloud. You got to bring the cloud to your data. >> Joel: That's right. >> But that brings up a whole new set of challenges, which obviously, you're in a position just to help solve. Performance, latency, physics come into play. >> Physics is a rough one. It's kind of hard to avoid that one. >> I hear your best people are working on it though. Some other partnerships that you want to sort of, elucidate. >> Yeah, no, I mean we have some really great, so I think the key kind of partnership, I would say area, that I would allude to is, one of the things, and you kind of referenced this, is a lot of our partners, big or small, want to work with our top clients. So they want to work with our top banking clients. They want, 'cause these are, if you look at for example, MaRisk and what we're doing with them around blockchain, and frankly, talk about innovation, they're innovating containers for real, not virtual containers- >> And that's a joint venture right? >> Yeah, it is, and so it's exciting because, what we're bringing to market is, I also lead our startup programs, called the Global Entrepreneurship Program, and so what I'm focused on doing, and you'll probably see more to come this quarter, is how do we actually bridge that end-to-end? How do you, if you're startup or a small business, ultimately reach that kind of global business partner level? And so kind of bridging that, that end-to-end. So we're starting to bring out a number of different incentives for partners, like co-marketing, so I'll help startups when they're early, figure out product-market fit. We'll give you free credits to use our innovative technology, and we'll also bring you into a number of clients, to basically help you not burn all of your cash on creating your own marketing channel. God knows I did that when I was at a start-up. So I think we're doing a lot to kind of bridge that end-to-end, and help any partner kind of come in, and then grow with IBM. I think that's where we're headed. 
>> I think that's a critical part of your job. Because I mean, obviously IBM is known for its Global 2000, big enterprise presence, but startups, again, fuel that innovation fire. So being able to attract them, which you're proving you can, providing whatever it is, access, early access to cloud services, or like you say, these other offerings that you're producing, in addition to that go-to-market, 'cause it's funny, we always talk about how efficient, capital efficient, software is, but then you have these companies raising hundreds of millions of dollars, why? Because they got to do promotion, marketing, sales, you know, go-to-market. >> Yeah, it's really expensive. I mean, you look at most startups, like their biggest ticket item is usually marketing and sales. And building channels, and so yeah, if you're, you know we're talking to a number of partners who want to work with us because of the fact that, it's not just like, the direct kind of channel, it's also, as you kind of mentioned, there's other challenges that you have to overcome when you're working with a larger company. for example, security is a big one, GDPR compliance now, is a big one, and just making sure that things don't fall over, is a big one. And so a lot of partners work with us because ultimately, a number of the decision makers in these larger enterprises are going, well, I trust IBM, and if IBM says you're good, then I believe you. And so that's where we're kind of starting to pull partners in, and pull an ecosystem towards us. Because of the fact that we can take them through that level of certification. So we have a number of free online courses. So if you go to partners, excuse me, ibm.com/partners/learn there's a number of blockchain courses that you can learn today, and will actually give you a digital certificate, that's actually certified on our own blockchain, which we're actually a first of a kind to do that, which I think is pretty slick, and it's accredited at some of the universities. So I think that's where people are looking to IBM, and other leaders in this industry, is to help them become experts in their, in this technology, and especially in this emerging technology. >> I love that blockchain actually, because it's such a growing, and interesting, and innovative field. But it needs players like IBM, that can bring credibility, enterprise-grade, whether it's security, or just, as I say, credibility. 'Cause you know, this is, so much of negative connotations associated with blockchain and crypto, but companies like IBM coming to the table, enterprise companies, and building that ecosystem out is in my view, crucial. >> Yeah, no, it takes a village. I mean, there's a lot of folks, I mean that's a big reason why I came to IBM, three, four years ago, was because when I was in start-up land, I used to work for H20, I worked for Alpine Data Labs, Datameer, back in the Hadoop days, and what I realized was that, it's an opportunity cost. So you can't really drive true global innovation, transformation, in some of these bigger companies because there's only so much that you can really kind of bite off. And so you know at IBM it's been a really rewarding experience because we have done things like for example, we partnered with Girls Who Code, Treehouse, Udacity. So there's a number of early educators that we've partnered with, to bring code to, to bring technology to, that frankly, would never have access to some of this stuff. 
Some of this technology, if we didn't form these alliances, and if we didn't join these partnerships. So I'm very excited about the future of IBM, and I'm very excited about the future of what our partners are doing with IBM, because, geez, you know the cloud, and everything that we're doing to make this accessible, is bar none, I mean, it's great. >> I can tell you're excited. You know, spring in your step. Always a lot of energy Joel, really appreciate you coming onto theCUBE. >> Joel: My pleasure. >> Great to see you again. >> Yeah, thanks Dave. >> You're welcome. Alright keep it right there, everybody. We'll be back. We're at the IBM CDO Strategy Summit in San Francisco. You're watching theCUBE. (techno music) (touch-tone phone beeps)
SUMMARY :
Brought to you by IBM. Good to see you again Joel. that you can attract partnerships, To really help drive that innovation, and how you get that technology Yeah, and that's critical, I mean you're right, Yeah, so when I was here last, to operationalizing, you know, machine learning. that we have there, but we're not trying that you're trying to build. to really innovate, you have to find a way in a single platform, what do you call it? So for example, we also did a partnership with Unity, to basically allow a gamer to use voice commands I like the term digital business, to look at how we actually test different I know this is going to look not great for IBM, but also to the other ecosystems, But at the same time, you want more than that. So what are the offerings that you guys are bringing? So if you look at blockchain, it's a distributed ledger. You got to bring the cloud to your data. But that brings up a whole new set of challenges, It's kind of hard to avoid that one. Some other partnerships that you want to sort of, elucidate. and you kind of referenced this, to basically help you not burn all of your cash early access to cloud services, or like you say, that you can learn today, but companies like IBM coming to the table, that you can really kind of bite off. really appreciate you coming onto theCUBE. We're at the IBM CDO Strategy Summit in San Francisco.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joel | PERSON | 0.99+ |
Joel Horwitz | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Kevin Costner | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dinesh Nirmal | PERSON | 0.99+ |
Alpine Data Labs | ORGANIZATION | 0.99+ |
Lightbend | ORGANIZATION | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Hidden Figures | TITLE | 0.99+ |
Bob Lord | PERSON | 0.99+ |
Both | QUANTITY | 0.99+ |
MaRisk | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
iPhone 10 | COMMERCIAL_ITEM | 0.99+ |
2015 | DATE | 0.99+ |
Datameer | ORGANIZATION | 0.99+ |
both sides | QUANTITY | 0.99+ |
one story | QUANTITY | 0.99+ |
Think | ORGANIZATION | 0.99+ |
five | DATE | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Treehouse | ORGANIZATION | 0.99+ |
three years ago | DATE | 0.99+ |
developer.ibm.com/code | OTHER | 0.99+ |
Unity | ORGANIZATION | 0.98+ |
two worlds | QUANTITY | 0.98+ |
Reactive | ORGANIZATION | 0.98+ |
GDPR | TITLE | 0.98+ |
one side | QUANTITY | 0.98+ |
Digital Business Group | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Udacity | ORGANIZATION | 0.98+ |
ibm.com/partners/learn | OTHER | 0.98+ |
last month | DATE | 0.98+ |
Watson Studio | ORGANIZATION | 0.98+ |
each year | QUANTITY | 0.97+ |
three | DATE | 0.97+ |
single platform | QUANTITY | 0.97+ |
Girls Who Code | ORGANIZATION | 0.97+ |
Parc 55 | LOCATION | 0.97+ |
one thing | QUANTITY | 0.97+ |
four themes | QUANTITY | 0.97+ |
Spark Technology Center | ORGANIZATION | 0.97+ |
six years ago | DATE | 0.97+ |
H20 | ORGANIZATION | 0.97+ |
four years ago | DATE | 0.97+ |
martech | ORGANIZATION | 0.97+ |
Unity | TITLE | 0.96+ |
hundreds of millions of dollars | QUANTITY | 0.94+ |
Watson Studio | TITLE | 0.94+ |
Dinesh | PERSON | 0.93+ |
one server | QUANTITY | 0.93+ |
John Shirley, Dell EMC | HCI: A Foundation For IT Transformation (3)
>> From the Silicon Angle Media Office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Prior to the historic merger between Dell and EMC, Dell had a relationship with Nutanix, a pioneer in hyperconverged infrastructure. After the merger, many people questioned whether that relationship would continue. Hi everybody, my name is Dave Vellante, I'm here with John Shirley who's the director of product management at Dell EMC, and we're here to talk about the continuation of that relationship. Hi John, good to see you. >> Good to see you as well, thanks for having me. >> You've got a new announcement today, it's the XC series, tell us all about it. >> Yeah, so the XC series, what we're announcing, this is our third generation of PowerEdge server deployments for the XC series, and what we're announcing is that the two most popular models for the XC series are going to be refreshed on 14th generation servers. Those specifically are the XC640, which is really designed for compute intensive things like VDI, private cloud, some remote office applications, as well as the XC740XD, which is more for storage intensive applications, so think SharePoint, big data applications, things like that. Now all of the new platforms that we'll release will have new technologies like NVMe, they'll have faster networking options like 25 Gig Ethernet, and a whole bunch of other features that are really going to help propel this into more mainstream applications. >> Okay, so it's not just faster, better price performance, there's some other innovations that you mentioned, like NVMe, that are coming in that you're integrating and engineering into the solution. >> Absolutely, so we have a really tight relationship between our PowerEdge team, as well as what we do on the XC series, and in addition to that, we have a really tight relationship with our Nutanix engineering counterparts as well. We're really designing these all into a single application. >> Okay, so the marketing, I'm sorry to interrupt. So the marketing gurus at Dell EMC are throwing around this term, purposeful. >> Yes. >> What does that mean? >> I love this term because it really takes into account all the additional efforts that we do around the solution. We have years and years of experience of deploying SDS solutions on top of servers, and what we really realize is that you want to design these solutions, again, to be purposeful as the name implies. It's things like controlling everything, all the way from orderability to manufacturing, to serviceability, to ensure that you get a really tight and clean experience for the customer. So things like CPU, memory, hard drive configuration, designed specifically for hyperconverged, and that flows all the way through to support. So it's a much cleaner experience for the customer. >> So what does that mean, designed specifically for hyperconverged, I mean can you unpack that a little bit? What's different about hyperconverged that requires that different design? >> Yeah, well hyperconverged, as you probably well know, and I'm not sure how many of the users out there know, but it was really designed around the cloud experience. So taking a look at the hyperscale vendors, and designing similar models for data centers, and really what that entails is things about taking a PowerEdge platform, designing the technologies to be fault tolerant, to be scalable, and we've taken that to the next level.
So on the XC series, we've designed some software and some Dell IP that really harnesses a lot of the capabilities of the PowerEdge. We call it the Power Tools SDK, and it really allows for software defined solutions like Nutanix to sit on top of PowerEdge. By the way, we use it for our other platforms as well within the portfolio, but it really shows that it is purposefully built and designed for SDS solutions. >> Okay, so Dell was the first to do an OEM relationship with Nutanix, and subsequently they've done maybe a couple of others, but what makes you guys special? >> Well first off, the PowerEdge platform is the leading platform out there in the marketplace, so that alone right there gives us a lot of strength from a manufacturing, procurement, all that ecosystem. That's one of the benefits that we get. We also do things like develop our own IP around this Power Tools SDK, as well as other IP that we have on the platform. So that's another one right there. Collectively, within the group, we have hundreds of hours of experience, not only designing storage, but also compute around the hypervisor, and around networking, so we've brought all that expertise into the group to really design this hyperconverged platform. And that's something that no one else can really do in the marketplace. >> So in the early days of HCI, obviously the workloads were, VDI was a popular workload, and a lot of the knocks were, it's a nice infrastructure for a remote office, or small or mid-sized businesses. Can you address scalability? Where are we today in terms of scale? >> On the scale, like I said it was one of the design tenets, so I'll give you a good example. If a customer has bought previous versions of the XC series, whether it's the 12th generation or the 13th generation, they can now come and buy the 14th generation from us, and put that into the existing ecosystem. Right into the same cluster, and so talk about a mind shift from traditional architectures that would require essentially ripping out the old gear and putting in the new gear, now you can grow as the technology grows, and you can do that in a very seamless fashion without any downtime, and it's very scalable in a very linear sense. >> Can you talk about the portfolio a little bit? Dell EMC has one of everything, if I want it, you probably have it. >> M-hm. >> But sometimes, for analysts and independent observers, customers, probably sales guys, it's confusing. So where does this fit in the portfolio, relative to some of the other things that you've announced today and have in the portfolio? >> We get that question all the time, and it's a great question. But it's a pretty clean answer for us. For customers who are standardized on VMware and they want that experience, we have VxRail, right? Great product. For customers now who want choice of hypervisors, or if they're already standardized on the Nutanix platform, then we have the XC series, and we have a lot of customers out there who want to go to a model that sits on top of a PowerEdge base because of the power of PowerEdge, so we've got that to offer to our customers, and in particular when we talk about hypervisor choice, we know that Hyper-V is a very fast growing portion of the market, and we are focused on that part of the market for customers who want to do multiple different hypervisors.
How has the experience been at the engineering level, in terms of getting higher levels of integration, now that you guys are one company? Can you talk about that a little bit? >> Yeah, so I'm going to take a step back and not just, just focus on the engineering. It's really end to end, and it goes all the way from the engineering up front, but then it trickles down to the marketing and the product managers, and all the sales teams so everything, end to end, needs to fit well together. What I'll tell you is me, personally, I talk to my product management counterparts, my sales counterparts over on the Nutanix side on a nearly daily basis, so the relationships got to be strong and we've really strengthened that over the years. >> Okay, Nutanix's got to be happy because they've got a massive distribution channel. You guys, Michael Dell was very clear on this from the early days that you guys were going to continue the relationship because that's what customers want. Can you talk about culturally your focus on customers, and EMC's always been very customer focused, Dell, Michael Dell personally was very customer focused, is that really the sort of genesis of the continuation of this relationship? Maybe you can talk about that a little bit. >> Yeah, we are maniacally focused on customers, so if you look at the new platforms that we're shipping, give you a data point. We talk to the customers and we have somewhere around 150 new design features specifically for the XC series platform because of those conversations with customers and because we've done this for three generations, we have a lot of those inputs leading into the product, and so yes we are very focused on the customers, and what we know is that the customers want to have that choice. Not all of them do, right? A lot of customers are going to go over to the Xrell, it's a great product, it's growing really quickly, but we also know that a number have really standardized again on the Hyper-V, or on the Nutanix platform. >> Well because of the size of your install space, you have a huge observation base, we like to call it, and you obviously collect a lot of data. It sounds like you've been able to leverage that for competitive advantage and to add additional value for your customers. >> Yes, it's always nice to have a product and a portfolio that can win. >> Alright so we got to wrap, so we got a crowd chat coming up on December first. First half, #NextGenHCI, it's kind of an AMA on this announcement. Where can I get additional information on this? >> So you can go to www.Dell.com/HCI. >> Excellent, well, John, thanks very much. >> Thank you. >> For coming to the Cube. Alright, thanks for watching, everybody. This is Dave Vellante, we'll see you next time. (light techno music)
SUMMARY :
From the Silicon Angle Media Office of that relationship, Hi John, good to see you. You've got a new announcement today, it's the XC series, Yeah, so the XC series, what we're announcing, and engineering into the solution. on the XC series, and in addition to that, Okay, so the marketing, I'm sorry to interrupt. and that flows all the way through to support. designing the technologies to be fault tolerant, into the group to really design this hyperconverge platform. and a lot of the knockoffs were, it's a nice infrastructure and putting in the new gear, now you can grow Can you talk about the portfolio a little bit? relative to some of the other things of the market, and we are focused on that part of the market How has the experience been at the engineering level, and all the sales teams so everything, end to end, from the early days that you guys were going that the customers want to have that choice. Well because of the size of your install space, and a portfolio that can win. Alright so we got to wrap, For coming to the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nutanix | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
John Shirley | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
25 gig | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
XC740XD | COMMERCIAL_ITEM | 0.99+ |
XC series | COMMERCIAL_ITEM | 0.99+ |
XC640 | COMMERCIAL_ITEM | 0.99+ |
Xrell | ORGANIZATION | 0.99+ |
www.Dell.com/HCI | OTHER | 0.99+ |
third generation | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.98+ |
Silicon Angle Media Office | ORGANIZATION | 0.97+ |
single application | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
13th generation | QUANTITY | 0.96+ |
14th generation | QUANTITY | 0.96+ |
today | DATE | 0.96+ |
First half | QUANTITY | 0.95+ |
12th generation | QUANTITY | 0.95+ |
three generations | QUANTITY | 0.95+ |
Hyper-V | TITLE | 0.94+ |
XC | COMMERCIAL_ITEM | 0.93+ |
hundreds of hours | QUANTITY | 0.91+ |
Dell | PERSON | 0.89+ |
two most popular models | QUANTITY | 0.89+ |
December first | DATE | 0.88+ |
around 150 new design features | QUANTITY | 0.87+ |
VM1 | TITLE | 0.84+ |
one company | QUANTITY | 0.83+ |
Nutanix | TITLE | 0.81+ |
HCI: A Foundation For IT Transformation | TITLE | 0.79+ |
Edge | COMMERCIAL_ITEM | 0.68+ |
V | COMMERCIAL_ITEM | 0.62+ |
Power Tools SDK | TITLE | 0.6+ |
Power Edge | TITLE | 0.59+ |
VCE | ORGANIZATION | 0.58+ |
years | QUANTITY | 0.58+ |
theCUBE | ORGANIZATION | 0.55+ |
tenets | QUANTITY | 0.48+ |
VXRL | TITLE | 0.48+ |
MVME | ORGANIZATION | 0.43+ |
Hyper | ORGANIZATION | 0.42+ |
HCI | ORGANIZATION | 0.36+ |
John Shirley, Dell EMC | HCI: A Foundation For IT Transformation
>> Announcer: From the Silicon Angle Media office, in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Prior to the historic merger between Dell and EMC, Dell had a relationship with a company called Nutanix. Nutanix was a pioneer in so called hyperconverged infrastructure and a lot of people questioned whether that relationship would continue after the merger. Hi, everybody, I'm Dave Vellante, and I'm here with John Shirley who's the Director of Product Management at Dell EMC and we're going to talk about that. Welcome, John. >> Thank you, thanks for having me. >> So, the XC Series, you're continuing the innovation there, tell us about what you are announcing today. >> Yeah, so this is our third generation, so this is the third generation of the XC Series and what we are announcing is that our most popular models are available now, and the most popular models are the XC640, which is more of a compute intensive node that will be targeted at VDI, compute intensive remote offices, things like that. And we're also announcing the XC740XD which is more for storage intensive and performance applications. Think big data, SharePoint, Exchange, those kind of things. >> Okay, so we're seeing the evolution of the workloads that can be supported by hyperconverged infrastructure. And this is more evidence, right? >> Absolutely, and to that point, where we started off, we saw a lot of VDI deployments but now very quickly, once those companies adopt the technology, they are growing that to more mainstream workloads. >> Okay, so I see this term, marketing gurus at Dell EMC throw around this term, purposeful. Okay, let's put some meat on the bone, what does that mean? >> I love the term because it really helps describe what we do, right. This isn't just take things like SDS offerings, in this case Nutanix, throw it on some PowerEdge and validate it. Those are really core, important steps. But we go above and beyond that, so purposeful really is kind of an end-to-end view of what the solution is. So it's things all the way from configuration to manufacturing and supportability. Things like processor choices, SSD selection, memory types, you can kind of go down the list and we've really designed this purposefully for the HCI market.
So it's things like integration with data protection, right. Now that we have Avamar and Data Domain, we have the ability to create new products. In fact, that's one of the new things that we have as well. We are announcing a new data protection solution that is taking the Avamar software and taking Data Domain and we're integrating that right into the Prism interface, so if you listen to Nutanix, they say one click simplicity, well we're introducing a one click back-up, one click back-up automation into the portfolio. >> I love that, because a lot of times back-up is an afterthought. You know, oh I got this new infrastructure, how am I going to back up the data. Okay, let's bolt this on. So let me ask you a follow up to that. As Dell EMC, you know, one company, sometimes when you're two companies it's hard to do that type of engineering, can you talk about, as Dell EMC as one, how the engineering culture and results, the outcomes, have improved or changed? >> Yeah, absolutely. So, I'm not just going to focus on engineering, because I really want to take a look at the entire organization. So it goes all the way from engineering, marketing, product management, sales, it's that whole ecosystem. You can even talk about the support organization, the quality, and we really have a tight relationship between Nutanix and the Dell EMC counterparts. So to give you a good example, I talk with my product management counterparts and I talk with the sales leaders on a nearly daily basis and we want to make sure that relationship is really strong and that we evolve the relationship over time. >> Can we talk a little bit about scalability? We talked earlier at the top about workloads, VDI was very popular, remote office was kind of a sweet spot of hyperconverged in the early days. It's evolved, but scalability has always been a question. Where are we at with regards to scalability of hyperconverged infrastructure? >> That's a great question. So, HCI came from the big Cloud providers and that technology was really meant to bring the tenets of what we saw with the scale of Cloud providers into the mainstream data centers. And so to that end, scalability is a core attribute. I'll give you a good example here, when the 14th generation of the XC Series comes out, we'll be able to plug that into customers' existing ecosystems. So let's say a customer has a 12th generation or a 13th generation PowerEdge XC Series, we can now plug that technology right into the same cluster, and if you talk about reusing technology, integrating technology into the data center, and really providing great value, and making sure customers don't have to throw away say older or medium term technology like the 13th generation, now they can just use the new technology right in place with the existing. >> John, can you talk about the portfolio a little bit? I mean, you guys got one of everything. If I want it, you probably have it. But a lot of times that gets confusing for customers and partners, probably sales reps. Where does the XC Series and these new announcements, where does it fit in the portfolio relative to some of the other things you are announcing? >> We get this question all the time. In my mind, it's really clear. For customers who have standardized on VMware, we have VxRail. For customers now who want say a choice of hypervisor, or for customers who have already standardized on Nutanix software, we have the XC Series. So there's absolutely room for both.
We know the market is really big and it's growing fast, and we have options for customers now, whether they want to run on VMware or they want to run, say, on Hyper-V as a good example. >> Let's see, when can I get this stuff? Can I buy it today or soon? >> It's available now, it's available now. And we have customers who are anxiously waiting because the new technologies are on their platforms. So it's available now and shipping now as well. >> Excellent. All right, we got to break, but I'll give you the last word. Things like key takeaways, you know, what should we be thinking about with this announcement, with the partnership? >> Absolutely, I think the key thing here is the partnership is still growing strong, and we really feel that the best way to consume Nutanix software is on the XC Series in combination with Dell, and really getting the best out of both worlds. Out of the Nutanix relationship, out of the Dell relationship. >> Excellent, right, we got to go, but let's see, CrowdChat coming up, #NextGenHCI, CrowdChat.net/NextGenHCI on December 1st. Where can I get more information about these products? >> If you go to DellEMC.com/HCI. >> Simple. All right, John, thanks very much for coming to theCUBE. Appreciate it. Thanks for watching everybody. This is Dave Vellante, we'll see you next time. (upbeat music)
SUMMARY :
Dave Vellante sits down with John Shirley, Director of Product Management at Dell EMC, to discuss the third generation of the Dell EMC XC Series built on the continuing Nutanix partnership. The most popular models are available now: the XC640 for compute-intensive workloads such as VDI and remote office, and the XC740XD for storage-intensive applications like big data, SharePoint, and Exchange. Shirley describes the "purposeful" end-to-end engineering approach, the Power Tools SDK, and a new one-click backup option that integrates Avamar and Data Domain into the Prism interface. New 14th-generation nodes can join clusters alongside existing 12th- and 13th-generation PowerEdge-based systems, and the portfolio positioning is clear: VxRail for customers standardized on VMware, the XC Series for those who want hypervisor choice or have standardized on Nutanix.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
John Shirley | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
December | DATE | 0.99+ |
two companies | QUANTITY | 0.99+ |
XC Series | COMMERCIAL_ITEM | 0.99+ |
XC640 | COMMERCIAL_ITEM | 0.99+ |
XC740XD | COMMERCIAL_ITEM | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
third generation | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
both | QUANTITY | 0.98+ |
both worlds | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
one company | QUANTITY | 0.98+ |
HCI | ORGANIZATION | 0.98+ |
three points | QUANTITY | 0.98+ |
SharePoint | TITLE | 0.97+ |
CrowdChat.net/NextGenHCI | OTHER | 0.96+ |
generation | COMMERCIAL_ITEM | 0.95+ |
one click | QUANTITY | 0.95+ |
14th | QUANTITY | 0.95+ |
one | QUANTITY | 0.93+ |
XC series | COMMERCIAL_ITEM | 0.93+ |
13th | QUANTITY | 0.92+ |
#NextGenHCI | ORGANIZATION | 0.92+ |
Power Edge XC series | COMMERCIAL_ITEM | 0.91+ |
12th generation | QUANTITY | 0.87+ |
Foundation For IT Transformation | TITLE | 0.87+ |
CrowdChat | ORGANIZATION | 0.8+ |
DellEMC.com/HCI | OTHER | 0.8+ |
Power Tools | TITLE | 0.78+ |
Avamar | TITLE | 0.76+ |
Power Tools SDK | TITLE | 0.74+ |
VMware | ORGANIZATION | 0.73+ |
Hyper-V | TITLE | 0.72+ |
ACI | ORGANIZATION | 0.72+ |
xRail | TITLE | 0.69+ |
Silicon Angle | LOCATION | 0.66+ |
theCUBE | ORGANIZATION | 0.53+ |
point | QUANTITY | 0.52+ |
PowerEdge | TITLE | 0.37+ |
Susie Wee, Cisco DevNet - Cisco DevNet Create 2017 - #DevNetCreate - #theCUBE
(upbeat music) >> Announcer: Live from San Francisco, it's theCUBE, covering DevNet Create 2017. Brought to you by Cisco. >> Hello, everyone, and welcome back to our live coverage from theCUBE exclusive, two days with Cisco's inaugural DevNet Create event. I'm John Furrier, with my co-host, Peter Burris, who's the general manager of Wikibon.com, and head of research for SiliconANGLE Media. We're talking with Susie Wee, who is the vice president and CTO of Cisco's DevNet, the creator of DevNet, the developer program that was started as grassroots, now a full-blown Cisco developer program. Now starting another foray into the cloud-native open-source community with this new event, DevNet Create. Welcome to theCUBE, thanks for joining us. >> Thank you, John. >> Thanks for having us. We love going to the inaugural events because they're always the first, and you know, being bloggers, and media, you got to be first. First news, first comments. >> Susie: Always first. >> Always first, and we're the only media here, so thank you. >> Susie: Thank you. >> So tell us about the event (Susie chuckles). You're the host and the creator, with your team. >> Susie: Yes. >> How did this come together, why DevNet Create? You have DevNet, this event is going extremely well, tell us. >> Awesome, so, yeah, so we have DevNet, we've had DevNet for about three years. It was actually exactly three years ago that we had our first DevNet Zone, a developer conference at Cisco Live, three years ago. And there, we felt like we pretty squarely hit... We've had successes there, we've had a pretty strong handle on our infrastructure audience, but what we see is that there's this huge transition, transformation going on in the industry, with IoT and cloud, that changes the definition of how applications meet infrastructure. And so this whole thing with, you know, applications, what is an application? What is the infrastructure? The infrastructure is now programmable, how can apps interact? It opens up a whole new world, and so what we did was we created DevNet Create as a standalone developer conference focused on IoT and cloud to focus on that transformation. >> And a lot of industry trends kind of going on, and moves you're making, it's the company, or you, Cisco is making, AppDynamics, big acquisition, kind of speaks to that, but also, there's always a natural progression for Cisco to have moving up the stack with software, but IoT gives you guys a unique opportunity with the network concept. So, making it network programmable, infrastructure as code, as some say in the DevOps world, is the ethos. >> Absolutely. >> How do you guys see yourselves engaging with the community, and what are some of the plans, and what's some of the feedback you're getting here at the event? >> So what we've done here at the event is that, you know, as you've seen from the channel is that, our content is 90% from the community, maybe 10% from Cisco, 90% from the community, because we believe it is all about the ecosystem. It's about how applications meet the infrastructure, it's the systems people are building together. And there's a lot of movement in developing these technologies. We don't know the final form of how an IoT app... Like, who's going to build the app, who's going to build the users, who's going to run the service, who's going to run the infrastructure? It's all still evolving, and we think that the community needs to come together to solve this to make the most of the opportunity. And so that's what, really, this is all about. 
And then, we think it actually involves learning the languages, making sure that the app folks know the language of the infrastructure folks. They don't have to become experts in it, but just knowing the language. Understand what part's programmable, what part's not, what benefit can you derive from the infrastructure. And then, by really having knowledge of what you can get across, and creating a forum for people to get together to have this conversation, we can make those breakthroughs. >> So just a clarification, you said that 90% of the sessions are non-Cisco, or from the community, and only 10% from Cisco? >> Susie: That's right. >> Is that by design? >> That is absolutely by design. So, when we have the DevNet Zone at Cisco Live, that's all about all of Cisco's products, platforms, APIs, bringing in the community to come and learn about those, but DevNet Create was really, squarely for IoT and app developers, IoT app developers, cloud developers, people working on DevOps, to look at that intersection. So we didn't go into all the gory details of networking, like we very much like to do, but we were really trying to focus on, "What's the value to application developers, and what are the opportunities?" >> Well, it's interesting because, Susie, we're in the midst, as you said, of a pretty significant transformation, and there's a lot of turbulence, not only in business and how business conceives of digital technology, and the role it's going to play, the developer world, cloud-this, cloud-that, different suppliers, but one of the anchor points is the network, even though the network itself is changing, >> It is. >> in the midst of a transformation, but it's a step function. So, you go from, on the wireless, go outside, 1G to 3G, to 5G, et cetera, that kind of thing, but how is the developer going to inform that next step function in the network, the next big transformation in the network, and to what degree is this kind of a session going to really catalyze that kind of a change? >> Absolutely. So, what happens is, you're right, it's something that we all know, all app developers know, and actually, every person in the world knows, the network is important. The network provides connectivity, the network is what provides Internet, data, and everything there. That's critical to apps, but the thing that's been hard about it is it's not programmable. Like, you kind of get that thing configured, it's working now, you leave it. Don't touch it. >> It's still wires. In the minds of a lot of people, (Susie laughs) it's still wires, right? >> It is, it's wires, or even if it's wireless, once you can get it configured, you leave it. You're not playing with it again, it's too, kind of, dangerous or fragile to change it. >> Because of the sensitivity to operational... >> Because of the sensitivity to operations. The big change that's happening is the network is becoming programmable. The network has APIs, and then, we have things like automation and controller-based networking coming into play, so you don't actually configure it by going one network device at a time, you feed these into a controller, and then, now you're actually doing network-wide commands. That takes out the human error, it actually makes it easy to configure and reconfigure. And when you have that ability to provision resources, to kind of reset configurations, when you can do that quickly through APIs, you suddenly have a tool that you never had before. So let me give you an example.
So let's say that you're in a building, you have your badging systems, your automated elevators, you have your surveillance cameras, you want to put out a new security system with surveillance cameras. You don't want to put that on the same network segment as your vending machines. You have a different level of security required. Could put in a work order to say... >> Unless you're really worried about who's stealing from the vending machines. (all laugh) >> So what you can do, now that it's programmable, is use infrastructure as code, is basically say, "Boom, give me a new network segment, let me drop these new devices onto it, let the programmable network automatically create a separate network segment that has all of these devices together." Then you can start to use group-based policy to now set, you know, the rules that you want, for how those cameras are accessed, who they're accessible by, what kind of data can come in and out of it. You can actually do that with infrastructure as code. That was not a knob that app developers had before. So they don't need to become networking experts, but now they have these knobs that they can use to give you that next level of security, to give you that next level of programmability, and to do it at the speed that an app developer needs. >> So I was talking to Steve Pousty earlier this morning, and he's from Red Hat, he's a lead developer, he's not a network guy, he's self-proclaimed, "Hey, I'm not a networking person, I care about apps," and he's a developer, and he brought up something interesting I want to get your thoughts on. I think you're onto something really big with your vision, which is why we're so pumped about it, and he brought up an example of ecosystems' edges, and margins of the edge of these, that when they come together, create innovation opportunities. And he used the example of data science meets cloud. And what he was using in particular was the example of most data people in the old days were data jocks, they did data, they did things, and they weren't really computer scientists, but as those two communities came together, the computer scientist saying, "Hey, I don't know about data," and the data guy's like, "Hey, you know about algorithms," "I know about algorithms," so innovation happened when that came together. What you're doing here, if I got this right, is you're saying, "Hey, DevNet's doing great," from a Cisco perspective, "but now this whole new creative innovation world in the cloud is happening in real time. Bring 'em together, so the best of Cisco knowledge to the guys who don't want to be (chuckles) experts in that can share information." Is that kind of where this is going? >> Yeah, that's exactly where it's going, and same example, earlier in my career, I was working on sending video over networks, and then you had the networking people doing networking, you had the video people doing video compression, but then video networking, or streaming media, kind of, oh, you can put, you know, your knowledge of the compression and the network all together, so that kind of emerged as a field. The same thing, so, so far, the applications, and the infrastructure, and IT departments have been completely separate. You would just do the best you can, it was the job of IT to provide it, but now, suddenly there's an opportunity to bring these together. And it's, again, it's because the infrastructure's becoming programmable, and now it has knobs and can work quickly. So, yes, this is kind of new ground.
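To make the infrastructure-as-code idea above a little more concrete, here is a minimal sketch of what an application team might script against a programmable network controller. The controller URL, endpoint paths, and payload fields are hypothetical placeholders rather than any specific Cisco API; the point is only that creating a segment and attaching group-based policy become a few API calls instead of a work order.

```python
import requests

CONTROLLER = "https://network-controller.example.com/api/v1"  # hypothetical controller endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # assume a token was obtained out of band

# Ask the controller for a new, isolated segment for the camera rollout.
segment = requests.post(
    f"{CONTROLLER}/segments",
    headers=HEADERS,
    json={"name": "surveillance-cameras", "isolated": True},
).json()

# Drop the new devices onto that segment.
for device_id in ["cam-101", "cam-102", "cam-103"]:
    requests.post(
        f"{CONTROLLER}/segments/{segment['id']}/devices",
        headers=HEADERS,
        json={"device": device_id},
    )

# Apply a group-based policy: only the video-analytics group may reach the cameras.
requests.post(
    f"{CONTROLLER}/policies",
    headers=HEADERS,
    json={"segment": segment["id"], "allow_from": ["video-analytics"], "deny_from": ["*"]},
)
```

Because the same calls can run from a CI/CD pipeline, the segment can be provisioned, reconfigured, or torn down at application speed rather than on an IT work-order timeline.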
And things could continue the way they are, right? And it's okay, we're getting by, but you just won't be realizing the potential of the real kind of... >> Well, open-source has clearly demonstrated that the collective intelligence of communities can really move fast, and share, and it's now tier one, so you're seeing companies go public, MuleSoft, Cloudera, and the list goes on and on. So now you have the dynamic of open-source, so I got to ask you the question, as you go out with DevNet Create, as this creation, the builders that are out there building apps are going to have programmable networks, how do you see this next leg of the journey? Because you have the foray now with DevNet Create, looks good, really well done, what's next? >> What's next is going on and making the real instances that show the application and infrastructure synergy. So let me just give you a really simple example of something that we're doing, which is that Apple and Cisco have had a partnership, and this partnership is coming together in that we have iOS developers who are writing mobile apps. So you have your mobile apps people are writing, we have iOS 10, your app developers are writing these apps. But everybody knows you run into a situation where your app gets congested on the network. Let's say that we're here in Westfield Mall, and they want to put out an AR/VR app, and you want that traffic to work, right? 'Cause if the mall wants to offer an AR/VR service, it takes a lot of bandwidth to get that data through, but through this partnership, what we have is an ability we have to use an iOS 10 SDK to, basically, business optimize your app so that it can run well on a Cisco infrastructure. So basically, it's just saying, "Hey, this is important, "put it in the highest QoS (John laughs) level setting, "and make your AR/VR work." So it's just having these real instances where these work together. >> I mean, I used to be a plumber back in my day when I used to work at HP, and I know how hard it is, and so I'm going to bring this up, because networks used to be stable and fragile/brittle, and then that would determine what you could do on top of it. But there are things like DNS, we hear about DNS, we hear about configuration management, setting ports, and doing this, to your point, I want dynamic provisioning or policy at any given moment, yet the network's got to be ready to do that. >> You don't want to submit a work order for that. (laughs) >> You don't want to have to say, "Hey, can you provision port, whatever, "I need to send a bunch of bandwidth." This is what we're talking about when we say programmable infrastructure, just letting the apps interface with network APIs, right? >> Absolutely, and I think that, you heard earlier, that with CNCF, the Cloud Native Computing Foundation, just announced CNI, so that what they're doing is now offering an ability to take your kind of container orchestration and take into consideration what's going on in the network, right? So if this link is more congested than that, then make sure that you're doing your orchestration in the right ways, that the network is informing the cloud layer, that the cloud platform's informing the network, so that's going to be huge. >> But do you think, I'm curious, Susie, do you think that we're going to see a time when we start bringing conventions at layer 7 in the network, so we start to parse layer 7 down a little bit, so developers can think in terms of some of those higher-level services that previously have been presentation? 
Are we likely to see that kind of a thing? As the pain of the network starts to go away, and an explicit knowledge of layer 1-6 become a lot less important, are we going to see a natural expansion at layer 7, and think about distributed data, distributed applications, distributed services, more coherence to how that happens on an industry-wide basis? What do you think? >> Yeah, so let's see, I don't know if I have a view on which layers go away, or which layers compress... >> But the knowledge, the focal point of those? >> But the knowledge, absolutely. So it comes into play, and what happens is, like, what is the infrastructure? In the Internet of things, things are a part of your infrastructure. That's just different. As you're going to microservices, applications aren't applications, they're being written as microservices, and then once you put those microservices in containers, they can move around. So you actually have a pretty different paradigm for thinking about the architecture of applications, of how they're orchestrated, what resources they sit on, and how you provision, so you get a very new paradigm for that. And then the key is... >> But they're inherently networked? >> That's right, that's right. It's all about connectivity, it's all about, you know, they don't do anything without the network. And we're pushing the boundaries of the network. >> These aren't function calls over memory like we used to think about things, these things are inherently networked. We know we have network SOAs, and service levels, and whatnot... >> Susie: There is. >> It sounds like we have... I was wondering, here, at this conference, are developers starting to talk about, "Geez, I would like to look at Kubernetes "as a lower-level feature in layer 7," >> Susie: They are. (laughs) >> "where there's a consistent approach to thinking about "how that orchestration layer is going to work, "and how containers work above that, "because I don't have to worry about session anymore, I don't have to worry about transmission." >> Susie: Absolutely. >> That goes away, so give me a little bit more visibility into some of that higher-level stuff, where, really, the connectivity issues are becoming more obvious. >> Absolutely, and an interesting example is that, you know, we actually talked about AppDynamics in the keynote, and so, with AppDynamics, what kind of information can you get from these bits of code that are running in different places? And it comes into where we have the Royal Bank of Scotland, who's saying, "What's my busiest bank branch "where people are doing mobile banking in the country?" And they're like, "Well, how do I answer that question?" And then you see that, oh, someone has their mobile phone, they take an app, then you actually break it down to how is that request, that API, how is that being, kind of, operated throughout your network. And when you take a look, you say, "Okay, well, this called this "piece of code that's running here. "This piece of code used this API to talk to this other service, to talk to this other," you can map that out, get back the calls of, "Hey, this is how many times this API has been called, "this is how many times this service has been called, "this is the ones that are talking to who," then they came up with the answer, saying that our busiest bank branch is the 9 a.m. Paddington Train Station. 
>> And that's a great example, because now you gain visibility >> Exactly >> into where the dependencies are, which even if you don't explicitly render it that way, starts to build a picture of what the layers of function might look like based on the dependencies and the sharing of the underlying services. >> That's right, and that's where you're saying, like, "What? The infrastructure just gave me business value (John laughs) "in a very direct way. "How did that happen?" >> John: That's a huge opportunity for Cisco. >> So it's a big... >> Well, let's get in the studio and let's break down the Kubernetes and the containers, 'cause Docker's here, a lot of other folks are here. We've had, also, Abby Kearns, the executive director of Cloud Foundry. We've had the executive director from the Cloud Native Compute Foundation, Dan was here, a lot of folks here in the industry kind of validating >> Yeah, Craig was here. >> your support. Sun used to have an expression, the network is the computer, but now, maybe Chuck Robbins should go for network is the app, or the app is the network, (Susie laughs) I mean, that's what's happening here. The interplay between the two is happening big time. >> It is happening here, yeah. Just every element, every piece of code, what we saw is that this year, developers will write 111 billion lines of code. You think about that, every piece of... >> Peter: That we know about. (chuckles) >> That we know about, there's probably more. (chuckles) and all of that, you're right, these are broken up into pieces that are inherently networked, right? They have data, it's all about data and information that they're sharing to give interesting experiences. So this is absolutely a new paradigm. >> Well, congratulations on your success. What a great journey, I know it's been a short time, but I noticed after our in-studio interview, when you came in to share with us, the show, as a preview, Chuck Robbins retweeted one of the tweets. >> Susie: He did. >> And so I got to ask you, internally at Cisco, I know you put this together kind of as a entrepreneurial inside the company, and had support for that, what is the conversation you have with Chuck and the executive team about this effort? Because they got to see a clear line of sight that the value of the network is creating business value. What are some of the internal conversations, can you give us a little bit of color without giving away all the trade secrets? >> Yeah, well, internally, we're getting huge support. Chuck Robbins checks in on this, he actually has been checking in saying, "How's it going?" Rowan Trollope sending, "Hey, how's it going? "I heard it's going great." >> Did he text you today? >> Chuck did a couple days ago. >> John: Okay. (chuckles) >> And then Rowan, today, so, yeah, so we have a lot of conversation. >> Rowan's a CUBE alumni, Chuck's got to get on theCUBE, (Susie laughs) Rowan's been on before. >> Yeah, so they're all kind of checking in on it. We have the IoT World Forum going on in parallel, in London, so, otherwise, they would be here as well. But they understand... >> John: There's a general excitement? This is not a rogue event? >> There's huge excitement. >> This is not, like, a rogue event? >> It's not, it's not, and what happens is... They also understand that we're talking about bringing in the ecosystem. It's not just a Cisco conversation, it is a community... >> Yeah, you're doing it right, you're not trying to take over the sandbox. 
You're coming in with respect and actually putting out content, and learning. >> Putting out content, and really, it's all about letting people interact and create this new area. It's breaking new ground, it's facilitating a conversation. I mean, where apps meet infrastructure, it's controversial as well. Some people should say, "They should never meet. "Why would they ever meet?" (Susie and John laugh) >> So, we do a lot of shows, I was telling Peter that, you know, we were at the first Hadoop Summit, second Hadoop World, with Cloudera, when they were a small startup, Docker's first event, CubeCon's first event, we do a lot of firsts, and I got to tell you, the energy here feels a lot like those events, where it's just so obvious that (chuckles) "Okay, finally, programmable infrastructure." >> Well, I'll be honest, I'm relieved, because, you know, we were taking a bet. So, you know, when I was bouncing this idea off of you, we were talking about it, it was a risk. So the question is, will it appeal to the app developers, will it appeal to the cloud developers, will it appeal overall? And I'm very relieved and happy to see that the vibe is very positive. >> Very positive. >> So people are very receptive to these ideas. >> Well, you know community, give more than you take has always been a great philosophy. >> I'm always a little paranoid and (John laughs) nervous but I'm very pleased, 'cause people seem to be really happy. There's a lot of action. >> There are a lot of PCs with Docker stickers on them here. (John laughs) >> There are. (laughs) There are, yes, yes. We have the true cloud, IoT, we have the hardcore developers here, and they seem to be very engaged and really embracing... >> Well, we've always been covering DevOps, again, from the beginning, and cloud-native is, to me, it's just a semantic word for DevOps. It's happening, it's going mainstream, and great to see Cisco, and congratulations on all your work, and thanks for including theCUBE in your inaugural event. >> Susie: Thank you. >> Susie Wee, Vice President and CTO at Cisco's DevNet. We're here for the inaugural event, DevNet Create, with the community, two great communities coming together. I'm John Furrier with Peter Burris, stay tuned for more coverage from our exclusive DevNet Create coverage, stay with us. (upbeat music) >> Hi, I'm April Mitchell, and I'm the senior director of strategy.
SUMMARY :
Susie Wee, Vice President and CTO of Cisco DevNet, joins John Furrier and Peter Burris at Cisco's inaugural DevNet Create conference, a standalone event focused on where applications meet programmable infrastructure across IoT and cloud, with roughly 90% of the content coming from the community. The conversation covers how programmable, controller-based networks give app developers knobs they never had, from spinning up isolated network segments with group-based policy to the Apple-Cisco partnership's iOS 10 SDK for prioritizing app traffic, along with Cloud Foundry, CNCF's CNI work on network-aware orchestration, and AppDynamics-style visibility that maps API call chains to business questions. Wee notes strong support inside Cisco, from Chuck Robbins to Rowan Trollope, for building the event around the ecosystem rather than Cisco alone.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chuck | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Susie Wee | PERSON | 0.99+ |
Abby Kearns | PERSON | 0.99+ |
Susie | PERSON | 0.99+ |
Craig | PERSON | 0.99+ |
Dan | PERSON | 0.99+ |
Chuck Robbins | PERSON | 0.99+ |
April Mitchell | PERSON | 0.99+ |
Cloud Native Compute Foundation | ORGANIZATION | 0.99+ |
Rowan Trollope | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Peter | PERSON | 0.99+ |
Steve Post | PERSON | 0.99+ |
Rowan | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
90% | QUANTITY | 0.99+ |
iOS 10 | TITLE | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Royal Bank of Scotland | ORGANIZATION | 0.99+ |
CNI | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
10% | QUANTITY | 0.99+ |
Cloud Foundry | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
Westfield Mall | LOCATION | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
two communities | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Wikibon.com | ORGANIZATION | 0.99+ |
MuleSoft | ORGANIZATION | 0.99+ |
iOS | TITLE | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.98+ |
IoT World Forum | EVENT | 0.98+ |
DevNet | ORGANIZATION | 0.98+ |
first event | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Michael Hill, SAP & Emily Mui, SAP - SAP SAPPHIRE NOW 2017 - #SAPPHIRENOW #theCUBE
>> Narrator: It's theCUBE, covering Sapphire Now 2017, brought to you by SAP Cloud Platform, and HANA Enterprise Cloud. >> Hello everyone, welcome back to our special coverage of SAP Sapphire Now. I'm John Furrier, here in theCUBE's studios of Palo Alto for our three days of wall-to-wall coverage, breaking down all the news with analysis. Our next guest here on theCUBE is Emily Mui, Senior Director of HANA Cloud Product Marketing at SAP, and Michael Hill, Senior Director of Product Marketing for SAP Cloud Platform. I had a chance to have a conversation around the big news around SAP Cloud Platform and what it means, and to ask Emily and Michael about the Sapphire impact of this new strategy, and the impact of multi-cloud. Here's the conversation with Michael and Emily. >> Three things to remember, three Cs, it's about helping accelerate cloud adoption, consumption, as well as-- >> [Michael and John] Choice. >> Choice, because of multi-cloud. >> So this is interesting. So the three Cs, I love that, very gimmicky marketing thing that I like. It gets to the point. Choice is huge. Multi-cloud is what everyone's talking about, in essence it's what hybrid cloud's turning into. I mean, hybrid cloud has been the de facto norm now everyone's talking about, that is the preferred way most enterprises are using the cloud, on-premise and some public cloud, call it hybrid. But now, multi-cloud's out here. There's Amazon Web Services, you've got Google, Azure, so there's a lot of choice, so the choice is critical, where to put which workloads. >> And that's what we're hearing from our customers, and that's why we're moving in that direction. Not everyone wants to stick to one infrastructure as a service provider, they've got multiple clouds to manage, and we're enabling that. >> So choice I get. Cloud adoption is essentially creating those APIs to give them that accelerated approach. More cloud adoption means what? I've got to be able to run stuff in the cloud faster, so that means getting their apps APIs, the API economy. And the consumption, is that on the interface side, or what's the consumption piece of it? >> Well, I'm going to let Michael have a swing at it now. >> It's consumption of innovation. So here we're talking about helping companies with digital transformation with things like Internet of Things, which we had in beta, which is now generally available, so customers can intelligently connect people, things, and business processes, all together now. In addition, we've added other great technologies like SAP CoPilot, which is allowing you to talk to your enterprise systems. So initially, that's with SAP S/4HANA. And you can say, "I'm interested in, tell me all the open orders from the last quarter." And it will intelligently go get that information. >> It's like voice recognition, all kinds of new things are coming out. >> Absolutely. >> As a user interface, or interface on cloud. >> They're for the enterprise. >> On your phone or on your computer. >> So it's all being automated. We all know AI, that's just, "All our jobs are being automated." But this is specific. You're saying you're going to interface in with, like, CoPilot. >> Exactly. So you've got that business context. >> All right, let's step back and look at the Lego blocks. The cloud choice, multi-cloud. Let's get in, and then we'll talk about the adoption piece, how you guys are accelerating that through the marketplaces and APIs, and then the consumption through the new interfaces.
So start with multi-cloud. What are the big points there? >> Well, the first is the agility: your platform as a service is now available not just on SAP data centers, but on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Amazon Web Services is now generally available, Azure is now beta, and there's a preview of Google Cloud Platform. And here you have one cockpit in SAP Cloud Platform to manage this multi-cloud infrastructure. >> So your strategy is to put your platform as a service on the clouds that customers want to run their workloads on? >> Exactly. So customers may already have specific workloads, or they may be working with partners that have workloads in those particular clouds. And now, SAP Cloud Platform can run in that same infrastructure. >> So the plan is to support the platform as a service from SAP on the clouds of choice for the customer. So they want to put stuff on Azure, if it's related to Office 365, or something going on with that, they could put it there. If they want to put some cloud-native on Amazon Web Services, they can. If they want to use Spanner and some TensorFlow, they could put that on Google. >> And to make this happen, the really cool thing is that we did this through our work in Cloud Foundry, and this allows you to bring your own development language, so BYOL. So if you have developers that are working in a particular language that wasn't supported natively by SAP previously, they can now be instantly productive on building applications on SAP Cloud Platform. >> So Cloud Foundry is the key to success on this? >> Yeah. Exactly. And that brings things like Node.js and Python, as well as SAP's. >> All the cloud-native goodness that people want from a developer standpoint. >> Exactly. >> But yet, you guys allow it to run on-prem within the SAP constructs. >> Yep. >> All right, let's talk about cloud adoption, 'cause this is where the big rubber hits the road. Emily, we've been talking about the API economy for years. In fact, SAP was early on, going back to the Web Services days. But there's some real value in here, because SAP runs software in some of the biggest businesses, so there's a lot of nuances to SAP. But when you go cloud and cloud-native, you've got to balance the preexisting install base legacy with new apps that are being developed, how are you guys going to do that? >> So we announced the API Business Hub around a year ago at Sapphire in 2016, and it has grown tremendously in terms of content. So we have a lot of new APIs that keep getting added every month. And we're into the hundreds now. But it's not just the APIs, we've got integration workflows, there's all kinds of different content that's being added in there to make it easier for our customers and partners to be able to leverage, and integrate, and connect, these different applications with the SAP back-end. So lots of exciting things happening on that end.
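As a rough sketch of what consuming one of those catalogued APIs looks like from the application side, here is a generic REST call in Python. The endpoint, API-key header, and response fields below are placeholders for illustration, not a specific API Business Hub service; in practice the real URL, authentication scheme, and schema would come from the API's listing in the catalog.

```python
import requests

# Placeholder values -- substitute the real sandbox URL and key from the API's catalog entry.
BASE_URL = "https://sandbox.api.example.com/sales/v1/orders"
HEADERS = {"APIKey": "<your-sandbox-key>", "Accept": "application/json"}

# Query the hypothetical service for the ten most recent open orders.
response = requests.get(BASE_URL, headers=HEADERS, params={"status": "open", "top": 10})
response.raise_for_status()

for order in response.json().get("orders", []):
    print(order.get("id"), order.get("status"))
```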
>> Yeah, so we have the Workflow service, that's also available. We've got a whole host of new capabilities that are coming out, and we'll call it digital edge, giving our customers a digital edge with these new innovative services. >> Edge as the user and also machines. >> Yes. >> That's where the IoT piece comes in. >> Exactly. >> So decision maker or customer says, "Hey, I've done all this stuff in the cloud." All of a sudden, someone says, "Well, we've got to bolt on some industrial data "from machines in our plant or factory." >> In fact, our IoT, the newest set of capabilities for IoT services is available at Sapphire. >> Okay, s\o what's the big takeaway from this? Let's just boil it down. Bottom line, this announcement impacts customers in what way? >> In many ways. We see many of customers wanting to become digital. And we've talked about how we think the benefits of cloud platform has to do with helping our customers become much more agile in how they do business, and SAP is in perfect position to do that. We've been working with companies, enterprises for years with their business processes, helping them optimize it. So that's the other bit, to be able to optimize all their business processes, and through the cloud. And then lastly, digital is the way to that they want to go. They know they want to be able to adopt all these new technologies. AI is so exciting. The CoPilot, if you've seen the demo, and you can see it at show floor here at Sapphire, it's amazing. Just the fact that you can talk to it, create an order, do some search, talk to it. I know that's how my kids, how they get through everyday life. They don't go look up anything anymore, they don't even Google, just talk. >> It's very dynamic. Certainly, the kids are an indicator, that you see if they want things, have the ability to move things around like the Lego blocks or composability. >> Yeah, so the speed, so that's why we love talking about accelerating consumption, and choice, and cloud adoption, because the speed of which everyone is adopting new technologies is just astronomical. >> Michael, comment on that point, because I always, this is our eight year covering Sapphire with theCUBE. It's our first year we're doing it from the studio as well. But Bill McDermott has always been on this with the whole dashboarding thing. If you look at SAP, the speed of business, how (mumbles) year that was. But each year, he never really changed, it's been the same arc, might've been a zigzag here and there, a little success factors here and there, all this kind of integration you guys have done. But it's been the same message, data's at the heart of the customers' outcomes. And the dashboards of old were data warehouses. But now he was showing a vision where, with the speed of data, the speed of software, you can get your business dashboard at your fingertips. That's what the customers are looking for. Your thoughts? >> It's not only being able to get that information at your fingertips, but actually being able to do something about it. So you can build those applications that can make an impact. So if you have, you're using our iOS SDK, and you've build that Apple interface, you have a nice interface that you can move an order, or you can do something about it while you're traveling. So you have this great dashboard, but now it's actionable. 
>> And this is the big difference, this is what makes his original vision, which certainly you can replicate with SAP's suite of data, and data and software, to a whole nother dimension of new apps. So app developers can come in and create these apps, and create new value propositions. >> Absolutely. >> All right, so how do they do that? What's the advice the customers, as they look at this new announcement, the impact of them, what does it mean to customer? Pick your cloud of choice? Use the APIs? >> Plenty of choices, and of course, we offer them a lot of guidance too, right? Because we've got a lot of great customers that are using the cloud platform today, some of which are presenting here at Sapphire. Karma Automotive, we love their story. They used to be Fisker Automotive, an all electronic vehicle. And it's amazing that the things that they want to do, and they're using the cloud platform in order to do that. But it's just another example of an innovative company that's looking to work with a company like SAP, and do everything in the cloud, building an application that will make it easier in terms of IoT, the sensors, and things like that, so they can track it to be able to take action on it. So it's very exciting. So lots of new things that are happening. >> I think there's two things that jump out at me, just to summarize the freedom that developers in the cloud-native world can do to create new apps, that also blend in on all of the existing value that SAP's already doing in the marketplace, that's always been, that was something that I observed last year, this is now a realization of that. But two, is now the customers now have a choice to put whatever they want in whatever cloud. And to me, what we've seen on theCUBE over the many interviews we've done, people who follow theCUBE know we've talked to a lot of people, is the workloads find their homes, some like Amazon, some like Azure, some like Google, and I think that is what customers are telling us, and you guys are now offering that choice. "Hey, put some workloads over there. "It doesn't matter where you want to put 'em, "we're just going to run 'em with--" >> And where we can help is really on the business service side. We have the right types of application services within the platform as a service offering, to enable them to create those types of apps to support their business. >> Applications, data, value for customers. >> And it's the integration of data into the application, because that's what's important. >> There'll be a new generation of application developers. We're standing up application like PowerPoint slides, really composing apps, that is the DevOps mainstream trend. Emily, thanks so much for sharing the great news. Michael, good to see you. Thanks for coming on theCUBE. Special Sapphire Now 2017 coverage. Breaking the news of the three Cs, multi-cloud, SAP's new announcement in Orlando. This is theCUBE coverage. More coverage after this short break.
SUMMARY :
Emily Mui and Michael Hill of SAP walk through the Sapphire Now announcements around SAP Cloud Platform, framed as three Cs: accelerating cloud adoption, consumption, and choice. Built on Cloud Foundry, the platform now runs multi-cloud, generally available on Amazon Web Services, in beta on Microsoft Azure, and previewed on Google Cloud Platform, all managed from one cockpit, while the API Business Hub, new IoT services, the Workflow service, and SAP CoPilot round out the consumption story. Customers such as Karma Automotive illustrate how app developers can combine these programmable services with SAP's existing business-process strengths.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Emily | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Emily Mui | PERSON | 0.99+ |
Michael Hill | PERSON | 0.99+ |
Bill McDermott | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Karma Automotive | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Fisker Automotive | ORGANIZATION | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
Orlando | LOCATION | 0.99+ |
Node.js | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
Python | TITLE | 0.99+ |
PowerPoint | TITLE | 0.99+ |
last quarter | DATE | 0.99+ |
three days | QUANTITY | 0.99+ |
Sapphire | ORGANIZATION | 0.99+ |
two things | QUANTITY | 0.99+ |
SAP Cloud Platform | TITLE | 0.99+ |
eight year | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
two | QUANTITY | 0.98+ |
Microsoft | ORGANIZATION | 0.98+ |
Three things | QUANTITY | 0.98+ |
Apple | ORGANIZATION | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
each year | QUANTITY | 0.98+ |
Google Cloud Platform | TITLE | 0.98+ |
Spanner | TITLE | 0.98+ |
Amazon Web Services | ORGANIZATION | 0.97+ |
a year ago | DATE | 0.97+ |
three | QUANTITY | 0.97+ |
iOS SDK | TITLE | 0.97+ |
Azure | TITLE | 0.97+ |
HANA | TITLE | 0.97+ |
Cloud Foundry | TITLE | 0.97+ |
HANA Enterprise Cloud | TITLE | 0.97+ |
hundreds | QUANTITY | 0.97+ |
first year | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
Amazon Web Service | ORGANIZATION | 0.96+ |
Lego | ORGANIZATION | 0.95+ |
Sapphire | TITLE | 0.94+ |
SAPS | TITLE | 0.94+ |
one cockpit | QUANTITY | 0.93+ |
TensorFlow | TITLE | 0.92+ |
SAP CoPilot | TITLE | 0.91+ |
CoPilot | TITLE | 0.87+ |
three Cs | QUANTITY | 0.87+ |
2017 | DATE | 0.86+ |
HANA | ORGANIZATION | 0.84+ |