Dietmar Fauser, Amadeus | Red Hat Summit 2018


 

>> Announcer: From San Francisco, it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back everyone. This is theCUBE live here in San Francisco at Moscone West for Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE, with John Troyer, the co-founder of TechReckoning, an advisory firm in the area of open source communities and technology. Our next guest is Cube alumni Dietmar Fauser, head of core platforms and middleware at Amadeus, experienced Red Hatter, event go-er, and practitioner. Great to have you back, great to see you. >> Thank you, good to be here. >> So why are you here, what's going on? Tell us the latest and greatest. What's going on in your world? Obviously, you've been on theCUBE. You go on YouTube, there's a lot of videos on there where you go into great detail. You've been on the Docker journey. You've got Red Hat, you've got some Oracle. You've got a complex environment. You're managing cloud native-like services. Tell us about it. >> We do, yes. This time I am here mostly to feed back some experience from concrete implementations out there in the cloud and on premise. Paul told me that the theme was mostly hybrid cloud deployments, so we have chosen two of our really big applications to explain how concretely this works out when you deploy on the cloud. >> So you were up on stage this morning in the keynote. I think the scale of your operation maybe raised some eyebrows as well. You're talking about over a trillion transactions. Can you talk a little bit about, talk about your multi-cloud stance and what you showed this morning. >> Okay, so first, to frame the trillion transactions a bit: they are not traditional database transactions. It's individual data accesses in a highly in-memory-cached environment. So I'd say that's a very large number, and it's a significant challenge to produce this system. We're talking about more than 100,000 cores deployed for these applications. Response time matters extremely in this game because, at the end, what we are talking about here is the back end that powers large B2C sites, like Kayak, some major search engines, online travel agencies. It just has to respond in a very fast way. Which pushed us to deploy the solutions very close to where the transactions are really originating, rather than in our historical data centers in Germany. We just want to take out the back-and-forth travel under the Atlantic, basically, to create a better end user experience. >> Furrier: So you had to drive performance big time? >> Very much. It's either performance or higher availability, or both, actually. >> This is a true hybrid cloud, right? You're on prem, you're in AWS, and you're in Google Cloud. So could you talk a little bit about that? All powered by OpenShift. >> OpenShift is the common denominator of the solutions. One of our core design goals is to build the applications in a platform-agnostic way. An application should not know what its deployment topology is or what the underlying infrastructure is. Which is why I believe that platforms like OpenShift, and Kubernetes underneath, are so important, because they take over the role of a traditional operating system, but at a larger scale. Either in big cloud deployments or on premise, the span of operations that you get with these environments is just like an OS, but on a bigger scale. It's not a surprise that people talked about this as a data center operating system for a while.
We use it this way, so OpenShift is clearly the masterpiece, I would say, of the deployment. >> That's the key though, I think. Thinking about it as an operating system or an operating environment is the kind of architectural mindset that you have to be in. Because you've got to look at these resources and connections and link them together. You've got all these teams and systems. So you've got to be a systems person, kind of, by design. How does someone get there that may or may not have traditional systems experience? Like us early-generation systems folks have gone through. Because you have devops automating away things. You have more of an SRE model that Google's talking about. Talking about large scale, it's not a data center anymore, it's an operating environment. How do people get there? What's your recommendation? How do I learn more? What do I do to deploy architecturally? >> That's a key question, I think. I think there were two sections to your question, how to get there, so. I think at Amadeus we are pretty good at catching big trends in the industry early. We are very close to large engineering houses like Google and Facebook and others, like Red Hat of course, and so it was pretty quickly clear to us, at least to a small number of these decision-makers, that the combination of Red Hat and Google was kind of a game-changing event, which is why we went there. >> Furrier: The containers have been important for you guys. >> Containers were coming along, so when this happened and Docker became big, our development teams wanted to do containers. It was not something that management had to push for; it was a grassroots type of adoption here. So different pieces came together that gave us some form of certainty, or a belief, that these platforms would be around for a decade to come. >> Developers love Kubernetes, and I mean that, containers, it's like a fish to water, it's just natural. Now talk about Kubernetes now. OpenShift made a bet with Kubernetes, obviously, a few years ago. People were like, what is that about? Now it's obvious why. How are you looking at the Kubernetes trend? Obviously it creates a de facto capability, you can wrap services around it, there's a notion of service meshes coming, Istio is the hottest project in the Linux Foundation and CNCF, KubeFlow is right behind it, I mean these are kind of thinking about services and micro-services and workload management. How do you view that, what's your opinion on that direction? >> I'm afraid there is no simple answer to this, because if you start new solutions from scratch, going directly to Kubernetes and OpenShift is the natural way. Now, the big thing in large corporations is that we all have legacy applications, whatever we call legacy applications. In our case these are pretty large C++ environments that are relatively modern, but they are not strictly micro-service based and they are a bit fatter; they have an enterprise service bus on top of this, and we have very awkward, old network protocols. So going straight to the mesh and micro-services for these applications is not a possibility, because there is significant re-engineering needed in our own applications before we believe it makes sense to throw them onto a container platform. We could stick all of this in a container, but you have to wonder whether you get the benefit you really want. >> Furrier: Time ROI, return on investment, on the engineering, retrofitting it for service mesh.
>> Yes, I mean, the interesting thing is, Kubernetes or not, we would have touched these applications anyway to cut them into more manageable pieces. We call this compartmentalization. Other people may call this micro-service-ification, or however we want to call it. To me, this is work that is independent of the cloud strategy in itself. Some of our applications, to move faster, we have decided to put more or less as they are onto OpenShift; for others we take some more time to say, okay, let's do the engineering homework first so that we reap the full benefits of this platform. And the benefit really is, what is fundamental for developers, efficiency and agility: that you have relatively small, independent load sets, so that you can quickly load small pieces and roll them in. >> Time to production, time from developer to production. >> But also quality. The more you isolate the changes, the less you run the risk that a change is cross-impacting things that are in the same delivery, basically. It's a lot about smaller chunks of software being managed, and for this obviously a micro-service platform is absolutely ideal. So it helps us to push the spirit of the company in this direction: no more monolithic applications, fast daily loads. >> Morale's higher, people happy. >> Well, it's a long journey, so some are happy, some are impatient like me to move faster. Some are still a bit reluctant; it's normal in larger organizations. >> Talk about the scale, I'm really interested in your reaction and experience, let's talk about the scale. I think that's a big story. As cloud enables more horizontally scalable applications, the operating aperture is bigger. It's not like managing systems here, it's a little bit bigger picture. How are you guys looking at the operational framework of that? Because now you're essentially in a site reliability engineering role, that's what Google talks about with SRE, but now you're operating while you're still developing code, and you're writing applications. So, talk about that dynamic and how you see that playing out going forward. >> So, what we try to do is to separate the platform aspects from the application aspects. I'm leading the platform engineering unit, including platform operations, so this means that we have the platform SRE role, if you want; we oversee frontline operations and the 24-by-seven stability of the global system. To me, the game is really about trying to separate and isolate as much as we can from the applications and put it on the platform, because we have close to 100 applications running on the platform, and if we can fix stuff on the platform for all the applications without being involved in the individual load cycles and waiting for them to integrate some features, we just move much faster. >> You can decouple the application from some core platform features, make them highly cohesive, sounds like an operating system to me. >> It is, and I'll come to the second thought on the SRE a bit later, but currently the big bulk of the work we are doing with OpenShift is to bring our classical platform stuff under OpenShift. And by classical platform stuff, I mean our internal components like security, business rule engines, communication systems, but also the data management side of the house.
And I think this is what we're going to witness over the next two or three years: how can we manage, in our case, Couchbase, Kafka, all of those things. We want them to be managed as applications under OpenShift with descriptive blueprints, descriptive configurations, which means you define the to-be state of a system and you leave it to OpenShift to ensure that if the to-be state, like "I need 1,000 pods for a given application," is violated, OpenShift will automatically repair the system. >> That's interesting, you bring up a dynamic that's a trend we're seeing, I want to get your thoughts on this. It hasn't really been crystallized yet and I haven't heard a good explanation, but the trend seems to be to have many databases. In other words, we're living in a world where there's a database for everything, but not one database. So, like, if I've got an application at the edge of the network, it can have its own database, so we shouldn't have to design around a single database concept; the concept should still be databases everywhere, living and growing, and managing that. First of all, do you believe that, and if so, how do you architect the platform to manage a potentially ubiquitous number of different kinds of databases, where the apps are kind of driving their own database role and working with the core platform? Seems to be an area people are really talking about, because this is where AI shines if you get that right. >> So I agree with you that there are a lot of solutions out there. Sometimes it's a bit of a confusing choice, which type of solution to choose. In our case we have quite a mature, what we call a technical policy, a catalog of technologies that application designers can choose from, and there are several data management stores in there. Traditionally speaking we use Oracle, so Oracle is there and is a good solution for many use cases. We were very early in the NoSQL space, so we have introduced Couchbase for highly scalable environments, Mongo for more sophisticated objects or operations. We try to educate, or to talk with our application people, not to go outside of this. We also use Redis for our platform-internal things, so we try to narrow their choices down. >> Stack the databases, what about the glue layer? Any kind of glue layer standards, gluing things together? >> In general we always put an API layer on top of the solutions. We use our own infrastructure independence layer when we talk to the databases, so we try not to have those native bindings in the application; it's always about disentangling platform aspects from the application. >> So Dietmar, you did talk about this architectural concept, right, of these layers, and you're protecting the application from the platform, but what about underneath, right? You're running on multiple clouds. What have been the challenges? In theory, you know, there's a separation layer there and OpenShift is underneath everything, but you've got OpenStack, you've got the public clouds. Have there been some challenges operationally in making sure everything runs the same? >> There are multiple challenges. To start with, the different infrastructures do not behave exactly the same, so just taking something from Google to Amazon works in theory, but practically speaking the APIs are not exactly the same, so you need to remap the APIs. The underlying behavior is not exactly the same.
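The "infrastructure independence layer" and the API remapping Fauser mentions come down to the same pattern: application code depends on a thin internal interface, and provider- or store-specific bindings live behind it. The sketch below is a minimal illustration of that pattern in Python, not Amadeus code; the class and function names are hypothetical, and Redis stands in for whichever store the technical policy selects.

```python
# Minimal sketch of an "infrastructure independence layer": the application
# codes against a small internal interface, and the binding to a concrete
# store (Redis here, via redis-py) stays behind it. Names are hypothetical.
from abc import ABC, abstractmethod
from typing import Optional

import redis  # assumed dependency: redis-py


class KeyValueStore(ABC):
    """The only surface application code sees; no native driver bindings leak out."""

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...


class RedisStore(KeyValueStore):
    """One concrete backend; swapping it for Couchbase or Mongo would not touch callers."""

    def __init__(self, host: str = "localhost", port: int = 6379) -> None:
        self._client = redis.Redis(host=host, port=port)

    def get(self, key: str) -> Optional[bytes]:
        return self._client.get(key)

    def put(self, key: str, value: bytes) -> None:
        self._client.set(key, value)


def cache_availability(store: KeyValueStore, key: str, payload: bytes) -> None:
    # Application-level code depends only on the interface, so the same call
    # works whether the platform wires in Redis, Couchbase, or something else.
    store.put(key, payload)
```

The same idea applies one level down: wrapping provider APIs behind an internal layer is what makes moving a workload from Google to Amazon a remapping exercise rather than an application rewrite.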
In general, from an application design point of view, and we are pretty used to this anyway because we are distributed systems specialists, the learning curve comes from the fact that you go to an infrastructure that is, in itself, much less reliable if you look at individual pieces of it. It works fine if you use the availability zone concepts well and you start with the mindset that you can lose availability zones or even complete regions, and take this as a granted, natural event that will happen. If you are in this mindset there aren't so many surprises; OpenShift operates very well with the unreliability of virtual machines. We even contract, in the case of Google, what are called preemptible VMs, so they get restarted very frequently anyway, because they have a different value proposition: if you can run with less reliable stuff, you pay less, basically. So if you can take advantage of this, you have another advantage using those. >> Dietmar, it's great to hear your stories, congratulations on your success and all the work you're doing, it sounds like really cutting-edge and great work. You've been to many Red Hat Summits. What's the revelation this year? What's the big thing that people should know about that's happening in 2018? Is it Kubernetes? What should people pay attention to, in your opinion? >> I think we can now take Kubernetes as a given. That's very good news for me and for Amadeus; it was quite a bet at the beginning, but we see this now as the de facto standard, and so I think people can now relax and say, okay, this is one of the pieces that will be predominant for the decade to come. Usually I'm referring to IT decades, only three years long, not 10 years. >> Okay, and as for moving to an operating system environment, I love that analogy. I think it's totally right from the data that we see. We're living in a cloud native world, hybrid cloud, on-premise, still true private cloud as Wikibon calls it, and really it's an operating system concept architecturally, and IoT is coming fast. It's just going to create more and more data. >> So, what I believe, and what we believe in general at Amadeus, is that the next evolution of systems, the big architectural design approach, will be to create applications that are much more streaming oriented, because it allows you to decouple the different computing steps much more. So rather than waiting for a transaction, you subscribe to an event, and any number of processes can subscribe to an event; the producer doesn't have to know who is consuming what. So we go streaming, data-centric, and massively asynchronous. Which yields smoother throughput, fewer hiccups, because in transactional systems you always have something that slows down temporarily a little bit. It's very difficult to architect systems with absolute separation of concerns in mind, so sometimes a slowdown of a disk might trigger impacts to other systems. With a streaming and asynchronous approach the systems tend to be much more stable, with higher throughput. >> And a lot more scalable. There's the horizontally scalable nature of the cloud; you've got to have the streaming and this architecture in place. This is a fundamental mistake we see with people out there: they don't think like this, but then when they hit scale points, it breaks.
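The streaming model described here, where a producer emits events without knowing who consumes them and any number of consumer groups subscribe independently, can be sketched with the kafka-python client. This is an illustration only, not Amadeus code; the broker address, topic name, and payload are made up.

```python
# Sketch of publish/subscribe decoupling with kafka-python. The producer
# emits an event and moves on; consumers in separate groups each receive
# their own copy of the stream, at their own pace. Names are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]   # assumed local broker for the example
TOPIC = "booking-events"       # hypothetical topic name

# Producer side: fire the event; it has no knowledge of downstream consumers.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)
producer.send(TOPIC, {"pnr": "ABC123", "event": "seat_selected"})
producer.flush()

# Consumer side: each group_id gets the full stream independently, which is
# what lets processing steps stay decoupled and massively asynchronous.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="analytics",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print(message.value)       # e.g. {'pnr': 'ABC123', 'event': 'seat_selected'}
```

Adding a second consumer group, say for fraud detection, requires no change on the producer side, which is the decoupling Fauser is pointing at.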
>> Absolutely, and so, I mean, we are a highly transactional shop, but many of our use cases are already asynchronous, so we go a step further on this, and we currently work on bringing Kafka massively under OpenShift, because we're going to use Kafka to connect data center footprints for all types of data that we have to stream to the applications that are out in the public cloud, or on premise basically. >> We should call you professor because this was such a great segment, thanks for sharing an awesome amount of insight on theCUBE. Thanks for coming on, good to see you again. Dietmar Fauser, head of core platforms and middleware at Amadeus. You know, down and dirty, getting under the hood really at the architecture of scale, high availability, high performance of the systems to be scalable with cloud; obviously open source is powering it, OpenShift and Red Hat. It's theCUBE bringing you all the power here in San Francisco for Red Hat Summit 2018. I'm John Furrier with John Troyer, we'll be back with more after this short break. (electronic music)

Published Date: May 8, 2018
