Image Title

Search Results for Compose:

Andy


 

>> Hi, my name is Andy Clemenko. I'm a Senior Solutions Engineer at StackRox. Thanks for joining us today for my talk on labels, labels, labels. Obviously, you can reach me at all the socials. Before we get started, I like to point you to my GitHub repo, you can go to andyc.info/dc20, and it'll take you to my GitHub page where I've got all of this documentation, I've got the Keynote file there. YAMLs, I've got Dockerfiles, Compose files, all that good stuff. If you want to follow along, great, if not go back and review later, kind of fun. So let me tell you a little bit about myself. I am a former DOD contractor. This is my seventh DockerCon. I've spoken, I had the pleasure to speak at a few of them, one even in Europe. I was even a Docker employee for quite a number of years, providing solutions to the federal government and customers around containers and all things Docker. So I've been doing this a little while. One of the things that I always found interesting was the lack of understanding around labels. So why labels, right? Well, as a former DOD contractor, I had built out a large registry. And the question I constantly got was, where did this image come from? How did you get it? What's in it? Where did it come from? How did it get here? And one of the things we did to kind of alleviate some of those questions was we established a baseline set of labels. Labels really are designed to provide as much metadata around the image as possible. I ask everyone in attendance, when was the last time you pulled an image and had 100% confidence, you knew what was inside it, where it was built, how it was built, when it was built, you probably didn't, right? The last thing we obviously want is a container fire, like our image on the screen. And one kind of interesting way we can kind of prevent that is through the use of labels. We can use labels to address security, address some of the simplicity on how to run these images. So think of it, kind of like self documenting, Think of it also as an audit trail, image provenance, things like that. These are some interesting concepts that we can definitely mandate as we move forward. What is a label, right? Specifically what is the Schema? It's just a key-value. All right? It's any key and pretty much any value. What if we could dump in all kinds of information? What if we could encode things and store it in there? And I've got a fun little demo to show you about that. Let's start off with some of the simple keys, right? Author, date, description, version. Some of the basic information around the image. That would be pretty useful, right? What about specific labels for CI? What about a, where's the version control? Where's the source, right? Whether it's Git, whether it's GitLab, whether it's GitHub, whether it's Gitosis, right? Even SPN, who cares? Where are the source files that built, where's the Docker file that built this image? What's the commit number? That might be interesting in terms of tracking the resulting image to a person or to a commit, hopefully then to a person. How is it built? What if you wanted to play with it and do a git clone of the repo and then build the Docker file on your own? Having a label specifically dedicated on how to build this image might be interesting for development work. Where it was built, and obviously what build number, right? These kind of all, not only talk about continuous integration, CI but also start to talk about security. Specifically what server built it. The version control number, the version number, the commit number, again, how it was built. What's the specific build number? What was that job number in, say, Jenkins or GitLab? What if we could take it a step further? What if we could actually apply policy enforcement in the build pipeline, looking specifically for some of these specific labels? I've got a good example of, in my demo of a policy enforcement. So let's look at some sample labels. Now originally, this idea came out of label-schema.org. And then it was a modified to opencontainers, org.opencontainers.image. There is a link in my GitHub page that links to the full reference. But these are some of the labels that I like to use, just as kind of like a standardization. So obviously, Author's, an email address, so now the image is attributable to a person, that's always kind of good for security and reliability. Where's the source? Where's the version control that has the source, the Docker file and all the assets? How it was built, build number, build server the commit, we talked about, when it was created, a simple description. A fun one I like adding in is the healthZendpoint. Now obviously, the health check directive should be in the Docker file. But if you've got other systems that want to ping your applications, why not declare it and make it queryable? Image version, obviously, that's simple declarative And then a title. And then I've got the two fun ones. Remember, I talked about what if we could encode some fun things? Hypothetically, what if we could encode the Compose file of how to build the stack in the first image itself? And conversely the Kubernetes? Well, actually, you can and I have a demo to show you how to kind of take advantage of that. So how do we create labels? And really creating labels as a function of build time okay? You can't really add labels to an image after the fact. The way you do add labels is either through the Docker file, which I'm a big fan of, because it's declarative. It's in version control. It's kind of irrefutable, especially if you're tracking that commit number in a label. You can extend it from being a static kind of declaration to more a dynamic with build arguments. And I can show you, I'll show you in a little while how you can use a build argument at build time to pass in that variable. And then obviously, if you did it by hand, you could do a docker build--label key equals value. I'm not a big fan of the third one, I love the first one and obviously the second one. Being dynamic we can take advantage of some of the variables coming out of version control. Or I should say, some of the variables coming out of our CI system. And that way, it self documents effectively at build time, which is kind of cool. How do we view labels? Well, there's two major ways to view labels. The first one is obviously a docker pull and docker inspect. You can pull the image locally, you can inspect it, you can obviously, it's going to output as JSON. So you going to use something like JQ to crack it open and look at the individual labels. Another one which I found recently was Skopeo from Red Hat. This allows you to actually query the registry server. So you don't even have to pull the image initially. This can be really useful if you're on a really small development workstation, and you're trying to talk to a Kubernetes cluster and wanting to deploy apps kind of in a very simple manner. Okay? And this was that use case, right? Using Kubernetes, the Kubernetes demo. One of the interesting things about this is that you can base64 encode almost anything, push it in as text into a label and then base64 decode it, and then use it. So in this case, in my demo, I'll show you how we can actually use a kubectl apply piped from the base64 decode from the label itself from skopeo talking to the registry. And what's interesting about this kind of technique is you don't need to store Helm charts. You don't need to learn another language for your declarative automation, right? You don't need all this extra levels of abstraction inherently, if you use it as a label with a kubectl apply, It's just built in. It's kind of like the kiss approach to a certain extent. It does require some encoding when you actually build the image, but to me, it doesn't seem that hard. Okay, let's take a look at a demo. And what I'm going to do for my demo, before we actually get started is here's my repo. Here's a, let me actually go to the actual full repo. So here's the repo, right? And I've got my Jenkins pipeline 'cause I'm using Jenkins for this demo. And in my demo flask, I've got the Docker file. I've got my compose and my Kubernetes YAML. So let's take a look at the Docker file, right? So it's a simple Alpine image. The org statements are the build time arguments that are passed in. Label, so again, I'm using the org.opencontainers.image.blank, for most of them. There's a typo there. Let's see if you can find it, I'll show you it later. My source, build date, build number, commit. Build number and get commit are derived from the Jenkins itself, which is nice. I can just take advantage of existing URLs. I don't have to create anything crazy. And again, I've got my actual Docker build command. Now this is just a label on how to build it. And then here's my simple Python, APK upgrade, remove the package manager, kind of some security stuff, health check getting Python through, okay? Let's take a look at the Jenkins pipeline real quick. So here is my Jenkins pipeline and I have four major stages, four stages, I have built. And here in build, what I do is I actually do the Git clone. And then I do my docker build. From there, I actually tell the Jenkins StackRox plugin. So that's what I'm using for my security scanning. So go ahead and scan, basically, I'm staging it to scan the image. I'm pushing it to Hub, okay? Where I can see the, basically I'm pushing the image up to Hub so such that my StackRox security scanner can go ahead and scan the image. I'm kicking off the scan itself. And then if everything's successful, I'm pushing it to prod. Now what I'm doing is I'm just using the same image with two tags, pre-prod and prod. This is not exactly ideal, in your environment, you probably want to use separate registries and non-prod and a production registry, but for demonstration purposes, I think this is okay. So let's go over to my Jenkins and I've got a deliberate failure. And I'll show you why there's a reason for that. And let's go down. Let's look at my, so I have a StackRox report. Let's look at my report. And it says image required, required image label alert, right? Request that the maintainer, add the required label to the image, so we're missing a label, okay? One of the things we can do is let's flip over, and let's look at Skopeo. Right? I'm going to do this just the easy way. So instead of looking at org.zdocker, opencontainers.image.authors. Okay, see here it says build signature? That was the typo, we didn't actually pass in. So if we go back to our repo, we didn't pass in the the build time argument, we just passed in the word. So let's fix that real quick. That's the Docker file. Let's go ahead and put our dollar sign in their. First day with the fingers you going to love it. And let's go ahead and commit that. Okay? So now that that's committed, we can go back to Jenkins, and we can actually do another build. And there's number 12. And as you can see, I've been playing with this for a little bit today. And while that's running, come on, we can go ahead and look at the Console output. Okay, so there's our image. And again, look at all the build arguments that we're passing into the build statement. So we're passing in the date and the date gets derived on the command line. With the build arguments, there's the base64 encoded of the Compose file. Here's the base64 encoding of the Kubernetes YAML. We do the build. And then let's go down to the bottom layer exists and successful. So here's where we can see no system policy violations profound marking stack regimes security plugin, build step as successful, okay? So we're actually able to do policy enforcement that that image exists, that that label sorry, exists in the image. And again, we can look at the security report and there's no policy violations and no vulnerabilities. So that's pretty good for security, right? We can now enforce and mandate use of certain labels within our images. And let's flip back over to Skopeo, and let's go ahead and look at it. So we're looking at the prod version again. And there's it is in my email address. And that validated that that was valid for that policy. So that's kind of cool. Now, let's take it a step further. What if, let's go ahead and take a look at all of the image, all the labels for a second, let me remove the dash org, make it pretty. Okay? So we have all of our image labels. Again, author's build, commit number, look at the commit number. It was built today build number 12. We saw that right? Delete, build 12. So that's kind of cool dynamic labels. Name, healthz, right? But what we're looking for is we're going to look at the org.zdockerketers label. So let's go look at the label real quick. Okay, well that doesn't really help us because it's encoded but let's base64 dash D, let's decode it. And I need to put the dash r in there 'cause it doesn't like, there we go. So there's my Kubernetes YAML. So why can't we simply kubectl apply dash f? Let's just apply it from standard end. So now we've actually used that label. From the image that we've queried with skopeo, from a remote registry to deploy locally to our Kubernetes cluster. So let's go ahead and look everything's up and running, perfect. So what does that look like, right? So luckily, I'm using traefik for Ingress 'cause I love it. And I've got an object in my Kubernetes YAML called flask.doctor.life. That's my Ingress object for traefik. I can go to flask.docker.life. And I can hit refresh. Obviously, I'm not a very good web designer 'cause the background image in the text. We can go ahead and refresh it a couple times we've got Redis storing a hit counter. We can see that our server name is roundrobing. Okay? That's kind of cool. So let's kind of recap a little bit about my demo environment. So my demo environment, I'm using DigitalOcean, Ubuntu 19.10 Vms. I'm using K3s instead of full Kubernetes either full Rancher, full Open Shift or Docker Enterprise. I think K3s has some really interesting advantages on the development side and it's kind of intended for IoT but it works really well and it deploys super easy. I'm using traefik for Ingress. I love traefik. I may or may not be a traefik ambassador. I'm using Jenkins for CI. And I'm using StackRox for image scanning and policy enforcement. One of the things to think about though, especially in terms of labels is none of this demo stack is required. You can be in any cloud, you can be in CentOs, you can be in any Kubernetes. You can even be in swarm, if you wanted to, or Docker compose. Any Ingress, any CI system, Jenkins, circle, GitLab, it doesn't matter. And pretty much any scanning. One of the things that I think is kind of nice about at least StackRox is that we do a lot more than just image scanning, right? With the policy enforcement things like that. I guess that's kind of a shameless plug. But again, any of this stack is completely replaceable, with any comparative product in that category. So I'd like to, again, point you guys to the andyc.infodc20, that's take you right to the GitHub repo. You can reach out to me at any of the socials @clemenko or andy@stackrox.com. And thank you for attending. I hope you learned something fun about labels. And hopefully you guys can standardize labels in your organization and really kind of take your images and the image provenance to a new level. Thanks for watching. (upbeat music)

Published Date : Sep 28 2020

SUMMARY :

at org the org to the andyc

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Andy ClemenkoPERSON

0.99+

EuropeLOCATION

0.99+

100%QUANTITY

0.99+

StackRoxORGANIZATION

0.99+

two tagsQUANTITY

0.99+

PythonTITLE

0.99+

flask.docker.lifeOTHER

0.99+

andy@stackrox.comOTHER

0.99+

AndyPERSON

0.99+

andyc.info/dc20OTHER

0.99+

DockerORGANIZATION

0.99+

todayDATE

0.99+

flask.doctor.lifeOTHER

0.99+

third oneQUANTITY

0.99+

DockerfilesTITLE

0.99+

seventhQUANTITY

0.99+

KubernetesTITLE

0.98+

first oneQUANTITY

0.98+

second oneQUANTITY

0.98+

label-schema.orgOTHER

0.98+

OneQUANTITY

0.98+

KeynoteTITLE

0.98+

andyc.infodc20OTHER

0.98+

first imageQUANTITY

0.98+

First dayQUANTITY

0.97+

CentOsTITLE

0.97+

StackRoxTITLE

0.97+

SkopeoORGANIZATION

0.96+

Red HatORGANIZATION

0.96+

GitTITLE

0.96+

Ubuntu 19.10 VmsTITLE

0.95+

oneQUANTITY

0.95+

build 12OTHER

0.95+

JQTITLE

0.95+

base64TITLE

0.93+

JenkinsTITLE

0.93+

build number 12OTHER

0.91+

org.opencontainers.image.OTHER

0.91+

IngressORGANIZATION

0.89+

DODORGANIZATION

0.89+

opencontainers.image.authors.OTHER

0.89+

a secondQUANTITY

0.89+

two major waysQUANTITY

0.89+

Jenkins StackRoxTITLE

0.88+

GitosisTITLE

0.86+

GitLabORGANIZATION

0.86+

GitHubORGANIZATION

0.86+

two fun onesQUANTITY

0.84+

GitLabTITLE

0.82+

skopeoORGANIZATION

0.82+

DockerTITLE

0.81+

JSONTITLE

0.81+

traefikTITLE

0.77+

skopeoTITLE

0.76+

@clemenkoPERSON

0.74+

RancherTITLE

0.74+

IngressTITLE

0.73+

org.zdockerOTHER

0.72+

RedisTITLE

0.72+

DigitalOceanTITLE

0.71+

org.opencontainers.image.blankOTHER

0.71+

KuberORGANIZATION

0.69+

Jeff Klink, Sera4 | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020, Virtual. Brought to you by Red Hat, The Cloud Native Computing Foundation and Ecosystem partners. >> Welcome back, I'm Stu Miniman and this is CUBEs coverage of KubeCon CloudNativeCon 2020 in Europe, the virtual edition and of course one of the things we love when we come to these conferences is to get to the actual practitioners, understanding how they're using the various technologies especially here at the CNCF show, so many projects, lots of things changing and really excited. We're going to talk about security in a slightly different way than we often do on theCUBE so happy to welcome to the program from Sera4 I have Jeff Klink who's the Vice President of Engineering and Cloud. Jeff, thanks so much for joining us. >> Thanks too, thanks for having me. >> All right so I teed you up there, give us if you could just a quick thumbnail on Sera4, what your company does and then your role there. >> Absolutely so we're a physical hardware product addressing the telco markets, utility space, all of those so we kind of differentiate herself as a Bluetooth lock for that higher end space, the highest security market where digital encryption is really an absolute must. So we have a few products including our physical lock here, this is a physical padlock, it is where door locks and controllers that all operate over the Bluetooth protocol and that people can just use simply through their mobile phones and operate at the enterprise level. >> Yeah, I'm guessing it's a little bit more expensive than the the padlock I have on my shed which is getting a little rusty and needs a little work but it probably not quite what I'm looking for but you have Cloud, you know, in your title so give us if you could a little bit you know, what the underlying technology that you're responsible for and you know, I understand you've rolled out Kubernetes over the last couple of years, kind of set us up with what were the challenges you were facing before you started using that? >> Absolutely so Stu We've grown over the last five years really as a company like in leaps and bounds and part of that has been the scalability concern and where we go with that, you know, originally starting in the virtual machine space and, you know, original some small customers in telco as we build up the locks and eventually we knew that scalability was really a concern for us, we needed to address that pretty quickly. So as we started to build out our data center space and in this market it's a bit different than your shed locks. Bluetooth locks are kind of everywhere now, they're in logistics, they're on your home and you actually see a lot of compromises these days actually happening on those kind of locks, the home security locks, they're not built for rattling and banging and all that kind of pieces that you would expect in a telco or utility market and in the nuclear space or so you really don't want to lock that, you know, when it's dropped or bang the boat immediately begins to kind of fall apart in your hands and two you're going to expect a different type of security much like you'd see in your SSH certificates, you know, a digital key certificate that arrives there. So in our as we grew up through that piece Kubernetes became a pretty big player for us to try to deal with some of the scale and also to try to deal with some of the sovereignty pieces you don't see in your shed locks. The data sovereignty meeting in your country or as close to you as possible to try to keep that data with the telco, with the utility and kind of in country or in continent with you as well. That was a big challenge for us right off the bat. >> Yeah, you know Jeff absolutely, I have some background from the telco space obviously, there's very rigorous certifications, there's lots of environments that I need to fit into. I want to poke at a word that you mentioned, scale. So scale means lots of things to lots of different people, this year at the KubeCon CloudNativeCon show, one of the scale pieces we're talking about is edge just getting to lots of different locations as opposed to when people first thought about, you know, scale of containers and the like, it was like, do I need to be like Google? Do I have to have that much a scale? Of course, there is only one Google and there's only a handful of companies that need that kind of scale, what was it from your standpoint, is it you know, the latency of all of these devices, is it you know, just the pure number of devices, the number of locations, what was what was the scale limiting factor that you were seeing? >> It's a bit of both in two things, one it was a scale as we brought new customers on, there were extra databases, there was extra identity services, you know, the more locks we sold and the more telcos we sold too suddenly what we started finding is that we needed all these virtual machines and sources in some way to tie them together and the natural piece to those is start to build shared services like SSO and single sign on was a huge driver for us of how do we unite these spaces where they may have maintenance technicians in that space that work for two different telcos. Hey, tower one is down could you please use this padlock on this gate and then this padlock on this cabinet in order to fix it. So that kind of scale immediately showed us, we started to see email addresses or other on two different places and say, well, it might need access into this carrier site because some other carrier has a equipment on that site as well. So the scale started to pick up pretty quickly as well as the space where they started to unite together in a way that we said, well, we kind of have to scale to parts, not only the individuals databases and servers and identity and the storage of their web service data but also we had to unite them in a way that was GDPR compliant and compliant with a bunch of other regulations to say, how do we get these pieces together. So that's where we kind of started to tick the boxes to say in North America, in Latin America, South America we need centralized services but we need some central tie back mechanism as well to start to deal with scale. And the scale came when it went from Let's sell 1000 locks to, by the way, the carrier wants 8000 locks in the next coming months. That's a real scalability concern right off the bat, especially when you start to think of all the people going along with those locks in space as well. So that's the that's the kind of first piece we had to address and single sign on was the head of that for us. >> Excellent, well you know, today when we talk about how do i do container orchestration Kubernetes of course, is the first word that comes to mind, can you bring us back though, how did you end up with Kubernetes, were there other solutions you you looked at when you made your decision? What were your kind of key criteria? How did you choose what partners and vendors you ended up working with? >> So the first piece was is that we all had a lot of VM backgrounds, we had some good DevOps backgrounds as well but nobody was yet into the the container space heavily and so what we looked at originally was Docker swarm, it became our desktop, our daily, our working environment so we knew we were working towards microservices but then immediately this problem emerged that reminded me of say 10, 15 years ago, HD DVD versus Blu-ray and I thought about it as simply as that, these two are fantastic technologies, they're kind of competing in this space, Docker Compose was huge, Docker Hub was growing and growing and we kind of said you got to kind of pick a bucket and go with it and figure out who has the best backing between them, you know from a security policy, from a usage and size and scalability perspective, we knew we would scale this pretty quickly so we started to look at the DevOps and the tooling set to say, scale up by one or scale up by 10, is it doable? Infrastructure as code as well, what could I codify against the best? And as we started looking at those Kubernetes took a pretty quick change for us and actually the first piece of tooling that we looked at was Rancher, we said well there's a lot to learn the Kubernetes space and the Rancher team, they were growing like crazy and they were actually really, really good inside some of their slack channels and some of their groups but they said, reach out, we'll help you even as a free tier, you know and kind of grow our trust in you and you know, vice versa and develop that relationship and so that was our first major relationship was with Rancher and that grew our love for Kubernetes because it took away that first edge of what am i staring at here, it looks like Docker swarm, they put a UI on it, they put some lipstick on it and really helped us get through that first hurdle a couple years ago. >> Well, it's a common pattern that we see in this ecosystem that you know, open source, you try it, you get comfortable with it, you get engaged and then when it makes sense to roll it into production and really start scaling out, that's when you can really formalize those relationships so bring us through the project if you will. You know, how many applications were you starting with? What was the timeline? How many people were involved? Were there, you know, the training or organizational changes, you know, bring us through under the first bits of the project. >> Sure, absolutely. So, like anything it was a series of VMs, we had some VM that were load balanced for databases in the back and protected, we had some manual firewalls through our cloud provider as well but that was kind of the edge of it. You had your web services, your database services and another tier segregated by firewalls, we were operating at a single DCs. As we started to expand into Europe from the North America, Latin America base and as well as Africa, we said this has got to kind of stop. We have a lot of Vms, a lot of machines and so a parallel effort went underway to actually develop some of the new microservices and at first glance was our proxies, our ingresses, our gateways and then our identity service and SSL would be that unifying factor. We honestly knew that moving to Kubernetes in small steps probably wasn't going to be an easy task for us but moving the majority of services over to Kubernetes and then leaving some legacy ones in VM was definitely the right approach for us because now we're dealing with ingressing around the world. Now we're dealing with security of the main core stacks, that was kind of our hardcore focus is to say, secure the stacks up front, ingress from everywhere in the world through like an Anycast Technology and then the gateways will handle that and proxy across the globe and we'll build up from there exactly as we did today. So that was kind of the key for us is that we did develop our micro services, our identity services for SSO, our gateways and then our web services were all developed in containers to start and then we started looking at complimentary pieces like email notification mechanisms, text notification, any of those that could be containerized later, which is dealt with a single one off restful services were moved at a later date. All right. >> So Jeff, yeah absolutely. What to understand, okay, we went through all this technology, we did all these various pieces, what does this mean to your your business projects? So you talked about I need to roll out 8000 devices, is that happening faster? Is it you know, what's the actual business impact of this technology that you've rolled out? >> So here's the key part and here's a differentiator for us is we have two major areas we differentiate in and the first one is asymmetric cryptography. We do own the patents for that one so we know our communication is secure, even when we're lying over Bluetooth. So that's kind of the biggest and foremost one is that how do we communicate with the locks on how do we ensure we can all the time. Two is offline access, some of the major players don't have offline access, which means you can download your keys and assign your keys, go off site do a site to a nuclear bunker wherever it may be and we communicate directly with the lock itself. Our core technology is in the embedded controllers in the lock so that's kind of our key piece and then the lock is a housing around it, it's the mechanical mechanism to it all. So knowing that we had offline technology really nailed down allowed us to do what many called the blue-green approach, which is we're going down for four hours, heads up everybody globally we really need to make this transition but the transition was easy to make with our players, you know, these enterprise spaces and we say we're moving to Kubernetes. It's something where it's kind of a badge of honor to them and they're saying these guys, you know, they really know what they're doing. They've got Kubernetes on the back end, some we needed to explain it to but as soon as they started to hear the words Docker and Kubernetes they just said, wow, this guys are serious about enterprise, we're serious about addressing it and not only that they're forefront of other technologies. I think that's part of our security plan, we use asymmetric encryption, we don't use the Bluetooth security protocol so every time that's compromised, we're not compromised and it's a badge of honor we were much alongside the Kubernetes. >> Alright, Jeff the thing that we're hearing from a lot of companies out there is that that transition that you're going through from VMs to containerization I heard you say that you've got a DevOps practice in there, there's some skill set challenges, there's some training pieces, there's often, you know, maybe a bump or two in the road, I'm sure your project went completely smoothly but what can you share about, you know, the personnel skill sets, any lessons learned along the way that might help others? >> There was a ton. Rancher took that first edge off of us, you know, cube-cuddle, get things up, get things going, RKE in the Rancher space so the Rancher Kubernetes engine, they were kind of that first piece to say how do I get this engine up and going and then I'll work back and take away some of the UI elements and do it myself, from scheduling and making sure that nodes came up to understanding a deployment versus a DaemonSet, that first UI as we moved from like a Docker swarm environment to the the Rancher environment was really kind of key for us to say, I know what these volumes are, I know the networking and I all know these pieces but I don't know how to put core DNS in and start to get them to connect and all of those aspects and so that's where the UI part really took over. We had guys that were good on DevOps, we had guys are like, hey how do I hook it up to a back end and when you have those UI, those clicks like your pod security policy on or off, it's incredible. You turn it on fine, turn on the pod security policy and then from there, we'll either use the UI or we'll go deeper as we get the skill sets to do that so it gave us some really good assurances right off the bat. There were some technologies we really had to learn fast, we had to learn the cube-cuddle command line, we had to learn Helm, new infrastructure pieces with Terraform as well, those are kind of like our back end now. Those are our repeatability aspects that we can kind of get going with. So those are kind of our cores now is it's a Rancher every day, it's cube-cuddle from our command lines to kind of do those, Terraform to make sure we're doing the same thing but those are all practices we, you know, we cut our teeth with Rancher, we looked at the configs that are generated and said, alright, that's actually pretty good configure, you know, maybe there's a team to tolerance or a tweak we could make there but we kind of work backwards that way to have them give us some best practices and then verify those. >> So the space you're in, you have companies that rely on what you do. Security is so important, if you talk about telecommunications, you know, many of the other environments they have, you know, rigid requirements. I want to get to your understanding from you, you're using some open source tools, you've been working with startups, one of your suppliers Rancher was just acquired by SUSE, how's that relationship between you know, this ecosystem? Is that something that is there any concerns from your end user clients and what are your own comfort level with the moves and changes that are happening? >> Having gone through acquisitions myself and knowing the SUSE team pretty well, I'd say actually it's a great thing to know that the startups are funded in a great source. It's great to hear internally, externally their marketing departments are growing but you never know if a startup is growing or not. Knowing this acquisitions taking place actually gives me a lot of security. The team there was healthy, they were growing all the time but sometimes that can just be a face on a company and just talking to the internals candidly as they've always done with us, it's been amazing. So I think that's a great part knowing that there's some great open source texts, Helm Kubernetes as well that have great backers towards them, it's nice to see part of the ecosystem getting back as well in a healthy way rather than a, you know, here's $10,000 Platinum sponsorship. To see them getting the backing from an open source company, I can't say enough for. >> All right, Jeff how about what's going forward from you, what projects you're looking at or what what additions to what you've already done are you looking at doing down the road? >> Absolutely. So the big thing for us is that we've expanded pretty dramatically across the world now. As we started to expand into South Africa, we've expanded into Asia as well so managing these things remotely has been great but we've also started to begin to see some latencies where we're, you know, heading back to our etcd clusters or we're starting to see little cracks and pieces here in some of our QA environment. So part of this is actually the introduction and we started looking into the fog and the edge compute. Security is one of these games where we try to hold the security as core and as tight as you can but trying to get them the best user experience especially in South Africa and serving them from either Europe or Asia, we're trying to move into those data centers and region as well, to provide the sovereignty, to provide the security but it's about latency as well. When I opened my phone to download my digital keys I want that to be quick, I want the administrators to assign quickly but also still giving them that aspect to say I could store this in the edge, I could keep it secure and I could make sure that you still have it, that's where it's a bit different than the standard web experience to say no problem let's put a PNG as close as possible to you to give you that experience, we're putting digital certificates and keys as close as possible to people as well so that's kind of our next generation of the devices as we upgrade these pieces. >> Yeah, there was a line that stuck with me a few years ago, if you look at edge computing, if you look at IoT, the security just surface area is just expanding by orders or magnitude so that just leaves, you know, big challenges that everyone needs to deal with. >> Exactly, yep. >> All right, give us the final word if you would, you know, final lessons learned, you know, you're talking to your peers here in the hallways, virtually of the show. Now that you've gone through all of this, is there anything that you say, boy I wish I had known this it would have been this good or I might have accelerated things or which things, hey I wish I pulled these people or done something a little bit differently. >> Yep, there's a couple actually a big parts right off the bat and one, we started with databases and containers, followed the advice of everyone out there either do managed services or on standalone boxes themselves. That was something we cut our teeth on over a period of time and we really struggled with it, those databases and containers they really perform as poorly as you think they might, you can't get the constraints on those guys, that's one of them. Two we are a global company so we operate in a lot of major geographies now and ETC has been a big deal for us. We tried to pull our ETC clusters farther apart for better resiliency, no matter how much we tweak and play with that thing, keep those things in a region, keep them in separate, I guess the right word would be availability zones, keep them make redundant as possible and protect those at all costs. As we expanded we thought our best strategy would do some geographical distribution, the layout that you have in your Kubernetes cluster as you go global for hub-and-spoke versus kind of centralized clusters and pods and pieces like that, look it over with a with an expert in Kubernetes, talk to them talk about latencies and measure that stuff regularly. That is stuff that kind of tore us apart early in proof of concept and something we had to learn from very quickly, whether it'll be hub-and-spoke and centralize ETC and control planes and then workers abroad or we could spread the ETC and control planes a little more, that's a strategy that needs to be played with if you're not just in North America, South America, Europe, Asia, those are my two biggest pieces because those are our big performance killers as well as discovering PSP, Pod Security Policies early. Get those in, lock it down, get your environments out of route out of, you know, Port 80 things like that on the security space, those are just your basic housecleaning items to make sure that your latency is low, your performances are high and your security's as tight as you can make it. >> Wonderful, well, Jeff thank you so much for sharing Sera4 for story, congratulations to you and your team and wish you the best luck going forward with your initiatives. >> Absolutely, thanks so much Stu. >> All right, thank you for watching. I'm Stu Miniman and thank you for watching theCUBE. (soft music)

Published Date : Aug 18 2020

SUMMARY :

Brought to you by Red Hat, course one of the things we love All right so I teed you up there, all of those so we kind to lock that, you know, when it's dropped that you were seeing? and the natural piece to those is start and we kind of said you got that you know, open source, you try it, to start and then we started looking Is it you know, what's and it's a badge of honor we to a back end and when you that rely on what you do. that the startups are to you to give you that experience, that just leaves, you know, you know, you're talking the layout that you have congratulations to you All right, thank you for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jeff KlinkPERSON

0.99+

JeffPERSON

0.99+

Red HatORGANIZATION

0.99+

South AfricaLOCATION

0.99+

EuropeLOCATION

0.99+

$10,000QUANTITY

0.99+

AsiaLOCATION

0.99+

North AmericaLOCATION

0.99+

South AfricaLOCATION

0.99+

Stu MinimanPERSON

0.99+

1000 locksQUANTITY

0.99+

RancherORGANIZATION

0.99+

Latin AmericaLOCATION

0.99+

AfricaLOCATION

0.99+

8000 locksQUANTITY

0.99+

8000 devicesQUANTITY

0.99+

first wordQUANTITY

0.99+

South AmericaLOCATION

0.99+

first pieceQUANTITY

0.99+

telcoORGANIZATION

0.99+

TwoQUANTITY

0.99+

KubeConEVENT

0.99+

GDPRTITLE

0.99+

GoogleORGANIZATION

0.99+

two thingsQUANTITY

0.99+

oneQUANTITY

0.99+

TerraformORGANIZATION

0.98+

Sera4ORGANIZATION

0.98+

first pieceQUANTITY

0.98+

four hoursQUANTITY

0.98+

bothQUANTITY

0.98+

twoQUANTITY

0.98+

todayDATE

0.98+

two biggest piecesQUANTITY

0.97+

AnycastORGANIZATION

0.97+

two different telcosQUANTITY

0.97+

first edgeQUANTITY

0.97+

firstQUANTITY

0.95+

singleQUANTITY

0.95+

CloudNativeCon Europe 2020EVENT

0.95+

two major areasQUANTITY

0.94+

first bitsQUANTITY

0.94+

SUSEORGANIZATION

0.93+

KubeCon CloudNativeCon 2020EVENT

0.92+

10QUANTITY

0.92+

CNCFEVENT

0.92+

first hurdleQUANTITY

0.91+

CloudNativeCon Europe 2020EVENT

0.91+

KubernetesTITLE

0.91+

this yearDATE

0.91+

few years agoDATE

0.89+

two different placesQUANTITY

0.89+

DockerORGANIZATION

0.88+

first oneQUANTITY

0.86+

KubernetesORGANIZATION

0.86+

Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020


 

>> Announcer: From around the globe, it's the queue with digital coverage of DockerCon live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We got over 50,000 people registered so there's clearly a ton of interest in the world of Docker and Eddie's as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shawn Conley, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks and I'm broadcasting from my hometown of Philly. Our panelists include; Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia. We have Kelsey Hightower, Principal developer advocate at Google Cloud, joining us from Washington State and we have Chris Aniszczyk, CTO CIO at the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Docker netease wave since the beginning with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed from both community and a tech perspective? Just curious to get things sized up? >> Sure, when CNCF started about roughly four, over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring communities with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes. So things like Uvert and Shay runs VMs on Kubernetes, which is crazy, not just containers. You have folks that, Microsoft experimenting with a project called Kruslet which is trying to run web assembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security and different ways to run different types of workloads. And at least that's kind of what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting in projects in, is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now and Michelle could speak a little bit more to this being on the DOC, the pipeline for new projects is quite extensive and it covers all sorts of spaces from two service meshes to security projects and so on. So it's ever so expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, Let's head to you. But before we actually dive in, let's talk a little glory days. A rumor has it that you are the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are, my speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I wasn't captain 'cause I wasn't really great at anything else apart from constantly cheer on the team. >> A little better than my eighth grade Spelling Champ Award so I think I'd rather have the kickball. But you've definitely, spent a lot of time leading an Open Source, you've been across many projects for many years. So how does the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency. Just come in and tell people what you really need to do and clearly articulate your problem, more clearly articulate your problem and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to and clearly communicating those objectives. >> Yeah, and the various foundations, I think one of the things that I've seen, particularly Apache Software Foundation and others is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute, claiming victory in one way or the other might make for interesting marketing drama. But, I think that's a little bit of the challenge. In some of the, standards-based work you're doing I know with CNI and some other things, are they similar, are they different? How would you compare and contrast into something a little more structured like CNCF? >> Yeah, so most of what I do is in the CNCF, but there's specs and there's projects. I think what CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes are clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments. Because if you're in a project, you have to say, "Okay, I want this feature or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box and like, what framework do you want to work in? You have to think a little farther ahead in terms of is this solution or this decision we're going to make going to last for the next how many years? You have to get more of a buy in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you have this really solid, standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of so-- >> Gotcha. So I'm Kelsey, we'll head over to you as your focus is on, developer advocate, you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, Cloud Service primitives, including container services, server-less platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice? I think minimalism I hear you talk about that periodically, I know you're a fan of that. How do you pass that on and your developer advocacy in your day to day work? >> Yeah, I think, for most developers, most of this is not really the top of mind for them, is something you may see a post on Hacker News, and you might double click into it. Maybe someone on your team brought one of these tools in and maybe it leaks up into your workflow so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with log in, they help them run their application. But for most people, this isn't the core idea of the job for them. For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source, some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an ion like Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and maybe you study it to see if that if you have a problem. And for most people, if you don't have that problem that that tool solves, you should be happy. No one needs every project and I think that's where the foundation for confusion. So my main job is to help people not get stuck and confused in LAN and just be pragmatic and just use the tools that work for 'em. >> Yeah, and you've spent the last little while in the server-less space really diving into that area, compare and contrast, I guess, what you found there, minimalist approach, who are you speaking to from a server-less perspective versus that of the broader CNCF? >> The thing that really pushed me over, I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like geo cities from the 90s but look, she's making website. And she wanted her friends to take a look. So she copied and paste from her browser 127.0.0.1 and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down "and teach my daughter about Linux "and Docker and Kubernetes." That isn't her main goal, her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, Firebase deploy. And our site was up in a few minutes, and she sent it over to her friend and there you go, she was off and running. The whole server-less movement has that philosophy as one of the stated goal that needs to be the workflow. So, I think server-less is starting to get closer and closer, you start to see us talk about and Chris mentioned this earlier, we're moving up the stack. Where we're going to up the stack, the North Star there is feel where you get the focus on what you're doing, and not necessarily how to do it underneath. And I think server-less is not quite there yet but every type of workload, stateless web apps check, event driven workflows check, but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run so there's still work to do there. So server-less for me, serves as the North Star for why all these Projects exists for people that may have to roll their own platform, to provide the experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's, a ton of individual projects, each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the surface mesh interface, standardizing interfaces, so things can snap together more easily, I think, are some of the approaches but are you doing anything specifically to encourage this cross fertilization and collaboration of bug ability, because there's just a ton of projects, not only at the CNCF but outside the CNCF that need to plug in? >> Yeah, I mean, a lot of this happens organically. CNCF really provides of the neutral home where companies, competitors, could trust each other to build interesting technology. We don't force integration or collaboration, it happens on its own. We essentially allow the market to decide what a successful project is long term or what an integration is. We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called Open Tracing, and an effort called Open Census, which is basically trying to standardize how you're going to deal with metrics, on the tree and so on in a cloud native world that we're essentially competing with each other. The CNCF TC and committee came together and merged those projects into one parent ever called Open Elementary and so that to me is a case study of how our committee helps, bridges things. But we don't force things, we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it's simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job, writing code, delivering code, what's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the(mumbles) project was to make it easier to deploy an application on Kubernetes so that you could see what the finished product looks like. And then dig into all of the things that that application is composed of, all the resources. So we're really passionate about this kind of stuff for a while now. And I love seeing projects that come into the space that have this same goal and just iterate and make things easier. I think we have a ways to go still, I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code. So one of the projects that I really like working with is Tilt gives you this dashboard in your CLI, aggregates all your logs from your applications, And it kind of watches your application changes, and reconfigures those changes in Kubernetes so you can see what's going on, it'll catch errors, anything with a dashboard I love these days. So Yali is like a metrics dashboard that's integrated with STL, a service graph of your service mesh, and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest them has been really exciting to see. And I'm seeing more and more dashboards come to light and I'm very excited about that. >> Yeah, as part of the DockerCon just as a person who will be attending some of the sessions, I'm really looking forward to see where DockerCompose is going, I know they opened up the spec to broader input. I think your point, the good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned and I'm glad you bring that up. I want to see programming language libraries, integrate with the Compose spec. I really want to see what happens with that I think is great that they open that up and made that a spec because obviously people really like using Compose. >> Excellent. So Kelsey, I'd be remiss if I didn't touch on your January post on changelog entitled, "Monoliths are the Future." Your post actually really resonated with me. My son works for a software company in Austin, Texas. So your hometown there, Chris. >> Yeah. >> Shout out to Will and the chorus team. His development work focuses on adding modern features via micro services as extensions to the core monolith that the company was founded on. So just share some thoughts on monoliths, micro services. And also, what's deliverance dopamine from your perspective more broadly, but people usually phrase as monoliths versus micro services, but I get the sense you don't believe it's either or. >> Yeah, I think most companies from the pragmatic so one of their argument is one of pragmatism. Most companies have trouble designing any app, monolith, deployable or microservices architecture. And then these things evolve over time. Unless you're really careful, it's really hard to know how to slice these things. So taking an idea or a problem and just knowing how to perfectly compartmentalize it into individual deployable component, that's hard for even the best people to do. And double down knowing the actual solution to the particular problem. A lot of problems people are solving they're solving for the first time. It's really interesting, our industry in general, a lot of people who work in it have never solved the particular problem that they're trying to solve for the first time. So that's interesting. The other part there is that most of these tools that are here to help are really only at the infrastructure layer. We're talking freeways and bridges and toll bridges, but there's nothing that happens in the actual developer space right there in memory. So the libraries that interface to the structure logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID? A lot of those things are still left for developers to figure out on their own. So while we have things like the brunettes and fluid D, we have all of these tools to deploy apps into those target, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe as an industry, we're over-correcting. So we had, you said you come from the JBoss world, I started a lot of my Cisco administration, there's where we focus a little bit more on the actual application needs, maybe from a router that as well. But now what we're seeing is things like Spring Boot, start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to that, it gives you this very low level framework, but you still have to understand the concepts around, what does it mean to allow someone to do something and one missed configuration, all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization, run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece, you're going to somehow find the ability to manage 1000 pieces with these tools underneath but still not solving the actual developer problems. So this is where you've seen it already with a couple of popular blog posts from other companies. They cut too deep. They're going from 2000, 3000 microservices back to maybe 100 or 200. So to my world, it's going to be not just one monolith, but end up maybe having 10 or 20 monoliths that maybe reflect the organization that you have versus the architectural pattern that you're at. >> I view it as like a constellation of stars and planets, et cetera. Where you you might have a star that has a variety of, which is a monolith, and you have a variety of sort of planetary microservices that float around it. But that's reality, that's the reality of modern applications, particularly if you're not starting from a clean slate. I mean your points, a good one is, in many respects, I think the infrastructure is code movement has helped automate a bit of the deployment of the platform. I've been personally focused on app development JBoss as well as springsSource. The Spring team I know that tech pretty well over the years 'cause I was involved with that. So I find that James Governor's discussion of progressive delivery really resonates with me, as a developer, not so much as an infrastructure Deployer. So continuous delivery is more of infrastructure notice notion, progressive delivery, feature flags, those types of things, or app level, concepts, minimizing the blast radius of your, the new features you're deploying, that type of stuff, I think begins to speak to the pain of application delivery. So I'll guess I'll put this up. Michelle, I might aim it to you, and then we'll go around the horn, what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk opened on something that's pretty important. >> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool, you can just deploy something, and people have been asking for so many years, ever since I've been in this space, it's like, "How do I do AB deployment?" "How do I do Canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example, for example, it's a really good way to execute these deployment strategies but then, make sure that everything's happening correctly via observing metrics, rollback if you need to, so you don't just throw your whole system. I think it solves the problem and allows you to take risks but also keeps you safe in that you can be confident as you roll out your changes that it all works, it's metrics driven. So I'm just really looking forward to seeing more tools like that. And dashboards, enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling and in fact, if you remember the Eclipse IDE and that whole integrated experience, I was blown away recently by a demo from GitHub. They have something called code spaces, which a long time ago, I was trying to build development environments that essentially if you were an engineer that joined a team recently, you could basically get an environment quickly start it with everything configured, source code checked out, environment properly set up. And that was a very hard problem. This was like before container days and so on and to see something like code spaces where you'd go to a repo or project, open it up, behind the scenes they have a container that is set up for the environment that you need to build and just have a VS code ID integrated experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of problems when you're going to work with a project attribute, that whole initial bootstrap of, "Oh you need to make sure you have this library, this install," it's so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have powered by a lot of this cloud native technology behind the scenes that people may not know about. >> Yeah, 'cause I've been talking with the team over at Docker, the work they're doing with that desktop, enable the aim local environment, make sure it matches as closely as possible as your deployed environments that you might be targeting. These are some of the pains, that I see. It's hard for developers to get bootstrapped up, it might take him a day or two to actually just set up their local laptop and development environment, and particularly if they change teams. So that complexity really corralling that down and not necessarily being overly prescriptive as to what tool you use. So if you're visual code, great, it should feel integrated into that environment, use a different environment or if you feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started? I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating, you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera. >> Yeah, it really depends on what you're looking for and what you want to learn. So, if you're someone who's new to the space, honestly, check out the case studies on cncf.io, those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they crossed. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the cloud native computing foundations YouTube channel, so they have like playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy talking, I really enjoy watching excuse me, older talks, just because they explain why things were done, the way they were done, and that helps me build the tools I built. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find that in the CNCF Technical Oversight Committee, TOC GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a cig called app delivery, there's a cig for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working and what are the next changes we want to see in the next months. So if you want that kind of granularity and discussion on what's happening like that, then definitely join those those meetings. Check out those meeting notes and recordings. >> Gotcha. So on Kelsey, as you look at 2020 and beyond, I know, you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now? >> I don't think any of the raw technology pieces get me super excited anymore. Like, I've seen the circle of around three or four times, in five years, there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a B app and have their own podcasts and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal and they're hard to come by. So I think we've done a good job democratizing these ideas, distributed systems, one company got really good at packaging applications to share with each other, I think that's great, and never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, and then we're talking about another digital transformation 10 years from now because it's going to be funny but Kubernetes will be the new legacy. It's going to be the things that, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV, looking for old school Kubernetes engineers to migrate them to some new thing, that's going to happen. You got to know that. So at some point merry go round will stop. And we're going to be focused on what you do with this. So the internet is there, most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies to comprehend. You're at the point now, where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one, it's about what you do with it. So that's going to be interesting. And we're just not there yet with tech, tech is so, infrastructure stuff. We're so in the weeds, that most people almost burn out what's just getting to the point where you can start to look at what you do with this stuff. So that's what I keep in my eye on, is when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're iOS developer, Android developer, you use the SDK that they gave you, every year there's some new device that enables some new things speech to text, VR, AR and you import an STK, and it just worked. And you can put it in one place and 100 million people can download it at the same time with no DevOps team, that's amazing. When can we do that for server side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I could definitely relate. I was Hortonworks in 2011, so, Hadoop, in many respects, was sort of the precursor to the Kubernetes area, in that it was, as I like to refer to, it was a bunch of animals in the zoo, wasn't just the yellow elephant. And when things mature beyond it's basically talking about what kind of analytics are driving, what type of machine learning algorithms and applications are they delivering? You know that's when things tip over into a real solution space. So I definitely see that. I think the other cool thing even just outside of the container and container space, is there's just such a wealth of data related services. And I think how those two worlds come together, you brought up the fact that, in many respects, server-less is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed as these richer applications to be from a data processing and actionable insights perspective. >> I also want to be clear on one thing. So some people confuse two things here, what Michelle said earlier about, for the first time, a whole group of people get to learn about distributed systems and things that were reserved to white papers, PhDs, CF site, this stuff is now super accessible. You go to the CNCF site, all the things that you read about or we used to read about, you can actually download, see how it's implemented and actually change how it work. That is something we should never say is a waste of time. Learning is always good because someone has to build these type of systems and whether they sell it under the guise of server-less or not, this will always be important. Now the other side of this is, that there are people who are not looking to learn that stuff, the majority of the world isn't looking. And in parallel, we should also make this accessible, which should enable people that don't need to learn all of that before they can be productive. So that's two sides of the argument that can be true at the same time, a lot of people get caught up. And everything should just be server-less and everyone learning about distributed systems, and contributing and collaborating is wasting time. We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how do you see, so DockerCon is going to virtual events, inherently the Open Source community is distributed and is used to not face to face collaboration. But there's a lot of value that comes together by assembling a tent where people can meet, what's the best way? How do you see things playing out? What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place. Different themes, verticals, I've already attended a handful of virtual events the last few weeks from Red Hat summit to Open Compute summit to Cloud Native summit, you'll see more and more of these. I think, in the long term, once the world either get past COVID or there's a vaccine or something, I think the innate nature for people to want to get together and meet face to face and deal with all the serendipitous activities you would see in a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before, DockerCon, can have 50,000 people at it. I don't remember what the last physical DockerCon had but that's definitely an order of magnitude more. So being able to do these virtual events to augment potential of physical events in the future so you can build a more inclusive community so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of event to me is awesome and I hope something that we take away when we start all doing these virtual events when we get back to physical events, we find a way to ensure that these things are inclusive for everyone and not just folks that can physically make it there. So those are my thoughts on on the topic. And I wish you the best of luck planning of DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people and that just terrifies me from a cloud native coupon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey and Michelle, for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)

Published Date : May 29 2020

SUMMARY :

Brought to you by Docker of the Docker netease wave on just the things around Kubernetes, being on the DOC, the A rumor has it that you are apart from constantly cheer on the team. So how does the art and the more people are going to understand Yeah, and the various foundations, and allows people to build things I think minimalism I hear you You pick the tools that you need, and it looks like geo cities from the 90s but outside the CNCF that need to plug in? We essentially allow the market to decide arrived on the scene, on Kubernetes so that you could see Yeah, as part of the and I'm glad you bring that up. entitled, "Monoliths are the Future." but I get the sense you and some people are saying the only way and you have a variety of sort in that you can be confident and in fact, if you as to what tool you use. and that helps me build the tools I built. So on Kelsey, as you and be broadcast to the globe, that I think also need to be addressed the things that you read about in the face of the new normal? and meet face to face So that is a wrap on the DockerCon 2020

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

MichellePERSON

0.99+

Shawn ConleyPERSON

0.99+

Michelle NooraliPERSON

0.99+

Chris AniszczykPERSON

0.99+

2011DATE

0.99+

CNCFORGANIZATION

0.99+

KelseyPERSON

0.99+

1000 piecesQUANTITY

0.99+

10QUANTITY

0.99+

Apache Software FoundationORGANIZATION

0.99+

2020DATE

0.99+

JanuaryDATE

0.99+

oneQUANTITY

0.99+

CiscoORGANIZATION

0.99+

PhillyLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

Austin, TexasLOCATION

0.99+

a dayQUANTITY

0.99+

Atlanta, GeorgiaLOCATION

0.99+

SpringSourceORGANIZATION

0.99+

TOCORGANIZATION

0.99+

100QUANTITY

0.99+

HortonworksORGANIZATION

0.99+

DockerConEVENT

0.99+

North StarORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

PrometheusTITLE

0.99+

Washington StateLOCATION

0.99+

first timeQUANTITY

0.99+

Red HatORGANIZATION

0.99+

bothQUANTITY

0.99+

DockerORGANIZATION

0.99+

YouTubeORGANIZATION

0.99+

WillPERSON

0.99+

200QUANTITY

0.99+

Spring BootTITLE

0.99+

AndroidTITLE

0.99+

two companiesQUANTITY

0.99+

two sidesQUANTITY

0.99+

iOSTITLE

0.99+

one pieceQUANTITY

0.99+

Kelsey HightowerPERSON

0.99+

RedMonkORGANIZATION

0.99+

two peopleQUANTITY

0.99+

3000 microservicesQUANTITY

0.99+

Home DepotORGANIZATION

0.99+

JBossORGANIZATION

0.99+

Google CloudORGANIZATION

0.98+

NetflixORGANIZATION

0.98+

50,000 peopleQUANTITY

0.98+

20 monolithsQUANTITY

0.98+

OneQUANTITY

0.98+

one thingQUANTITY

0.98+

ArgoORGANIZATION

0.98+

KubernetesTITLE

0.98+

two companiesQUANTITY

0.98+

eachQUANTITY

0.98+

GitHubORGANIZATION

0.98+

over 50,000 peopleQUANTITY

0.98+

five yearsQUANTITY

0.98+

twoQUANTITY

0.98+

DockerEVENT

0.98+

Deepak Singh, AWS | DockerCon 2020


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of DockerCon LIVE 2020, brought to you by Docker and its ecosystem partners. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of DockerCon LIVE 2020. Happy to welcome back to the program one of our CUBE alumni, Deepak Singh. He's the vice president of compute services at Amazon Web Services. Deepak, great to see you. >> Likewise, hi, Stu. Nice to meet you again. >> All right, so for our audience that hasn't been in your previous times on theCUBE, give us a little bit about, you know, your role and your organization inside AWS? >> Yeah, so I'm, I've been part of the AWS compute services world from, for the last 12 years in various capacities. Today, I run a number of teams, all our container services, our Linux teams, I also happen to run a high performance computing organization, so it's a nice mix of all the computing that our customers do, especially some of the more new and large scale compute types that our customers are doing. >> All right, so Deepak, obviously, you know, the digital events, we understand what's happening with the global pandemic. DockerCon was actually always planned to be an online event but I want to understand, you know, your teams, how things are affecting, we know distributed is something that Amazon's done, but you have to cut up those two pizza and send them out to the additional groups or, you know, what advice are you giving the developers out there? >> Yeah, in many ways, obviously, how we operate has changed. We are at home, maybe I think with our families. DockerCon was always going to be virtual, but many other events like AWS Summits are now virtual so, you know, in some ways, the teams, the people that get most impacted are not necessarily the developers in our team but people who interact a lot with customers, who go to conferences and speak and they are finding new ways of being effective and being successful and they've been very creative at it. Our customers are getting very good at working with us virtually because we can always go to their site, they can always come to Seattle, or run of other sites for meeting. So we've all become very good at, and disciplined at how do you conduct really nice virtual meetings. But from a customer commitment side, from how we are operating, the things that we're doing, not that much has changed. We still run our projects the same way, the teams work together. My team tends to do a lot of happy things like Friday happy hours, they happen to be all virtual. I think last time we played, what word, bingo? I forget exactly what game we played. I know I got some point somewhere. But we do our best to maintain sort of our team chemistry or camaraderie but the mission doesn't change which is our customers expect us to keep operating their services, make sure that they're highly available, keep delivering new capabilities and I think in this environment, in some ways that's even more important than ever, as customer, as the consumer moves online and so much business is being done virtually so it keeps us on our toes but it's been an adjustment but I think we are all, not just us, I think the whole world is doing the best that they can under the circumstances. >> Yeah, absolutely, it definitely has humanized things quite a bit. From a technology standpoint, Deepak, you know, distributed systems has really been the challenge of you know, quite a long journey that people have been going on. Docker has played, you know, a really important role in a lot of these cloud native technologies. It's been just amazing to watch, you know, one of the things I point to in my career is, you know, watching from those very, very early days of Docker to the Cambrian explosion of what we've seen container based services, you know, you've been part of it for quite a number of years and AWS had many services out there. For people that are getting started, you know, what guidance do you give them? What do they understand about, you know, containerization in 2020? >> Yeah, containerization in 2020 is quite a bit different from when Docker started in 2013. I remember speaking at DockerCon, I forget, that's 2014, 2015, and it was a very different world. People are just trying to figure out what containers are that they could package code in deeper. Today, containers are mainstream, it is more customers or at least many customers and they are starting to build new applications, probably starting them either with containers or with some form of server technology. At least that's the default starting point but increasingly, we also seen customers with existing applications starting to think about how do they adapt? And containers are a means to an end. The end is how can we move faster? How can we deliver more quickly? How can our teams be more productive? And how can you do it more, less expensively, at lower cost? And containers are a big part, important and critical piece of that puzzle, both from how customers are operating their infrastructure, that there's a whole ecosystem of schedulers and orchestration and security tools and all the things that an enterprise need to deliver applications using containers that they have built up. Over the last few years, you know, we have multiple container services that meet those needs. And I think that's been the biggest change is that there's so much more. Which also means that when you're getting started, you're faced with many more options. When Docker started, it was this cute whale, Docker run, Docker build Docker push, it was pretty simple, you could get going really quickly. And today you have 500 different options. My guidance to customers really is, boils down to what are you trying to achieve? If you're an organization that's trying to corral infrastructure and trying to use an existing VM more effectively, for example, you probably do want to invest in becoming experts at schedulers and understanding orchestration technologies like ECS and EKS work but if you just want to run applications, you probably want to look at something like Fargate or more. I mean, you could go towards Lambda and just run code. But I think it all boils down to where you're starting your journey. And by the way, understanding Docker run, Docker build and Docker push is still a great idea. It helps you understand how things work. >> All right, so Deepak, you've already brought up a couple of AWS services of, you know, talk about the options out there, that you can either run on top of AWS, you have a lot of native services, you know, ECS, EKS, you mentioned, Fargate there, and very broad ecosystem in space. Could you just, you know, obviously, there are entire breakout sessions to talk about , the various AWS services, but you know, give us that one on one level as to what to understand for container service by AWS. >> Yeah, and these services evolved organically and we launched the Amazon Elastic Container Service or ECS in preview in November or whenever re:Invent was that year in 2014, which seems ages ago in the world of containers but in the end, our goal is to give our customers the most choice, so that they can solve problems the way they want to solve them. So Amazon ECS is our native container orchestration service, it's designed to work with and the rest of the AWS ecosystem. So it uses VPC for networking, it uses IAM identity, it uses ALB for load balancing, other than just good examples, some examples of how it works. But it became pretty clear over time that there was a lot of customers who were investing in communities, very often starting in their own data centers. And as they migrated onto the cloud, they wanted to continue using the same tool plane but they also wanted to not have to manage the complexity of communities control planes, upgrades. And they also wanted some of the same integrations that they were getting with ECS and so that's where the Amazon Elastic Kubernetes Service or EKS comes in, which is, okay, we will manage a control plane for you. We will manage upgrades and patches for you. You focus on building your applications in Kubernetes way, so it embraces Kubernetes. It has, invokes with all the Kubernetes tooling and gives you a Kubernetes native experience, but then also ties into the broad AWS ecosystem and allows us to take care of some of the muck that many customers quite frankly don't and shouldn't have to worry about. But then we took it one step further and actually launched the same time as EKS and that's, AWS Fargate, and Fargate was, came from the recognition that we had, actually, a long time ago, which is, one of the beauties of EC2 was that customers never had, had to stop, didn't have to worry about racking and stacking and where a server was running anymore. And the idea was, how can we apply that to the world of containers. And we also learned a little bit from what we had done with Lambda. And we took that and took the server layer and took it out of the way. Then from a customer standpoint, all you're launching is a pod or a task or a service and you're not worrying about which machines I need to get, what types of machines I need to get. And the operational simplicity that comes with it is quite remarkable and quite finding not that, surprisingly, our customers want us to keep pushing the boundary of the kind operational simplicity we can give them but Fargate serves a critical building block and part of that, and we're super excited because, you know, today by far when a new customer, when a customer comes and runs a container on AWS the first time they pick Fargate, we're usually using ECS because EKS and Fargate is much newer, but that is a default starting point for any new container customer on AWS which is great. >> All right, well, you know, Docker, the company really helped a lot with that democratization, container technologies, you know, all those services that you talked about from AWS. I'm curious now, the partnership with Docker here, you know, how do some of the AWS services, you know, fit in with Docker? I'm thinking Docker Desktop probably someplace that they're, you know, or some connection? >> Yeah, I think one of the things that Docker has always been really good at as a company, as a project, is understanding the developer and the fact that they start off on a laptop. That's where the original Docker experience that go well, and Docker Desktop since then and we see a ton of Docker Desktop customers have used AWS. We also learned very early on, because originally ECS CLI supported Docker Compose. That ecosystem is also very rich and people like building Docker files and post files and just being able to launch them. So we continue to learn from what Docker is doing with Docker Desktop. We continue working with them on making sure that customizing the Docker Compose and Docker Desktop can run all their services and application on AWS. And we'll continue working with Docker, the company, on how we make that a lot easier for our customers, they are our mutual customers, and how we can learn from their simplicity that Docker, the simplicity that Docker brings and the sort of ease of use the Docker bring for the developer and the developer experience. We learn from that for our own services and we love working with them to make sure that the customer that's starting with Docker Desktop or the Docker CLI has a great experience as they move towards a fully orchestrated experience in the cloud, for example. There's a couple of other areas where Docker has turned out to have had foresight and driven some of our thinking. So a few years ago, Docker released this thing called containerd, where they took out their container runtime from inside the bigger Docker engine. And containerd has become a very important project for us as well as, it's the underpinning of Fargate now and we see a lot of interest from customers that want to keep building on containerd as well. And it's going to be very interesting to see how we work with Docker going forward and how we can continue to give our customers a lot of value, starting from the laptop and then ending up with large scale services in the cloud. >> Very interesting stuff, you know, interesting. Anytime we have a conversation about Docker, there's Docker the technology and Docker the company and that leads us down the discussion of open-source technologies . You were just talking about, you know, containerd believe that connects us to Firecracker. What you and your team are involved in, what's your viewpoint is the, you know, what you're seeing from open-source, how does Amazon think of that? And what else can you share with the audience on this topic? >> Yeah, as you've probably seen over the last few years, both from our work in Kubernetes, with things like Firecracker and more recently Bottlerocket. AWS gets deeply involved with open-source in a number of ways. We are involved heavily with a number of CNCF projects, whether it be containerd, whether it be things like Kubernetes itself, projects in the Kubernetes ecosystem, the service mesh world with Envoy and with the containerd project. So where containerd fits in really well with AWS is in a project that we call firecracker-containerd. They're effectively for Fargate, firecracker-containerd as we move Fargate towards Firecracker becomes out of the container in which you run containerd. It's effectively the equivalent of runC in a traditional Docker engine world. And, you know, one of the first things we did when Firecracker got rolled out was open-source the firecracker-containerd project. It's a go project and the idea was it's a great way for people to build VM like isolation and then build sort of these serverless container architectures like we want to do with Fargate. And, you know, I think Firecracker itself has been a great success. You see customer, you know, companies like Libvirt integrating with Firecracker. I've seen a few other examples of, sometimes unbeknownst to us, of people picking a Firecracker and using it for very, very interesting use cases and not just on AWS in other places as well. And we learnt a lot from that that's kind of why Bottlerocket is, was released the way it was. It is both a product and a project. Bottlerocket, the operating system is an open-source project. It's on GitHub, it has all the building tooling, you can take it and do whatever you want with it. And then on the AWS side, we will build and publish Bottlerocket armies, Amazon machine images, we will support them on AWS and there it's a product. But then Bottlerocket the project is something that anybody in the world who wants to run a minimal operating system can choose to pick up. And I think we've learnt a lot from these experiences, how we deal with the community, how we work with other people who are interested in contributing. And you know, Docker is one of the, the Docker open-source pieces and Docker the company are both part of the growing open-source ecosystem that's coming from AWS, especially on the container world. So it's going to be very interesting. And I'll end with, containerization has started impacting other parts of AWS, as well as our other services are being built, very often through ECS and EKS, but they're also influencing how we think about what capabilities we need to build into the broader container ecosystem. >> Yeah, Deepak, you know, you mentioned that some of the learnings from Lambda has impacted the services you're doing on the containerization side. You know, we've been watching some of the blurring of the lines between another container world and the containerization world. You know, there's some open-source projects out there, the CNCS working on things, you know, what's the latest, as you see kind of containerization and serverless and you know, where do you see them going forward? >> This is that I say that crystal balls are not my strong suite. But we hear customers, customers often want the best of both world. What we see very often is that customers don't actually choose just Fargate or just Lambda, they'll choose both. Where for different pieces of their architecture, they may pick a different solution. And sometimes that's driven by what they know, sometimes driven by what fits into their need. Some of the lines blur but they're still quite different. Lambda, for example, as a very event driven architecture, it is one process at a time. It has all these event hooks into the rest of AWS that are hard to replicate. And if that's the world you want to live in or benefit from, you're going to use lambda. If you're running long running services or you want a particular size that you don't get in Lambda or you want to take a more traditional application and convert it into a more modern application, chances are you're starting on Fargate but it fits in really well you have an existing operational model that fits into it. So we see applications evolving very interestingly. It's one reason why when we build a service mesh, we thought forward instead. It is almost impossible that we will have a world that's 100% containers, 100% Lambda or 100% EC2. It's going to be some mix of all of these. We have to think about it that way. And it's something that we constantly think about is how can we do things in a way that companies aren't forced to pick one way to it and "Oh, I'm going to build on Fargate" and then months later, they're like, "Yeah, we should have probably done Lambda." And I think that is something we think a lot about, whether it's from a developer's experience side or if it's from service meshes, which allow you to move back and forth or make the mesh. And I think that is the area where you'll see us do a lot more going forward. >> Excellent, so last last question for you Deepak is just give us a little bit as to what, you know, industry watchers will be looking at the container services going forward, next kind of 12, 18 months? >> Yeah, so I think one of the great things of the last 18 months has been that type of application that we see customers running, I don't think there's any bound to it. We see everything from people running microservices, or whatever you want to call decoupled services these days, but are services in the end, people are running, most are doing a lot of batch processing, machine learning, artificial intelligence that work with containers. But I think where the biggest dangers are going to come is as companies mature, as companies make containers, not just things that they build greenfield applications but also start thinking about migrating legacy applications in much more volume. A few things are going to happen. I think we'll be, containers come with a lot of complexity right now. I think you've, if you've seen my last two talks at re:Invent along with David Richardson from the Lambda team. You'll hear that we talk a lot about the fact that we see, we've made customers think about more things than they used to in the pre container world. I think you'll see now that the early adopter techie part has done, cloud has adopted containers and the next wave of mainstream users is coming in, you'll see more attractions come on as well, you'll see more governance, I think service meshes have a huge role to play here. How identity works or this fits into things like control tower and more sort of enterprise focused tooling around how you put guardrails around your containerized applications. You'll see it two or three different directions, I think you'll see a lot more on the serverless side, just the fact that so many customers start with Fargate, they're going to make us do more. You'll see a lot more on the ease of use developer experience of production side because you started off with the folks who like to tinker and now you're getting more and more customers that just want to run. And then you'll see, and that's actually a place where Docker, the company and the project have a lot to offer, because that's always been different. And then on the other side, you have the governance guardrails, and how is going to be in a compliant environment, how am I going to migrate all these applications over so that work will keep going on and you'll more and more of that. So those are the three buckets I'll use, the world can surprise us and you might end up with something completely radically different but that seems like what we're hearing from our customers right now. >> Excellent, well, Deepak, always a pleasure to catch up with you. Thanks so much for joining us again on theCUBE. >> No, always a pleasure Stu and hopefully, we get to do this again someday in person. >> Absolutely, I'm Stu Miniman, thanks as always for watching theCUBE. >> Deepak: Yep, thank you. (gentle music)

Published Date : May 29 2020

SUMMARY :

brought to you by Docker He's the vice president Nice to meet you again. of the AWS compute services world from, but I want to understand, you know, and disciplined at how do you conduct It's been just amazing to watch, you know, Over the last few years, you know, a couple of AWS services of, you know, and actually launched the same time as EKS how do some of the AWS services, you know, and the fact that they and Docker the company the first things we did the CNCS working on things, you know, And if that's the world you and the next wave of to catch up with you. and hopefully, we get to do Absolutely, I'm Stu Miniman, Deepak: Yep, thank you.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Amazon Web ServicesORGANIZATION

0.99+

David RichardsonPERSON

0.99+

Deepak SinghPERSON

0.99+

DeepakPERSON

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

SeattleLOCATION

0.99+

2013DATE

0.99+

NovemberDATE

0.99+

Stu MinimanPERSON

0.99+

2020DATE

0.99+

LambdaTITLE

0.99+

2014DATE

0.99+

twoQUANTITY

0.99+

DockerORGANIZATION

0.99+

DockerConEVENT

0.99+

2015DATE

0.99+

12QUANTITY

0.99+

18 monthsQUANTITY

0.99+

todayDATE

0.99+

TodayDATE

0.99+

StuPERSON

0.99+

Docker DesktopTITLE

0.99+

bothQUANTITY

0.99+

DockerTITLE

0.98+

FirecrackerTITLE

0.98+

Docker DesktopTITLE

0.98+

KubernetesTITLE

0.98+

ECSTITLE

0.98+

FargateORGANIZATION

0.98+

one reasonQUANTITY

0.98+

100%QUANTITY

0.98+

three bucketsQUANTITY

0.98+

500 different optionsQUANTITY

0.97+

first timeQUANTITY

0.97+

oneQUANTITY

0.97+

two pizzaQUANTITY

0.97+

LibvirtORGANIZATION

0.97+

Elton Stoneman & Julie Lerman | DockerCon 2020


 

>> Speaker: From around the Globe, it's theCUBE with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello, how you doing? Welcome to DockerCon. We're kind of halfway through now, I guess. Thank you for joining us on this session. So my name is Elton, I'm a Docker Captain. And I'm joined by Julie who was also a Docker Captain. This is actually this session was Julie's idea. We were talking about this learning of Docker and how it's a light bulb moment for lots of people. But Julie, she came up with this great idea for DevOps. So I'll let Julie introduce herself, and tell you a bit about what we're going to talk about. >> Thanks, Elton. So I'm Julie Lerman. I'm a Software Coach. I'm a developer. I've been a developer for over 30 years. I work independently and I'm a Docker captain. Also a Microsoft Regional Director. I wouldn't let them put it on there, because it makes people think I work for Microsoft but I don't. (he laughs) >> Yeah, so it's a weird title. So the Microsoft ID the Regional Director, it's like a kind of Uber, MVP. So I'm an MVP. And that's fine. That's just like a community recognition, just like you get with a Docker captain. So MVP is kind of like the micro version, Julie's MVP too. But then you get the Regional Director which is something that MVP get. >> Doesn't matter. >> I'm not surprised Julie. >> Stop, a humble man. (he laughs) >> We've been using Docker for years 10 years between. >> You probably, how long ago was your Docker aha moment? >> So 2014 I first started using Docker, so I was working on a project, where I was assaulting for a team who were building an Android tablet, and they were building the whole thing, so they Spec out the tablet, they got a bill over in the far East. They were building their own OS their own app to run on and of course all that stacks within it. But they was all talking to the services that were running in the power they wanted to use as your for that and .NET that was on-prem, though that technology historically . So I came in to do the .NET stuff is running in as your, but I got really friendly with the Linux guys. It was very DevOps, it was one team who did the whole thing. And they were using Docker for that their build tools, and for have the and the CI tools, and they were running their own get server and it was all in. >> Already until 2014. That's pretty cool. >> Yeah, pretty early introduction to it. And it was super cool. So I'd always been interested in Linux, but never really dug into it. Because the entry bar was so high runs nothing in it. So you read about this great open source project, and then you go and look at the documentation and you have to download the source code and build it and it's like, well, I'm not going to be doing that stuff. And then Docker came along. I do Docker run. (he laughs) >> Well, I would say it was a little definitely delayed from that. I'm still thinking Wait, when you first started saying that this company was building their own android system, you start thinking, they're building software, but no, they weren't building everything, which is pretty amazing. So, I have to say it took me quite a while, but I was also behind on understanding virtual machines. (both laughs) So, Docker comes along, and I have lots of friends who are using it, I spent a lot of time with Michelle Noorali this Monday, and she's big container person. And most of the people I hear talking about Docker are really doing DevOps, which is not my thing. As a developer, I always just said, let somebody else do that stuff. I want to code an architect and do things like that. And I also do a lot of data work. I'm not like a big data person doing analytics. Or I'm not a DBA. I'm more very involved in getting data in and out of applications. So my aha moment, I would say was like, four years ago, after Microsoft moved SQL Server over to Linux, and then put it inside a Docker image. So that was my very first experience, just saying, oh, what does this do and I downloaded the image. And Docker run. And then like literally I was like, holy smokes. SQL Servers already installed. The containers up like that, and then it's got to run a couple of Bashan SQL scripts to get all the system tables, and databases and things like that. So that's another 15 seconds. But that was literally for me. The not really aha, it was more like OMG, and I'll keep the EFF out just to keep it clean here. It was my OMG moment with Docker. So getting that start, then I worked with the SQL Server image and container and did some different things, with that in applications. And then eventually, expanded my knowledge out bit by bit, and got a deeper understanding of it and tried more things. So I get to a comfort level and then add to it and add to it. >> Yeah. And I think that the great thing about that is that as you're going on that journey that aha moments keep coming, along we had another aha moment this week, with the new announcement that you can use your Docker compose files, and use your Docker commands to spin stuff up running in as your container instances. So like that you've kept up that learning journey is there if you want to go down, How do I take my monolithic application, and break up into pieces and run those in containers? Like suddenly the fact that you can just glue all these things together in run it on one platform, and manage everything in the same way? And these light bulbs keep on coming. So, you've seen the modernization things that people are doing that's a lot of the work that I do now, and taking these big applications, you just write a Docker file, and you've got your 15 year old .NET application running in the container. And you can run that in the cloud with no changes to code, and not see them. But that's super powerful for people. >> And I think one of the really important things, especially for people like you and I, who are also teachers, and is to try to really remember that moment, because I know a lot of times, when people are deeply expert in something it they forget how hard it was, or what it felt like not to understand it that context. So I still have held on to that. So when I talk, I like to do introduction, I like to help people get that aha moment. And then I say, Okay, now go on to the, they're really expert people. You're ready to learn more, but it's really important to especially, maybe we're teachers, conference speakers, book authors, pluralsight, etc. But lots of other people, who are working on teams they might already be somebody who's gotten there with Docker, and they want to help their teammates understand Docker. So I think it's really important to, for everybody who wants to share that to kind of have a little empathy, and remember what that was like, and understand that sometimes it just takes explaining it a different way explaining maybe, just tweaking your expression, or some of the words or your analogies. >> Yeah, that's definitely true. And you often find this it's a technology, that people really become affectionate for, they have a real deep feeling for documents, once they start using it, and you get these internal champions in companies who say, "This is the stuff I've been using, I've been using this at home or whatever." And they want to bring it into their project, and it's pretty cool to be able to say to them this is, take me on the same journey that you've been on, or you've been on a journey, which was probably slightly more investment for you, because you had to learn from scratch. But now you can relay that back into your own project. So you can take, you don't have to take everyone from scratch like you did. You can say, here's the Docker file for our own application. This is how it works. And bringing things into the terms that people are using everyday , I think is something that's super powerful. Why because you're completely strange. (he laughs) >> Oh, I was being really cool about your video. (both laughs) Maybe it's just how it streaming back to me. I think the teacher thing again, like we'll work a little harder and, bump our knees and stub our toes, or tear our hair out or whatever pain we have to go through, with that learning because, it's also kind of obsessive. And you can steer people away from those things, although it's also helpful to let them be aware like this might happen, and if it does, it's because of this. But that's not the happy path. >> Yeah, absolutely. And I think, it's really interesting talking to people about the time you're trying to get to what problem are they trying to solve? It's interesting, you talk about DevOps there, and how that sort of not an area, that you've done a lot of stuff in. Writing a couple of organizations, whether they're really trying hard to move that model, and trying to break down the barriers, between the team who build the software, and the team who run the software, but they have those barriers, but 20 years, it's really hard to write that stuff down. And it's a big cultural shift, it needs a lot of investment. But if you can make a technological change as well, if you can get people using the same tools, the same languages, the same processes to do things, that makes it so much easier. Like now my operators are using Docker files, on there and the security team are going into the Docker file and cozening it, or DevOps team or building up my compose file, and everyone's using the same thing, it really helps a lot, to bind people together to work on the same area. >> I also do a lot of work in domain Dave Vellante design, and that whole idea of collaboration, and bringing together teams, that don't normally work together, and bringing them together, and enabling them to find a way to collaborate giving them tools for collaboration, just like what you're saying with, having the same terms and using the same tools. So that's really powerful. You gave me a great example of one of your clients, aha moments with Docker. Do you remember which that was? The money yes, it's a very powerful Aha. >> Yes. >> She cherish that. >> The company that I've worked for before, when I was doing still get thought that I can sort a thing, and they knew I'd go into containers. I was working for Docker at the time. And I went in just as if I wasn't a sales pitch or anything, I was just as a favor to talk to them about what containers would look like if payments, their operation, big heavy Windows users, huge number of environment, lots of VMs that are all running stuff, to get the isolation, and give them what they needed. And I did this presentation of IT. So it wasn't a technical thing. It was very high level, it was about how containers kind of work. And I'm fundamentally a technical person, so I probably have more detail in there. And then you would get from a sales pitch, but it was very much about, you can take your applications, you can wrap them up the running these things for containers, you still get the isolation, you can run loads more of them on the same hardware that you've got, and you don't pay a Windows license each of those containers, you pay a license for the server that the right one. >> That's it, that's the moment. >> And the head of IT said that's going to save us millions of dollars. (he laughs) And that was his aha moment. >> I'm going to wrap that into my conference session, about getting to the Docker, for sure getting that aha moment. My experience is less that but wow, I mean, that's so powerful. When you're talking to come C level people about making those kinds of changes, because you need to have their buy in. So as a developer and somebody who works with developers, and that's kind of my audience, my experience more has been, when I'm giving conference presentations, and I'll start out in a room of people, and I have to say, when I'm at .NET focus conference, I find that the not there yet with Docker. Part of the audience is a big one. So I kind of do a poll at the beginning of the talk. Who's heard of Docker, obviously, they're in the room, but curious because you still don't really understand it. And that's usually a bulk of the room. And what I like to ask at the end is, of all of you that, that first group, like, do you feel like you get it now, like you just get what it is and what it does, as opposed to I don't know what this thing is. It's for rocket scientists. Is that's how I felt about it. I was like, I'm just a developer. It wasn't my thing. But now, I'm still not doing DevOps, I use Docker as a really important tool, during development and test and that's actually one of it I'm going to be talking about that. But it's my session a little later. Oh, like the next hour. It's about using Docker, that my aha Docker, SQL Server, in an image and but using that in Dave Vellante, it's not about the DevOps and the CI/CD and Kubernetes, I can spell it. (he laughs) Especially when I get to say k eight s, Like I even know the cool Lingo (mumbles) on Twitter. (he laughs) >> I think that's one of the cool things about this technology stack in particular, I think to get the most out of it, you need to dig in really light if you want to, if you're looking at doing this stuff in production, if you're attracted by the fact that I can have a managed container platform in anytime. And I can deploy my app, everywhere using the same set of things that compose files or humidity files or whatever. And if you really want to take advantage of that, you kind of have to get down to the principles understand all go on a proper kind of learning journey. But if you don't want to do that, you can kind of stop wherever it makes sense for you. So like even when I'm talking to different audiences, is a lot strangely enough, I did a pool size large bin this morning. It was quite a specific topic. It was about building applications in containers. So is about using containers, to compile your app and then package it, so you can build anywhere. But even a session like that, the first maybe two minutes, I give a lightning quick overview, of what containers are and how you use them. Here's exactly like you say, people will come to a session, if it's got Docker or humanities in the title. But if they don't have the entry requirements. They've never really used this stuff. And we were up here and it's a big dump for them. So I try and always have that introductory slide. >> I had to do that on the fly. >> Sorry. >> I've done that on the fly in conference, because yes, doing like, ASP.NET Core with Entity Framework and containers. And, 80% of the room, really didn't know anything about Docker. So, instead of talking like five minutes about Docker and then demoing the rest, I ended up spending more time talking about Docker, to make sure everybody was really you could tell that difference when they're like oh, like that they understood enough, in order to be follow along and understand the value of what it was that I was there to show, about it in that core, I'm also this is making me remember that first time I actually use Docker compose, because it was a while, I was just using the SQL Server, Docker image, in on my development machine for quite a while. And because I wasn't deploying, I was learning and exploring and so I was on my development machine, so I didn't need to do anything else. So the first time I really started orchestrating, that was yet another aha moment. But I was ready for it then. I think you know if you start with Docker compose and you don't haven't done the other, maybe I would write but I was ready, because I'd already gotten used to using the tooling and, really understanding what was going on with containers. Then that Docker compose was like yeah. (he laughs) >> It's just the next one, in the line is a great comment actually in the chat about someone in the chat. >> From chat? >> Yeah, from Steve saying, that he could see there would be an aha moment for his about security. And actually that's absolutely, it's so when security people, first want to get their head around containers, they get worried that if someone can compromise the app in the container, they might get a break out, and get to all the other containers. And suddenly, instead of having one VM compromised, you have 100 containers compromised. But actually, when you dig into it so much easier to get this kind of defense in depth, when you're building in containers, because you have your tape on an image that's owned by your team who produced the path, whether or not they will have their own images, that are built with best practices. You can sign your images, through your platform doesn't run anything that isn't signed, you have a full history of exactly what's in the source code is what's in production, there's all sorts of, ways you can layer on security that, attract that side of the audience. >> I've been looking at you this whole time, and like I forgot about the live chat. There's the live chat. (he laughs) There's Scott Johnston in live chat. >> Yes. >> People talking about Kubernetes and swarm. I'm scrolling through quickly to see if anybody's saying, well, my aha moment was. >> There was a good one. What was this one from Fatima earlier on, Maya was pointing out with almost no configuration onto a VM, and couldn't believe it never looked back on us. >> Yeah. >> That's exactly, on one command, if your image is mostly built, SaaS has some sensible defaults, it just all works. And everyone's (mumbles). >> Yeah, and the thing that I'm doing in my session is, what I love. the fact that for development team, Development Testing everybody on the team, and then again on up the pipeline to CI/CD. It's just a matter of, not only do you have your SaaS code, but in your SaaS code, you've got your Docker compose, and your Docker compose just makes sure, that you have the development environment that you need, all the frame, everything that you need is just there, without having to go out and find it and install it. >> There were no gap in a development environment with CI build the production. So I'm hearing, you don't hear but I can hear that we need to wrap up. >> Oh, yeah. >> Get yourself prepared for your next session, which everyone should definitely, I'll be watching everyone else do. So thanks everyone for joining. Thanks, Julie for a great idea for a conversation, was about 4050 we'll have a beer with and I would, I would Yeah. >> Yeah, we live many thousands of miles away from one another. >> Well, hopefully next year, there will be a different topic on how we can all meet some of you guys. >> And I do need to point out, the last time we were together, Elton, I got a copy of Alan's book and he signed it. (both laughs) And we took a picture of it. >> There are still more books on the stand >> Yeah, I know that's an old book, but it's the one that you signed. Thank you so much. >> Thanks everyone for joining and we'll enjoy the rest of the topic home. >> Bye. (soft music)

Published Date : May 29 2020

SUMMARY :

brought to you by Docker and tell you a bit about what and I'm a Docker captain. So MVP is kind of like the micro version, (he laughs) We've been using Docker and for have the and the CI tools, That's pretty cool. and then you go and look and then it's got to run a couple that you can use your and is to try to really and it's pretty cool to be able And you can steer people and the team who run the software, and enabling them to find a way and you don't pay a Windows license each And that was his aha moment. I find that the not there yet with Docker. and how you use them. and so I was on my development machine, in the chat about someone in the chat. and get to all the other containers. and like I forgot about the live chat. Kubernetes and swarm. and couldn't believe it it just all works. Yeah, and the thing that So I'm hearing, you don't hear and I would, I would Yeah. Yeah, we live many how we can all meet some of you guys. And I do need to point out, but it's the one that you signed. and we'll enjoy the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
StevePERSON

0.99+

JuliePERSON

0.99+

Michelle NooraliPERSON

0.99+

Scott JohnstonPERSON

0.99+

Julie LermanPERSON

0.99+

EltonPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Dave VellantePERSON

0.99+

AlanPERSON

0.99+

2014DATE

0.99+

two minutesQUANTITY

0.99+

80%QUANTITY

0.99+

100 containersQUANTITY

0.99+

DockerTITLE

0.99+

androidTITLE

0.99+

five minutesQUANTITY

0.99+

DockerORGANIZATION

0.99+

next yearDATE

0.99+

15 secondsQUANTITY

0.99+

one platformQUANTITY

0.99+

over 30 yearsQUANTITY

0.99+

DockerConEVENT

0.99+

SQL ServersTITLE

0.99+

SQL ServerTITLE

0.99+

first experienceQUANTITY

0.98+

four years agoDATE

0.98+

SQLTITLE

0.98+

LinuxTITLE

0.98+

WindowsTITLE

0.98+

bothQUANTITY

0.97+

firstQUANTITY

0.96+

first timeQUANTITY

0.96+

AndroidTITLE

0.96+

both laughsQUANTITY

0.96+

this weekDATE

0.96+

oneQUANTITY

0.95+

DockerCon Live 2020EVENT

0.95+

Elton StonemanPERSON

0.94+

eachQUANTITY

0.94+

.NETTITLE

0.94+

TwitterORGANIZATION

0.93+

UberORGANIZATION

0.93+

KubernetesTITLE

0.93+

one teamQUANTITY

0.93+

Docker composeTITLE

0.93+

Entity FrameworkTITLE

0.91+

millions of dollarsQUANTITY

0.91+

thousands of milesQUANTITY

0.9+

first groupQUANTITY

0.86+

years 10 yearsQUANTITY

0.85+

one commandQUANTITY

0.81+

DockerCon 2020 Kickoff


 

>>From around the globe. It's the queue with digital coverage of DockerCon live 2020 brought to you by Docker and its ecosystem partners. >>Hello everyone. Welcome to Docker con 2020 I'm John furrier with the cube. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for DockerCon con 2020 virtual event. Normally it was in person face to face. I'll be with you throughout the day from an amazing lineup of content over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Marcio and Brett Fisher. We'll be with you all day, all day today, taking you through the program, helping you navigate the sessions. I'm so excited, Jenny. This is a virtual event. We talk about this. Can you believe it? We're, you know, may the internet gods be with us today and hope everyone's having an easy time getting in. Jenny, Brett, thank you for being here. Hey, >>Yeah. Hi everyone. Uh, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you >>Guys. Great job. I'm getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in the sponsor support has been amazing. Just the overall excitement around the brand and the, and the opportunities given this tough times where we're in. Um, it's super exciting. Again, made the internet gods be with us throughout the day, but there's plenty of content. Uh, Brett's got an amazing all day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual. Obviously everyone's cancelling their events, but this is special to you guys. Talk about Docker con virtual this year. >>Yeah. You know, the Docker community shows up at DockerCon every year and even though we didn't have the opportunity to do an in person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make Docker con a virtual event. And of course when we did that, there was no quarantine. Um, we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean we were completely blown away. There's nearly 70,000 people across the globe that have registered for Docker con today. And when you look at backer cons of past right live events, really and more learning are just the tip of the iceberg. And so thrilled to be able to deliver a more inclusive vocal event today. And we have so much planned. Uh, I think Brett, you want to tell us some of the things that you have planned? >>Well, I'm sure I'm going to forget something cause there's a lot going on. But, uh, we've obviously got interviews all day today on this channel with John the crew. Um, Jenny has put together an amazing set of all these speakers all day long in the sessions. And then you have a captain's on deck, which is essentially the YouTube live hangout where we just basically talk shop. Oh, it's all engineers, all day long, captains and special guests. And we're going to be in chat talking to you about answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. Maybe there'll be some random demos, but it's basically, uh, not scripted. It's an all day long unscripted event, so I'm sure it's going to be a lot of fun hanging out in there. >>Well guys, I want to just say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal laid back in the captain's channel or in the sessions where the speakers will be there with their, with their presentations. But Jenny, I want to get your thoughts because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero. There's then tracks and Brett's running the captain's tracks. You can click on that link and jump into his session all day long. He's got an amazing set of line of sleet, leaning back, having a good time. And then each of the tracks, you can jump into those sessions. It's on a clock. It'll be available on demand. All that content is available if you're on your desktop, if you're on your mobile, it's the same thing. >>Look at the calendar, find the session that you want. If you're interested in it, you could watch it live and chat with the participants in real time or watch it on demand. So there's plenty of content to navigate through. We do have it on a clock and we'll be streaming sessions as they happen. So you're in the moment and that's a great time to chat in real time. But there's more, Jenny, you're getting more out of this event. We, you guys try to bring together the stimulation of community. How does the participants get more out of the the event besides just consuming some of the content all day today? >>Yeah. So first set up your profile, put your picture next to your chat handle and then chat. We have like, uh, John said we have various setups today to help you get the most out of your experience are breakout sessions. The content is prerecorded so you get quality content and the speakers and chat. So you can ask questions the whole time. Um, if you're looking for the hallway track, then definitely check out the captain's on deck channel. Uh, and then we have some great interviews all day on the queue so that up your profile, join the conversation and be kind, right. This is a community event. Code of conduct is linked on every page at the top and just have a great day. >>And Brett, you guys have an amazing lineup on the captain, so you have a great YouTube channel that you have your stream on. So the folks who were familiar with that can get that either on YouTube or on the site. The chat is integrated in, so you're set up, what do you got going on? Give us the highlights. What are you excited about throughout your day? Take us through your program on the captains. That's going to be probably pretty dynamic in the chat too. >>Yeah. Yeah. So, uh, I'm sure we're going to have less, uh, lots of, lots of stuff going on in chat. So no concerns there about, uh, having crickets in the, in the chat. But we're going to, uh, basically starting the day with two of my good Docker captain friends, uh, Nirmal Mehta and Laura taco. And we're going to basically start you out and at the end of this keynote, at the end of this hour, and we're going to get you going. And then you can maybe jump out and go to take some sessions. Maybe there's some cool stuff you want to check out in other sessions that are, you want to chat and talk with the, the instructors, the speakers there, and then you're going to come back to us, right? Or go over, check out the interview. So the idea is you're hopping back and forth and throughout the day we're basically changing out every hour. >>We're not just changing out the, uh, the guests basically, but we're also changing out the topics that we can cover because different guests will have different expertise. We're going to have some special guests in from Microsoft, talk about some of the cool stuff going on there. And basically it's captains all day long. And, uh, you know, if you've been on my YouTube live show you, you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >>Awesome. And the content again has been preserved. You guys had a great session on call for paper sessions. Jenny, this is good stuff. What are the things can people do to make it interesting? Obviously we're looking for suggestions. Feel free to, to chirp on Twitter about ideas that can be new. But you guys got some surprises. There's some selfies. What else? What's going on? Any secret, uh, surprises throughout the day. >>There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Brett will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Uh, hopefully right you, you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >>All right, great stuff. So they've got the Docker selfie. So the Docker selfies, the hashtag is just Docker con hashtag Docker con. If you feel like you want to add some of the hashtag no problem, check out the sessions. You can pop in and out of the captains is kind of the cool, cool. Kids are going to be hanging out with Brett and then all they'll knowledge and learning. Don't miss the keynote. The keynote should be solid. We got changed governor from red monk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us and again, check out the interactive calendar. All you gotta do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best. Um, Brett, any final thoughts on what you want to share to the community around, uh, what you got going on the virtual event? Just random thoughts. >>Yeah. Uh, so sorry, we can't all be together in the same physical place. But the coolest thing about as business online is that we actually get to involve everyone. So as long as you have a computer and internet, you can actually attend DockerCon if you've never been to one before. So we're trying to recreate that experience online. Um, like Jenny said, the code of conduct is important. So, you know, we're all in this together with the chat, so try to try to be nice in there. These are all real humans that, uh, have feelings just like me. So let's, let's try to keep it cool and, uh, over in the Catherine's channel be taking your questions and maybe playing some music, playing some games, giving away some free stuff. Um, while you're, you know, in between sessions learning. Oh yeah. >>And I gotta say props to your rig. You've got an amazing setup there, Brett. I love what your show you do. It's really bad ass and kick ass. So great stuff. Jenny sponsors ecosystem response to this event has been phenomenal. The attendance 67,000. We're seeing a surge of people hitting the site now. So, um, if you're not getting in, just, you know, just wait going, we're going to crank through the queue, but the sponsors on the ecosystem really delivered on the content side and also the sport. You want to share a few shout outs on the sponsors who really kind of helped make this happen. >>Yeah, so definitely make sure you check out the sponsor pages and you go, each page is the actual content that they will be delivering. So they are delivering great content to you. Um, so you can learn and a huge thank you to our platinum and gold authors. >>Awesome. Well I got to say, I'm super impressed. I'm looking forward to the Microsoft Amazon sessions, which are going to be good. And there's a couple of great customer sessions there and you know, I tweeted this out last night and let them get you guys' reaction to this because you know, there's been a lot of talk around the covert crisis that we're in, but there's also a positive upshot to this is Cambridge and explosion of developers that are going to be building new apps. And I said, you know, apps apps aren't going to just change the world. They're gonna save the world. So a lot of the theme years, the impact that developers are having right now in the current situation, you know, if we get the goodness of compose and all the things going on in Docker and the relationships, this real impact happening with the developer community. And it's pretty evident in the program and some of the talks and some of the examples how containers and microservices are certainly changing the world and helping save the world. Your thoughts. >>Yeah. So if you, I think we have a, like you said, a number of sessions and interviews in the program today that really dive into that. And even particularly around coven, um, Clemente is sharing his company's experience, uh, from being able to continue operations in Italy when they were completely shut down. Uh, beginning of March, we have also in the cube channel several interviews about from the national Institute of health and precision cancer medicine at the end of the day. And you just can really see how containerization and, uh, developers are moving in industry and, and really humanity forward because of what they're able to build and create, uh, with advances in technology. Yeah. >>And first responders and these days is developers. Brett compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with compose, just the ease of use and almost a call for arms for integrating into all the system language libraries. I mean, what's going on with compose? I mean, what's the captain say about this? I mean, it seems to be really tracking in terms of demand and interest. >>Yeah, it's, it's a, I think we're over 700,000 composed files on GitHub. Um, so it's definitely beyond just the standard Docker run commands. It's definitely the next tool that people use to run containers. Um, just by having that we just by, and that's not even counting. I mean, that's just counting the files that are named Docker compose Yammel so I'm sure a lot of you out there have created a gamma file to manage your local containers or even on a server with Docker compose. And the nice thing is, is Docker is doubling down on that. So we've gotten some news recently, um, from them about what they want to do with opening the spec up, getting more companies involved, because compose is already gathered so much interest from the community. You know, AWS has importers, there's Kubernetes importers for it. So there's more stuff coming and we might just see something here in a few minutes. >>Well, let's get into the keynote. Guys, jump into the keynote. If you missed anything, come back to the stream, check out the sessions, check out the calendar. Let's go. Let's have a great time. Have some fun. Thanks for enjoy the rest of the day. We'll see you soon..

Published Date : May 28 2020

SUMMARY :

It's the queue with digital coverage of DockerCon I'll be with you throughout the day from an amazing lineup of content over 50 different We have a great day planned for you Obviously everyone's cancelling their events, but this is special to you guys. have the opportunity to do an in person event this year, we didn't want to lose the And we're going to be in chat talking to you about answering your questions. And then each of the tracks, you can jump into those sessions. Look at the calendar, find the session that you want. So you can ask questions the whole time. So the folks who were familiar with that can get that either on YouTube or on the site. the end of this keynote, at the end of this hour, and we're going to get you going. And, uh, you know, if you've been on my YouTube live show you, you've watched that, you've seen a lot of the What are the things can people do to make it interesting? you can catch anything that you miss. click on the session you want. So as long as you have a computer and internet, And I gotta say props to your rig. Um, so you can learn and a huge thank you in the current situation, you know, if we get the goodness of compose and all the things going on in Docker and the relationships, medicine at the end of the day. just the ease of use and almost a call for arms for integrating into all the system language libraries. I mean, that's just counting the files that are named Docker compose Yammel so I'm sure a lot of you out there have created a gamma file check out the sessions, check out the calendar.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JennyPERSON

0.99+

ClementePERSON

0.99+

BrettPERSON

0.99+

ItalyLOCATION

0.99+

JohnPERSON

0.99+

Brett FisherPERSON

0.99+

AWSORGANIZATION

0.99+

DecemberDATE

0.99+

MicrosoftORGANIZATION

0.99+

Jenny MarcioPERSON

0.99+

twoQUANTITY

0.99+

Palo AltoLOCATION

0.99+

DockerConEVENT

0.99+

LauraPERSON

0.99+

eachQUANTITY

0.99+

DockerORGANIZATION

0.99+

67,000QUANTITY

0.99+

YouTubeORGANIZATION

0.99+

each pageQUANTITY

0.99+

DockerCon con 2020EVENT

0.99+

Docker conEVENT

0.98+

todayDATE

0.98+

Nirmal MehtaPERSON

0.98+

CatherinePERSON

0.98+

Docker con 2020EVENT

0.97+

firstQUANTITY

0.97+

Brett composePERSON

0.97+

over 50 different sessionsQUANTITY

0.96+

this yearDATE

0.96+

last nightDATE

0.96+

DockerTITLE

0.96+

over 700,000 composed filesQUANTITY

0.96+

AmazonORGANIZATION

0.95+

TwitterORGANIZATION

0.95+

nearly 70,000 peopleQUANTITY

0.95+

GitHubORGANIZATION

0.94+

DockerCon live 2020EVENT

0.94+

Institute of health and precision cancer medicineORGANIZATION

0.91+

DockerCon 2020 KickoffEVENT

0.89+

John furrierPERSON

0.89+

CambridgeLOCATION

0.88+

KubernetesTITLE

0.87+

two great co-hostsQUANTITY

0.84+

first respondersQUANTITY

0.79+

this yearDATE

0.78+

oneQUANTITY

0.75+

themQUANTITY

0.7+

nationalORGANIZATION

0.7+

beginning of MarchDATE

0.68+

every yearQUANTITY

0.5+

Docker con.EVENT

0.49+

red monkPERSON

0.43+

YammelPERSON

0.34+

Aaron Kalb, Alation | CUBEconversations June 2018


 

(stirring music) >> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from theCUBE Studios in beautiful Palo Alto, California. Got a great conversation today. We're going to be talking about some of the new advances that are associated with big data analytics and improving the rate at which human beings, people who actually work with data, can get more out of their data, be more certain about their data, and improve the social system that actually is dependent upon data. To do that, we've got Aaron Kalb of Alation here with us. Aaron is the co-founder and is VP of design and strategic initiatives. Aaron, welcome back to theCUBE. >> Thanks so much for having me, Peter. >> So, then, let's start this off. The concern that a lot of folks have when they think about analytics, big data, and the promise of some of these new advanced technologies is they see how they could be generating significant business value, but they observe that it often falls short. It falls short for technological reasons, you know, setting up the infrastructure is very, very, difficult. But we've started solving that by moving a lot of these workloads to the cloud. They also are discovering that the toolchains can be very complex, but they're starting to solve that by working with companies with vision, like Alation, about how you can bring these things together more easily. There are some good things happening within the analytics space, but one of the biggest challenges is, even if you set up your pipelines and your analytics systems and applications right, you still encounter resistance inside the business, because human beings don't necessarily have a natural affinity for data. Data is not something that's easy to consume, it's not something easy to recognize. People just haven't been trained in it. We need more that makes it easy to identify data quality, data issues, et cetera. Tell us a little bit about what Alation's doing to solve that human side, the adoption side of the challenge. >> That's a great point and a great question, Peter. Fundamentally, what we see is it used to be a problem of quantity. There wasn't enough ability to generate data assets, and to distribute them, and to get to them. Now, there's just an overwhelming amount of places to gather data. The problem becomes finding development data for your need, understanding and putting it into context, and most fundamentally, trusting that it's actually telling you a true story about the world. You know, what we find now is, as there's been more self-service analytics, there's more and more dashboards and queries and content being generated, and often an executive will look at two different answers to the same question that are trending in totally different directions. They'll say, "I can't trust any of this. "On paper, I want to be data-driven, "but in actuality, I'm just going to go back to my gut, "'cause the data is not always trustworthy, "and it's hard to tell what's trustworthy and what's not." >> This is, even after they've found the data and enough people have been working on it to say, to put it in context to say, "Yes, this data is being used in marketing," or, "This data has been used in operations production." there's another layer of branding or whatnot that we can put on data that says, "This data is appropriate for use in this way." Is that what we're talking about here? >> Absolutely right. To help with finding and understanding data, you can group it and make it browsable by topic. You can enable keyword search over it in that natural language. That's stuff that Alation has done in the past. What we're excited to unveil now is this idea of trust check, which is all about saying, wherever you're at in that data value chain of taking raw data and schematizing it and eventually producing pretty dashboards and visualizations, that at every step, we can ensure that only the most trustworthy data sets are being used, because any problem upstream flows downstream. >> So, trust check. >> Trust check. >> Trust check, it's something that comes out of Alation. Is it also being used with other visualization tools or other sources or other applications? >> That's a great question. It's all of the above. Trust check starts with saying, if I'm an analyst who wants to create a dashboard or a visualization, I'm going to have to write some SQL query to do that. What we've done in that context with Alation Compose, is our home-grown SQL tool, is provided a tool, and trust check kind of gets its name from spell check. It used to be there was a dictionary, and you could look it up by hand, and you could look it up online, but that's a lot of work for every single word to check it. And then, you know, Microsoft, I think, was the first innovative saying, "Oh, let's put a little red squiggle that you can't miss "right in your workflow as you're writing, "so you don't have to go to it, it comes to you." We do the exact same thing. I'm about to query a table that is deprecated or has a data quality issue. I immediately see bright red on my screen, can't miss it, and I can fix my behavior. That's as I'm creating a data asset. We also, through our partnerships with Salesforce and with Tableau, each of whom have very popular visualization tools, to, say. if people are consuming a dashboard, not a SQL query, but looking at a Tableau dashboard or a visualization in Salesforce Einstein Analytics, what would it mean to badge right there and then, put a stamp of approval on the most trustworthy sources and a warning or caveat on things that might have an upstream data quality problem? >> So, when you say warning or caveat, you're saying literally that there are exceptions or there are other concerns associated with the data, and reviewing that as part of the analytic process. >> That's exactly right. Much like, again, spell check underlines, or looking at, if you think about if I'm driving in my car with Waze, and it says, "Oh, traffic up ahead, view route this way." What does it mean to get in the user interface where people live, whether they're a business user in Salesforce or Tableau, or a data analyst in a query tool, right there in their flow having onscreen indications of everything happening below the tip of the iceberg that affects their work and the trustworthiness of the data sets they're using. >> So that's what it is. I'll tell you a quick story about spell check. >> Please. >> Many years ago, I'm old enough that I was one of the first users of some of these tools. When you typed in IBM, Microsoft Word would often change it to DUM, which was kind of interesting, given the things that were going on between them. But it leads you to ask questions. How does this work? I mean, how does spell check work? Well, how does trust check work, because that's going to have an enormous implication. People have to trust how trust check works. Tell us a little bit about how trust check works. >> Absolutely. How do you trust trust check? The little red or yellow or bright, salient indicators we've designed are just to get your attention. Then, as a user, you can click into those indicators and see why is this appearing. The biggest reason that an indicator will appear in a trust check context is that a person, a data curator or data steward, has put a warning or a deprecation on the data set. It's not, you know, oh, IBM doesn't like Microsoft, or vice versa. You know, you can see the sourcing. It isn't just, oh, because Merriam-Webster says so. It emerges from the logic of your own organization. But now Alation has this entire catalog backing trust check where it gives a bunch of signals that can help those curators and stewards to decide what indicators to put on what objects. For example, we might observe, this table used to be refreshed frequently. It hasn't in a while. Does that mean it's ripe for getting a bit of a warning on it? Or, people aren't really using this data set. Is there a reason for that? Or, something upstream was just flagged having a data quality issue. That data quality issue might flow downstream like pollution in a creek, and that can be an indication of another reason why you might want to label data as not trustworthy. >> In Alation context with Salesforce and Tableau partners, and perhaps some others, this trust check ends up being a social moniker for what constitutes good data that is branded as a consequence of both technological as well as social activities around that data captured by Alation. I got that right? >> That's exactly right. We're taking technical signals and social signals, because what happens in our customers today before we launched trust check, what they would do is, if you had the time, you would phone a friend. You'd say, "Hey, you seem to be data-savvy. "Does this number look weird to you? "Do you know what's going on? "Is something wrong with the table that it's sourced from?" The problem is, that person's on vacation, and you're out of luck. This is saying, let's push everything we know across that entire chain, from the rawest data to the most polished asset and have all that information pushed up to where you live in the moment you're making a decision, should I trust this data, how should I use it? >> In the whole, going back to this whole world of big data and analytics, we're moving more of the workloads to the cloud to get rid of the infrastructure problems. We're utilizing more integrated toolchains to get rid of the complexity associated with a lot of the analytic pipelines. How does trust check then applied, go back to this notion of human beings not being willing to accept somebody else's data. Give us that use case of how someone's going to sit down in a boardroom or at a strategic meeting or whatever else it is, see trust check, and go, "I get it." >> Absolutely, that's a fantastic question. There's two reasons why, even though all organizations, or 80% according to Gartner, claim they're committed to being data-driven. You still have these moments, people say, "Yeah, I see the numbers, "but I'm going to ignore them, or discount them, "or be very skeptical of them." One issue is just how much of the data that gets to you in the boardroom or the exec team meeting is wrong. We had an incredibly successful data-driven customer who did an internal audit and found that 1/3 of the numbers that appeared in the PowerPoint presentations on which major business decisions were being made, a full 1/3 of them were off by an extraordinary amount, an amount so big that it would, the decision would've cut the other way had the number been accurate. The sheer volume of bad data coming in to undermine trust. The second is, even if only 5% of the data were untrustworthy, if you don't know which is which, the 95% that's trustworthy and the 5% that's not, you still might not be able to use it with confidence. We believe that having trust check be at every stage in this data value chain will solve, actually, both problems by having that spell-check-like experience in the query tool, which is where most analytics projects start. We can reduce the amount of garbage going into the meeting rooms where business choices are being made. And by putting that badge saying "This is certified," or, "Take this with a grain of salt," or, "No, this is totally wrong," that putting that badge on the visualizations that business leaders are looking at in Salesforce and Tableau, and over time, in ideally every tool that anybody would use in an enterprise, we can also help distinguish the wheat from the chaff in that context as well. We think we're attacking both parts of this problem, and that will really drive a data-driven culture truly being adoptable in an organization. >> I want to tie a couple things that you said here. You mentioned the word design a couple times. You're the VP of design at Alation. It also sounds like when you're talking about design, you're not just talking about design of the interface or the software. You're talking about design of how people are going to use the software. What is the extent to which design, what's the scope of design as you see it in this context of advanced analytics, and is trust check just a first step that you're taking? Tell us a little bit about that. >> Yeah, that's a great set of questions, Peter. Design for us means really looking at humans, and starting by listening and watching. You know, a lot of people in the cataloging space and the governance space, they list a lot of should statements. "People should adopt this process, "because otherwise, mistakes will be made." >> Because Gartner said 80% of you have! >> Right, exactly. We think the shoulds only get you so far. We want to really understand the human psychology. How do people actually behave when they're under pressure to move quickly in a rapidly changing environment, when they're afraid of being caught having made a mistake? There's all these pressures people are under. And so, it's not realistic to say, again, you could imagine saying, "Oh, every time before you go out the door, "go to MapQuest or some sort of traffic website "and look up the route and print it out, "so you make sure you plot correctly." No one has time for that, just like no one has time to look up every single word in their essay or their memo or their email and look it up in the dictionary to see if it's right. But when you have an intervention that comes into somebody's flow and is impossible to miss, and is an angel on your shoulder keeping you from making a mistake, or, you know, in-car navigation that tells you in real time, "Here's how you should route." Those sort of things fit into somebody's lifestyle and actually move impact. Our idea is, let's meet people where they are. Acknowledge the challenges that humans face and make technology that really helps them and comes to them instead of scolding them and saying, "Oh, you should change your flow in this uncomfortable way "and come to us, "and that's the only way "you'll achieve the outcome you want." >> Invest the tool into the process and into the activity, as opposed to force people to alter the activity around the limitations or capabilities of the tool. >> Exactly right. And so, while design is optimizing the exact color and size and UI/UX both in our own tools and working with our partners to optimize that, it's starting at an even bigger level of saying, "How do we design the entire workflow "so humans can do what they do best "and the computer just gives them "what they need in real time?" >> And as something as important, and this kind of takes it full circle, something as important and potentially strategic as advanced analytics, having that holistic view is really going to determine success or failure in a lot of businesses. >> That is absolutely right, Peter, and you asked earlier, "Is this just the beginning?" That's absolutely true. Our goal is to say, whatever part of the analytics process you are in, that you get these realtime interventions to help you get the information that's relevant to you, understand what it means in the context you're in, and make sure that it's trustworthy and reliable so people can be truly data-driven. >> Well, there's a lot of invention going on, but what we're really seeking here is changes in social behavior that lead to consequential improvements in business. Aaron Kalb, VP of design and strategic initiatives at Alation, thanks very much for talking about this important advance in how we think about analytics. >> Thank you so much for having me, Peter. >> This is, again, Peter Burris. This has been a CUBE Conversation. Until next time. (stirring music)

Published Date : Jul 12 2018

SUMMARY :

and improving the rate at which human beings, and the promise of some of these new advanced technologies and to distribute them, and to get to them. Is that what we're talking about here? That's stuff that Alation has done in the past. Trust check, it's something that comes out of Alation. "Oh, let's put a little red squiggle that you can't miss and reviewing that as part of the analytic process. and the trustworthiness of the data sets they're using. I'll tell you a quick story about spell check. But it leads you to ask questions. and that can be an indication of another reason I got that right? and have all that information pushed up to where you live to get rid of the infrastructure problems. that gets to you in the boardroom What is the extent to which design, and the governance space, and make technology that really helps them and comes to them around the limitations or capabilities of the tool. and UI/UX both in our own tools and this kind of takes it full circle, to help you get the information that's relevant to you, that lead to consequential improvements in business. This is, again, Peter Burris.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AaronPERSON

0.99+

Peter BurrisPERSON

0.99+

MicrosoftORGANIZATION

0.99+

PeterPERSON

0.99+

IBMORGANIZATION

0.99+

Aaron KalbPERSON

0.99+

80%QUANTITY

0.99+

June 2018DATE

0.99+

two reasonsQUANTITY

0.99+

95%QUANTITY

0.99+

AlationORGANIZATION

0.99+

firstQUANTITY

0.99+

One issueQUANTITY

0.99+

Merriam-WebsterORGANIZATION

0.99+

GartnerORGANIZATION

0.99+

5%QUANTITY

0.99+

secondQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

bothQUANTITY

0.99+

TableauTITLE

0.99+

todayDATE

0.98+

theCUBEORGANIZATION

0.98+

oneQUANTITY

0.98+

theCUBE StudiosORGANIZATION

0.98+

first stepQUANTITY

0.98+

SalesforceORGANIZATION

0.98+

both partsQUANTITY

0.97+

PowerPointTITLE

0.97+

both problemsQUANTITY

0.97+

SalesforceTITLE

0.95+

TableauORGANIZATION

0.94+

eachQUANTITY

0.94+

MapQuestORGANIZATION

0.92+

1/3QUANTITY

0.91+

two different answersQUANTITY

0.9+

first usersQUANTITY

0.86+

couple timesQUANTITY

0.84+

SQLTITLE

0.82+

Many years agoDATE

0.8+

WazeTITLE

0.8+

single wordQUANTITY

0.79+

Alation ComposeTITLE

0.78+

WordTITLE

0.78+

GartnerPERSON

0.77+

Einstein AnalyticsTITLE

0.7+

DUMTITLE

0.65+

CUBEORGANIZATION

0.62+

AlationPERSON

0.59+

couple thingsQUANTITY

0.59+

ConversationEVENT

0.58+

challengesQUANTITY

0.53+

Itamar Ankorian, Attunity | BigData NYC 2017


 

>> Announcer: Live from Midtown Manhattan, it's theCUBE, covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsor. >> Okay, welcome back, everyone, to our live special CUBE coverage in New York City in Manhattan, we're here in Hell's Kitchen for theCUBE's exclusive coverage of our Big Data NYC event and Strata Data, which used to be called Strata Hadoop, used to be Hadoop World, but our event, Big Data NYC, is our fifth year where we gather every year to see what's going on in big data world and also produce all of our great research. I'm John Furrier, the co-host of theCUBE, with Peter Burris, head of research. Our next guest, Itamar Ankorion, who's the Chief Marketing Officer at Attunity. Welcome back to theCUBE, good to see you. >> Thank you very much. It's good to be back. >> We've been covering Attunity for many, many years. We've had many conversations, you guys have had great success in big data, so congratulations on that. But the world is changing, and we're seeing data integration, we've been calling this for multiple years, that's not going away, people need to integrate more. But with cloud, there's been a real focus on accelerating the scale component with an emphasis on ease of use, data sovereignty, data governance, so all these things are coming together, the cloud has amplified. What's going on in the big data world, and it's like, listen, get movin' or you're out of business has pretty much been the mandate we've been seeing. A lot of people have been reacting. What's your response at Attunity these days because you have successful piece parts with your product offering? What's the big update for you guys with respect to this big growth area? >> Thank you. First of all, the cloud data lakes have been a major force, changing the data landscape and data management landscape for enterprises. For the past few years, I've been working closely with some of the world's leading organizations across different industries as they deploy the first and then the second and third iteration of the data lake and big data architectures. And one of the things, of course, we're all seeing is the move to cloud, whether we're seeing enterprises move completely to the cloud, kind of move the data lakes, that's where they build them, or actually have a hybrid environment where part of the data lake and data works analytics environment is on prem and part of it is in the cloud. The other thing we're seeing is that the enterprises are starting to mix more of the traditional data lake, the cloud is the platform, and streaming technologies is the way to enable all the modern data analytics that they need, and that's what we have been focusing on on enabling them to use data across all these different technologies where and when they need it. >> So, the sum of the parts is worth more if it's integrated together seems to be the positioning, which is great, it's what customers want, make it easier. What is the hard news that you guys have, 'cause you have some big news? Let's get to the news real quick. >> Thank you very much. We did, today, we have announced, we're very excited about it, we have announced a new big release of our data integration platform. Our modern platform brings together Attunity Replicate, Attunity Compose for Hive, and Attunity Enterprise Manager, or AEM. These are products that we've evolved significantly, invested a lot over the last few years to enable organizations to use data, make data available, and available in the real time across all these different platforms, and then, turn this data to be ready for analytics, especially in Hive and Hadoop environments on prem and now also in the cloud. Today, we've announced a major release with a lot of enhancements across the entire product line. >> Some people might know you guys for the Replicate piece. I know that this announcement was 6.0, but as you guys have the other piece part to this, really it's about modernization of kind of old-school techniques. That's really been the driver of your success. What specifically in this announcement makes it, you know, really work well for people who move in real time, they want to have good data access. What's the big aha for the customers out there with Attunity on this announcement? >> That's a great question, thank you. First of all is that we're bringing it all together. As you mentioned, over the past few years, Attunity Replicate has emerged as the choice of many Fortune 100 and other companies who are building modern architectures and moving data across different platforms, to the cloud, to their lakes, and they're doing it in a very efficient way. One of the things we've seen is that they needed the flexibility to adapt as they go through their journey, to adapt different platforms, and what we give them with Replicate was the flexibility to do so. We give them the flexibility, we give them the performance to get the data and efficiency to move only the change of the data as they happen and to do that in a real-time fashion. Now, that's all great, but once the data gets to the data lake, how do you then turn it into valuable information? That's when we introduced Compose for Hive, which we talked about in our last session a few month ago, which basically takes the next stage in the pipeline picking up incremental, continuous data that is fed into the data lake and turning those into operational data store, historical data stores, data store that's basically ready for analytics. What we've done with this release that we're really excited about is putting all of these together in a more integrated fashion, putting Attunity Enterprise Manager on top of it to help manage larger scale environments so customers can move faster in deploying these solutions. >> As you think about the role that Attunity's going to play over time, though, it's going to end up being part of a broader solution for how you handle your data. Imagine for a second the patterns that your customers are deploying. What is Attunity typically being deployed with? >> That's a great question. First of all, we're definitely part of a large ecosystem for building the new data architecture, new data management with data integration being more than ever a key part of that bigger ecosystem because as all they actually have today is more islands with more places where the data needs to go, and to your point, more patterns in which the data moves. One of those patterns that we've seen significantly increase in demand and deployment is streaming. Where data used to be batch, now we're all talking about streaming. Kafka has emerged as a very common platform, but not only Kafka. If you're on Amazon Web Services, you're using Kinesis. If you're in Azure, you're using Azure Event Hubs. You have different streaming technologies. That's part of how this has evolved. >> How is that challenge? 'Cause you just bring up a good point. I mean, with the big trend that customers want is they want either the same code basis on prem and that they have the hybrid, which means the gateway, if you will, to the public cloud. They want to have the same code base, or move workloads between different clouds, multi-cloud, it seems to be the Holy Grail, we've identified it. We are taking the position that we think multi-cloud will be the preferred architecture going forward. Not necessarily this year, but it's going to get there. But as a customer, I don't want to have to rebuild employees and get skill development and retraining on Amazon, Azure, Google. I mean, each one has its own different path, you mentioned it. How do you talk to customers about that because they might be like, whoa, I want it, but how do I work in that environment? You guys have a solution for that? >> We do, and in fact, one of the things we've seen, to your point, we've seen the adoption of multiple clouds, and even if that adoption is staged, what we're seeing is more and more customers that are actually referring to the term lock-in in respect to the cloud. Do we put all the eggs in one cloud, or do we allow ourselves the flexibility to move around and use different clouds, and also mitigate our risk in that respect? What we've done from that perspective is first of all, when you use the Attunity platform, we take away all the development complexity. In the Attunity platform, it is very easy to set up. Your data flow is your data pipelines, and it's all common and consistent. Whether you're working on prem, whether you work on Amazon Web Services, on Azure, or on Google or other platforms, it all looks and feels the same. First of all, and you solve the issue of the diversity, but also the complexity, because what we've done is, this is one of the big things that Attunity is focused on was reducing the complexity, allowing to configure these data pipelines without development efforts and resources. >> One of the challenges, or one of the things you typically do to take complexity out is you do a better job of design up front. And I know that Attunity's got a tool set that starts to address some of of these things. Take us a little bit through how your customers are starting to think in terms of designing flows as opposed to just cobbling together things in a bespoke way. How is that starting to change as customers gain experience with large data sets, the ability, the need to aggregate them, the ability to present them to developers in different ways? >> That's a great point, and again, one of the things we've focused on is to make the process of developing or configuring these different data flows easy and modular. First, while in Attunity you can set up different flows in different patterns, and you can then make them available to others for consumption. Some create the data ingestion, or some create the data ingestion and then create a data transformation with Compose for Hive, and with Attunity Enterprise Manager, we've now also introduced APIs that allow you to create your own microservices, consuming and using the services enabled by the platform, so we provide more flexibility to put all these different solutions together. >> What's the biggest thing that you see from a customer standpoint, from a problem that you solve? If you had to kind of lay it out, you know the classic, hey, what problem do you solve? 'Cause there are many, so take us through the key problem, and then, if there's any secondary issues that you guys can address customers, that seems the way conversation starts. What are key problems that you solve? >> I think one of the major problems that we solve is scale. Our customers that are deploying data lakes are trying to deploy and use data that is coming, not from five or 10 or even 50 data sources, we work at hundreds going on thousands of data sources now. That in itself represents a major challenge to our customers, and we're addressing it by dramatically simplifying and making the process of setting those up very repeatable, very easy, and then providing the management facility because when you have hundreds or thousands, management becomes a bigger issue to operationalize it. We invested a lot in a management facility for those, from a monitoring, control, security, how do you secure it? The data lake is used by many different groups, so how do we allow each group to see and work only on what belongs to that group? That's part it, too. So again, the scale is the major thing there. The other one is real timeliness. We talked about the move to streaming, and a lot of it is in order to enable streaming analytics, real-time analytics. That's only as good as your data, so you need to capture data in real time. And that of course has been our claim to fame for a long time, being the leading independent provider of CDC, change data capture technology. What we've done now, and also expanded significantly with the new release, version six, is creating universal database streaming. >> What is that? >> We take databases, we take databases, all the enterprise databases, and we turn them into live streams. When you think, by the way, by the most common way that people have used, customers have used to bring data into the lake from a database, it was Scoop. And Scoop is a great, easy software to use from an open source perspective, but it's scripting and batch. So, you're building your new modern architecture with the two are effectively scripting and batch. What we do with CDC is we enable to take a database, and instead of the database being something you come to periodically to read it, we actually turn it into a live feed, so as the data changes in the database, we stream it, we make it available across all these different platforms. >> Changes the definition of what live streaming is. We're live streaming theCUBE, we're data. We're data streaming, and you get great data. So, here's the question for you. This is a good topic, I love this topic. Pete and I talk about this all the time, and it's been addressed in the big data world, but it's kind of, you can see the pattern going mainstream in society globally, geopolitically and also in society. Batch processing and data in motion are real time. Streaming brings up this use case to the end customer, which is this is the way they've done it before, certainly store things in data lakes, that's not going to go away, you're going to store stuff, but the real gain is in motion. >> Itamar: Correct. >> How do you describe that to a customer when you go out and say, hey, you know, you've been living in a batch world, but wake up to the real world called real time. How do you get to them to align with it? Some people get it right away, I see that, some people don't. How do you talk about that because that seems to be a real cultural thing going on right now, or operational readiness from the customer standpoint? Can you just talk through your feeling on that? >> First of all, this often gets lost in translation, and we see quite a few companies and even IT departments that when you talk, when they refer to real time, or their business tells them we need real time, what they understand from it is when you ask for the data, the response will be immediate. You get real time access to the data, but the data is from last week. So, we get real time access, but for last week's data. And that's what we try to do is to basically say, wait a second, when you mean real time, what does real time mean? And we start to understand what is the meaning of using last week's data versus, or yesterday's data, over the real time data, and that makes a big difference. We actually see that today the access, the availability, the availability to act on the real time data, that's the frontier of competitive differentiation. That's what makes a customer experience better, that's what makes the business more operationally efficient than the competition. >> It's the data, not so much the process of what they used to do. They're version of real time is I responded to you pretty quickly. >> Exactly, the other thing that's interesting is because we see it with, again, change of the capture becoming a critical component of the modern data architecture. Traditionally, we used to talk about different type of tools and technology, now CDC itself is becoming a critical part of it, and the reason is that it serves and it answers a lot of fundamental needs that are now becoming critical. One is the need for real-time data. The other one is efficiency. If you're moving to the cloud, and we talked about this earlier, if you're data lake is going to be in the cloud, there's no way you're going to reload all your data because the bandwidth is going to get in the way. So, you have to move only the delta. You need the ability to capture and move only the delta, so CDC becomes fundamental both in enabling the real time as well the efficient, the low-impact data integration. >> You guys have a lot of partners, technology partners, global SIs, resellers, a bunch of different partnership levels. The question I have for you, love to get your reaction and share your insight into is, okay, as the relationship to the customer who has the problem, what's in it for me? I want to move my business forward, I want to do digital business, I need to get up my real-time data as it's happening. Whether it's near real time or real time, that's evolution, but ultimately, they have to move their developers down a certain path. They'll usually hire a partner. The relationship between partners and you, the supplier to the customer, has changed recently. >> That's correct. >> How is that evolving? >> First of all, it's evolving in several ways. We've invested on our part to make sure that we're building Attunity as a leading vendor in the ecosystem of they system integration consulting companies. We work with pretty much all the major global system integrators as well as regional ones, boutique ones, that focus on the emerging technologies as well as get the modern analytic-type platforms. We work a lot with plenty of them on major corporate data center-level migrations to the cloud. So again, the motivations are different, but we invest-- >> More specialized, are you seeing more specialty, what's the trend? >> We've been a technology partner of choice to both Amazon and Microsoft for enabling, facilitating the data migration to the cloud. They of course, their select or preferred group of partners they work with, so we all come together to create these solutions. >> Itamar, what's the goals for Attunity as we wrap up here? I give you the last word, as you guys have this big announcement, you're bringing it all together. Integrating is key, it's always been your ethos in the company. Where is this next level, what's the next milestone for you guys? What do you guys see going forward? >> First of all, we're going to continue to modernize. We're really excited about the new announcement we did today, Replicate six, AEM six, a new version of Compose for Hive that now also supports small data lakes, Aldermore, Scaldera, EMR, and a key point for us was expanding AEM to also enable analytics on the data we generate as data flows through it. The whole point is modernizing data integration, providing more intelligence in the process, reducing the complexity, and facilitating the automation end-to-end. We're going to continue to solve, >> Automation big, big time. >> Automation is a big thing for us, and the point is, you need to scale. In order to scale, we want to generate things for you so you don't to develop for every piece. We automate the automation, okay. The whole point is to deliver the solution faster, and the way we're going to do it is to continue to enhance each one of the products in its own space, if it's replication across systems, Compose for Hive for transformations in pipeline automation, and AEM for management, but also to create integration between them. Again, for us it's to create a platform that for our customers they get more than the sum of the parts, they get the unique capabilities that we bring together in this platform. >> Itamar, thanks for coming onto theCUBE, appreciate it, congratulations to Attunity. And you guys bringing it all together, congratulations. >> Thank you very much. >> This theCUBE live coverage, bringing it down here to New York City, Manhattan. I'm John Furrier, Peter Burris. Be right back with more after this short break. (upbeat electronic music)

Published Date : Sep 27 2017

SUMMARY :

Brought to you by SiliconANGLE Media I'm John Furrier, the co-host of theCUBE, Thank you very much. What's the big update for you guys the move to cloud, whether we're seeing enterprises What is the hard news that you guys have, and available in the real time That's really been the driver of your success. the flexibility to adapt as they go through their journey, Imagine for a second the patterns and to your point, more patterns in which the data moves. We are taking the position that we think multi-cloud We do, and in fact, one of the things we've seen, the ability to present them to developers in different ways? one of the things we've focused on is What's the biggest thing that you see We talked about the move to streaming, and instead of the database being something and it's been addressed in the big data world, or operational readiness from the customer standpoint? the availability to act on the real time data, I responded to you pretty quickly. because the bandwidth is going to get in the way. the supplier to the customer, has changed boutique ones, that focus on the emerging technologies facilitating the data migration to the cloud. What do you guys see going forward? on the data we generate as data flows through it. and the point is, you need to scale. And you guys bringing it all together, congratulations. it down here to New York City, Manhattan.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
MicrosoftORGANIZATION

0.99+

Itamar AnkorionPERSON

0.99+

Peter BurrisPERSON

0.99+

AmazonORGANIZATION

0.99+

hundredsQUANTITY

0.99+

John FurrierPERSON

0.99+

fiveQUANTITY

0.99+

last weekDATE

0.99+

New York CityLOCATION

0.99+

ItamarPERSON

0.99+

secondQUANTITY

0.99+

CDCORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

firstQUANTITY

0.99+

TodayDATE

0.99+

PetePERSON

0.99+

50 data sourcesQUANTITY

0.99+

10QUANTITY

0.99+

Itamar AnkorianPERSON

0.99+

twoQUANTITY

0.99+

SiliconANGLE MediaORGANIZATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

each groupQUANTITY

0.99+

yesterdayDATE

0.99+

fifth yearQUANTITY

0.99+

OneQUANTITY

0.99+

todayDATE

0.99+

FirstQUANTITY

0.99+

Attunity ReplicateORGANIZATION

0.99+

ManhattanLOCATION

0.99+

oneQUANTITY

0.98+

Midtown ManhattanLOCATION

0.98+

NYCLOCATION

0.98+

AttunityORGANIZATION

0.97+

AldermoreORGANIZATION

0.97+

bothQUANTITY

0.97+

one cloudQUANTITY

0.97+

this yearDATE

0.97+

EMRORGANIZATION

0.96+

Big DataEVENT

0.96+

KafkaTITLE

0.95+

each oneQUANTITY

0.95+

ScalderaORGANIZATION

0.95+

thousandsQUANTITY

0.94+

AzureORGANIZATION

0.94+

Strata HadoopEVENT

0.94+

New York City, ManhattanLOCATION

0.94+

6.0QUANTITY

0.93+

theCUBEORGANIZATION

0.93+

Azure Event HubsTITLE

0.91+

2017EVENT

0.91+

a secondQUANTITY

0.91+

HiveTITLE

0.9+

rtune 100ORGANIZATION

0.9+

CUBEORGANIZATION

0.9+

few month agoDATE

0.88+

Attunity Enterprise ManagerTITLE

0.83+

thousands of data sourcesQUANTITY

0.83+

2017DATE

0.82+

AEMTITLE

0.8+

third iterationQUANTITY

0.79+

version sixQUANTITY

0.78+

Itamar Ankorion, Attunity & Arvind Rajagopalan, Verizon - #DataWorks - #theCUBE


 

>> Narrator: Live from San Jose in the heart of Silicon Valley, it's the CUBE covering DataWorks Summit 2017 brought to you by Hortonworks. >> Hey, welcome back to the CUBE live from the DataWorks Summit day 2. We've been here for a day and a half talking with fantastic leaders and innovators, learning a lot about what's happening in the world of big data, the convergence with Internet of Things Machine Learning, artificial intelligence, I could go on and on. I'm Lisa Martin, my co-host is George Gilbert and we are joined by a couple of guys, one is a Cube alumni, Itamar Ankorion, CMO of Attunity, Welcome back to the Cube. >> Thank you very much, good to be here, thank you Lisa and George. >> Lisa: Great to have you. >> And Arvind Rajagopalan, the Director of Technology Services for Verizon, welcome to the Cube. >> Thank you. >> So we were chatting before we went on, and Verizon, you're actually going to be presenting tomorrow, at the DataWorks summit, tell us about building... the journey that Verizon has been on building a Data Lake. >> Oh, Verizon is over the last 20 years, has been a large corporation, made up of a lot of different acquisitions and mergers, and that's how it was formed in 20 years back, and as we've gone through the journey of the mergers and the acquisitions over the years, we had data from different companies come together and form a lot of different data silos. So the reason we kind of started looking at this, is when our CFO started asking questions around... Being able to answer One Verizon questions, it's as simple as having Days Payable, or Working Capital Analysis across all the lines of businesses. And since we have a three-major-ERP footprint, it is extremely hard to get that data out, and there was a lot of manual data prep activities that was going into bringing together those One Verizon views. So that's really what was the catalyst to get the journey started for us. >> And it was driven by your CFO, you said? >> Arvind: That's right. >> Ah, very interesting, okay. So what are some of the things that people are going to hear tomorrow from your breakout session? >> Arvind: I'm sorry, say that again? >> Sorry, what are some of the things that the people, the attendees from your breakout session, are going to learn about the steps and the journey? >> So I'm going to primarily be talking about the challenges that we ran into, and share some around that, and also talk about some of the factors, such as the catalysts and what drew us to sort of moving in that direction, as well as getting to some architectural components, from high-level standpoint, talk about certain partners that we work with, the choices we made from an architecture perspective and the tools, as well as to kind of close the loop on, user adoption and what users are seeing in terms of business value, as we start centralizing all of the data at Verizon from a backoff as Finance and Supply Chains standpoint. So that's kind of what I'm looking at talking tomorrow. >> Arvind, it's interesting to hear you talk about sort of collecting data from essentially backoff as operational systems in a Data Lake. Were there... I assume that the state is sort of more refined and easily structured than the typical stories we hear about Data Lakes. Were there challenges in making it available for exploration and visualization, or were all the early-use cases really just Production Reporting? >> So standard reporting across the ERP systems is very mature and those capabilities are there, but then you look at across-ERP systems and we have three major ERP systems for each of the lines of businesses, when you want to look at combining all of the data, it's very hard, and to add to that, you pointed on self-service discovery, and visualization across all three datas, that's even more challenging, because it takes a lot of heavy lift, to normalize all of the data and bring it into one centralized platform, and we started off the journey with Oracle, and then we had SAP HANA, we were trying to bring all the data together, but then we were looking at systems in our non-SAP ERP systems and bringing that data into a SAP-kind of footprint, one, the cost was tremendously high, also there was a lot of heavy lift and challenges in terms of manually having to normalize the data and bring it into the same kind of data models. And even after all of that was done, it was not very self-service oriented for our users and Finance and Supply Chain. >> Let me drill into two of those things. So it sounds like the ETL process of converting it into a consumable format was very complex, and then it sounds like also, the discoverability, like where a tool, perhaps like Elation, might help, which is very, very immature right now, or maybe not immature, it's still young. Is that what was missing, or why was the ETL process so much more heavyweight than with a traditional data warehouse? >> The ETL processes, there's a lot of heavy lifting there involved, because of the proprietary data structures of the ERP systems, especially SAP is... The data structures and how the data is used across clustered and pool tables, is very proprietary. And on top of that, bringing the data formats and structures from a PeopleSoft ERP system which are supporting different lines of businesses, so there are a lot of customization that's gone into place, there are specific things that we use in the ERPs, in terms of the modules and how the processes are modeled in each of the lines of businesses, complicates things a lot. And then you try and bring all these three different ERPs, and the nuances that they have over the years, try and bring them together, it actually makes it very complex. >> So tell us then, help us understand, how the Data Lake made that easier. Was it because you didn't have to do all the refinement before it got there. And tell us how Attunity helped make that possible. >> Oh absolutely, so I think that's one of the big things, why we picked the Hortonworks as one of our key partners in terms of buidling out the Data Lake, it just came on greed, you aren't necessarily worried about doing a whole lot of ETL before you bring the data in, and it also provides with the tools and the technologies from a lot other partners. We have a lot of maturity now, better provided self-service discovery capabilities for ad hoc analysis and reporting. So this is helpful to the users because now they don't have to wait for prolonged IT development cycles to model the data, do the ETL and build reports for the to consume, which sometimes could take weeks and months. Now in a matter of days, they're able to see the data they're looking for and they're able to start the analysis, and once they start the analysis and the data is accessible, it's a matter of minutes and seconds looking at the different tools, how they want to look at it, how they want to model it, so it's actually being a huge value from the perspective of the users and what they're looking to do. >> Speaking of value, one of the things that was kind of thematic yesterday, we see enterprises are now embracing big data, they're embracing Hadoop, it's got to coexist within our ecosystem, and it's got to inter-operate, but just putting data in a Data Lake or Hadoop, that's not the value there, it's being able to analyze that data in motion, at rest, structured, unstructured, and start being able to glean or take actionable insights. From your CFO's perspective, where are you know of answering some of the questions that he or she had, from an insights perspective, with the Data Lake that you have in place? >> Yeah, before I address that, I wanted to quickly touch upon and wrap up George's question, if you don't mind. Because one of the key challenges, and I do talk about how Attunity helped. I was just about to answer the question before we moved on, so I just want to close the loop on that a little bit. So in terms of bringing the data in, the data acquisition or ingestion is key aspect of it, and again, looking at the proprietary data structures from the ERP systems is very complex, and involves a multi-step process to bring the data into a strange environment, and be able to put it in the swamp bring it into the Lake. And what Attunity has been able to help us with is, it has the intelligence to look at and understand the proprietary data structures of the ERPs, and it is able to bring all the data from the ERP source systems directly into Hadoop, without any stops, or staging data bases along the way. So it's been a huge value from that standpoint, I'll get into more details around that. And to answer your question, around how it's helping from a CFO standpoint, and the users in Finance, as I said, now all the data is available in one place, so it's very easy for them to consume the data, and be able to do ad hoc analysis. So if somebody's looking to, like I said earlier, want to look at and calculate base table, as an example, or they want to look at working capital, we are actually moving data using Attunity, CDC replicate product, we're getting data in real-time, into the Data Lake. So now they're able to turn things around, and do that kind of analysis in a matter of hours, versus overnight or in a matter of days, which was the previous environment. >> And that was kind of one of the things this morning, is it's really about speed, right? It's how fast can you move and it sounds like together with Attunity, Verizon is really not only making things simpler, as you talked about in this kind of model that you have, with different ERP systems, but you're also really able to get information into the right hands much, much faster. >> Absolutely, that's the beauty of the near real-time, and the CDC architecture, we're able to get data in, very easily and quickly, and Attunity also provides a lot of visibility as the data is in flight, we're able to see what's happening in the source system, how many packets are flowing through, and to a point, my developers are so excited to work with a product, because they don't have to worry about the changes happening in the source systems in terms of DDL and those changes are automatically understood by the product and pushed to the destination of Hadoop. So it's been a game-changer, because we have not had any downtime, because when there are things changing on the source system side, historically we had to take downtime, to change those configurations and the scripts, and publish it across environments, so that's been huge from that standpoint as well. >> Absolutely. >> Itamar, maybe, help us understand where Attunity can... It sounds like there's greatly reduced latency in the pipeline between the operational systems and the analytic system, but it also sounds like you still need to essentially reformat the data, so that it's consumable. So it sounds like there's an ETL pipeline that's just much, much faster, but at the same time, when it's like, replicate, it sounds like that goes without transformations. So help us sort of understand that nuance. >> Yeah, that's a great question, George. And indeed in the past few years, customers have been focused predominantly on getting the data to the Lake. I actually think it's one of the changes in the fame, we're hearing here in the show and the last few months is, how do we move to start using the data, the great applications on the data. So we're kind of moving to the next step, in the last few years we focused a lot on innovating and creating the solutions that facilitate and accelerate the process of getting data to the Lake, from a large scope of systems, including complex ones like SAP, and also making the process of doing that easier, providing real-time data that can both feed streaming architectures as well as batch ones. So once we got that covered, to your question, is what happens next, and one of the things we found, I think Verizon is also looking at it now and are being concomitant later. What we're seeing is, when you bring data in, and you want to adopt the streaming, or a continuous incremental type of data ingestion process, you're inherently building an architecture that takes what was originally a database, but you're kind of, in a sense, breaking it apart to partitions, as you're loading it over time. So when you land the data, and Arvind was referring to a swamp, or some customers refer to it as a landing zone, you bring the data into your Lake environment, but at the first stage that data is not structured, to your point, George, in a manner that's easily consumable. Alright, so the next step is, how do we facilitate the next step of the process, which today is still very manual-driven, has custom development and dealing with complex structures. So we actually are very excited, we've introduced, in the show here, we announced a new product by Attunity, Compose for Hive, which extends our Data Lake solutions, and what Compose of Hive is exactly designed to do, is address part of the problem you just described, where's when the data comes in and is partitioned, what Compose for Hive does, is it reassembles these partitions, and it then creates analytic-ready data sets, back in Hive, so it can create operational data stores, it can create historical data stores, so then the data becomes formatted, in a matter that's more easily accessible for users, who want to use analytic tools, VI-tools, Tableau, Qlik, any type of tool that can easily access a database. >> Would there be, as a next step, whether led by Verizon's requirements or Attunity's anticipation of broader customer requirements, something where, there's a, if not near real-time, but a very low latency landing and transformation, so that data that is time-sensitive can join the historical data. >> Absolutely, absolutely. So what we've done, is focus on real-time availability of data. So when we feed the data into the Data Lake, we fit it into ways, one is directly into Hive, but we also go through a streaming architecture, like Kafka, in the case of Hortonworks, can also fit also very well into HDF. So then the next step in the process, is producing those analytic data sets, or data source, out of it, which we enable, and what we do is design it together with our partners, with our inner customers. So again when we work on Replicate, then we worked on Compose, we worked very close with Fortune companies trying to deal with these challenges, so we can design a product. In the case of Compose for Hive for example, we have done a lot of collaboration, at a product engineering level, with Hortonworks, to leverage the latest and greatest in Hive 2.2, Hive LLAP, to be able to push down transformations, so those can be done faster, including real-time, so those datasets can be updated on a frequent basis. >> You talked about kind of customer requirements, either those specific or not, obviously talking to telecommunications company, are you seeing, Itamar, from Attunity's perspective, more of this need to... Alright, the data's in the Lake, or first it comes to the swamp, now it's in the Lake, to start partitioning it, are you seeing this need driven in specific industries, or is this really pretty horizontal? >> That's a good question and this is definitely a horizontal need, it's part of the infrastructure needs, so Verizon is a great customer, and we even worked similarly in telecommunications, we've been working with other customers in other industries, from manufacturing, to retail, to health care, to automotive and others, and in all of those cases it's on a foundation level, it's very similar architectural challenges. You need to ingest the data, you want to do it fast, you want to do it incrementally or continuously, even if you're loading directly into Hadoop. Naturally, when you're loading the data through a Kafka, or streaming architecture, it's a continuous fashon, and then you partition the data. So the partitioning of the data is kind of inherent to the architecture, and then you need to help deal with the data, for the next step in the process. And we're doing it both with Compose for Hive, but also for customers using streaming architectures like Kafka, we provide the mechanisms, from supporting or facilitating things like schema unpollution, and schema decoding, to be able to facilitate the downstream process of processing those partitions of data, so we can make the data available, that works both for analytics and streaming analytics, as well as for scenarios like microservices, where the way in which you partition the data or deliver the data, allows each microservice to pick up on the data it needs, from the relevant partition. >> Well guys, this has been a really informative conversation. Congratulations, Itamar, on the new announcement that you guys made today. >> Thank you very much. >> Lisa: Arvin, great to hear the use case and how Verizon really sounds quite pioneering in what you're doing, wish you continued success there, we look forward to hearing what's next for Verizon, we want to thank you for watching the CUBE, we are again live, day two, of the DataWorks summit, #DWS17, before me my co-host George Gilbert, I am Lisa Martin, stick around, we'll be right back. (relaxed techno music)

Published Date : Jun 14 2017

SUMMARY :

in the heart of Silicon Valley, and we are joined by a couple of guys, Thank you very much, good to be here, the Director of Technology Services for Verizon, at the DataWorks summit, So the reason we kind of started looking at this, that people are going to hear tomorrow and the tools, as well as to kind of close the loop on, than the typical stories we hear about Data Lakes. and bring it into the same kind of data models. So it sounds like the ETL process and the nuances that they have over the years, how the Data Lake made that easier. do the ETL and build reports for the to consume, and it's got to inter-operate, and it is able to bring all the data and it sounds like together with Attunity, and the CDC architecture, we're able to get data in, and the analytic system, getting the data to the Lake. can join the historical data. like Kafka, in the case of Hortonworks, Alright, the data's in the Lake, You need to ingest the data, you want to do it fast, Congratulations, Itamar, on the new announcement Lisa: Arvin, great to hear the use case

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
George GilbertPERSON

0.99+

Arvind RajagopalanPERSON

0.99+

ArvindPERSON

0.99+

Lisa MartinPERSON

0.99+

VerizonORGANIZATION

0.99+

Itamar AnkorionPERSON

0.99+

LisaPERSON

0.99+

GeorgePERSON

0.99+

ItamarPERSON

0.99+

OracleORGANIZATION

0.99+

San JoseLOCATION

0.99+

Silicon ValleyLOCATION

0.99+

twoQUANTITY

0.99+

tomorrowDATE

0.99+

KafkaTITLE

0.99+

threeQUANTITY

0.99+

HortonworksORGANIZATION

0.99+

CubeORGANIZATION

0.99+

ArvinPERSON

0.99+

DataWorks SummitEVENT

0.99+

SAP HANATITLE

0.99+

OneQUANTITY

0.99+

eachQUANTITY

0.99+

yesterdayDATE

0.99+

#DWS17EVENT

0.99+

oneQUANTITY

0.98+

a day and a halfQUANTITY

0.98+

CDCORGANIZATION

0.98+

first stageQUANTITY

0.98+

TableauTITLE

0.98+

DataWorks Summit 2017EVENT

0.98+

AttunityORGANIZATION

0.98+

HiveTITLE

0.98+

bothQUANTITY

0.98+

AttunityPERSON

0.98+

DataWorksEVENT

0.97+

todayDATE

0.97+

Compose for HiveORGANIZATION

0.97+

ComposeORGANIZATION

0.96+

Hive 2.2TITLE

0.95+

QlikTITLE

0.94+

HadoopTITLE

0.94+

one placeQUANTITY

0.93+

day twoQUANTITY

0.92+

each microserviceQUANTITY

0.9+

firstQUANTITY

0.9+

20 years backDATE

0.89+

#DataWorksORGANIZATION

0.87+

three major ERP systemsQUANTITY

0.83+

last 20 yearsDATE

0.82+

PeopleSoftORGANIZATION

0.8+

Data LakeCOMMERCIAL_ITEM

0.8+

SAPORGANIZATION

0.79+

Kendall Nelson, OpenStack Foundation & John Griffith, NetApp - OpenStack Summit 2017 - #theCUBE


 

>> Narrator: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. (techno music) >> And we're back. I'm Stu Miniman joined by my co-host, John Troyer. Happy to welcome to the program two of the keynote speakers this morning, worked on some of the container activity, Kendall Nelson, who's a Upstream Developer Advocate with the OpenStack Foundation. >> Yep. >> And John Griffith, who's a Principal Engineer from NetApp, excuse me, through the SolidFire acquisition. Thank you so much both for joining. >> Kendall Nelson: Yeah. Thank you. >> John Griffith: Thanks for havin' us. >> Stu Miniman: So you see-- >> Yeah. >> When we have any slip-ups when we're live, we just run through it. >> Run through it. >> Kendall, you ever heard of something like that happening? >> Kendall Nelson: Yeah. Yeah. That might've happened this morning a little bit. (laughs) >> So, you know, let's start with the keynote this morning. I tell ya, we're pretty impressed with the demos. Sometimes the demo gods don't always live up to expectations. >> Kendall Nelson: Yeah. >> But maybe share with our audience just a little bit about kind of the goals, what you were looking to accomplish. >> Yeah. Sure. So basically what we set out to do was once the ironic nodes were spun up, we wanted to set up a standalone cinder service and use Docker Compose to do that so that we could do an example of creating a volume and then attaching it to a local instance and kind of showing the multiple backend capabilities of Cinder, so... >> Yeah, so the idea was to show how easy it is to deploy Cinder. Right? So and then plug that into that Kubernetes deployment using a flex volume plugin and-- >> Stu Miniman: Yeah. >> Voila. >> It was funny. I saw some comments on Twitter that were like, "Well, maybe we're showing Management that it's not, you know, a wizard that you just click, click, click-- >> John Griffith: Right. >> Kendall Nelson: Yeah. >> "And everything's done." There is some complexity here. You do want to have some people that know what they're doing 'cause things can break. >> Kendall Nelson: Yeah. >> I love that the container stuff was called ironic. The bare metal was ironic because-- >> Kendall Nelson: Yeah. >> Right. When you think OpenStack at first, it was like, "Oh. This is virtualized infrastructure." And therefore when containers first came out, it was like, "Wait. It's shifting. It's going away from virtualization." John, you've been on Cinder. You helped start Cinder. >> Right. >> So maybe you could give us a little bit about historical view as to where that came from and where it's goin'. Yeah. >> Yeah. It's kind of interesting, 'cause it... You're absolutely right. There was a point where, in the beginning, where virtualization was everything. Right? Ironic actually, I think it really started more of a means to an end to figure out a better way to deploy OpenStack. And then what happened was, as people started to realize, "Oh, hey. Wait." You know, "This whole bare metal thing and running these cloud services on bare metal and bare metal clouds, this is a really cool thing. There's a lot of merit here." So then it kind of grew and took on its own thing after that. So it's pretty cool. There's a lot of options, a lot of choices, a lot of different ways to run a cloud now, so... >> Kendall Nelson: Yeah. >> You want to comment on that Kendall, or... >> Oh, no. Just there are definitely tons of ways you can run a cloud and open infrastructure is really interesting and growing. >> That has been one thing that we've noticed here at the show. So my first summit, so it was really interesting to me as an outsider, right, trying to perceive the shape of OpenStack. Right? Here the message has actually been very clear. We're no longer having to have a one winner... You know, one-size-fits-all kind of cloud world. Like we had that fight a couple of years ago. It's clear there's going to be multiple clouds, multiple places, multiple form factors, and it was very nice people... An acknowledgement of the ecosystem, that there's a whole open source ecosystem of containers and of other open source projects that have grown up all around OpenStack, so... But I want to talk a little bit about the... And the fact that containers and Kubernetes and that app layer is actually... Doesn't concern itself with the infrastructure so much so actually is a great fit for sitting on top of or... And adjacent to OpenStack. Can you all talk a little bit about the perception here that you see with the end users and cloud builders that are here at the show and how are they starting to use containers. Do they understand the way these two things fit together? >> Yeah. I think that we had a lot of talks submitted that were focused on containers, and I was just standing outside the room trying to get into a Women of OpenStack event, and the number of people that came pouring out that were interested in the container stack was amazing. And I definitely think people are getting more into that and using it with OpenStack is a growing direction in the community. There are couple new projects that are growing that are containers-focused, like... One just came into the projects, OpenStack Helm. And that's a AT&T effort to use... I think it's Kubernetes with OpenStack. So yeah, tons. >> So yeah, it's interesting. I think the last couple of years there's been a huge uptick in the interest of containers, and not just in containers of course, but actually bringing those together with OpenStack and actually running containers on OpenStack as the infrastructure. 'Cause to your point, what everybody wants to see, basically, is commoditized, automated and generic infrastructure. Right? And OpenStack does a really good job of that. And as people start to kind of realize that OpenStack isn't as hard and scary as it used to be... You know, 'cause for a few years there it was pretty difficult and scary. It's gotten a lot better. So deployment, maintaining, stuff like that, it's not so bad, so it's actually a really good solution to build containers on. >> Well, in fact, I mean, OpenStack has that history, right? So you've been solving a lot of problems. Right now the container world, both on the docker side and Kubernetes as well, you're dealing with storage drivers-- >> John Griffith: Yeah. >> Networking overlays-- >> Right. >> Multi-tenancy security, all those things that previous generations of technology have had to solve. And in fact, I mean, you know, right now, I'd say storage and storage interfaces actually are one of the interesting challenges that docker and Kubernetes and all that level of containers and container orchestration and spacing... I mean, it seems like... Has OpenStack already solved, in some way, it's already solved some of these problems with things like Cinder? >> Abso... Yeah. >> John Troyer: And possibly is there an application to containers directly? >> Absolutely. I mean, I think the thing about all of this... And there's a number of us from the OpenStack community on the Cinder side as well as the networking side, too-- >> Yeah. >> Because that's another one of those problem spaces. That are actually taking active roles and participating in the Kubernetes communities and the docker communities to try and kind of help with solving the problems over on that side, right? And moving forward. The fact is is storage is, it's kind of boring, but it's hard. Everybody thinks-- >> John Troyer: It's not boring. >> Yeah. >> It's really awesomely hard. Yeah. >> Everybody thinks it's, "Oh, I'll just do my own." It's actually a hard thing to get right, and you learn a lot over the last seven years of OpenStack. >> Yeah. >> We've learned a lot in production, and I think there's a lot to be learned from what we've done and how things could be going forward with other projects and new technologies to kind of learn from those lessons and make 'em better, so... >> Yeah. >> In terms of multicloud, hybrid cloud world that we're seeing, right? What do you see as the role of OpenStack in that kind of a multicloud deployments now? >> OpenStack can be used in a lot of different ways. It can be on top of containers or in containers. You can orchestrate containers with OpenStack. That's like the... Depending on the use case, you can plug and play a lot of different parts of it. On all the projects, we're trying to move to standalone sort of services, so that you can use them more easily with other technologies. >> Well, and part of your demo this morning, you were pulling out of a containerized repo somehow. So is that kind of a path forward for the mainline OpenStack core? >> So personally, I think it would be a pretty cool way to go forward, right? It would make things a lot easier, a lot simpler. And kind of to your point about hybrid cloud, the thing that's interesting is people have been talking about hybrid cloud for a long time. What's most interesting these days though is containers and things like Kubernetes and stuff, they're actually making hybrid cloud something that's really feasible and possible, right? Because now, if I'm running on a cloud provider, whether it's OpenStack, Amazon, Google, DigitalOcean, it doesn't matter anymore, right? Because all of that stuff in my app is encapsulated in the container. So hybrid cloud might actually become a reality, right? The one thing that's missing still (John Troyer laughs) is data, right? (Kendall Nelson laughs) Data gravity and that whole thing. So if we can figure that out, we've actually got somethin', I think. >> Interesting comment. You know, hybrid cloud a reality. I mean, we know the public cloud here, it's real. >> Yeah. >> With the Kubernetes piece, doesn't that kind of pull together some... Really enable some of that hybrid strategy for OpenStack, which I felt like two or three years ago it was like, "No, no, no. Don't do public cloud. >> John Griffith: Yeah. >> "It's expensive and (laughter) hard or something. "And yeah, infrastructure's easy and free, right?" (laughter) Wait, no. I think I missed that somewhere. (laughter) But yeah, it feels like you're right at the space that enables some of those hybrid and multicloud capabilities. >> Well, and the thing that's interesting is if you look at things like Swarm and Kubernetes and stuff like that, right? One of the first things that they all build are cloud providers, whether OpenStack, AWS, they're all in there, right? So for Swarm, it's pretty awesome. I did a demo about a year ago of using Amazon and using OpenStack, right? And running the exact same workloads the exact same way with the exact same tools, all from Docker machine and Swarm. It was fantastic, and now you can do that with Kubernetes. I mean, now that's just... There's nothing impressive. It's just normal, right? (Kendall Nelson laughs) That's what you do. (laughs) >> I love the demos this morning because they actually were, they were CLI. They were command-line driven, right? >> Kendall Nelson: Yeah. >> I felt at some conferences, you see kind of wizards and GUIs and things like that, but here they-- >> Yeah. >> They blew up the terminal and you were typing. It looked like you were actually typing. >> Kendall Nelson: Oh, yeah. (laughter) >> John Griffith: She was. >> And I actually like the other demo that went on this morning too, where they... The interop demo, right? >> Mm-hmm. >> John Troyer: They spun up 15 different OpenStack clouds-- >> Yeah. >> From different providers on the fly, right there, and then hooked up a CockroachDB, a huge cluster with all of them, right? >> Kendall Nelson: Yeah. >> Can you maybe talk... I just described it, but can you maybe talk a little bit about... That seemed actually super cool and surprising that that would happen that... You could script all that that it could real-time on stage. >> Yeah. I don't know if you, like, noticed, but after our little flub-up (laughs) some of the people during the interop challenge, they would raise their hand like, "Oh, yeah. I'm ready." And then there were some people that didn't raise their hands. Like, I'm sure things went wrong (John Troyer laughs) and with other people, too. So it was kind of interesting to see that it's really happening. There are people succeeding and not quite gettin' there and it definitely is all on the fly, for sure. >> Well, we talked yesterday to CTO Red Hat, and he was talking same thing. No, it's simpler, but you're still making a complicated distributed computing system. >> Kendall Nelson: Oh, definitely. >> Right? There are a lot of... This is not a... There are a lot of moving parts here. >> Kendall Nelson: Yeah. >> Yeah. >> Well, it's funny, 'cause I've been around for a while, right? So I remember what it was like to actually build these things on your own. (laughs) Right? And this is way better, (laughter) so-- >> So it gets your seal of approval? We have reached a point of-- >> Yeah. >> Of usability and maintainability? >> Yeah, and it's just going to keep gettin' better, right? You know, like the interop challenge, the thing that's awesome there is, so they use Ansible, and they talk to 20 different clouds and-- >> Kendall Nelson: Yeah. >> And it works. I mean, it's awesome. It's great. >> Kendall Nelson: Yeah. >> So I guess I'm hearing containers didn't kill OpenStack, as a matter of fact, it might enable the next generation-- >> Kendall Nelson: Yeah. >> Of what's going on, so-- >> John Griffith: Yeah. >> How about serverless? When do we get to see that in here? I actually was lookin' real quick. There's a Functions as a Service session that somebody's doing, but any commentary as to where that fits into OpenStack? >> Go ahead. (laughs) >> So I'm kind of mixed on the serverless stuff, especially in a... In a public cloud, I get it, 'cause then I just call it somebody else's server, right? >> Stu Miniman: Yeah. >> In a private context, it's something that I haven't really quite wrapped my head around yet. I think it's going to happen. I mean, there's no doubt about it. >> Kendall Nelson: Yeah. >> I just don't know exactly what that looks like for me. I'm more interested right now in figuring out how to do awesome storage in things like Kubernetes and stuff like that, and then once we get past that, then I'll start thinking about serverless. >> Yeah. >> Yeah. >> 'Cause where I guess I see is... At like an IoT edge use case where I'm leveraging a container architecture that's serverless driven, that's where-- >> Yeah. >> It kind of fits, and sometimes that seems to be an extension of the public cloud, rather than... To the edge of the public cloud rather than the data center driven-- >> John Griffith: Yeah. >> But yeah. >> Well, that's kind of interesting, actually, because in that context, I do have some experience with some folks that are deploying that model now, and what they're doing is they're doing a mini OpenStack deployment on the edge-- >> Stu Miniman: Yep. >> And using Cinder and Instance and everything else, and then pushing, and as soon as they push that out to the public, they destroy what they had, and they start over, right? And so it's really... It's actually really interesting. And the economics, depending on the scale and everything else, you start adding it up, it's phenomenal, so... >> Well, you two are both plugged into the user community, the hands-on community. What's the mood of the community this year? Like I said, my first year, everybody seems engaged. I've just run in randomly to people that are spinning up their first clouds right now in 2017. So it seems like there's a lot of people here for the first time excited to get started. What do you think the mood of the user community is like? >> I think it's pretty good. I actually... So at the beginning of the week, I helped to run the OpenStack Upstream Institute, which is teaching people how to contribute to the Upstream Community. And there were a fair amount of users there. There are normally a lot of operators and then just a set of devs, and it seemed like there were a lot more operators and users looking that weren't originally interested in contributing Upstream that are now looking into those things. And at our... We had a presence at DockerCon, actually. We had a booth there, and there were a ton of users that were coming and talking to us, and like, "How can I use OpenStack with containers?" So it's, like, getting more interest with every day and growing rapidly, so... >> That's great. >> Yeah. >> All right. Well, want to thank both of you for joining us. I think this went flawless on the interview. (laughter) And yeah, thanks so much. >> Yeah. >> All these things happen... Live is forgiving, as we say on theCUBE and absolutely going forward. So thanks so much for joining us. >> John Griffith: Thank you. John and I will be back with more coverage here from the OpenStack Summit in Boston. You're watching theCUBE. (funky techno music)

Published Date : May 9 2017

SUMMARY :

Brought to you by the OpenStack Foundation, Happy to welcome to the program And John Griffith, who's a Principal Engineer When we have any slip-ups when we're live, That might've happened this morning a little bit. Sometimes the demo gods about kind of the goals, and kind of showing the multiple backend capabilities So and then plug that into that Kubernetes deployment I saw some comments on Twitter that were like, You do want to have some people that know what they're doing I love that the container stuff was called ironic. When you think OpenStack at first, So maybe you could give us a little bit more of a means to an end to figure out and open infrastructure is really interesting and growing. that are here at the show and how are they starting and the number of people that came pouring out and not just in containers of course, Well, in fact, I mean, OpenStack has that history, that previous generations of technology have had to solve. Yeah. on the Cinder side as well as the networking side, too-- in the Kubernetes communities and the docker communities Yeah. and you learn a lot over the last seven years of OpenStack. and I think there's a lot to be learned from what we've done Depending on the use case, you can plug and play So is that kind of a path forward And kind of to your point about hybrid cloud, I mean, we know the public cloud here, With the Kubernetes piece, doesn't that kind of that enables some of those hybrid Well, and the thing that's interesting I love the demos this morning because they actually were, They blew up the terminal and you were typing. Kendall Nelson: Oh, yeah. And I actually like the other demo and surprising that that would happen that... and it definitely is all on the fly, for sure. and he was talking same thing. There are a lot of moving parts here. to actually build these things on your own. And it works. I actually was lookin' real quick. (laughs) So I'm kind of mixed on the serverless stuff, I think it's going to happen. and then once we get past that, At like an IoT edge use case It kind of fits, and sometimes that seems to be and as soon as they push that out to the public, here for the first time excited to get started. So at the beginning of the week, I think this went flawless on the interview. and absolutely going forward. John and I will be back with more coverage here

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
John GriffithPERSON

0.99+

JohnPERSON

0.99+

Stu MinimanPERSON

0.99+

John TroyerPERSON

0.99+

Kendall NelsonPERSON

0.99+

2017DATE

0.99+

15QUANTITY

0.99+

Red HatORGANIZATION

0.99+

KendallPERSON

0.99+

OpenStack FoundationORGANIZATION

0.99+

AT&TORGANIZATION

0.99+

BostonLOCATION

0.99+

AmazonORGANIZATION

0.99+

twoDATE

0.99+

Boston, MassachusettsLOCATION

0.99+

yesterdayDATE

0.99+

twoQUANTITY

0.99+

AWSORGANIZATION

0.99+

bothQUANTITY

0.99+

GoogleORGANIZATION

0.99+

OpenStack SummitEVENT

0.99+

OpenStackTITLE

0.99+

one thingQUANTITY

0.98+

20 different cloudsQUANTITY

0.98+

this yearDATE

0.98+

three years agoDATE

0.98+

one winnerQUANTITY

0.98+

first timeQUANTITY

0.98+

first yearQUANTITY

0.98+

OpenStack Upstream InstituteORGANIZATION

0.97+

OneQUANTITY

0.97+

OpenStack Summit 2017EVENT

0.97+

SolidFireORGANIZATION

0.96+

CTO Red HatORGANIZATION

0.96+

oneQUANTITY

0.95+

NetAppORGANIZATION

0.95+

first cloudsQUANTITY

0.94+

CinderORGANIZATION

0.93+

first summitQUANTITY

0.93+

couple of years agoDATE

0.93+

CinderTITLE

0.91+

KubernetesTITLE

0.91+

this morningDATE

0.91+

Matt Hayes, Attunity - #SAPPHIRENOW - #theCUBE


 

>> Voiceover: From Orlando, Florida, it's theCube, covering Sapphire now, headline sponsored by SAP, Hana, the Cloud, the leader in Platform as a service, with support from Console Inc, the cloud internet company, now here are your hosts, John Furrier, and Peter Burris. >> Hey welcome back everyone, we are here live at SAP Sapphire in Orlando, Florida, this is theCube, Silicon Angle Media's flagship program, we go out to the events and extract the scene of the noise, I'm John Furrier with my co-host Peter Burris, our next guest is Matt Hayes, VP of SAP Business, Attunity, welcome to theCube. >> Thank you, thank you so much. >> So great to have you on, get the update on Attunity. You've been on theCube many times, you guys have been great supporters of theCube, appreciate that, and want to get a little update, so obviously Attunity, it's all about big data, Hana is a big data machine, it does a lot of things fast, certainly analystics being talked about here, but how do you guys fit in with SAP, what's your role here? How does it fit? >> Sure sure, well I think this is our ninth of tenth time here at Sapphire, we've been in the ecosystem for quite some time, our Gold Client solution is really designed to help SAP customers move data from production to non-production systems, and now, more throughout the landscape, or the enterprise even, so as SAP's evolved, we've evolved with SAP and a lot of our customers get a lot of value by taking real-life production data out of their production system, and moving that to non-production systems, training, sandbox, test environments. Some customer's use it for troubleshooting, you know, you have a problem with some data in production, you can bring that into a non-production system and test that, and some scrambling capabilities as well. Most SAP customers have a lot of risk if their copying the production data into non-production systems that are less secure, less regulated, so some of the data scrambling or obfuscation techniques that we have make it so that that data can safely go into those non-production systems and be protected. >> What's been your evolution? I mean obviously you mentioned you guys been evolving with SAP, so what is the current evolution? What's the highlight, what's the focus? >> So, obviously Hana has been the focus for quite some time and it still is, more and more of our customer's are moving to Hana, and adopting that technology, less so with S4, because that's kind of a newer phase, so a lot of people are making the two step approach of going to Hana, and then looking at S4, but Cloud as well, we can really aid in that Cloud enablement, because the scrambling. When we can scramble that sensitive data, it helps customer's feel comfortable and confident that they can put vendor and customer and other sensitive data in a Cloud based environment. >> And where are you guys winning? So what's the main thrust of why you guys are doing business in the SAP ecosystem. >> So with SAP you're always looking to do things better. And when you do things better, it results in cost savings on your project, and if you could save money on your project and do things smarter, you free up peoples time to focus on the fun projects, to focus on Hana, to focus on Cloud, and with our software, with our technology, by copying that data and providing real production data in the development and sandbox environments, we're impacting and improving the change control processes, we're impacting and improving the testing processes within companies, we're enabling some automation of some of those processes. >> Getting things up and running faster in the POC or Development environment? Real data? >> Yeah because you can be more nimble if you have real production data that you're working with while you're prototyping, you can make changes faster, you can be more confident in what you're promoting to production, you can be avoiding having a bad transport or a bad change going into the production environment and impact your business. So if you're not having to worry about that kind of stuff, you can worry about the fun stuff. You can look at Hana, you can look at Cloud, you can look at some of the newer technologies that SAP is providing. >> So, you guys grew up and matured, as you said, you've grown as SAP has grown, SAP used to be regarded as largely an applications company, now SAP, you know the S4, Hana platform, is a platform, and SAP's talking about partnerships, they're talking about making this whole platform even more available, accessible, to new developers through the Apple partnership etcetera, creates a new dynamic for you guys who have historically been focused on being able to automate the movement of data, certain data, certain processes, how are you preparing to potentially have to accommodate an accelerated rate of digitization as a consequence of all these partners, now working at SAP as a platform? >> That's a great question, and it's actually, it aligns with Attunity's vision and direction as well, so SAP, like you said, used to be an applications company, now it's an applications company with a full platform integrated all the way around, and Attunity is the same way, we came to Attunity through acquisition, and bringing our SAP Gold Client technology, but now we're expanding that, we're expanding it so that we can provide SAP data to other parts of the enterprise, we can combine data, we can combine highly structured SAP data with unstructured data, such as IOT Data, or social media streams in Hadoop, so the big data vision for Attunity is what's key, and right now we're in the process of blending what we do with SAP, with big data, which happens to align with SAP's platform. You know SAP is obviously helping customers move to Hana on the application side, but there's a whole analytics realm to it, that's even a bigger part of SAP's business right now, and that's kind of where we fit in. We're looking at those technologies, we're looking at how we can get data in and out of Hadoop, SAP Data in and out of Hadoop, how we can blend that with non SAP Data, to provide business value to SAP customers through that. >> Are you guys mainly focused on Fren, or are you also helping customer's move stuff into and out of Clouds and inside a hybrid cloud environment? >> Both actually, most SAP customer's are on Premise, so most of our focus is on Premise, we've seen a lot of customers move to the Cloud, either partial or completely. For those customers, they can use our technology the exact same way, and Attunity's replication software works on Prem and in the Cloud as well. So Cloud is definitely a big focus. Also, our relationship with Amazon, and Red Shift, there's a lot of Cloud capabilities and needs for moving data between on Premise and the Cloud, and back and forth. >> As businesses build increasingly complex workloads, which they clearly are, from a business stand point, they're trying to simplify the underlying infrastructure and technology, but they're trying to support increasingly complex types of work. How do you anticipate that the ecosystems ability to be able to map this on to technology is going to impact the role that data movement plays. Let me be a little bit more specific, historically, there were certain rules about how much data could be moved and how much work could be done in a single or a group of transactions. We anticipate that the lost art of data architecture across distances, more complex applications, it's going to become more important, are you being asked by your customers to help them think through, in a global basis, the challenges of data movement, as a set of flows within the enterprise, and not just point to point types of integration? >> I think we're starting to see that. I think it's definitely an evolving aspect of what's going on as, some low level examples that I can share with you on that are, we have some large global customers that have regional SAP environments, they might run one for North America, one for South America, Europe, and Asia-Pacific. Well they're consolidating them, some of those restrictions have been removed and now they're working on consolidating those regional instances into one global SAP instance. And if they're using that as a catalyst to move to Hana, that's really where you're getting into that realm where you're taking pieces that used to have to be distributed and broken up, and bringing them together, and if you can bring the structured enterprise application data on the SAP side together, now you can start moving towards some of the other aspects of the data like the analytics pieces. >> But you still have to worry about IOT, which is where are we going to process the data? Are we going to bring it back? Are we going to do it locally? You're worrying about sources external to your business, how you're going to move them in so that their intellectual property is controlled, my intellectual property is controlled, there's a lot of work that has to go in to thinking about the role that data movement is going to play within business design. >> Absolutely, and I actually think that that's part of the pieces that need to evolve over the next couple of years, it's kind of like the first time that you were here and heard about Hana, and here we are eight years later, and we understand the vision and the roadmap that that's played. That's happening now too, when you talk to SAP customers, some of them have clearly adopted the Hadoop technology and figured out how to make that work. You've got SAP Vora technology to bring data in and out of Hana from Hadoop, but that stuff is all brand new, we're not talking to a lot of customers that are using those. They're on the roadmap, they're looking at ways to do it, how to do it, but right now it's part of the roadmap. I think what's going to be key for us at Attunity is really helping customers blend that data, that IOT data, that social media stream data, with structured data from SAP. If I can take my customer master out of SAP and have that participate with IOT data, or if I can take my equipment master data out of SAP and combine that with Vlog data, IOT Data, I can start really doing predictive analytics, and if I can do those predictive analytics, with that unstructured data, I can use that to automate features within my enterprise application, so for example, if I know a part's going to fail, between 500 and 1000 hours of use, then I can proactively create maintenance tickets, or service notifications or something, so we can repair the device before it actually breaks. >> So talk about the, for the folks out there who want to kind of know the Attunity story a bit more, take a minute to explain kind of where you fit in, and where you, where SAP hands off to you, and where you fit specifically because big data management, there's are important technologies, but some say, well doesn't SAP have that? So where's the hand off? Where do you guys sister up against these guys the best? How should customers, or potential customers, know when to call you and what not. >> So, I often refer to SAP as a 747 Jumbo Jet right? So it's the big plane, and it's got everything in it. Anything at all, and all that you need to do, you could probably do it somewhere inside of SAP. There's an application for it, there's a platform for it, there's now a database for it, there's everything. So, a lot of customers work only in that realm, but there's a lot of customers that work outside of that too, SAP's an important part of the enterprise landscape, but there's other pieces too. >> People are nibbling at the solution, not fully baked out SAP. >> Right, right. >> You do one App. >> Yeah, and SAP's great at providing tools for example, to load data into Hana, there's a lot of capability to take non-SAP source data and bring it into Hana. But, what if you want to move that data around? What if you wanted to do some things different with it? What if you wanted to move some data out and back in? What if you want to, you know there's just a lot of things you want to be able to do with the data, and if you're all in on the SAP side, and you're all into the Hana platform, and that's what you're doing, you've probably got all the pieces to do that. But if you've got some pieces that are outside of that, and you need it all to play together, that's where Attunity comes in great, because Attunity has that, we're impartial to that, we can take data and move it around wherever, of course SAP is a really important part of our play in what we do, but we need to understand what the customers are doing, and everyday we talk to customers that are always looking, >> Give an example, give it a good example of that, customer that you've worked with, use a case. >> Yeah, let's see, most of my examples are going to be SAP centric, >> That's okay. >> We've got a couple of customers, I don't know if I can mention their names, where they come to us and say, "Hey we've got all this SAP data, and we might have 30 different SAP systems and we need all of that SAP data to pull together for us to be able to analyze it, and then we have non-SAP data that we want to partner with that as well. There might be terra-data, there might be Hadoop, might be some Oracle applications that are external that touch in, and these companies have these complex visions of figuring out how to do it, so when you look at Attunity and what we provide, we've got all these great solutions, we've got the replication technology, we've got the data model on the SAP side to copy the SAP data, we now have the data warehouse automation solution with Compose that keeps finding niche ways to work in, to be highly viable. >> But the main purpose is moving data around within SAP, give or take the Jumbo Jet, or 737. >> Well sometimes you just got to go down to the store and buy a half gallon of milk, right? And you're not going to jump on a Jumbo Jet to go down and get the milk. >> Right. >> You need tooling that makes it easy to get it. >> Got milk, it's the new slogan. Got data. >> Well there you go, the marketing side now. >> Okay so, vibe of the show, what's your take at SAP here, you've been here nine years, you've been looking around the landscape, you guys have been evolving with it, certainly it's exciting now. You're hearing really concrete examples of SAP showing some of the dashboards that McDermott's been showing every year, I remember when the iPad came out, "Oh the iPad's the most amazing thing", of course analytics is pretty obvious. That stuffs now coming to fruition, so there's a lot of growth going on, what's your vibe of the show? You seeing that, can you share any color commentary? Hallway conversations? >> Yeah, Sapphire's, you know, you get everything. You know it's like you said, the half gallon of milk, well we're at the supermarket right now, you need milk, you need eggs, you need flowers, whatever you need is here. >> The cake can be baked, if you have all the ingredients, Steve Job's says "put good frosting on it". (laughs) That's a UX. >> Lots of butter and lots of sugar. But yeah there's so many different focuses here at Sapphire, that it's a very broad show and you have an opportunity, for us it's a great opportunity to work with our partners closer, and it's also a good opportunity to talk to out customers, and certain levels within our customers, CIO's, VIP's. >> They're all together, they're all here. >> Right exactly, and you get to hear what their broader vision is, because every day we're talking to customers, and yeah we're hearing their broader vision, but here we hear more of it in a very confined space, and we get to map that up against our roadmap and see what we're doing and kind of say, yeah we're on the right track, I mean we need to be on the right track in two fronts. First and foremost with our customers, and second of all with SAP. And part of our long term success has been watching SAP and saying "okay, we can see where they're going with this, we can see where they're going with this, and this one they're driving really fast on, we've got to get on this track, you know, Hana. >> So the folks watching that aren't here, any highlights that you'd like to share? >> Wow, well you guys said yourself, Reggie Jackson was here the other night, that was pretty fantastic. I'm a huge baseball fan, go Cubby's, but it was fun to see Reggie Jackson. >> Park Ball, you know you had a share of calamities, I'm a Red Sox's man. >> Yeah you're wounds have been healed though (laughs). >> We've had the Holy Water been thrown from Babe Ruth. It was great that Reggie though was interesting, because we talk about a baseball concept that was about the unwritten rules, we saw Batista get cold-cocked a couple of days ago, and it brought up this whole unwritten rules, and we kind of had a tie in to business, which is the rules are changing, certainly in the business that we're in, and he talked about the unwritten rules of Baseball and at the end he said, "No, they aren't unwritten rules, they're written" And he was hardcore like MLB should not be messing with the game. >> Yeah. >> I mean Batista got fined, I think, what, five games? Was that the key mount? >> Yeah, yup. >> Didn't he get one game, and the guy that punched him got eight. >> That's right, he got it, eight games, that's right. So okay, MLB's putting pressure on them for structuring the game, should we let this stuff go? We came in late, second base, okay, what's your take on that? >> Well I mean as a Baseball fan I love the unwritten rules, I love the fact that the players police the game. >> Well that's what he was talking about, in his mind that's exactly what he was saying. That the rules amongst the players for policing the game are very, very well understood, and if Baseball tries to legislate and take it out of the players hands, it's going to lead to a whole bunch of chaotic behavior, and it's probably right. >> Yeah, and you've already got replay, and what was it, the Met's guy said he misses arguing with the umpires, and the next day he got thrown out (laughs). >> Probably means he wanted to get thrown out, needed a day off. What's going on with Attunity, what's next for you guys? What's next show, what's put on the business,. >> So, show-wise this is one of our most important shows of the year, events of the year, well I'll always be a tech-head, tech-heads are very targeted audience for us, we have a new version of Gold Client that's out a bit later this month, more under the hood stuff, just making things faster, and aligning it better with Hana and things like that, but we're really focused on integrating the solutions at Attunity right now. I mean you look at Attunity and Attunity had grown by acquisition, the RepliWeb acquisition in '11, and the acquisition of my company in 2013, we've added Compose, we've added Visibility, so now we've got this breath of solutions here and we're now knitting them together, and they're really coming together nicely. The Compose product, the data warehouse automation, I mean it's a new concept, but every time we show it to somebody they love it. You can't really point it at a SAP database, cause the data mile's too complex, but for data warehouse's of applications that have simple data models where you just need to do some data warehousing, basic data warehouses, it's phenomenal. And we've even figured out with SAP how we can break down certain aspects of that data, like just the financial data. If we just break down the financial data, can we create some replication and some change data capture there using the replicate technology and then feed it into Compose, provide a simple data warehouse solution that basic users can use. You know, you've got your BW, you've got your business objects and all that, but there's always that lower level, we're always talking to customers where they're still doing stuff like downloading contents of tables into spreadsheets and working with it, so Compose kind of a niche there. The visibility being able to identify what data's being used and what's not used, we're looking at combining that and pointing that at an SAP system and combining that with archiving technology and data retention technologies to figure out how we can tell a customer, alright here's your data retention policies, but here's where you're touching and not touching your data, and how can we move that around and get that out. >> Great stuff Matt, thanks for coming on theCube, appreciate that, if anything else I got to congratulate you on your success and, again, it's early stages and it's just going to get bigger and bigger, you know having that robust platform, and remember, not everyone runs their entire business on SAP, so there's a lot of other data warehouses coming round the corner. >> Yeah that's for sure, and we're well positioned and well aligned to deal with all types of data, me as an SAP guy, I love working with SAP data, but we've got a broader vision, and I think our broader visions really align nicely with what our customers want. >> Inter-operating the data, making it work for you, Got Data's new slogan here on theCube, we're going to coin that, 'Got Milk', 'Got Data'. Thanks to Peter Burris, bringing the magic here on theCube, we are live in Orlando, you're watching theCube. (techno music) >> Voiceover: There'll be millions of people in the near future that will want to be involved in their own personal well-being and wellness.

Published Date : May 19 2016

SUMMARY :

the Cloud, the leader in the scene of the noise, So great to have you on, regulated, so some of the of going to Hana, and then of why you guys are doing and do things smarter, you bad change going into the is the same way, we came to and in the Cloud as well. the ecosystems ability to of the data like the analytics pieces. in so that their intellectual and the roadmap that that's played. kind of know the Attunity all that you need to do, the solution, not fully baked probably got all the pieces to do that. it a good example of that, how to do it, so when you SAP, give or take the Jumbo Jet, or 737. and get the milk. makes it easy to get it. Got milk, it's the new slogan. the marketing side now. some of the dashboards that said, the half gallon of you have all the ingredients, broad show and you have got to get on this track, you know, Hana. Wow, well you guys said Park Ball, you know you Yeah you're wounds have the unwritten rules, we and the guy that punched the game, should we let this stuff go? rules, I love the fact that That the rules amongst the and the next day he got put on the business,. and the acquisition of my company in 2013, to congratulate you on your and we're well positioned bringing the magic here on millions of people in the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

Matt HayesPERSON

0.99+

Reggie JacksonPERSON

0.99+

BatistaPERSON

0.99+

AmazonORGANIZATION

0.99+

2013DATE

0.99+

Red SoxORGANIZATION

0.99+

eight gamesQUANTITY

0.99+

AppleORGANIZATION

0.99+

OrlandoLOCATION

0.99+

Console IncORGANIZATION

0.99+

MattPERSON

0.99+

five gamesQUANTITY

0.99+

John FurrierPERSON

0.99+

ReggiePERSON

0.99+

eightQUANTITY

0.99+

EuropeLOCATION

0.99+

one gameQUANTITY

0.99+

iPadCOMMERCIAL_ITEM

0.99+

ComposeORGANIZATION

0.99+

nine yearsQUANTITY

0.99+

HanaPERSON

0.99+

SapphireORGANIZATION

0.99+

Silicon Angle MediaORGANIZATION

0.99+

SAPORGANIZATION

0.99+

ninthQUANTITY

0.99+

South AmericaLOCATION

0.99+

eight years laterDATE

0.99+

Orlando, FloridaLOCATION

0.99+

1000 hoursQUANTITY

0.98+

RepliWebORGANIZATION

0.98+

'11DATE

0.98+

North AmericaLOCATION

0.98+

BothQUANTITY

0.98+

HanaTITLE

0.98+

Babe RuthPERSON

0.98+

millions of peopleQUANTITY

0.98+

FirstQUANTITY

0.98+

Park BallPERSON

0.98+

oneQUANTITY

0.98+

two stepQUANTITY

0.98+

HadoopTITLE

0.98+

two frontsQUANTITY

0.97+

BaseballORGANIZATION

0.97+

first timeQUANTITY

0.97+

HanaORGANIZATION

0.97+

Steve JobPERSON

0.97+

MetORGANIZATION

0.97+

BaseballTITLE

0.97+

second baseQUANTITY

0.97+

McDermottPERSON

0.96+

500QUANTITY

0.96+

secondQUANTITY

0.96+

theCubeORGANIZATION

0.95+

747 Jumbo JetCOMMERCIAL_ITEM

0.95+

OracleORGANIZATION

0.95+

737COMMERCIAL_ITEM

0.94+

PremiseTITLE

0.94+

half gallon of milkQUANTITY

0.94+

AttunityORGANIZATION

0.94+

CloudTITLE

0.94+

later this monthDATE

0.93+

Red ShiftORGANIZATION

0.93+

SAP BusinessORGANIZATION

0.93+