
Search Results for IDE:

Erik Brynjolfsson, MIT & Andrew McAfee, MIT - MIT IDE 2015 - #theCUBE


 

>> Live from the Congress Centre in London, England, it's theCUBE at MIT and the Digital Economy: The Second Machine Age, brought to you by headline sponsor MIT. >> All right, we're back. This is Dave, along with Stu Miniman. Brynjolfsson and McAfee are back here after the day; each of them gave a detailed presentation today related to the book. Gentlemen, welcome back. Good to see you. >> Good to see you again. >> I want to start with you on a question, that last question that Andy got from a woman, and you'll see why. You dodged the question, by the way. For the record: hanging out with you guys makes us smarter. So the question was around education. She expressed real concern, particularly around education for younger people; I guess by the time they get to secondary education, it's too late. You talked in the book about the three R's: we need to read, obviously we need to write, and we need to be able to do arithmetic in our heads. What's your take on that question? >> You know, those basics are table stakes. I mean, you have to be able to do that kind of stuff, but the real payoff comes from creativity, from doing something really new and original. The good news is that most people love being creative and original. You look at a kid playing, whether they're two or three years old: you put some blocks in front of them, and they start building, creating things. And our school system, as Andy was saying in his talk, is such that many of the schools are almost explicitly designed to tamp that down, to get people to conform, to get them all to be consistent, which is exactly what Henry Ford needed for his factories, for people to work on the assembly line. But now that machines can do that repetitive, consistent kind of work, it's time to let creativity flourish again.
And that's what you've got to do on top of those basic skills. >> It's pretty clear that our current education model is really hard for some kids to accept. They just want to run around; they want to go express themselves; they want to poke at the world. That's not what that grid full of desks is designed to do. >> We call that ADD now. >> Yeah, I have one Montessori kid; he's by far the most creative, most autodidactic. You're a Montessori kid yourself. Did Maria Montessori have it right? >> Look, I'm not an education researcher; I am a Montessori kid. I think she got it right, and she was able to demonstrate that she could take kids out of the slums who were at the time considered mentally defective (there was this notion that the reason the poor are poor is that they were just mentally insufficient) and show their learning and their progress. So I completely agree with Erik: all of our students need to be able to accomplish the basics, to read, to write, to do basic math. What Montessori taught me is that you can get there via this completely kind of hippie, freeform route, and I'm really happy for that education. >> Talk about you and your students: you brainstorm on things that people can do that computers can't. >> Yeah. >> This is an exercise that you do pretty regularly. How has that evolved? >> We do it more systematically now; it's a kind of dinner conversation we can't get away from. So we're hearing a lot, and there are recurring patterns that emerge, and you heard some of them today: interpersonal skills, creativity, physical coordination. What some of these have in common is that they're skills we've evolved over literally hundreds of thousands or millions of years.
And there are billions of neurons devoted to some of these skills: coordination, vision, interpersonal skills. Other skills, like arithmetic, are really very recent, and we don't have a lot of neurons devoted to them. So it's not surprising that machines can pick up those more recent skills more easily than the more innate ones. Now, over time, will machines be able to do more of those other skills? I suspect they probably will; exactly how long it will take is a question for the neuroscientists and the AI researchers. >> That made me think about not just diagnosing a patient but getting them to comply with the treatment regimen. Take your medicine, eat better, stop smoking: we know the compliance rates are terrible, even for demonstrably good ideas. How do we improve them? Is it a technology solution? A little bit. Is it an interpersonal solution? Absolutely. I think we need deeply empathetic, deeply capable people to help each other become healthier, become better people. The right program might come from an algorithm, but that algorithm and the computer that spits it out are going to be lousy at getting most people to comply. We need human beings for that. >> So in the technology space, we've been evangelizing that people need to get rid of what we call the undifferentiated heavy lifting, and I wonder if there's an opportunity in our personal lives. You think about how much time we spend: what are we doing for dinner, when are we running the kids around, how do I get dressed, all the different things we have to handle. We waste so much brain power on these things, and there are opportunities to get rid of them. Welcome, Jetsons. Actually, no, they didn't have these problems. Can technology help us with some of that? >> I think people should actually help us with part of it.
You know, I actually have a personal trainer, and he's one of the last people I would ever exclude from my life, because he's the guy who can actually help me lead a healthier life, and I place so much value on that. >> I like your metaphor that this is undifferentiated stuff: it's not the stuff that makes you great, it's just stuff you have to do. I remember having a conversation with folks at SAP, and they said, we sure would like to brag about this, but we take away a lot of the stuff that isn't what differentiates companies: the back-office stuff, getting your basic bookkeeping, accounting, and supply chain work done. And it's interesting; I think we could apply the same thing to our personal lives. Let's get rid of that underbrush of necessity stuff so we can focus on the things we are uniquely good at. >> All right, so when we have run out, when I need garbage bags or toilet paper, honestly, a drone should show up and drop that off for me. >> So I wonder, when I look at the self-driving car that you've talked about: will we reach a point where not only do we trust computers in the car, trust cars to drive themselves, but where we just can't trust humans anymore, because self-driving cars are so much safer and better than what we've got? Is that coming in the next twenty years? >> I personally think so, and the first time is deeply weird and unsettling. I think both of us were a little bit terrified the first time we rode in the Google autonomous car and the Googler driving it hit the button and took his hands off the controls. That was a weird moment. I liken it to when I was learning to scuba dive: the very first breath you take underwater is deeply unsettling, because you're not supposed to be doing this. After a few breaths, it becomes background. >> But you know, I was driving to the airport to come here, and I looked in the lane next to me.
There's a woman texting, and I'd be much less terrified if she wasn't driving, if the computer were doing it. >> That's the right way to think about it. I think the time will come, and it may not be that far away, when the norms shift exactly the other way around and it's considered risky to have a human at the wheel, and the safety feature the insurance company will want is to have a machine there. You know, I think this is a temporary phase; with new technologies, we become frightened of them. When microwave ovens first came out, they were weird and wonderful; now most of us think of them as really kind of boring and routine. The same thing is going to happen with self-driving cars. >> There have been accidents, though; that's the story, at least. >> But none of them were the car's fault, of course, according to the story. >> What's clear is that they're safer than the human driver as of today, and they are only going to get safer. We're not evolving that quickly. >> But you got a question about whether self-driving cars will drive the way we do; we laughed because we live in Boston. >> You know, eventually. I think it's fair to say that there's a big difference: the first ninety-five or ninety-nine percent of driving is a lot easier; that last one percent, or one-hundredth of one percent, becomes much, much harder. Just last week there was a car that drove across the United States, but there were half a dozen times when a human had to intervene in particularly unusual situations. And I think, because of our norms and expectations, it won't be enough for a self-driving car to be safer than humans; we'll need it to be ten times safer or something like that. >> So maybe, like the chess example, the ultimate combination is a combination of human and self-driving car. >> Maybe, situation after situation.
I think that's going to be the case, and I'll go back to medical diagnosis. At least for the short to medium term, I would like to have a pair of human eyes look over the treatment plan that the completely digital diagnostician spits out. Maybe over time it will become clear that there are no flaws in it and we can go totally digital, but for now we can combine the two. >> I think in most cases that's right. But in the case of self-driving cars in particular, and in other situations where humans have to take over for a machine that's failing in some way, like aircraft when the autopilot isn't doing things right, it turns out that the transition can be very, very rocky. Expecting a human on call to quickly grasp what's going on in the middle of a crisis or a freak-out isn't reasonable; that's not the best time to be switching over. So there's a real human-factors issue there: how you design it so that not only can the human take over, but the transition is kind of seamless. And that's not easy. >> Okay, so maybe self-driving cars, that doesn't happen. But back to the medical example: maybe Watson will replace Dr. Welby, but not Dr. Oz. >> Not the interaction, or any nurse, or somebody who actually gets me to comply, again. But I also do think that Dr. Watson can and should take over for people in the developing world who, instead of access to first-world medical care, have only got a smartphone. We're going to be able to deliver absolutely top-shelf, world-class medical diagnostics to those people fairly quickly. Of course we should do that. >> And then combine it with a coach who gets people to take the prescription when they're supposed to, to change their eating habits, or communities: hey, your peers are all losing weight, why aren't you? >> I want to ask you something; our time is running short, and you've been gracious with your time.
In your talk you were very outspoken about a couple of things, which I would summarize as: Elon Musk, Bill Gates, and Stephen Hawking, you're paranoid; and there's no privacy on the Internet, so get over it. >> I didn't say there's no privacy, and I think it's important to be clear on this. I think privacy is really important. I do think it's a right that we have, and that we should have. What I don't want is to have a bureaucrat define my privacy rights for me and start telling companies what they can and can't do as a result. What I'd much prefer instead is to say: look, if there are things that we know companies are doing that we do not approve of, let's deal with that situation, as opposed to trying to put guardrails in place and fence off different kinds of innovative growth. >> I mean, there are two kinds of mistakes you can make. One is that you let companies do things when you should have regulated them. The other is that you regulate them preemptively when you really should have let them do things, and both kinds of errors are possible. Our sense, from looking at what's happening, is that we've thrived where we allow more permissionless innovation; we let companies do things and then go back and fix problems, rather than trying to lock in the past and the existing processes. So our leaning, in most cases though not every case, is to be a little more free, a little more open, and to recognize that there will be mistakes. It's not that we're perfectly guaranteed; there's a risk when you walk across the street, too. But go back and fix things at that point, rather than preemptively defining exactly how things are going to play out. >> Let me give you an example. If Google were to say to me, hey, Andy, unless you pay us X dollars per month, we're going to show the world your last fifty Google searches, I would completely pay to avoid that kind of blackmail, right? Your search history is incredibly personal; it reveals a lot about you.
Google is not going to do that; it would just crater their own business. So trying to fence that kind of stuff off in advance makes a lot less sense to me than relying on, and this sounds a little bit weird, a combination of for-profit companies and people with free choice; that's a really good guarantor of our freedoms and our rights. >> So you guys have a pretty good thing going; it doesn't look like you'll strangle each other anytime soon. But how do you decide who does what? Can one tell, from how you operate, when reading the book: I think that was Andy, because of what he's talking about; I think that was Erik's? >> I couldn't tell you. I think it's hard to reverse-engineer, because it gets so commingled over time. And, you know, I gave the example at the end of the talk about humans and machines working together synergistically; I think the same thing is true with Andy and me. You may disagree, but I find that we are smarter when we work together, so much smarter than when we work individually. We go and write some things on the blackboard, and I have these aha moments that I don't think I would have had just sitting by myself. Do I attribute that aha moment to Andy, or to me? It's actually due to this Borg of us working together. >> And, fundamentally, these are bumper-sticker things to say, but if after working with someone you become convinced that they respect you, that you can trust them, and, like Erik says, that you're better off together than you would be individually, it's a complete no-brainer to keep doing the work together. >> Well, we're really humbled to be here. You guys are great; your content is free and available, and we really believe in that sort of economics. So thank you very much for having us here. >> Well, it's just been a real pleasure. >> All right, everybody, we'll be back to wrap up right after this. This is theCUBE, live from London at MIT.

Published Date: Apr 10, 2015


Andrew McAfee, MIT & Erik Brynjolfsson, MIT - MIT IDE 2015 - #theCUBE


 

>> Live from the Congress Centre in London, England, it's theCUBE at MIT and the Digital Economy: The Second Machine Age, brought to you by headline sponsor MIT. >> Everybody, welcome to London. This is Dave, along with Stu Miniman, and this is theCUBE. TheCUBE goes out to the events; we extract the signal from the noise. We're very pleased to be in London, the scene of the first machine age, but we're here to talk about the second machine age. Andrew McAfee and Erik Brynjolfsson, gentlemen, first of all, congratulations on this fantastic book. It's been getting great acclaim; it's a wonderful book if you haven't read it. Andrew, maybe you could hold it up for our audience here: The Second Machine Age. >> And Dave, to start off, thanks to you for being able to pronounce both of our names correctly; that's just about unprecedented in the history of this. >> I can probably even spell them. >> Whoa. Don't. >> So, anyway, welcome. We appreciate you guys coming on and appreciate the opportunity to talk about the book. To start with you: why London? I mean, you talked about the first machine age. Why are we back here? >> One of the things we learned when we were writing the book is how big a deal technological progress is, and the way you learn that is by going back and looking at a lot of history, trying to understand what bent the curve of human history. If we look at how advanced our civilizations are, if we look at how many people there are in the world, if we look at GDP per capita around the world, amazingly enough, we have that data going back hundreds, sometimes thousands of years. And no matter what data you're looking at, you get the same story, which is that nothing happened until the Industrial Revolution: the start of the first machine age. So for us, it's a real thrill to come to London, to come to the UK, which was the birthplace of the Industrial Revolution, the first machine age, to talk about the second.
So, Erik, there are two main vectors one takes away from the book. One is that machines have always replaced humans, and maybe they're doing so at a different rate these days. The other is the potential of continued innovation, even though many people say Moore's law is dead; you guys have laid out premises for how innovation will continue to compound. So boil it down for the layperson: what should we think about? >> Well, sure, let me just elaborate on what you said. Technology has always been destroying jobs, but it's also always been creating jobs. A couple of centuries ago, ninety percent of Americans worked in agriculture, on farms; by nineteen hundred it was down to about forty-one percent; now it's less than two percent. All those people didn't simply become unemployed. Instead, new industries were invented by Henry Ford, Steve Jobs, Bill Gates, and lots of other people, and people, rather than becoming unemployed, became redeployed. One of the concerns is: are we doing that fast enough this time around? We see a lot of bounty being created by technology: global poverty rates are falling, there's record wealth in the United States, record GDP per person. But not everyone is participating in that. Over the past ten or fifteen years, we've actually, to our surprise, seen median income fall; that's the income of the person at the fiftieth percentile, even though the overall pie is getting bigger. And one of the reasons we created the Initiative on the Digital Economy was to try to crack that nut: to understand what exactly is going on, how technology is behaving differently this time around than in earlier eras, and the part that has to do with some of the unique characteristics of digital goods. >> Well, your point in the book is that normally median income tracks productivity, and it's not doing so this time around. Should we be concerned about that? >> I think we should be concerned about it.
That's different from trying to stop or halt the course of technology; that's absolutely not something we should do. We need to let technology move ahead. We need to let the innovation happen, and if we are concerned about some of the side effects or consequences of that, fine, let's deal with those. You bring up what I think is one of the most important side effects to keep our eye on, which is exactly as you say: when we look back, for a long time the average worker was taking home more pay, a higher standard of living, decade after decade, as their productivity improved, to the point that we started to think about it as an economic law: your compensation is your marginal productivity. Fantastic. What we've noticed over the past couple of decades, and I don't think it's a coincidence that we've noticed this as the computer age has accelerated, is that there's been a decoupling: productivity continues to go up, but the wage, that average income, has stagnated. Dealing with that is one of our big challenges. >> So what do you tell your students? Become a superstar? I mean, not everybody can become a superstar. >> Well, our students can, and maybe that's what they all aspire to, right? A lot of people focus on the way technology has helped superstars reach global audiences. I had one student who wrote an app in about two or three weeks, he tells me, and within a few months he had reached a million people with that app. That's something that would probably have been impossible a couple of decades ago. But he was able to do it because he built it on top of the Facebook platform, which is on top of the Internet and a lot of other innovations that came before. So in some ways it's never been easier to become a superstar and to reach literally not just millions but even billions of people. But that's not the only successful path in the second machine age.
There are also other categories where machines just aren't very good yet. One that comes to mind is interpersonal skills, whether that's coaching, picking up on cues from people, nurturing people, caring for people. There's a whole set of professions around those categories as well. You don't have to be some superstar programmer to be successful there, and millions of jobs are needed in those categories to take care of other people. So I think there are going to be a lot of ways to be successful in the second machine age. >> I think that's really important, because one takeaway that I don't like, from some people who've looked at our work, is that only the amazing entrepreneurs or the people with 140-plus IQs are going to be successful in the second machine age. That's just not correct. As Erik says, the ability to negotiate, the ability to be empathetic to somebody, the ability to care for somebody: machines are lousy at these. They remain really important things to do; they remain economically valuable things. >> The concern is that machines won't remain lousy at them. If I'm a student listening: you said in your book that self-driving cars seemed impossible a decade ago, even five years ago, and yet they happened. So how do we predict what computers will and won't be good at? >> We basically don't. Our track record in doing that is actually fairly lousy. The mantra I've learned is that objects in the future are closer than they appear; the stuff that seems like complete sci-fi, that you'd think is never going to happen, keeps on happening. Now, that said, I am still going to be blown away the first time I see a computer-written novel that works, that I find compelling; that seems like a very human skill. But we are starting to see technologies that are good at recognizing human emotions, that can compose music, that can produce paintings I find pretty compelling. So "never say never" is another one.
Right. If I look at some of the examples lately: basic news stories, computers can do that really well; IBM's Watson machine can make recipes we would never have thought of, things we would consider creative. And in the technology space: a decade ago, computer science was where you told everybody to go; today, data science is the hot opportunity for people going into technology. Where is there good opportunity? >> Whether or not that's the job title on the business card, being a numerate person, being able to work with large amounts of data, in particular being able to work with huge amounts of data in a digital environment, in a computer: that skill is not going anywhere. >> You can think of jobs in three categories relative to technology. There are ones that are substitutes, racing against the machine; there are ones that are complements, that use technology; and there are ones that just aren't really affected yet by technology. The first category you definitely want to stay away from: a lot of routine information processing work. Those are things machines do well. >> So prepare yourself: a job as a payroll clerk is a really bad bet. >> We see those jobs disappearing, both in terms of the numbers employed and the wages they get. The second category, the jobs that complement technology, data scientist is a great example, or somebody who's an app writer or a YouTuber: those are areas where technology makes your skills more and more valuable. And there's this huge middle category, we talked earlier about interpersonal skills and a lot of physical tasks, where machines just really can't touch them much yet. Those categories have so far held up. A job like middle school football coach is a human job.
It's going to be around for a long time to come, because I have not seen the piece of technology that can inspire a group of twelve- and thirteen-year-olds to go out there and play together as a team. Now, Erik has actually been a middle school football coach, and he used a lot of technology to help him get good at that job, to the point where he was a pretty successful middle school football coach. >> We won a lot of games, and part of it was that we could learn from technology. We were able to break down film in ways that people never could have previously at the middle school level; technology has made a lot of things much cheaper and more available. >> So was it learning to be competitive versus learning how to teach kids to play football? >> Well, actually, one of the most important things in being a coach is the interpersonal connection; it's the thing I liked most about it, and that's something I think no robot can do, or at least it will be a long, long time before that inspiring halftime speech can be given by a robot. >> To me, the most interesting example, and I didn't realize this until I read your book, is that the best chess player in the world is not a computer; it's a computer and a human. Those, to me, seem to be the greatest opportunities for innovation. >> We call that
But even more, interestingly, is when they're making new discoveries in neuroscience or new kinds of business models like Uber and others, where we are seeing value creation in ways that was just not possible >> previously, and that chess example is going to spill over into the rest of the economy very, very quickly. I think about medicine and medical diagnosis. I believe that work needs to be a huge amount, more digital automated than it is today. I want Dr Watson as my primary care physician, but I do think that the real opportunities we're going to be to combine digital diagnosis, digital pattern recognition with the union skills and abilities of the human doctor. Let's bring those two skill sets together >> well, the Staton your book is. It would take a physician one hundred sixty hours a week to stay on top of reading, to stay on top of all the new That's publication. That's the >> estimate. And but there's no amount of time that watching could learn how to do that empathy that requires to communicate that and learn from a patient so that humans and machines have complementary skills. The machines are strong in some categories of humans and others, and that's why a team of humans and computers could be so >> That's the killer. Since >> the book came out, we found another great example related to automation and medicine in science. There's a really clever experiment that the IBM Watson team did with team out of Baylor. They fed the technology a couple hundred thousand papers related to one area of gene expression and proteins. And they said, Why don't you predict what the next molecules all we should look at to get this tart to get this desired response out on the computer said Okay, we think these nine are the next ones that are going to be good candidates. What they did that was so clever they only gave the computer papers that had been published through two thousand three. So then we have twelve years to see if those hypotheses turned out to be correct. 
The computer was batting about seven hundred. People say that technology could never be creative, but I think coming up with a good scientific hypothesis is an example of creative work. Let's make that work a lot more digital as well. >> So I've got a question from the crowd here. The First Industrial Revolution really helped build up a lot of the cities. The question is, with the speed and reach of the internet, is the digital economy really going to help distribute the population more? >> I don't think so. We come to cities not just because it's the only way to communicate with somebody; we actually want to be face to face with them. We want to hang out. Urbanization is a really, really powerful trend, even as our technologies have gotten more powerful. I don't think that's going to revert. But I do think that if you want to get away from the city, at least for a period of time, and go contemplate and be out in the world, you can now do that and not lose touch. >> So the distributed workforce isn't going to drive that away. It's a real phenomenon, but it's not going to mean that cities won't be popular. >> Well, the cities have two unique abilities. One is entertainment, if you'd like to socialize with people in a face-to-face way, although people do that online as well. The other is that there are still a lot of types of communication that are best done in person. And in fact, real estate values suggest that being able to be close to other experts in your field, whether it's in Silicon Valley, Hollywood, or Wall Street, is still a valuable asset. >> Erik and I travel a ton, not always together. We can get a lot of our work done via email and via digital tools. But when it comes time to actually get together and think about the next article or the next book, we need to be in the same room with the whiteboard, doing it
old school. >> I want to come back to the roots of innovation. Moore's Law, which Gordon Moore put forth, has its fiftieth anniversary next week, and it's coming to an end, at least in terms of doubling every eighteen months, though it looks like we still have some runway. Experts can't really predict, and you made the point in your book that people always underestimate humans' ability to do the things people think they can't do. But the real innovation is coming from this notion of combinatorial technologies. That's where we're going to see that continued exponential growth. What gives you confidence that that curve will continue? >> If you look at innovation as the work not of coming up with some brand-new eureka, but of putting together existing building blocks in a new and powerful way, then you should get really optimistic, because the number of building blocks out there in the world is only going up, with iPhones and sensors and bandwidth and all these different new tools. And the ability to tap into more brains around the world, to allow more people to try to do that recombination, is only increasing as well. I'm massively optimistic about innovation. >> And that's a fundamental break from the common attitude we hear: that we're using up all the low-hanging fruit of innovation, that there's some fixed stock of it, that first we get the easy innovations and then it gets harder and harder to innovate. We fundamentally disagree with that. In fact, every innovation we create creates more and more building blocks for additional innovations. And if you look historically, most of the breakthroughs have been achieved by combining previously existing innovations. So that makes me optimistic that we'll have more and more of those building blocks going forward. >> People say that we've wrung all of the benefit out of the internal combustion engine, for example, and that it's all just rounding error from here.
No, a completely autonomous car is not rounding error. That's the new thing that's going to change our lives, change our cities, change our supply chains, and it's making an entirely new use case out of that internal combustion engine. >> So you used the example of Waze in the book. Their software obviously was involved, but it really was sensors, and social media, and mobile phones and networks, just these combinations of technologies for innovation. >> None of which was an invention of the Waze team; none of which was original. They just put those elements together in a really powerful way. >> And the value of Waze isn't over; we're just scratching the surface. We could talk about what you expect going forward, though I know it's hard to predict. >> Another really important thing about Waze, in addition to the way it combined and recombined existing components, is that it's available for free on my phone. GPS would have cost hundreds of dollars a few years ago, and it wouldn't have been nearly as good as Waze. And a decade before that, it would have been infinitely expensive; you couldn't get it at any price. And this is a really important phenomenon of the digital economy that is underappreciated: so much of what we get is now available at zero cost. Our GDP measures are all the goods and services that are bought and sold. If they have zero price, they show up as a zero in GDP. >> Wikipedia, right? >> Wikipedia, yes. But zero price doesn't mean zero value. It's still quite valuable to us. And more and more, I think our metrics are not capturing the real essence of the digital economy. One of the things we're doing at the Initiative on the Digital Economy is to understand better what the right metrics will be for seeing this kind of growth. >> And I want to talk about that in the context of what you just said.
Competitiveness: competition in the digital economy is different. I wonder if you could explain that. >> One of the ways it's different, and we'll use Waze as an example here again, is that network effects become really, really powerful. Waze gets more valuable to me the more other Waze users there are out there in the world; they provide more traffic information that lets me know where the potholes and the construction are. So network effects lead to really quite different competitive dynamics. They tend to lead toward more winner-take-all situations. They tend to lead toward things that look more like monopolies, and that tends to freak some people out. I'm a little more calm about that, because one of the things we also know from observing the high-tech industries is that today's near-monopolist is tomorrow's also-ran. We see that over and over, because complacency and inertia are so deadly; there's always some disruptor coming up, even in the high-tech industries, to make the incumbents nervous. >> Right, open source. >> Well, open source is a perfect example of how some of the characteristics of goods in the digital economy are fundamentally different from earlier eras. In microeconomics, we talk about rival and excludable goods, and that's what you need for a competitive equilibrium. Digital goods are non-rival and non-excludable. You can go back to your microeconomics textbook for more detail on that, but in essence, what it means is that these goods can be freely copied at almost zero cost. Each copy is a perfect replica of the original that can be transmitted anywhere on the planet almost instantaneously, and that leads to a very different kind of economics than what we had for the previous few hundred years. >> And you're doing work to quantify that? >> Yeah, we wanted to find the effect on the economy more broadly.
But there are also very profound effects on business and the kinds of business models that work. You mentioned open source as an example. There are platform economics; Marshall Van Alstyne, one of the experts in the field, is speaking here today about that. Maybe we get a chance to talk about it later. You can sometimes make a lot of money by giving stuff away for free and gaining from complementary goods. >> That wouldn't have worked before; you could only do that for a little while. >> Unless you're a drug dealer: you do that for a little while, get people addicted, and then start charging them a lot. But there's a really different business model in the second machine age, which is to just give stuff away for free. You can make enough off other ancillary streams, like advertising, to have a very, very successful business. >> Okay, two things. First, I want to talk about the constraints. What are the constraints to taking advantage of that innovation curve in the next decade? >> Well, that's a great question, and less and less of the constraint is technological. More and more of the constraint is our ability as individuals to cope with change, so there's a race between technology and education. And an even more profound constraint is the ability of our organizations and our culture to adapt. We really see that as a bottleneck. And at the MIT Sloan School, we're very much focused on trying to relieve those constraints. We've got some brilliant technologists who are inventing the future on the technology side, but we've got to keep up with our business models and economic systems, and that's not happening fast enough. >> So let's think about where the constraints are and aren't. As Erik says, access to technology is vanishing as a constraint.
Access to capital is vanishing as a constraint, at least enough to demonstrate that you've got a good idea, because of the cloud and because of Moore's Law. A small team or a lone innovator can demonstrate the power of their idea and then ramp it up. So those are vanishing. The constraints that remain are mindset constraints, institutional constraints, and unfortunately, increasingly, I believe, regulatory constraints. Our colleague Larry Lessig has a great way to phrase the choice. He says: with our policies, with our regulations, we can protect the future from the past, or we can protect the past from the future. That choice is very real. The future is a better place; let's protect that from the incumbents and the inertia. >> So that leads us to some of the proposals you made in terms of how we can approach this. The good news is that capitalism is something you're very much in favor of; no politburo, I think, was your comment. And some of the other things I actually found pretty practical, although not necessarily likely, but certainly feasible, certainly intellectually. What have you seen in terms of the reaction to your proposals? And do you have any sense that public policy will begin to shift? >> We're confident that the conversation is shifting. Since the publication date, we've noticed there's a lot more willingness to engage with these ideas, with the idea that tech progress is racing ahead but leaving some people behind, and more people behind in an economic sense over time. So we've talked to politicians, we've talked to policy makers, we've talked to think tanks. That conversation is progressing. And if we want to change our government and change our policies, I think it has to start with changing the conversation. It's a bottom-up phenomenon. >> That's exactly right.
And it's really one of the key things that we learned. When we talk to our political science friends, they remind us that in America and other democracies, leaders are really followers: they follow public opinion, and the people are the leaders. So we're not going to be able to get changes in our policies until we change the broad conversation and get people recognizing the issues that are underway here. And I wouldn't be too quick to dismiss some of these bigger changes we describe in the book as impossible. Historically, there have been some huge changes. The concept of mass public education was a pretty radical idea when it was introduced, as was the concept of Social Security, or more recently the concept of marriage equality, something I think people wouldn't have imagined maybe a decade or two ago. So you can have some big changes in the political conversation. It starts with what the people want, and ultimately the leaders will follow. >> It's easy to get dismayed about the logjam in Washington, and I do get dismayed once in a while. But I think back a decade ago: if somebody had told me then that gay marriage and legal marijuana would be pretty widespread in America, I would have laughed in their face. And, you know, I'm straight and I don't smoke dope, but I think these were both fantastic developments, and they came about because the conversation shifted, not because we had a gay pot smoker in the White House. >> Gentlemen, thank you very much, first of all, for writing this great book. But I've got one last question. I understand you're working on your next topic. Can you give us a little bit of a sense of what you're thinking? Will you tip your hand? >> Well, sure. I think that it's no mystery that we teach in a business school, and we spend a lot of time interacting with business leaders.
And as we've mentioned in the discussion here, there have been some huge changes in the kinds of business models that are successful in the second machine age. We want to elaborate on those, describe what we're seeing when we talk to business leaders, but also what economic theory says about what will and what won't work. >> So 'The Second Machine Age' was our attempt at a big-idea book. Next, let's write the business guide to the second machine age. >> Excellent. The book is a big idea, and there are a lot of big ideas in the book, with excellent examples and some prescription, I think, for moving forward. So thank you for writing that book, and congratulations on its success. We really appreciate you guys coming on theCUBE. Good luck today, and we look forward to talking to you in the future. >> Thanks for having us. It's been a real pleasure. >> Keep it right there, everybody, we'll be right back. We're live from London. This is MIT IDE. This is theCUBE.

Published Date : Apr 10 2015



Steven Hillion & Jeff Fletcher, Astronomer | AWS Startup Showcase S3E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase: AI/ML Top Startups Building Foundation Model Infrastructure. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem, here to talk about data and analytics. I'm your host, Lisa Martin, and today we're excited to be joined by two guests from Astronomer: Steven Hillion, its Chief Data Officer, and Jeff Fletcher, its Director of ML. They're here to talk about machine learning and data orchestration. Guys, thank you so much for joining us today. >> Thank you. >> It's great to be here. >> Before we get into machine learning, let's give the audience an overview of Astronomer. Talk about what that is, Steven. Talk about what you mean by data orchestration. >> Yeah, let's start with Astronomer. We're the Airflow company, basically: the commercial developer behind the open-source project Apache Airflow. I don't know if you've heard of Airflow; it's the de-facto standard these days for orchestrating data pipelines, data engineering pipelines, and, as we'll talk about later, machine learning pipelines. I think we're up to about 12 million downloads a month as an open-source project. By some measures it's at this point more popular than Slack. Airflow was created by Airbnb some years ago to manage all of their data pipelines and all of their workflows, and now it powers the data ecosystem for organizations as diverse as Electronic Arts and Conde Nast, which is one of our big customers and a big user of Airflow. And, not to mention, the biggest banks on Wall Street use Airflow and Astronomer to power the flow of data throughout their organizations. >> Talk about that a little bit more, Steven, in terms of the business impact. You mentioned some great customer names there. What is the business impact or outcomes that a data orchestration strategy enables businesses to achieve?
Yeah, at the heart of it, it's quite simply scheduling and managing data pipelines. If you have some enormous retailer who's managing the flow of information throughout their organization, they may literally have thousands or even tens of thousands of data pipelines that need to execute every day, to do things as simple as delivering metrics for the executives to consume at the end of the day, or producing, on a weekly basis, new machine learning models that can be used to drive product recommendations. One of our customers, for example, is a British food delivery service, and you get those recommendations in your application that say, "Well, maybe you want to have samosas with your curry." That sort of thing is powered by machine learning models that they train on a regular basis to reflect changing conditions in the market, and those are produced through Airflow and through the Astronomer platform, which is essentially a managed platform for running Airflow. So at its simplest it really is just scheduling and managing those workflows. But that's easier said than done, of course. If you have tens of thousands of those things, then you need to make sure that they all run and that they all have sufficient compute resources. If things fail, how do you track those down across 10,000 workflows? How easy is it for an average data scientist or data engineer to contribute their code, their Python notebooks or their SQL code, into a production environment? And then you've got reproducibility, governance, auditing. Managing data flows across an organization, which we think of as orchestrating them, is much more than just scheduling. It becomes really complicated pretty quickly. >> I imagine there's a fair amount of complexity there. Jeff, let's bring you into the conversation. Talk a little bit about Astronomer through your lens, data orchestration and how it applies to MLOps.
So I come from a machine learning background, and for me the interesting part is that machine learning requires the expansion into orchestration. A lot of the same things that you use to develop and build pipelines in a standard data orchestration space apply equally well in a machine learning orchestration space. What you're doing is moving data between different locations, between different tools, and then tasking different types of tools to act on that data. So extending it made logical sense from an implementation perspective. And a lot of my focus at Astronomer is really to explain how Airflow can be used well in a machine learning context. It is being used well, and it is being used a lot, by the customers that we have and also by users of the open-source version. But it's really about being able to explain to people why it's a natural extension and how well it fits. And a lot of it is also extending some of the infrastructure capabilities that Astronomer provides to those customers, so they can run some of the more platform-specific requirements that come with doing machine learning pipelines. >> Let's get into some of the things that make Astronomer unique. Jeff, sticking with you: when you're in customer conversations, what are some of the key differentiators that you articulate to customers? >> So a lot of it is that we are not specific to one cloud provider. We have the ability to operate across all of the big cloud providers. And I'm certain we have the best developers who understand how best-practice implementations for data orchestration work. So we spend a lot of time talking not just about business outcomes with the business users of the product, but also to the technical people, about how to help them better implement things that they may have come across in a Stack Overflow article, or that haven't necessarily kept up with how the product has evolved.
So it's the ability to run it wherever you need to run it, and also our ability to help you, the customer, better implement and understand those workflows. Those, I think, are two of the primary differentiators that we have. >> Lisa: Got it. >> I'll add another one if you don't mind. >> You can go ahead, Steven. >> It's lineage, and dependencies between workflows. One thing we've done is to augment core Airflow with lineage services, using the OpenLineage framework, another open-source framework, for tracking datasets as they move from one workflow to another, one team to another, one data source to another. That's a really key component of what we do, and we bundle it within the service, so that as a developer or as a production engineer you really don't have to worry about lineage; it just happens. Jeff may show us some of this later: you can actually see, as data flows from a source through to a data warehouse and out through a Python notebook to produce a predictive model or a dashboard, how those data products relate to each other. And when something goes wrong, you can figure out what upstream may have caused the problem; or if you're about to change something, figure out what the impact is going to be on the rest of the organization. So lineage is a big deal for us. >> Got it. >> And just to add on to that, the other thing to think about is that traditional Airflow is actually a complicated implementation. It required quite a lot of time spent understanding what was almost a bespoke language that you needed to develop in to write these DAGs, which are the fundamental pipelines. So part of what we are focusing on is tooling that makes it more accessible to, say, a data analyst or a data scientist who doesn't have, and doesn't really need to gain, the background in the semantics of Airflow DAGs, so they can still get the benefit of what Airflow can do.
So there are new features and capabilities built into the Astronomer cloud platform that effectively obfuscate and remove the need to understand some of the deep work that goes on. You can still do it, you still have that capability, but we are expanding it so that orchestrated, repeatable processes are accessible to more teams within the business. >> In terms of accessibility to more teams in the business: you talked about data scientists, data analysts, developers. Steven, I want to talk to you as the Chief Data Officer. Are you having more and more conversations with that role, and how is it emerging and evolving within your customer base? >> Hmm, that's a good question. And it is evolving, because I think if you look historically at the way that Airflow has been used, it's often from the ground up. You have individual data engineers, or maybe single data engineering teams, who adopt Airflow because it's very popular and lots of people know how to use it, and they bring it into an organization and say, "Hey, let's use this to run our data pipelines." But then, increasingly, as you turn from pure workflow management and job scheduling to the larger topic of orchestration, you realize it gets pretty complicated. You want to have coordination across teams, and you want to have standardization for the way that you manage your data pipelines. And so a managed service for Airflow that exists in the cloud is easy to spin up as you expand usage across the organization. And thinking long term about that, in the context of orchestration, that's where I think the chief data officer or the head of analytics tends to get involved, because they really want to think of this as a strategic investment that they're making: not just per-team individual Airflow deployments, but a network of data orchestrators.
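As a rough illustration of the accessibility Jeff describes, here is a sketch in the spirit of Airflow's TaskFlow-style decorators, where an analyst writes plain Python functions and the orchestrator wires them into a pipeline. To keep the snippet self-contained it defines a trivial stand-in for the `@task` decorator rather than importing from Airflow, so treat it as a sketch of the style, not Astronomer's or Airflow's actual API; the function names and values are invented for the example.

```python
import functools

# Minimal stand-in for a @task decorator so this sketch runs anywhere.
# With Airflow installed, the equivalent idea is
# `from airflow.decorators import task`, and the same function bodies
# would become scheduled, retryable tasks in a DAG.
def task(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"running task: {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@task
def extract():
    # pretend this pulls yesterday's order counts from a warehouse
    return {"orders": [120, 85, 240]}

@task
def summarize(data):
    # aggregate the extracted data into a single metric
    return sum(data["orders"])

# In TaskFlow style, calling tasks and passing results along is what
# declares the dependency graph; the analyst writes ordinary Python
# and the orchestrator handles scheduling, retries, and logging.
total = summarize(extract())
print(total)  # 445
```

The point of the decorator style is exactly the accessibility argument above: the pipeline reads as plain function composition, with the orchestration machinery layered on by the decorator rather than by a bespoke DAG language.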
It's whether it is a grocer or a bank or a hospital, they've got to be data companies. So talk to me a little bit about Astronomer's business model. How is this available? How do customers get their hands on it? >> Jeff, go ahead. >> Yeah, yeah. So we have a managed cloud service and we have two modes of operation. One, you can bring your own cloud infrastructure. So you can say here is an account in say, AWS or Azure and we can go and deploy the necessary infrastructure into that, or alternatively we can host everything for you. So it becomes a full SaaS offering. But we then provide a platform that connects at the backend to your internal IDP process. So however you are authenticating users to make sure that the correct people are accessing the services that they need with role-based access control. From there we are deploying through Kubernetes, the different services and capabilities into either your cloud account or into an account that we host. And from there Airflow does what Airflow does, which is its ability to then reach to different data systems and data platforms and to then run the orchestration. We make sure we do it securely, we have all the necessary compliance certifications required for GDPR in Europe and HIPAA based out of the US, and a whole bunch host of others. So it is a secure platform that can run in a place that you need it to run, but it is a managed Airflow that includes a lot of the extra capabilities like the cloud developer environment and the open lineage services to enhance the overall airflow experience. >> Enhance the overall experience. So Steven, going back to you, if I'm a Conde Nast or another organization, what are some of the key business outcomes that I can expect? As one of the things I think we've learned during the pandemic is access to realtime data is no longer a nice to have for organizations. It's really an imperative. 
It's that demanding consumer that wants to have that personalized, customized, instant access to a product or a service. So if I'm a Conde Nast or I'm one of your customers, what can I expect my business to be able to achieve as a result of data orchestration? >> Yeah, I think in a nutshell it's about providing a reliable, scalable, and easy-to-use service for developing and running data workflows. And talking of demanding customers, I mean, I'm actually a customer myself; as you mentioned, I'm the head of data for Astronomer. You won't be surprised to hear that we actually use Astronomer and Airflow to run all of our data pipelines, so I can actually talk about my experience. When I started, I was of course familiar with Airflow, but it always seemed a little bit unapproachable to me if I was introducing it to a new team of data scientists. They don't necessarily want to have to think about learning something new. But because of the layers that Astronomer has provided with our Astro service around Airflow, it was pretty easy for me to get up and running. Of course I've got an incentive for doing that, I work for the Airflow company, but we went from, at the beginning of last year, about 500 data tasks that we were running on a daily basis to about 15,000 every day. We run something like a million data operations every month within my team. And so, as one outcome, there's the ability to spin up new production workflows essentially in a single day: you go from an idea in the morning to a new dashboard or a new model in the afternoon. That's really the business outcome, just removing that friction to operationalizing your machine learning and data workflows. >> And I imagine too, oh, go ahead, Jeff. >> Yeah, I think to add to that, one of the things that becomes part of the business cycle is repeatable capabilities for things like reporting, for things like new machine learning models.
And the impediment that has existed is that it's difficult to take that from an analyst team or a data science team, who then provide it to the data engineering team, who have to work the workflow all the way through. What we're trying to unlock is the ability for those teams to directly get access to scheduling and orchestrating capabilities, so that a business analyst can have a new report for C-suite execs that needs to be done once a week, but the time to repeatability for that report is much shorter. So it is then immediately in the hands of the person that needs to see it. It doesn't have to go into a long list of to-dos for a data engineering team that's already overworked, where they eventually get to it in a month's time. So that's also part of it: orchestration, I think, is fairly well understood, and a lot of people get the benefit of being able to orchestrate things within a business, but having more people able to do it, and shortening the time to that repeatability, is one of the main benefits of good managed orchestration. >> So a lot of workforce productivity improvements in what you're doing to simplify things, giving more people access to data to be able to make those faster decisions, which ultimately helps the end user on the other end get the product or the service that they're expecting. Jeff, I understand you have a demo that you can share so we can kind of dig into this. >> Yeah, let me take you through a quick look at how the whole thing works. So our starting point is our cloud infrastructure. This is the login. You go to the portal. You can see there's a bunch of workspaces that are available. Workspaces are like individual places for people to operate in.
I'm not going to delve into all the deep technical details here, but the starting point for a lot of our data science customers is what we call our Cloud IDE, which is a web-based development environment for writing and building out DAGs without actually having to know how the underpinnings of Airflow work. This is an internal one, something that we use. You have a notebook-like interface that lets you write Python code and SQL code, and a bunch of bespoke block types if you want. They all get pulled together to create a workflow. So this is a workflow, which gets compiled to something that looks like a complicated set of Python code, which is the DAG. I then have a CI/CD pipeline where I commit this through to my GitHub repo. So this comes to a repo here, which is where these DAGs that I created in the previous step exist. I can then go and say, all right, I want to see how those particular DAGs have been running. We then get to the actual Airflow part. So this is the managed Airflow component. We add the ability for teams to fairly easily bring up an Airflow instance and write code inside our notebook-like environment to get it into that instance. So you can see it's been running. That same process that we built here, that graph, ends up here inside this instance, but you don't need to know the fundamentals of how Airflow works in order to get this going. Then we can run one of these; it runs in the background and we can manage how it goes. And every time this runs, it's emitting to a process underneath, which is the open lineage service, the lineage integration that allows me to come in here and have a look and see: this was that same graph that we built, but now it's the historic version. So I know where things started, where things are going, and how it ran. And then I can also do a comparison.
So if I want to see how this particular run worked compared to one historically, I can grab one from a previous date and it will show me the comparison between the two. So that combination of managed Airflow, getting Airflow up and running very quickly, and the Cloud IDE, which lets you write code without having to know how to get something into a repeatable format, get it into Airflow, and have it attached to the lineage process, adds up to a complete end-to-end orchestration process for any business looking to get the benefit from orchestration. >> Outstanding. Thank you so much, Jeff, for digging into that. So one of my last questions, Steven, is for you. This is exciting. There's a lot that you guys are enabling organizations to achieve here to really become data-driven companies. So where can folks go to get their hands on this? >> Yeah, just go to astronomer.io and we have plenty of resources. If you're new to Airflow, you can read our documentation, our guides to getting started. We have a CLI that you can download, which is, I think, really the easiest way to get started with Airflow. But you can actually sign up for a trial. You can sign up for a guided trial, where our teams, we have a team of experts, really the world experts on getting Airflow up and running, will take you through that trial and allow you to actually kick the tires and see how this works with your data. And I think you'll see pretty quickly that it's very easy to get started with Airflow, whether you're doing that from the command line or doing that in our cloud service. And all of that is available on our website, >> astronomer.io. Jeff, last question for you. What are you excited about? There's so much going on here. What are some of the things, maybe you can give us a sneak peek, coming down the road here that prospects and existing customers should be excited about?
>> I think a lot of the development around the data awareness components. So one of the things that's traditionally been complicated with orchestration is that you leave your data in the place that you're operating on, and we're starting to have more data processing capability built into Airflow. And from an Astronomer perspective, we are adding more capabilities around working with larger datasets, doing bigger data manipulation within the Airflow process itself. And that lends itself to better machine learning implementation. So as we start to grow and get better in the data awareness context, it unlocks a lot more capability to implement proper machine learning pipelines. >> Awesome, guys. Exciting stuff. Thank you so much for talking to me about Astronomer, machine learning, data orchestration, and really the value in it for your customers. Steve and Jeff, we appreciate your time. >> Thank you. >> My pleasure, thanks. >> And we thank you for watching. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (upbeat music)
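The DAG that runs through the demo above, the thing the Cloud IDE workflow compiles to, is at its core a directed acyclic graph of tasks executed in dependency order. A minimal sketch of that underlying idea in plain Python (the task names are made up for illustration, and this is not the Airflow API):

```python
# A DAG of tasks executed in dependency (topological) order.
# Conceptual sketch only -- not the Airflow API; task names are invented.
from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_model": {"transform"},
    "build_dashboard": {"transform"},
}

# An orchestrator's job, reduced to its essence: run each task only
# after all of its dependencies have finished.
order = list(TopologicalSorter(dag).static_order())
print(order)  # "extract" comes first; "transform" before both downstream tasks
```

Airflow layers scheduling, retries, monitoring, and lineage on top of this same core idea, which is why a DAG is just Python code describing tasks and the edges between them.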

Published Date : Mar 9 2023



Marcel Hild, Red Hat & Kenneth Hoste, Ghent University | Kubecon + Cloudnativecon Europe 2022


 

(upbeat music) >> Announcer: theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, and KubeCon CloudNativeCon Europe 2022. I'm your host Keith Townsend, along with Paul Gillon. And we're going to talk to some amazing folks. But first Paul, do you remember your college days? >> Vaguely. (Keith laughing) A lot of them are lost. >> I think a lot of mine are lost as well. Well, not really, I got my degree as an adult, so they're not that far past. I can remember 'cause I have the student debt to prove it. (both laughing) Along with us today is Kenneth Hoste, systems administrator at Ghent University, and Marcel Hild, senior manager of software engineering at Red Hat. You're working in the office of the CTO? >> That's absolutely correct, yes. >> So first off, I'm going to start with you, Kenneth. Tell us a little bit about the research that the university does. Like, what's the end result? >> Oh, wow, that's a good question. So the research we do at the university, again, is very broad. We have bioinformaticians, physicists, people looking at financial data, all kinds of stuff. And the end result can be very varied as well. Very often it's research papers, or spinoffs from the university. It depends a lot on the domain, I would say. >> So that sounds like the perfect environment for cloud native. Like, infrastructure that's completely flexible, that researchers can come to and have a standard way of interacting, each team just using its resources as needed: the nirvana for cloud native. >> Yeah. >> But somehow, I'm going to guess HPC isn't quite there yet. >> Yeah, not really, no. So, HPC is a bit, let's say, slow in adopting new technologies. And we're definitely seeing some impact from cloud, especially things like containers and Kubernetes; we're starting to hear about these things in the HPC community as well.
But I haven't seen a lot of HPC clusters that are really fully cloud native. Not yet, at least. Maybe this is coming. And walking around here at KubeCon, I'm definitely being convinced that it's coming. So whether we like it or not, we're probably going to have to start worrying about stuff like this. But still, let's say, the most prominent technologies are things like MPI, which has been there for 20, 30 years. The Fortran programming language is still the main language: if you're looking at compute time being spent on supercomputers, over half of the time spent is in Fortran code, essentially. >> Keith: Wow. >> So either the application itself, where the simulations are being done, is implemented in Fortran, or the libraries that we are talking to from Python, for example, for doing heavy-duty computations, that backend library is implemented in Fortran. So if you take all of that into account, easily over half of the time is spent in Fortran code. >> So is this because the libraries don't migrate easily to that distributed environment? >> Well, it's multiple things. So first of all, Fortran is very well suited for implementing these types of things. >> Paul: Right. >> We haven't really seen a better alternative, maybe. And also it would be a huge effort to re-implement that same functionality in a newer language. So, the use case has to be very convincing, there has to be a very good reason why you would move away from Fortran. And, at least, the HPC community hasn't seen that reason yet. >> So in theory, and right now we're talking about the theory and then what it takes to get to the future. In theory, I can take that Fortran code, put it in a compiler that runs in a container? >> Yeah, of course, yeah. >> Why isn't it that simple? >> I guess because traditionally HPC is very slow at adopting new stuff. And I'm not saying there isn't a reason to start looking at these things. Flexibility is a very important one.
For a lot of researchers, their compute needs are very peaky. So they're doing research, they have an idea, they want to run lots of simulations, get the results, but then they're silent for a long time, writing the paper, or thinking about what they can learn from the results. So there's lots of peaks, and that's a very good fit for a cloud environment. I guess at the scale of a university you have enough diversity in end users that all those peaks never fall at the same time. So if you have your own big infrastructure, you can still fill it up quite easily and keep your users happy. But this bursty thing, I guess we're seeing that more and more. >> So Marcel, talk to us about Red Hat needing to service these types of end users. It can be on both ends; I'd imagine that you have some people still writing in Fortran, and some people asking you for object-based storage. Where is Red Hat in providing the underlay and the capabilities for the HPC and AI community? >> Yeah. So, I think if you look at the user base that we're looking at, it's on this spectrum from development to production. So putting AI workloads into production is an interesting challenge, but it's easier to solve, and it has been solved to some extent, than the development cycle. So what we're looking at in Kenneth's domain is more the end user, the data scientist, developing code and doing these experiments. Putting them into production, that's where containers live and thrive. You can containerize your model, you containerize your workload, you deploy it into your OpenShift Kubernetes cluster, done; you monitor it, done. So the software development and the SRE, the ops part, done. But how do I get the data scientist into this cloud native age, where he's not developing on his laptop, or on a machine that he SSHes into and then does some stuff there?
And then some system admin comes and needs to tweak it because it's running out of memory or whatnot. But how do we take him and provide him an environment that is good enough to work in, in the browser, with an IDE, where the workload of doing the computation and the experimentation is repeatable, so that the environment is always the same; it's reliable, so it's always up and running; it doesn't consume resources, even though it's up and running; where the supply chain and the configuration of the modules that are brought into the system are also reliable? So all these problems that we solved in the traditional software development world now have to transition into the data science and HPC world, where the problems are similar, but the toolsets are different. It's more or less also a huge educational problem, and transitioning the tools over into that is something... >> Well, is this mostly a technical issue or is this a cultural issue? I mean, are HPC workloads that different from more conventional OLTP workloads that they would not adapt well to a distributed containerized environment? >> I think it's both. So, on one hand it's the cultural issue, because you have two different communities, everybody is reinventing the wheel, everybody is somewhat siloed. So they think, okay, what we've done for 30 years, there's no need to change it. And here at KubeCon, where you have different communities coming together, it's: okay, this is how you solved the problem, maybe this applies to our problem as well. But it's also the tooling, which is bound to a machine, bound to an HPC computer, which is architecturally different from a distributed environment where you would treat your containers as cattle, as something that you can replace, right? And the HPC community usually builds up huge machines, and these are like the great machines.
So it's also a technical matter of moving it into this age. >> So the massively parallel nature of HPC workloads, you're saying Kubernetes has not yet been adapted to that? >> Well, I think the parallelism works great. It's just a matter of moving that out from an HPC computer into the scale-out factor of a Kubernetes cloud that elastically scales out. Whereas the traditional HPC computer, I think, and Kenneth can correct me here, is more like: I have this massive computer with 1 million cores or whatnot, now use it. And I can book my time slice there. Whereas in the Kubernetes example, the concept is more like: I have 1000 cores, and I declare something into it and scale it up and down based on the needs. >> So, Kenneth, this is where you talked about the culture part of the changes that need to be happening. And quite frankly, the computer is a tool, it's a tool to get to the answer. And if that tool is working, if I have 1000 cores on a single HPC system, and you're telling me, well, I can't get to a system with 2000 cores, but if you containerized your process and moved it over then maybe I'll get to the answer 50% faster, maybe I'm not convinced... Someone has to make that decision. How important is it to get people involved in these types of communities from a researcher's perspective? 'Cause research is a very tight-knit community, to have these conversations and help that move happen. >> I think it's very important that those communities, let's say the cloud community and the HPC research community, should be talking a lot more; there should be way more cross-pollination than there is today. Actually, I'm happy that I've seen HPC mentioned at booths and talks quite often here at KubeCon, I wasn't really expecting that. And I'm not sure, it's my first KubeCon, so I don't know, but I think that's kind of new, it's pretty recent.
If you go to the HPC community conferences, containers have been there for a couple of years now; something like Kubernetes is still a bit new. But just this morning there was a keynote by a guy from CERN, who was explaining that they're basically slowly moving towards Kubernetes, even for their HPC clusters. And he sees that as the future, because of all the flexibility it gives you, and you can basically hide all that from the end user, from the researcher. They don't really have to know that they're running on top of Kubernetes. They shouldn't care. Like you said, to them it's just a tool, and they care about whether the tool works, so they can get their answers, and that's what they want to do. How that's actually being done in the background, they don't really care. >> So talk to me about the AI side of the equation, because when I talk to people doing AI, they're on the other end of the spectrum. What are some of the benefits they're seeing from containerization? >> I think it's the reproducibility of experiments. Data scientists are data scientists, and they do research. So they care about their experiment. And maybe they also care about putting the model into production, but I think from a geeky perspective they are more interested in finding the next model, finding the next solution. So they do an experiment, and they're done with it, and then maybe it's going to production. So how do I repeat that experiment a year from now, so that I can build on top of it? And a container, I think, is the best solution to wrap something with its dependencies, like freeze it, maybe even with the data, store it away, and then come back to it later and redo the experiment, or share the experiment with some of my fellow researchers, so that they don't have to go through the process of setting up an equivalent environment on their machines, be it their laptop or their cloud environment.
So you go to the internet, download something, it doesn't work; a container works. >> Well, you said something that really intrigues me. You know, in concept, I can have, let's say, a one-terabyte data set, and have an experiment associated with that. Take a snapshot of that somehow, I don't know how, share it with the rest of the community, and then continue my work. >> Marcel: Yeah. >> And then we can come back and compare notes. Where are we on a maturity scale? Like, what are some of the pitfalls or challenges customers should be looking out for? >> I think you actually said it right there: how do I snapshot a terabyte of data? That's... >> It's a terabyte of data. (both conversing) >> It's a bit of a challenge. And if you snapshot it, you have two terabytes of data, or you just snapshot the delta and say, okay, this is currently where we're at. So that's why the technology is evolving. How do we do source control management for data? How do we license data? How do we make sure that the data is unbiased, et cetera? That's going more into the AI side of things. But dealing with data in a declarative way, in a containerized way, I think that's where a lot of innovation is currently happening. >> What do you mean by dealing with data in a declarative way? >> If I'm saying I run this experiment based on this data set, and I'm running this other experiment based on this other data set, then I as the researcher don't care where the data is stored, I care that the data is accessible. And so I might declare: this is the process that I put on my data, like a data processing pipeline. These are the steps that it's going through. And eventually it will have gone through this process and I can work with my data. Pretty much like applying the concept of pipelines to data. You have these data pipelines, and now you have Kubeflow Pipelines as one solution to apply the pipeline concept to, well, managing your data.
>> Given the stateless nature of containers, is that an impediment to HPC adoption because of the very large data sets that are typically involved? >> I think it is, if you have terabytes of data. You have to get it to the place where the computation will happen, right? And just uploading that into the cloud is already a challenge. If you have the data sitting there on a supercomputer, and maybe it has been sitting there for two years, you probably don't care. And typically at a lot of universities the researchers don't necessarily pay for the compute time they use. At least in Ghent that's the case: it's centrally funded, which means the researchers don't have to worry about the cost, they just get access to the supercomputer. If they need two terabytes of data, they get that space and they can park it on the system for years, no problem. If they need 200 terabytes of data, that's absolutely fine. >> But the university cares about the cost? >> The university cares about the cost, but they want to enable the researchers to do the research that they want to do. >> Right. >> And we always tell researchers: don't feel constrained by things like compute power or storage space. If you're doing smaller research because you're feeling constrained, you have to tell us, and we will just expand our storage system or buy a new cluster. >> Paul: Wonderful. >> So yeah, to enable your research. >> It's a nice environment to be in. I think this might be a Jevons paradox problem: you give researchers this capability and you're going to see some amazing things. Well, now people are snapshotting one, two, three, four, five different versions of a terabyte of data. It's a good problem to have, and I hope to have you back on theCUBE, talking about how Red Hat and Ghent have solved those problems. Thank you so much for joining theCUBE. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon.
And you're watching theCUBE, the leader in high tech coverage. (upbeat music)
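The snapshot-and-share problem Marcel describes, knowing exactly which version of the data and which processing steps produced a result, is often tackled with content hashing. A toy sketch of the general idea in Python (this illustrates the concept only; it is not Kubeflow Pipelines or any specific tool's API, and the pipeline steps are invented):

```python
# Toy sketch: a declarative data pipeline with content-addressed results.
# Each step declares what it does, not where the data physically lives,
# and every intermediate result gets a stable fingerprint so a run can
# be compared against or reproduced later. Steps are invented examples.
import hashlib
import json

def fingerprint(obj) -> str:
    """Identify a piece of data by a stable content hash."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

# Declare the pipeline as data: an ordered list of named steps.
pipeline = [
    {"step": "clean", "fn": lambda d: [x for x in d if x is not None]},
    {"step": "normalize", "fn": lambda d: [x / max(d) for x in d]},
]

data = [4, None, 2, 8]
versions = {"raw": fingerprint(data)}
for stage in pipeline:
    data = stage["fn"](data)
    versions[stage["step"]] = fingerprint(data)

print(data)      # [0.5, 0.25, 1.0]
print(versions)  # one hash per stage, so any run can be compared later
```

Real systems add storage, scheduling, and lineage on top, but the core trick is the same: instead of snapshotting terabytes twice, you record what was done and a fingerprint of each result.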

Published Date : May 19 2022



Liran Tal, Snyk | CUBE Conversation


 

(upbeat music) >> Hello, everyone. Welcome to theCUBE's coverage of the "AWS Startup Showcase", season two, episode one. I'm Lisa Martin, and I'm excited to be joined by Snyk next in this episode. Liran Tal joins me, the director of developer advocacy. Liran, welcome to the program. >> Lisa, thank you for having me. This is so cool. >> Isn't it cool? (Liran chuckles) All the things that we can do remotely. So I had the opportunity to speak with your CEO, Peter McKay, just about a month or so ago at AWS re:Invent. So much growth and momentum going on with Snyk, it's incredible. But I want to talk to you specifically about your role from a developer advocate perspective, 'cause Snyk is saying modern development is changing, so traditional AppSec gatekeeping doesn't apply anymore. Talk to me about your role as a developer advocate. >> It is, definitely. The landscape is changing; both development and security, it's just not what it was before. And what we're seeing is developers need to be empowered. They need some help just working through all of those security issues, security incidents happening, using open source, building cloud native applications. So my role is basically about making them successful, helping them any way we can. And so it's getting that security awareness out, making sure people have those best practices, making sure we understand the frustrations developers have and the things that we can help them with, to be successful day to day, and how they can be a really good part of the organization in terms of fixing security issues: not just knowing about them, but actually being proactive about them. >> And one of the things also that I was reading is, Shift Left is not a new concept. We've been talking about it for a long time. But Snyk is saying it was missing some things, and proactivity is one of those things that was missing. What else was it missing, and how does Snyk help to fix that gap?
So I think Shift Left is a good idea. In general, the idea is we want to fix security issues as soon as we can, not just find them, which I think is a small nuance that's kind of missing in the industry. And usually what we've seen with traditional security before was, because the security department is like a silo in organizations, once they find some findings, they push them over to the development team, the R&D leader, or things like that, but until it actually trickles down, it takes a lot of time. And what we needed to do is basically put those developer security tools, which is what Snyk is building with this whole security platform, into the hands of developers, at the scale and speed of modern development. So, for example, instead of just finding security issues in your open source dependencies, what we actually do at Snyk is not just tell you about them: we actually open a pull request to your source code version management system. And through that we are able to tell you, now you can actually merge it, you can actually review it, you can actually have it as part of your day-to-day workflows. And we're doing that through so many other ways that are really helpful in actually remediating the problem. So another example would be the IDE. We are actually embedding an extension within your IDEs, so once you type in your own code, that is when we find the vulnerabilities that could exist within your own code, if there's insecure code, and we can tell you about it as you hit Command+S and save the file. Which is totally different from what SAST tools, static application security testing, were before, because when things started, you usually had SAST tools running in the background, in CI jobs at the weekend and on deltas of code bases, because they were so slow to run. But developers really need to be at speed. They're developing really fast. They need to deploy.
A developer deploys to production several times a day. So we need to really enable developers to find and fix those security issues as fast as we can. >> Yeah, that speed that you mentioned is absolutely critical to their workflow and what they're expecting. And one of the unique things about Snyk that you mentioned, the integration into the development workflow with the IDE and CI/CD environment, enabling them to work at speed and not have to be security experts, I imagine those are two important elements of the culture of the developer environment, right? >> Correct, yes. A large part is we don't expect developers to be security experts. We want to help them, we want to, again, give them the tools, give them the knowledge. So we do it in several ways. For example, that IDE extension has a really cool thing that's kind of unique to it that I really like, and that is, when we find, for example, you're writing code and maybe there's a path traversal vulnerability in the function that you just wrote, what we'll actually do when we tell you about it, it will actually tell you, hey, look, these are some other commits made by other open source projects where we found the same vulnerability, and those commits actually fixed it. So we're actually giving you example cases of what potentially good code looks like. So if you think about it, who really knows what path traversal is, or prototype pollution, or many other types of vulnerabilities? We don't expect developers to actually know the deep aspects of security. So they're left with having some findings; they want to fix them, but they don't really have the expertise to do it. So what we're doing is we're bridging that gap and being helpful. So I think this is what really proactive security is for developers, that is, helping them remediate it.
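To make the path traversal example concrete, here is a minimal sketch of the vulnerable pattern and the kind of fix those example commits tend to show. The file-serving function and base directory are hypothetical, not Snyk's suggested remediation:

```python
import os

BASE_DIR = "/srv/app/uploads"  # hypothetical upload directory

def read_file_unsafe(name):
    # Vulnerable: a name like "../../etc/passwd" escapes BASE_DIR.
    with open(os.path.join(BASE_DIR, name)) as f:
        return f.read()

def read_file_safe(name):
    # Fixed: resolve the full path and verify it stays inside BASE_DIR.
    path = os.path.realpath(os.path.join(BASE_DIR, name))
    if not path.startswith(os.path.realpath(BASE_DIR) + os.sep):
        raise ValueError("path traversal attempt: %r" % name)
    with open(path) as f:
        return f.read()
```

The safe version rejects the traversal before ever touching the filesystem, which is the shape of fix an IDE hint can point a developer toward.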
And I can give more examples, like the vulnerability database. It's a wonderful place where we also provide examples and references of, where does that vulnerability come from, like what's the flaw in the open-source package? And we highlight that with a lot of references that provide you with things like the pull request that fixed it, or the issue where this was discussed. You have an entire context of what made this vulnerability happen. So you have a little bit more context than just merging some stuff and updating, and there's a ton more. I'm happy to dive more into this. >> Well, I can hear your enthusiasm for it; a developer advocate it seems like you are. But talking about the burdens and the gaps that you guys are filling, it also seems like, for the developers and the security folks, this is also a bridge for those teams to work better together. >> Correct. I think the idea is that it's not siloed anymore. I think the idea of having security champions or having threat modeling activities is really, really good, like insightful for both developers and security. But more than just being insightful, these are useful practices that organizations should actually do: actually bringing a discussion together, actually creating a more cohesive environment for both of those kinds of expertise, development and security, to work together on some of these aspects of mitigating security issues. And one of the things that Snyk is actually doing in that, in bringing security into the developer mindset, is also providing them with the ability to prioritize and understand what policies to put in place.
So a lot of the time, what the security org actually wants to do is put guardrails in place, to make sure that developers have good leeway to work within, but they're not doing things that they definitely shouldn't do, like bringing a big risk into the organization. And that's what I think we're also doing well, which is the fact that we're giving the security folks the ability to put the policies in place, and then developers actually work really well within those. Understanding how to prioritize vulnerabilities is an important part too. And we kind of quantify that: we put an urgency score on it that says, hey, you should fix this vulnerability first. Why? Because, first of all, you can upgrade really quickly; it has a fix right there. Secondly, there's an exploit in the wild, meaning potentially an attacker can weaponize this vulnerability and attack your organization in an automated fashion. So you definitely want to put a lid on that, on that broken window, so to say. So we add other kinds of metrics that we can quantify into this urgency score, which we call a priority score, that helps, again, developers really know what to fix first. Because they could get a scan with hundreds of vulnerabilities, but, what do I start with first? So I find that very useful for both the security and the developer sides working together. >> Right, and especially now, as we've seen such changes in the last couple of years to the threat landscape, the vulnerabilities, the security issues that are impacting every industry. The ability to empower developers to not only work at the speed with which they are accustomed and need to work, but also to be able to find those vulnerabilities faster and prioritize which ones need to be fixed.
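The urgency scoring Liran outlines can be sketched roughly like this. The weights below are invented purely for illustration; Snyk's actual priority score formula combines its own signals and is not reproduced here:

```python
def priority_score(cvss, has_fix, exploit_in_wild, reachable):
    """Blend a few signals into a 0-1000 urgency score.

    Hypothetical weights: CVSS severity is the base, then bonuses
    for the factors Liran mentions -- an available fix (quick win)
    and an exploit in the wild (weaponizable).
    """
    score = cvss * 100          # CVSS 0-10 mapped to 0-1000 base
    if exploit_in_wild:
        score += 150            # weaponized issues jump the queue
    if has_fix:
        score += 100            # an upgrade is available right now
    if reachable:
        score += 50             # the vulnerable code path is actually hit
    return min(int(score), 1000)
```

With a scheme like this, a medium-severity bug with a one-click fix and a public exploit can outrank a high-severity bug nobody can exploit, which is the point of prioritization beyond raw CVSS.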
I mean, I think of Log4Shell, for example, and the challenges going on with the supply chain; this is really a critical capability from a developer empowerment perspective, but also from an overall business health and growth perspective. >> Definitely. I think, first of all, if you want to take just a step back in terms of what has changed, like what is the landscape? So I think we're seeing several things happening. First of all, there's this big, tremendous... I would call it a trend, but now it's like the default: the growth of open source software. So first of all, developers are using more and more open source, and that's a growing trend; there are graphs of this. And it's always increasing across, by the way, every ecosystem: Go, Rust, .NET, Java, JavaScript. Whatever you're building, it's probably on a growing trend toward more open source. And we will talk in a second about what the risks are there. But that is one trend that we're seeing. The other one is cloud native applications, which is also worth, I think, diving deep into, in terms of how the way that we're building applications today has completely shifted. And I think what AWS is doing in that sense is also creating a tremendous shift in the mindset of things. For example, the cloud infrastructure has basically democratized infrastructure. I do not need to own my servers and own my monitoring and configure everything. I can actually write code that, when I deploy it, when something parses this and runs this, it actually creates servers and monitoring, logging, different kinds of things for me. So it democratized the whole sense of building applications from what it was decades ago. And this whole thing is important and really, really fast. It makes things scalable. It also introduces some risks, for example, in some of those configurations. So there's a lot that has been changed.
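The "write code that creates servers" idea is easiest to see with a CloudFormation-style template built in plain code. This fragment is illustrative only: the resource name is made up and the AMI ID is a placeholder, but it shows how a server becomes a declared, reviewable artifact rather than a console click, which is also why a misconfiguration here is a security finding like any other:

```python
import json

# A minimal CloudFormation-style template expressed as a Python dict.
# The JSON this prints, not a manual setup step, defines the server.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.micro",
                "ImageId": "ami-12345678",  # placeholder, not a real AMI
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Because the infrastructure is now just text, the same scanners that review application code can review it, catching, say, an instance accidentally opened to the world before it ever exists.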
And in that landscape of what a modern developer is, I think we kind of need to lean in a little bit more, be helpful to developers, and help them avoid all those cases. And I'm happy to dive into the open source and the cloud native follow-ups on this one. >> I want to get into a little bit more about your relationship with AWS. When I spoke with Peter McKay for re:Invent, he talked about the partnership being a couple of years old, but there are some really interesting things that AWS is doing in terms of leveraging Snyk. Talk to me about that. >> Indeed. So Snyk integrates with, I think, a lot of services, but probably almost all of those that are unique and relevant to developers building on top of the AWS platform. For example, if you're actually writing your code, it connects with the source code editor. If you are pushing that code over, it integrates with CodeCommit. As your builds and CIs are running, maybe CodeBuild is something you're using within CodePipeline; those have native integrations. At the end of the day, you have your container registry, or Lambda if you're using functions as a service for your applications; what we're doing is integrating with all of that. So at the end of the day, you really have all of that covered. It depends where you're integrating, but at all of those points of integration, you have Snyk there to help you out and make sure that if we find any potential issues on any of them, anything from licenses to vulnerabilities in your containers, or just your code, or your open source dependencies, we actually find it at that point and mitigate the issue.
So if you're using Snyk on your development machine, it kind of accompanies you through this journey, across what a CI/CD landscape looks like as an architectural landscape for development, all the way through. And I think what you might be more interested in, to put an emphasis on, would be this recent integration with Amazon Inspector, which is a very pivotal part of the AWS platform that integrates a lot of services and provides you with those insights on security. And I think the idea that it is now able to leverage vulnerability data from Snyk's security intelligence database, that's tremendous. And we can talk about that with Log4Shell and recent issues. >> Yeah. Let's dig into that. We have a few minutes left, but that was obviously a huge issue in November of 2021. Obviously we're in a very dynamic global situation, period, but it's now not a matter of if an organization is going to be hit by vulnerabilities and security threats; it's a matter of when. Talk to me about really how impactful Snyk was for the Log4Shell vulnerability, and how you helped customers evade probably some serious threats that could have really impacted revenue growth, customer satisfaction, brand reputation. >> Definitely. Log4Shell is, well, I mean, was a vulnerability that was disclosed, but it's probably still a major issue, and going to be for the foreseeable future, as organizations will need to deal with it. And we'll dive in a second and figure out why. But as a summary here, Log4Shell was a vulnerability that was found in a Java library called Log4J, a logging library that is so popular and widely used today.
And the thing is, having the ability to react fast to newly disclosed vulnerabilities is really a vital part for organizations. Because when one is as impactful as we've seen Log4Shell being, that is when it becomes clear whether the security tool you're using is actually helping you, or is just an added thing, a checkbox to tick. And that is what I think makes Snyk so unique in this sense. We have a team of folks who are both manually curating the ecosystem of CVEs and finding issues ourselves, but there's also an entire intelligence platform behind us. So we get a lot of notifications on chatter that happens. When someone opens an issue on an open source repository and says, hey, I found an issue here, maybe that's an XSS or code injection or something like that, we find it really fast. And at that point, before it even goes through CVE assignment and the like, through MITRE and NVD, we can add it to the database. So this has been something that we've done with Log4Shell, where we found it as it was disclosed, not just within the open source ecosystem, but as it was generally disclosed to everyone at that point. But not only that. Because Log4J, as a library, had several iterations of fixes it needed. So they fixed one version; that was the recommendation to upgrade to; then that version was actually found to be vulnerable too. So they needed to fix it another time, and then another time, and so on. So being able to react fast is what I think helped a ton of customers and users of Snyk. And what I really liked, in the way that this has been received very well, is that we were very fast in creating those command line tools that allow developers to actually find cases of the Log4J library embedded into applications, but not through a package manifest.
So sometimes you have those legacy applications deployed somewhere; probably not even legacy, just the Log4J library bundled into a .NET or Java source code base. So you may not even know that you're using it. And so what we've done is we've exposed, with the Snyk CLI tool, a command line argument that allows you to search for all of those cases. We can find them and help you try and mitigate those issues. So that has been amazing. >> So you've talked at great length, Liran, and in detail, about how Snyk is really enabling and empowering developers. One last question for you: when I spoke with Peter last month at re:Invent, he talked about the goal of reaching 28 million developers. Your passion as a director of developer advocacy is palpable; I can feel it through the screen here. Talk to me about where you guys are on that journey of reaching those 28 million developers, and what personally excites you about what you're doing here. >> Oh, yeah. So many things. (laughs) I don't know where to start. We are constantly talking to developers on community days and things like that. So, a couple of examples. We have this DevSecCon community, which is a growing and kicking community of developers and security people coming together, trying to work and understand and just learn from each other. We have those events coming up. We actually have this "The Big Fix". It's a big security event that we're launching on February 25th. And the idea is, we want to help the ecosystem secure applications, open source or even closed source. We help you fix that. So yeah, it's about helping them. We've launched this Snyk Ambassadors program, which is developers and security people, CISOs are even in there. And the idea is, how can we help them also be helpful to the community?
Because they are known, they are as passionate as we are about application security and about helping developers code securely, build securely. So we're launching all of those programs. We have social impact related programs, too, in the way that we work with organizations, maybe non-profits that just need help getting the security part of things figured out, students and things like that. There are a ton of those initiatives all over the board, helping basically the world be a little bit more secure. >> Well, we could absolutely use Snyk's help in making the world more secure. Liran, it's been great talking to you. Like I said, your passion for what you do and what Snyk is able to facilitate and enable is palpable. And it was a great conversation. I appreciate that. And we look forward to hearing what transpires during 2022 for Snyk, so you've got to come back. >> I will. Thank you. Thank you, Lisa. This has been fun. >> All right. Excellent. Liran Tal, I'm Lisa Martin. You're watching theCUBE's second season, season two of the "AWS Startup Showcase". This has been episode one. Stay tuned for more great episodes, full of fantastic content. We'll see you soon. (upbeat music)

Published Date : Jan 17 2022



Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020


 

>> Announcer: From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We've got over 50,000 people registered, so there's clearly a ton of interest in the world of Docker and Kubernetes, or "Docker-netes" as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shawn Conley, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks, and I'm broadcasting from my hometown of Philly. Our panelists include: Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia. We have Kelsey Hightower, Principal Developer Advocate at Google Cloud, joining us from Washington State. And we have Chris Aniszczyk, CTO at the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Docker and Kubernetes wave since the beginning, with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed, from both a community and a tech perspective? Just curious to get things sized up. >> Sure. When CNCF started, roughly over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring communities with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes.
So things like KubeVirt run VMs on Kubernetes, which is crazy, not just containers. You have folks at Microsoft experimenting with a project called Krustlet, which is trying to run WebAssembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security, and different ways to run different types of workloads. And at least that's kind of what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting projects in? Is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now, and Michelle could speak a little bit more to this, being on the TOC; the pipeline for new projects is quite extensive and it covers all sorts of spaces, from service meshes to security projects and so on. So it's ever expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, let's head to you. But before we actually dive in, let's talk a little glory days. Rumor has it that you were the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are. My speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I wasn't captain 'cause I was really great at anything else, apart from constantly cheering on the team. >> A little better than my eighth grade Spelling Champ Award, so I think I'd rather have the kickball. But you've definitely spent a lot of time leading in Open Source; you've been across many projects for many years. So how does the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then, what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency.
Just come in and tell people what you really need to do. The more clearly you articulate your problem, and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to, and clearly communicating those objectives. >> Yeah, and in the various foundations, I think one of the things that I've seen, particularly at the Apache Software Foundation and others, is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute; claiming victory one way or the other might make for interesting marketing drama, but I think that's a little bit of the challenge. In some of the standards-based work you're doing, I know with CNI and some other things, are they similar, are they different? How would you compare and contrast that to something a little more structured, like CNCF? >> Yeah, so most of what I do is in the CNCF, but there are specs and there are projects. I think what CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments.
Because if you're in a project, you have to say, "Okay, I want this feature, or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box, like, what framework do you want to work in? You have to think a little farther ahead, in terms of, is this solution or this decision we're going to make going to last for the next however many years? You have to get more of a buy-in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you end up with this really solid standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of, so-- >> Gotcha. So Kelsey, we'll head over to you, as your focus is on developer advocacy; you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, cloud service primitives, including container services, server-less platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice? I hear you talk about minimalism periodically; I know you're a fan of that. How do you pass that on in your developer advocacy, in your day-to-day work? >> Yeah, I think, for most developers, most of this is not really top of mind for them. It's something you may see in a post on Hacker News, and you might double-click into it. Maybe someone on your team brought one of these tools in, and maybe it leaks up into your workflow, so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with logging, they help them run their application. But for most people, this isn't the core of the job for them.
For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as, number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source; some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an aisle at Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and you study it to see if you have that problem. And for most people, if you don't have the problem that that tool solves, you should be happy. No one needs every project, and I think that's where a lot of the confusion comes from. So my main job is to help people not get stuck and confused, and just be pragmatic and use the tools that work for 'em. >> Yeah, and you've spent the last little while in the server-less space really diving into that area. Compare and contrast, I guess, what you found there: the minimalist approach, who are you speaking to from a server-less perspective versus that of the broader CNCF? >> The thing that really pushed me over: I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like GeoCities from the 90s, but look, she's making a website. And she wanted her friends to take a look. So she copied and pasted 127.0.0.1 from her browser, and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down and teach my daughter about Linux and Docker and Kubernetes?" That isn't her main goal; her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, firebase deploy.
And her site was up in a few minutes, and she sent it over to her friend, and there you go, she was off and running. The whole server-less movement has that philosophy as one of its stated goals: that needs to be the workflow. So, I think server-less is starting to get closer and closer; you start to see us talk about, and Chris mentioned this earlier, moving up the stack. When we're going up the stack, the North Star there is a place where you get to focus on what you're doing, and not necessarily how to do it underneath. And I think server-less is not quite there yet for every type of workload. Stateless web apps, check; event-driven workflows, check; but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run, so there's still work to do there. So server-less, for me, serves as the North Star for why all these projects exist, for people that may have to roll their own platform to provide that experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's a ton of individual projects; each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the service mesh interface, standardizing interfaces so things can snap together more easily, I think, are some of the approaches, but are you doing anything specifically to encourage this cross-fertilization and pluggability? Because there's just a ton of projects, not only in the CNCF but outside the CNCF, that need to plug in. >> Yeah, I mean, a lot of this happens organically. CNCF really provides the neutral home where companies, competitors, can trust each other to build interesting technology. We don't force integration or collaboration; it happens on its own. We essentially allow the market to decide what a successful project is long term, or what an integration is.
We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization, and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called OpenTracing, and an effort called OpenCensus, which were basically trying to standardize how you're going to deal with metrics, telemetry and so on in a cloud native world, that were essentially competing with each other. The CNCF TOC and community came together and merged those projects into one effort called OpenTelemetry, and so that to me is a case study of how our committee helps bridge things. But we don't force things; we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job: writing code, delivering code. What's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the Helm project was to make it easier to deploy an application on Kubernetes, so that you could see what the finished product looks like and then dig into all of the things that that application is composed of, all the resources. So I've been really passionate about this kind of stuff for a while now. And I love seeing projects come into the space that have this same goal and just iterate and make things easier.
I think we have a ways to go still. I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code. So one of the projects that I really like working with is Tilt, which gives you this dashboard in your CLI, aggregates all your logs from your applications, and kind of watches your application for changes and reconciles those changes in Kubernetes, so you can see what's going on. It'll catch errors; anything with a dashboard, I love these days. So Kiali is like a metrics dashboard that's integrated with Istio, shows a service graph of your service mesh, and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest it, has been really exciting to see. And I'm seeing more and more dashboards come to light, and I'm very excited about that. >> Yeah, as part of DockerCon, just as a person who will be attending some of the sessions, I'm really looking forward to seeing where Docker Compose is going; I know they opened up the spec to broader input. I think your point, a good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned, and I'm glad you bring it up. I want to see programming language libraries integrate with the Compose spec. I really want to see what happens with that. I think it's great that they opened that up and made it a spec, because obviously people really like using Compose.
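For readers new to Compose, a minimal compose file shows why the spec is such an attractive integration target: the whole multi-service app is one small declarative document. The service and image names below are made up for illustration:

```yaml
# docker-compose.yml -- a hypothetical two-service app
services:
  web:
    build: .          # build the app image from the local Dockerfile
    ports:
      - "8000:8000"   # expose the web service on the host
    depends_on:
      - redis         # start the cache before the web service
  redis:
    image: "redis:alpine"
```

A language library that can read and write this format could generate, validate, or extend an application's topology programmatically, which is the integration Michelle is describing.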
So Kelsey, I'd be remiss if I didn't touch on your January post on Changelog entitled, "Monoliths are the Future." Your post actually really resonated with me. My son works for a software company in Austin, Texas. So your hometown there, Chris. >> Yeah. >> Shout out to Will and the chorus team. His development work focuses on adding modern features via microservices as extensions to the core monolith that the company was founded on. So just share some thoughts on monoliths and microservices, and also what delivers dopamine from your perspective more broadly. People usually phrase it as monoliths versus microservices, but I get the sense you don't believe it's either-or. >> Yeah, I think for most companies the argument is one of pragmatism. Most companies have trouble designing any app, monolith, deployable or microservices architecture. And then these things evolve over time. Unless you're really careful, it's really hard to know how to slice these things. So taking an idea or a problem and just knowing how to perfectly compartmentalize it into individual deployable components, that's hard for even the best people to do. And that's compounded by not knowing the actual solution to the particular problem. A lot of the problems people are solving, they're solving for the first time. It's really interesting; in our industry in general, a lot of people who work in it have never solved the particular problem that they're trying to solve before. So that's interesting. The other part there is that most of these tools that are here to help are really only at the infrastructure layer. We're talking freeways and bridges and toll bridges, but there's nothing that happens in the actual developer space right there in memory. So the libraries that interface to the structured logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID?
A lot of those things are still left for developers to figure out on their own. So while we have things like Kubernetes and Fluentd, we have all of these tools to deploy apps into those targets, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe as an industry, we're over-correcting. So we had, you said you come from the JBoss world, I started a lot of my career in systems administration, and there's where we focused a little bit more on the actual application needs, maybe from a router as well. But now what we're seeing is things like Spring Boot start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to that, and it gives you this very low level framework, but you still have to understand the concepts around what it means to allow someone to do something, and one misconfiguration and all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization, run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece, you're going to somehow find the ability to manage 1000 pieces with these tools underneath, but still not solving the actual developer problems.
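The application-layer concerns Kelsey lists, rate limiting and authorization, are often hand-rolled by each team. As a hedged, illustrative sketch (deliberately not any specific library's API), a token-bucket rate limiter is only a few lines:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: refills at `rate` tokens/sec,
    allows bursts up to `capacity`. Illustrative only."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=2)
print([bucket.allow() for _ in range(4)])  # burst of 2 allowed, then denied
```

The point of the sketch is Kelsey's: the logic is small, but deciding where it lives, per process, per user, per endpoint, is exactly the application-layer design work that infrastructure tooling doesn't do for you.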
So this is where you've seen it already with a couple of popular blog posts from other companies. They cut too deep. They're going from 2000, 3000 microservices back to maybe 100 or 200. So in my world, it's going to be not just one monolith, but you end up maybe having 10 or 20 monoliths that maybe reflect the organization that you have versus the architectural pattern that you're after. >> I view it as like a constellation of stars and planets, et cetera. Where you might have a star, which is a monolith, and you have a variety of sort of planetary microservices that float around it. But that's reality, that's the reality of modern applications, particularly if you're not starting from a clean slate. I mean your point's a good one: in many respects, I think the infrastructure-as-code movement has helped automate a bit of the deployment of the platform. I've been personally focused on app development, JBoss as well as SpringSource; I know the Spring team and that tech pretty well over the years 'cause I was involved with that. So I find that James Governor's discussion of progressive delivery really resonates with me as a developer, not so much as an infrastructure deployer. So continuous delivery is more of an infrastructure notion; progressive delivery, feature flags, those types of things are app-level concepts, minimizing the blast radius of the new features you're deploying, that type of stuff, I think begins to speak to the pain of application delivery. So I guess I'll put this out. Michelle, I might aim it to you, and then we'll go around the horn: what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk landed on something that's pretty important.
>> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool, you can just deploy something, and people have been asking for so many years, ever since I've been in this space, it's like, "How do I do A/B deployment?" "How do I do canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example; it's a really good way to execute these deployment strategies, but then make sure that everything's happening correctly via observing metrics, roll back if you need to, so you don't put your whole system at risk. I think it solves the problem and allows you to take risks, but also keeps you safe in that you can be confident as you roll out your changes that it all works, it's metrics driven. So I'm just really looking forward to seeing more tools like that, and dashboards that enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling and in fact, if you remember the Eclipse IDE and that whole integrated experience, I was blown away recently by a demo from GitHub. They have something called Codespaces, which, a long time ago, I was trying to build development environments that essentially, if you were an engineer that joined a team recently, you could basically get an environment quickly started with everything configured, source code checked out, environment properly set up. And that was a very hard problem.
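Going back to the Flagger discussion for a moment: the metrics-driven promote-or-rollback loop that such tools automate boils down to something like this sketch. The traffic weights and the `error_rate` callback are stand-ins for what a real controller would read from a metrics backend such as Prometheus.

```python
def run_canary(error_rate, threshold=0.01, steps=(10, 25, 50, 100)):
    """Shift traffic to the canary in steps; roll back if the error
    rate observed at any step exceeds the threshold.
    `error_rate(weight)` stands in for a real metrics query."""
    for weight in steps:
        if error_rate(weight) > threshold:
            return ("rollback", weight)   # revert traffic to the stable version
    return ("promoted", 100)

# Healthy canary: errors stay low at every traffic weight
print(run_canary(lambda w: 0.001))                        # ('promoted', 100)
# Unhealthy canary: errors spike once it takes 25% of traffic
print(run_canary(lambda w: 0.05 if w >= 25 else 0.001))   # ('rollback', 25)
```

That "roll back before the blast radius grows" property is what makes the pattern safe to automate: each step only risks the fraction of traffic already shifted.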
This was like before container days and so on, and to see something like Codespaces where you go to a repo or project, open it up, and behind the scenes they have a container that is set up for the environment that you need to build, and you just have a VS Code IDE integrated experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of the problems when you're going to contribute to a project are that whole initial bootstrap of, "Oh, you need to make sure you have this library, this install," it's so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have, powered by a lot of this cloud native technology behind the scenes that people may not know about. >> Yeah, 'cause I've been talking with the team over at Docker, the work they're doing with the desktop, enabling a local environment and making sure it matches as closely as possible the deployed environments that you might be targeting. These are some of the pains that I see. It's hard for developers to get bootstrapped up, it might take them a day or two to actually just set up their local laptop and development environment, particularly if they change teams. So really corralling that complexity down and not necessarily being overly prescriptive as to what tool you use. So if you're on Visual Studio Code, great, it should feel integrated into that environment; if you use a different environment or if you feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started?
I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating, you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera? >> Yeah, it really depends on what you're looking for and what you want to learn. So, if you're someone who's new to the space, honestly, check out the case studies on cncf.io, those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they came across. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that to drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the Cloud Native Computing Foundation's YouTube channel, so they have playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy watching, excuse me, older talks, just because they explain why things were done the way they were done, and that helps me build the tools I build. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find that in the CNCF Technical Oversight Committee, TOC, GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a SIG called App Delivery, there's a SIG for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working, and what are the next changes we want to see in the next months.
So if you want that kind of granularity and discussion on what's happening like that, then definitely join those meetings, check out those meeting notes and recordings. >> Gotcha. So Kelsey, as you look at 2020 and beyond, I know you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now? >> I don't think any of the raw technology pieces get me super excited anymore. Like, I've seen this cycle go around three or four times; in five years, there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People are able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a webcam and have their own podcasts and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal and they're hard to come by. So I think we've done a good job democratizing these ideas, distributed systems; one company got really good at packaging applications to share with each other, I think that's great, and that's never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, then we'll be talking about another digital transformation 10 years from now, because it's going to be funny, but Kubernetes will be the new legacy.
It's going to be the things that, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV looking for old school Kubernetes engineers to migrate them to some new thing, that's going to happen. You've got to know that. So at some point the merry-go-round will stop. And we're going to be focused on what you do with this. So the internet is there; most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies, to comprehend. You're at the point now where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one, it's about what you do with it. So that's going to be interesting. And we're just not there yet with tech; tech is still so much infrastructure stuff. We're so in the weeds that most people almost burn out just getting to the point where you can start to look at what you do with this stuff. So that's what I keep my eye on: when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're an iOS developer or Android developer, you use the SDK that they gave you; every year there's some new device that enables some new thing, speech to text, VR, AR, and you import an SDK, and it just works. And you can put it in one place and 100 million people can download it at the same time with no DevOps team, that's amazing. When can we do that for server side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I can definitely relate. I was at Hortonworks in 2011, so, Hadoop, in many respects, was sort of the precursor to the Kubernetes era, in that it was, as I like to refer to it, a bunch of animals in the zoo, wasn't just the yellow elephant.
And when things matured beyond that, it was basically talking about what kind of analytics they're driving, what type of machine learning algorithms and applications they're delivering. You know that's when things tip over into a real solution space. So I definitely see that. I think the other cool thing, even just outside of the container space, is there's just such a wealth of data-related services. And I think how those two worlds come together, you brought up the fact that, in many respects, server-less is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed by these richer applications, from a data processing and actionable insights perspective. >> I also want to be clear on one thing. So some people confuse two things here, what Michelle said earlier about, for the first time, a whole group of people get to learn about distributed systems and things that were reserved to white papers and PhDs. This stuff is now super accessible. You go to the CNCF site, all the things that you read about, or we used to read about, you can actually download, see how it's implemented and actually change how it works. That is something we should never say is a waste of time. Learning is always good because someone has to build these types of systems, and whether they sell it under the guise of server-less or not, this will always be important. Now the other side of this is that there are people who are not looking to learn that stuff; the majority of the world isn't looking. And in parallel, we should also make this accessible, which should enable people that don't need to learn all of that before they can be productive. So those are two sides of the argument that can be true at the same time; a lot of people get caught up thinking everything should just be server-less, and that everyone learning about distributed systems, contributing and collaborating, is wasting time.
We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel, so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how you see this playing out: DockerCon has gone virtual, and inherently the open source community is distributed and is used to not-face-to-face collaboration, but there's a lot of value that comes from assembling a tent where people can meet. What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place. Different themes, verticals; I've already attended a handful of virtual events the last few weeks, from Red Hat Summit to Open Compute Summit to Cloud Native Summit, and you'll see more and more of these. I think, in the long term, once the world either gets past COVID or there's a vaccine or something, the innate desire for people to want to get together and meet face to face and deal with all the serendipitous activities you would see at a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before, is DockerCon can have 50,000 people at it. I don't remember what the last physical DockerCon had, but that's definitely an order of magnitude more.
So being able to do these virtual events to augment potential physical events in the future, so you can build a more inclusive community, so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of the event, to me is awesome, and I hope something that we take away when we start all doing these virtual events. When we get back to physical events, we find a way to ensure that these things are inclusive for everyone and not just folks that can physically make it there. So those are my thoughts on the topic. And I wish you the best of luck planning DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people, and that just terrifies me from a cloud native KubeCon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey and Michelle for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)

Published Date : May 29 2020

Breaking Analysis: re:Invent 2019...of Transformation & NextGen Cloud


 

>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello, everyone, and welcome to this week's episode of theCUBE Insights, powered by ETR. In this Breaking Analysis, I want to do a quasi post-mortem on AWS re:Invent, and put the company's prospects into context using some ETR spending data. First I want to try to summarize some of the high-level things that we heard at the event. I won't go into all the announcements in any kind of great detail, there's a lot that's been written out there on what was announced, but I will touch on a few of the items that I felt were noteworthy and try to give you some of the main themes. I then want to dig into some of the spending data and share with you what's happening from a buyer's perspective in the context of budgets, and we'll specifically focus on AWS's business lines. And then I'm going to bring my colleague Stu Miniman into the conversation, and we're going to talk about AWS's hybrid strategy in some detail, and then we're going to wrap. So, the first thing that I want to do is give you a brief snapshot of the re:Invent takeaways, and I'll try to give you some commentary that you might not have heard coming out of the show. So, to summarize re:Invent: AWS is not big on rinse and repeat, they have this culture of raising the bar, but one thing that doesn't change is this shock and awe of announcements that comes out each year, and it's obvious. It's always a big theme, and this year Andy Jassy really wanted to underscore the company's feature and functional lead relative to some of the other cloud providers. Now the overarching theme that Jassy brought home in his keynote this year is that the cloud is enabling transformation.
Not just teeny, incremental improvement, he's talking about transformation that has to start at the very top of the organization, so it's somewhat a challenge and an appeal to enterprises generally, versus what is often a message to startups at re:Invent. And he was specifically talking to the C-suite here. Jassy didn't say this, but let me paraphrase something that John Furrier said in his analysis on theCUBE. He said if you're not born in the cloud, you basically better find the religion and get reborn, or you're going to be out of business. Now, one of the other big trends that we saw this year at re:Invent, and it's starting to come into focus, is that AWS is increasingly leveraging its acquisition of Annapurna with these new chip sets that give it higher performance and better cost structures and utilization than it can get with merchant silicon, and specifically Intel. And here's what I'll say about that. AWS is one of the largest, if not the largest, customers of Intel's in the world. But here's the thing, Intel wants a level playing field. We've seen this over the years, where it's in Intel's best interest to have that level playing field, as much as possible, in its customer base. You saw it in PCs, in servers, and now you're seeing it in cloud. The more balanced the customer base is, the better it is for Intel, because no one customer can exert undue influence and control over Intel. Intel's a consummate arms dealer, and so from AWS's perspective it makes sense to add capabilities and innovate, and vertically integrate in a way that can drive proprietary advantage that they can't necessarily get from Intel, and drive down costs. So that's kind of what's happening here. The other big thing we saw is latency, what Pat Gelsinger calls the law of physics. Well, a few years ago AWS wouldn't even acknowledge on-prem workloads, and Stu and I are going to talk about that, but they clearly see hybrid as an opportunity now.
I'm going to talk in more detail and drill into this with Stu, but a big theme of the event was moving Outposts closer to on-prem workloads that aren't going to be moving into the cloud anytime soon. And then also the edge, as well as, for instance, Amazon's Wavelength announcement that puts Outposts into 5G networks at major carriers. Now another takeaway is that AWS is unequivocal about the right tool for the right job, and you see this really prominently in database, where I've counted at least 10 purpose-built databases in the portfolio. AWS took some really indirect shots at Oracle, maybe even direct shots at Oracle, which treats Oracle Database as a hammer and every opportunity as a nail, antithetical to AWS's philosophy. Now there were a ton of announcements around AI, and specifically the SageMaker IDE, SageMaker Studio, stood out as a way to simplify machine intelligence. Now this approach addresses the skillset problem. What I mean by that is the lack of data scientists to leverage AI. But one of the things that we're kind of watching here is, it's going to be interesting to see if it exacerbates the AI black box issue, making the logic behind the machines' outcomes less transparent. Now, all of this builds up to what we've been calling next-gen cloud, and we're entering a new era that goes well beyond infrastructure as a service and lift-and-shift workloads. And it really ties back to Jassy's theme of transformation, where analytics and new computing models like serverless are fundamental now, as is security, a topic that we've addressed in detail in prior Breaking Analysis segments. AWS even made an announcement around quantum computing as a service, they call it Braket. So those are some of the things that we were watching. All right, now let's pivot and look at some of the data.
Here's a reminder of the macro financials for AWS. We get some decent data around AWS financials, and this chart, I've shown before, but it's AWS's absolute revenue and quarterly revenue year on year with the growth rates. It's very large and it's growing, that's the bottom line, but growth is slowing, to 35% last quarter as you can see. But to reiterate, we're looking at a roughly 36 billion dollar company, growing at 35% a year, and you don't see that often. And so, this market, it still has a long way to go. Now let's look at some of the ETR tactical data on spending. Now remember, spending intentions according to ETR are reverting to pre-2018 levels, and are beginning to show signs of moderation. This chart shows spending momentum based on what ETR calls net score, and that represents the net percentage of customers that are spending more on a particular platform. Now, here's what's really interesting about this chart. It shows the net scores for AWS across a number of the company's markets, comparing the gray, which is the October '18 survey, with the blue, July '19, and the yellow, October '19. And you can see that WorkSpaces, machine learning and AI, cloud overall, analytic databases, they're all either up or holding the same levels as a year ago. So you see AWS is bucking the trend, and even though spending on containers appears to be a little less than last year, it's holding firm from the July survey. So my point is that AWS is really bucking that trend from the overall market, and is continuing to do very, very well. Now this next slide takes the same segments, and looks at what ETR refers to as market share, which is a measure of pervasiveness in the survey. So as you can see, AWS is gaining in virtually all of its segments. So even though spending overall is softening, AWS in the marketplace, AWS is doing a much better job than its peers on balance. Now, the other thing I want to address is this notion of repatriation.
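For reference, the net score metric just described, the net percentage of customers spending more on a platform, can be sketched as a back-of-envelope computation. This is a simplification for illustration, not ETR's exact methodology or response categories:

```python
from collections import Counter

def net_score(responses):
    """Rough net score: share of respondents increasing or adopting spend
    minus share decreasing or replacing. Simplified illustration of
    ETR's metric, expressed as a percentage."""
    counts = Counter(responses)
    n = len(responses)
    more = counts["adopting"] + counts["increasing"]
    less = counts["decreasing"] + counts["replacing"]
    return round(100 * (more - less) / n, 1)

# Hypothetical survey: 55 increasing, 30 flat, 10 decreasing, 5 replacing
sample = ["increasing"] * 55 + ["flat"] * 30 + ["decreasing"] * 10 + ["replacing"] * 5
print(net_score(sample))  # 40.0
```

The key property is the netting: a platform with many enthusiastic spenders can still post a weak score if defections are high, which is exactly why the low-single-digit replacement numbers discussed next matter.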
I get this a lot, as I'm sure do other analysts. People say to me, "Dave, you should really look into this. We hear from a lot of customers that they moved to the cloud, and now they're moving workloads back on-prem because the cloud is so expensive." Okay, so they say "You should look into this." So this next chart really does look into this. What the chart shows is, across those same offerings from AWS, so the same services, the percent of customers that are replacing AWS, so I'm using this as a proxy for repatriation. Look at the numbers, they're low single digits. You see traditional enterprise vendors' overall business growing in the low single digits, or shrinking. AWS's defections are in the low single digits, so, okay, now look at this next chart. What about adoptions? If the cloud is slowing down, you'd expect a slowdown in new adoptions. What this data shows is the percent of customers that are responding that they're adding AWS in these segments as a new platform. So look, across the board, you're seeing increases in most of AWS's market segments. Notably, in respondents citing AWS overall, at the very rightmost bars, you are admittedly seeing some moderation relative to last year. So that's a bit of a concern and clearly something to watch, but as I showed you earlier, AWS overall, that same category, is holding firm, because existing customers are spending more. All right, so that's the data portion of the conversation, hopefully we put that repatriation stuff to bed, and I now want to bring in Stu Miniman to the conversation, and we're going to talk more about multicloud, hybrid, on-prem, we'll talk about Outposts specifically. So Stu, welcome, thank you very much for coming on. >> Thanks Dave, glad to be here with you. >> All right, so let's start with multicloud, and dig into the role of Kubernetes a little bit. Let me sort of comment on how I think AWS looks at multicloud.
I think they look at multicloud as using multiple public clouds, and they look at on-prem as hybrid. Your thoughts on AWS's perspective on multicloud, and what's going on in the market. >> Yeah, and first of all, Dave, I'll step back for a second. You talked about how Amazon has for years taken shots at Oracle. The one Amazon was actually taking shots at this year was Microsoft, so not only did they talk about Oracle, they talked about customers looking to flee SQL Server, and I lead with that because when you talk about hybrid cloud, Dave, if you asked any analyst over the last three or four years, "Okay, which vendor is best positioned in hybrid, which cloud provider has the best solution for hybrid cloud?", Microsoft is the one we'd say, because of their strong position in the enterprise, of course with Windows, the move to Office 365, and Azure as the clear number two player. They've had Azure Stack for a number of years, they had Azure Pack before that, and they just announced Azure Arc this year, so we've had at least three generations of hybrid and multicloud solutions from Microsoft. Amazon has a different positioning. As we've talked about for years, Dave, not only does Amazon not like to use the words hybrid or multicloud, for the most part, but they do have a different viewpoint. The partnership with VMware expanded what they're doing on hybrid, and Andy Jassy at least acknowledges that multicloud is a thing. When he sat down with John Furrier ahead of the show, he said, "Well, there might be reasons why customers might want a secondary cloud: either there's a group inside that wants a particular service, or, if I'm concerned that I might fall out of love with my primary supplier, I might need a second one." In not so many words, Andy said, "Look, we understand multicloud is a thing."
Now, architecturally, Amazon's positioning on this is that you should use Amazon, and they should be the center of what you're doing. You talked a lot about Outposts; Outposts is critical to what Amazon is doing in this environment. >> And we're going to talk about that, but you're right, Amazon doesn't like to talk about multicloud as a term. And by the way, they say that multicloud is more expensive, less secure, and more complicated, and that's probably true. But you're right, they are at least acknowledging it, and I would predict, just as with hybrid, which we want to talk about right now, they'll be participating in some way, shape, or form. But before we go to multicloud, or hybrid, what about Kubernetes? >> So, right, first of all, we've been at the KubeCon show for years; we've been watching Kubernetes since the early days. Kubernetes is not a magic layer. It does not automatically mean "Hey, I've got my application, I can move it willy-nilly." Data gravity is really important, and how I architect my microservices solution is hugely important. When I talk to my friends in the app dev world, Dave, hybrid is very often the way they are building things. If I take some big monolithic application and start pulling it apart, and I have that data warehouse or data store in my data center, I can't just migrate that to the cloud; David Floyer has for years been talking about the cost of migration. So microservices architecture is the way most customers are building, and a hybrid environment is often the result. As for multicloud, we're not doing cloud bursting; we're not just saying "Oh hey, I woke up today, and cloud A is cheaper than cloud B, let me move my workload." That said, I had a great conversation with a good Amazon customer that said two years ago, when they deployed Kubernetes, they did it on Azure.
You want to know why? The Azure solution was more mature, and they were already doing things on Azure. But as Amazon fully embraced Kubernetes, not just letting it sit on top of their platform but launching a managed service, which is EKS, they looked at it, took an application, and migrated it from Azure to Amazon. Now, about migrating: there are the underlying services, and everybody does things a little bit differently. If you look at some of the tooling out there, a great one to look at is HashiCorp, which has tooling that can span multiple clouds, but if you look at how they deploy to Azure, to Google, to AWS, it's different. So you've got to have different code and different skillsets; it's not a utility with just generic compute, storage, and networking underneath, you need specific skills there. So Kubernetes, absolutely, when I've been talking to users for the last few years and asking "Why are you using Kubernetes?", the answer is "I need that eject lever, so that if I want to leave AWS with an application, I can do that. It's not press-a-button easy, but I know that I can move it, because underneath, the pods, the containers, and all those core building blocks are the same; I will have to do some reconfiguration." As we know with migration, usually I can get 80 to 90 percent of the way there, and then I need to make the last-- >> So it's a viable hedge on your AWS strategy, okay. >> Absolutely, and I've talked to lots of customers. Amazon's own numbers show that most cloud Kubernetes deployments out there are running on Amazon, and when I talk to customers, absolutely, a lot of the ones doing Kubernetes in the public cloud are doing it on Amazon, and one of the main reasons they're using it is as a hedge against being all-in on Amazon.
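Stu's "80 to 90 percent portable, then reconfigure" point can be made concrete with a toy example. The manifest below is a deliberately simplified stand-in for a real Kubernetes spec: the core workload fields carry over between clouds unchanged, while provider-specific knobs, like the storage class (`gp2` is a common EKS default, `managed-premium` an AKS one), are the part you rewrite:

```python
# Toy illustration: most of a Kubernetes manifest is portable across clouds;
# only provider-specific settings (storage classes, LB annotations, etc.)
# need rewriting. Field names here are a simplified sketch, not a full spec.
CLOUD_STORAGE_CLASSES = {"aws": "gp2", "azure": "managed-premium"}


def retarget(manifest: dict, target_cloud: str) -> dict:
    """Return a copy of a simplified manifest retargeted to another cloud."""
    out = dict(manifest)
    # The ~10-20% that is cloud-specific gets swapped per provider.
    out["storageClassName"] = CLOUD_STORAGE_CLASSES[target_cloud]
    return out


azure_manifest = {
    "kind": "StatefulSet",                   # portable
    "replicas": 3,                           # portable
    "image": "myapp:1.4",                    # portable
    "storageClassName": "managed-premium",   # Azure-specific
}

aws_manifest = retarget(azure_manifest, "aws")
print(aws_manifest["storageClassName"])               # gp2
print(aws_manifest["kind"], aws_manifest["replicas"])  # StatefulSet 3
```

This is the "eject lever" in miniature: the pods and containers move as-is, and the reconfiguration work lives in the small cloud-specific remainder.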
>> All right, let's talk about Outposts, specifically as part of Amazon's hybrid strategy, and now their edge strategy as well. >> Right, so Azure Stack, which I mentioned earlier, has been out from Microsoft for a few years. It has not been doing phenomenally well; when I was at Microsoft Ignite this year, I heard that basically certain government agencies and service providers are using it, essentially delivering Azure as a service. But Azure Stack is basically an availability zone in my data center, and Amazon looked at this and said, "That's not how we're going to build this." Outposts is an extension of your local region. So while people look at the box, I took a picture of it and Shu was like, "Hey, whose server, what networking card, what chipset?", and I said, "Hold on a second. You might look at that box, and you might be able to open the door, but Amazon is going to deploy it and Amazon is going to manage it. Really you should put a curtain in front of it and say pay no attention to what's behind here, because this is Amazon gear; it's Amazon as a service in your data center, and there are only a handful of services that are going to be there at first." Even S3: day one, among the Amazon native services, you're going to just use S3 in your local region. Well, what if I need special latency? Well, Amazon's going to look at that and see what's available. So it is Amazon hardware, Amazon software, and the Amazon control plane reaching into that data center, and it's very scalable; Amazon says over time it should be able to go to thousands of racks if you need. So it's absolutely that cloud experience brought closer to my environment, for where I need certain applications, certain latency, or certain pieces of data stored locally.
>> And we've seen Amazon dip its toe into the hybrid on-prem market with Snowball and Greengrass and the like before, but this is a much bigger commitment, one might even say capitulation, to hybrid. >> Well, right, and here's why I even say this is hybrid: it's all Amazon. It is not "take my private cloud and my public cloud and tie 'em together." It's not a "cloud at customer" or IBM-style solution, where they say "I'm going to put a rack here and a rack there, and it's all going to work the same." It is the same hardware and software, but it is not all of the pieces-- >> VMware and Outposts is hybrid. >> Really interesting, Dave: the native AWS solution is announced first, here in 2019, and the VMware solution on Outposts isn't going to be available until 2020. Draw from that what you will. It's been a strong partnership, and there are exabytes of data in VMware Cloud on AWS now, but yeah, it's a little bit of a-- >> Quid pro quo, I think is what you call that. >> Well, I'd say Amazon is definitely saying, "We're going to encroach a little bit on your business, and we're going to lock you into our environment, too." >> Okay, let's talk about the edge, and Outposts at the edge. They announced Wavelength, which is essentially taking Outposts and putting it into 5G networks at carriers. >> Yeah, so Outposts is this building block, and what Amazon did is say, "This is pretty cool; we actually have our environment, and we can do other things with it." So sometimes they're taking pretty much that same block and using it for another service. One that you didn't mention was AWS Local Zones. It is not a whole new availability zone, but it is basically extending the cloud, multi-tenant; the first one is aimed at the media and entertainment market in Los Angeles. And if you ask how Amazon gets lower latency, gets closer, and delivers specialized services, Local Zones are how they're going to do it.
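The options Stu walks through form a latency spectrum: regional availability zones, Local Zones in metros, Outposts racks on-prem, and (discussed next) Wavelength zones at the 5G carrier edge. A toy selector makes the trade-off concrete; the "typical latency" numbers are illustrative round figures I'm assuming for the sketch, not AWS specifications:

```python
# Illustrative only: each tier's "typical achievable round-trip latency"
# is a made-up round number for the sketch, not an AWS figure.
TIERS = [  # ordered from least to most specialized
    ("Regional availability zone", 30.0),
    ("Local Zone (metro)", 10.0),
    ("Outposts rack (on-premises)", 5.0),
    ("Wavelength zone (5G carrier edge)", 2.0),
]


def place_workload(latency_budget_ms: float) -> str:
    """Pick the least-specialized tier whose typical latency fits the budget."""
    for name, typical_ms in TIERS:
        if typical_ms <= latency_budget_ms:
            return name
    raise ValueError("No tier meets this latency budget")


print(place_workload(50))  # Regional availability zone
print(place_workload(12))  # Local Zone (metro)
print(place_workload(3))   # Wavelength zone (5G carrier edge)
```

The design point mirrors Amazon's positioning: you only move down the spectrum, closer to the workload and into more specialized (and more constrained) infrastructure, when the latency requirement forces you to.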
The Wavelength solution is something they built specifically for the telco environment. I actually got to sit down with Verizon; this was at least an 18-month integration. Anybody who's worked in the telco space knows it's usually not standard gear: there's NEBS certification, there are all these requirements, it's often even DC power. So it leverages Outposts, but it is not Amazon rolling the same thing into Verizon that they run in their own environments. It's similar in how they're going to manage it, but as you said, it pushes to the telco edge, and in partnership with Verizon, Vodafone, SK Telecom, and some others, it will be rolling out across the globe. They are going to have that 5G offering, and here's the interesting bit: the Outposts piece I buy from Amazon, but I still buy the 5G from my local carrier. It's going to roll out in Chicago first, enabling all of those edge applications. >> Well, what I like about the Amazon strategy at the edge, and I've said this before on a number of occasions on theCUBE Breaking Analysis, is that they're taking programmable infrastructure to the edge. The edge will be won by developers, in my view, and Amazon obviously has great developer traction. I don't see that same developer traction at HPE, or even Dell EMC proper, or even within VMware; now they've got Pivotal, so they've got an opportunity there, but they've really got a long way to go in terms of appealing to developers, whereas Amazon I think is there today, obviously. >> Yeah, absolutely true, Dave. When we first started going to the show seven years ago, it was very much the hoodie crowd and all of those cloud-native folks. Now, as you said, it's those companies that are trying to become born again in the cloud and build these environments. I had a great conversation with Andy Jassy on air, Dave, and I asked, "Do we just shrink-wrap solutions and make it easy for the enterprise to deploy, or are we doing the enterprise a disservice?"
Because if you are truly going to thrive and survive in the cloud-native era, you've got to go through a little bit of pain; you need to have more developers. I've seen lots of stats about how fast people are hiring developers, and it's really a reversal of that old outsourcing trend: I need IT and the business working together, being agile, and being able to respond to and leverage data. >> It's that hyperscaler mentality that Jassy has: "We've got engineers, we'll spend time on creating a better mousetrap, on lowering costs," whereas the enterprise doesn't necessarily have as many resources or as many engineers running around; they'll spend money to save time. So your point about solutions I think is right on. We'll see, I mean look, never say never with Amazon. We've seen it, certainly with on-prem, hybrid, whatever you want to call it, and I think you'll see the same with multicloud, and so we watch. >> Yeah, Dave, the analogy I gave in the final wrap is "Finding the right cloud is like Goldilocks finding the perfect solution." There's one solution out there that I think is a little too hot, and you're probably not smart enough to use it just yet. There's another solution that, yeah, absolutely, you can use all of your credits to leverage it, and it will meet you where you are, and it's great. And then you've got Amazon trying to fit everything in between, and they feel that they are just right no matter where you are on that spectrum. And that's why you get 36 billion growing at 35%, which is not something I've seen in the software space. >> All right, Stu, thank you for your thoughts on re:Invent, and thank you for watching this episode of theCUBE Insights, powered by ETR. This is Dave Vellante for Stu Miniman; we'll see you next time. (techno music)

Published Date : Dec 13 2019

