Luis Ceze, OctoML | Amazon re:MARS 2022
(upbeat music)
>> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event: machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events; re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really the intersection of the future of space, industrial, automation, which is very heavily DevOps machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO and co-founder of OctoML. Welcome to theCUBE.
>> Thank you very much for having me on the show, John.
>> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say, small show relative to what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story.
>> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. I say sustainable because it means we're going to make it faster and more efficient, you know, use less human effort; and accessible to everyone means accessible to as many developers as possible, and also accessible on any device. So, we started from an open source project that began at the University of Washington, where I'm a professor. Several of the co-founders were PhD students there. We started with this open source project called Apache TVM that actually had contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to take a machine learning model and run it on any hardware, like CPUs, GPUs, various accelerators, and so on. It was the kernel of our company, and the project's been around for about six years or so. The company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware, cloud and edge.
>> So is the thesis that, when it first started, that you want to be agnostic on platform?
>> Agnostic on hardware, that's right.
>> Hardware, hardware.
>> Yeah.
>> What was it like back then? What kind of hardware were you talking about back then? 'Cause a lot's changed, certainly on the silicon side.
>> Luis: Absolutely, yeah.
>> So take me through the journey, 'cause I can see the progression. I'm connecting the dots here.
>> So once upon a time, yeah, no... (both chuckling)
>> I walked in the snow with my bare feet.
>> You have to be careful, because if you wake up the professor in me, then you're going to be here for two hours, you know.
>> Fast forward.
>> The abridged version here is that, clearly, machine learning has been shown to actually solve real, interesting, high-value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time people were just beginning to use GPUs for machine learning. We already saw that, with a bunch of machine learning models popping up and CPUs and GPUs starting to be used for machine learning, it was clear that there was an opportunity to run everywhere.
>> And GPUs were coming fast.
>> GPUs were coming, and there's a huge diversity of CPUs, GPUs, and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today.
So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy the IBM OS and the IBM database, IBM applications; it was all tightly coupled. And if you wanted to use IBM software, you had to buy IBM hardware. So that's kind of what machine learning systems look like today. If you buy a certain big-name GPU, you've got to use their software. And if you use their software, which is pretty good, you have to buy their GPUs, right? So, you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, the ability to run the models where it best suits them, right? So that includes picking the best instance in the cloud, the one that's going to give you the right, you know, cost properties, performance properties; or you might want to run it on the edge, you might run it on an accelerator.
>> What year was that, roughly, when you were doing this?
>> We started that project in 2015, 2016.
>> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet.
>> Luis: No, it wasn't.
>> It was, I'm thinking, like 2017 or so.
>> Luis: Right.
>> So that was the beginning of, okay, this is an opportunity. AWS, I don't think they had released some of the Nitro stuff that Hamilton was working on. So, they were already kind of going that way. It's kind of like converging.
>> Luis: Yeah.
>> The space was happening, exploding.
>> Right. And the way that was dealt with, and to this day, you know, to a large extent as well, is by backing machine learning models with a bunch of hardware-specific libraries. And we were some of the first ones to say, like, you know what, let's take a compilation approach: take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization, right? But it was way back when. We can talk about where we are today.
>> No, let's fast forward.
>> That's the beginning of the open source project.
>> But that was a fundamental belief, a worldview there. I mean, you had a worldview that was logical when you compare it to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll speed through the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I got unlimited processors, I got silicon on chips, I got, everywhere.
>> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and so on, and then hardware targets. So how do you navigate that? What we want here, our vision, is to say: folks should focus, people should focus on making the machine learning models do what they want them to do, solving a problem of high value to them, right? And model deployment should be completely automatic. Today, it's very, very manual to a large extent.
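For readers who want to see what that compilation approach looks like in practice, here is a minimal sketch using Apache TVM's Python API. The model file name, input name, and shapes are illustrative assumptions, not details from the interview; the calls themselves (Relay import, build, graph executor) follow TVM's documented flow.

```python
# Minimal sketch: compile an ONNX model with Apache TVM and run it locally.
# "model.onnx", the input name, and the 1x3x224x224 shape are assumptions.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# Import the framework model into Relay, TVM's intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Pick a hardware target: "llvm" for CPU here; "cuda", "metal", etc. for others.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled artifact on the chosen device.
dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```

The same model definition can be rebuilt for a different `target` string, which is the "any model on any hardware" idea Luis describes.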
So once you're serious about deploying a machine learning model, you've got to get a good understanding of where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers; and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent.
>> Well, I think that, first of all, I use that line here, 'cause I'm looking at it here live on camera. People can't see, but it's like, I've used it in a couple of my interviews, because the word agility is very interesting, because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on; they had this large-scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems view of agility being the output, and removing dependencies, which I think is what you guys were trying to do.
>> Only part of what we do, right? So agility means a bunch of things. First, you know--
>> Yeah, explain.
>> Today it takes a couple of months to get a model from, when the model's ready, to production. Why not turn that into two hours? Agile, literally, physically agile, in terms of wall-clock time, right? And then the other thing is giving you the flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give you the ability of getting your model and, you know, getting it compiled, getting it optimized for any instance in the cloud, and automatically moving it around. Today, that's not the case. You have to pick one instance, and that's what you do. And then you might auto-scale with that one instance. So we give the agility of actually running and scaling the model the way you want, and the way that gives you the right SLAs.
>> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally: that scale being moving things around, making them faster, not having to do that integration work.
>> Scale, and run the models where they need to run. Like, some day you want to have a large-scale deployment in the cloud. You're going to have models on the edge for various reasons, because the speed of light is limited. We cannot make light faster. So, you know, you've got to have some, that's physics there, you cannot change it. There are privacy reasons: you want to keep data local, not send it around, and run the model locally. So anyways, it's giving the flexibility.
>> Let me jump in real quick. I want to ask this specific question, because you made me think of something. So we were just having a data mesh conversation. And one of the comments that's come out of a few of these data-as-code conversations is: data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to; but I can move a machine learning algorithm to the edge. 'Cause it's costly to move data. I can move compute, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code.
>> It is the product.
>> Yeah. And since you said the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way.
They do not have any of the typical properties of code. Whenever you write a piece of code and you run the code, you don't even think about what CPU it is, where it runs, what kind of CPU it runs on, what kind of instance it runs on. But with a machine learning model, you do. So what we've done is create this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and that function could run anywhere.
>> Yeah.
>> Right.
>> That's why--
>> That's better.
>> Bringing DevOps agility--
>> That's better.
>> Yeah. And you can use existing--
>> That's better, because I can run it on Artemis too, in space.
>> You could, yeah.
>> If they have the hardware. (both laugh)
>> And that allows you to run your existing, continue to use your existing DevOps infrastructure and your existing people.
>> So I have to ask you, 'cause you're a professor, this is like a masterclass on theCUBE. Thank you for coming on. Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog. That's the diversity in hardware: it tends to be purpose-driven. I got a spaceship, I'm going to have hardware on there.
>> Luis: Right.
>> It's generally viewed in the community here, that everyone I talk to and other communities, open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage, here: hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this?
>> I think they're starting to. Here is why, right. A lot of companies that were hardware-first, that thought about software too late, aren't making it, right? There's a large number of hardware companies, AI chip companies, that aren't making it. Probably some of them won't make it, unfortunately, just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, it's very flexible. So we see a lot of emerging chip companies, like SiMa.ai's been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps it up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say.
>> Well, first of all, not only are you backable by the VCs 'cause you have pedigree; you're a professor, you're smart, and you get good recruiting--
>> Luis: I don't know about the smart part.
>> And you get good recruiting for PhDs out of the University of Washington, which is a not-too-shabby computer science department. But they want to make money. The VCs want to make money.
>> Right.
>> So you have to make money. So what's the pitch? What's the business model?
>> Yeah. Absolutely.
>> Share us what you're thinking there.
>> Yeah. The value of using our solution is shorter time to value for your model, from months to hours. Second, you shrink operating expenses, OpEx, because you don't need a specialized, expensive team.
Talk about expensive: expensive engineers who can understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? Then you reduce that. And also, in the process of actually getting a model specialized to the hardware, making it hardware-aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about a very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities, right? So, that's the high-level value pitch. But how do we make money? Well, we charge for access to the platform, right?
>> Usage. Consumption.
>> Yeah, and value-based. Yeah, so it's consumption- and value-based. So it depends on the scale of the deployment. If you're going to deploy a machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale.
>> So, you have a direct sales force then to work those deals.
>> Exactly.
>> Got it. How many customers do you have? Just curious.
>> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue-generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM, and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that--
>> I wonder who they are.
>> I don't know who they are, right.
>> Both start with the letter A.
>> And they're both here, right. What is that?
>> They both start with the letter A.
>> Oh, that's right.
>> I won't give it away. (laughing)
>> Don't give it away.
>> One has three, one has four. (both laugh)
>> I'm guessing, by the way.
>> Then we have customers in the, actually, early customers have been using the platform from the beginning, in the consumer electronics space, in Japan, you know, self-driving car technology as well, as well as some AI-first companies whose core value, the core business, comes from AI models.
>> So, serious, serious customers. They got deep tech chops. They're integrating, they see this as a strategic part of their architecture.
>> That's what I call AI-native, exactly. But now we have several enterprise customers in line that we've been talking to. Of course, because now we launched the platform, now we've started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine it.
>> So, do you sell to the little guys, like us? Would we be customers if we wanted to be?
>> You could, absolutely, yeah.
>> What do we have to do? Have machine learning folks on staff?
>> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo.
>> OctoML.
>> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. That allows you to take your image and do style transfer. You know, you can get you and a pineapple and see what you look like with a pineapple texture.
>> We got a lot of transcript and video data.
>> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear--
>> But I could use it. You're not blocking me from using it. Everyone, it's pretty much democratized.
>> You can try the demo, and then you can request access to the platform.
>> But you've got a lot of more serious, deeper customers. But you can serve anybody, is what you're saying.
>> Luis: We can serve anybody, yeah.
>> All right, so what's the vision going forward? Let me ask this: when did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple of years ago?
>> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, that I want the audience here to take away, is that there's a lot of progress being made in creating machine learning models. So, there are fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is: how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications, both end-user applications as well as enablers. So we're an enabler of that, because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development.
>> I've been thinking about this for a long, long time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code, independent of other dependencies, is really amazing. It's so obvious now that you say it. What are the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware?
>> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how: when you have the model in the platform, you can actually see how this model runs on any instance of any cloud, by the way. So we support all three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run in, at most, 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care about is, is it going to run cheaply enough? So we can help you navigate. And we're also going to make it automatic.
>> It's like tire kicking in the dealer showroom.
>> Right.
>> You can test everything out, you can see the simulation. Are they simulations, or are they real tests?
>> Oh, no, we run it all on real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like Arm devices and Nvidia Jetsons.
And we have the OctoML cloud, which is a bunch of racks with a bunch of Raspberry Pis and Nvidia Jetsons, and very soon a bunch of mobile phones there too, that can actually run on the real hardware, and validate it, and test it out, so you can see that your model runs performantly and economically enough in the cloud, and that it can run on the edge devices--
>> You're a machine learning as a service. Would that be accurate?
>> That's part of it, because we're not doing the machine learning model itself. You come with a model, and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API endpoint in the cloud. You send an image and you get a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but have the same convenience as an API, you can use our service. So, today, chances are that, if you have a model that does what you know you want it to do, there might not be an API for it; we actually automatically create the API for you.
>> Okay, so that's why I get that DevOps agility for machine learning is a better description. 'Cause it's not, you're not providing the service. You're providing the service of deploying it, like DevOps infrastructure as code. You're now ML as code.
>> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off.
>> 'Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. 'Cause it's a craft. I mean, let's face it.
>> Yeah. I want human brains, which are very precious resources, to focus on building those models that are going to solve business problems. I don't want these very smart human brains figuring out how to get this to actually run the right way. This should be automatic. That's why we use machine learning, for machine learning, to solve that.
>> Here's an idea for you. We should write a book called The Lean Machine Learning. 'Cause The Lean Startup was all about DevOps.
>> Luis: We'll call it machine leaning. No, that's not going to work. (laughs)
>> Remember when iteration was the big mantra? Oh, yeah, iterate. You know, that was from DevOps.
>> Yeah, that's right.
>> This code allowed for standing up stuff fast, double down, we all know the history, how it turned out. That was a good value for developers.
>> I completely agree. If you don't mind me building on that point: you know, something we see at OctoML, but we also see at Madrona as well, is that there's a trend towards best-in-breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring, right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say, with model monitoring, to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud.
>> It's evolution. I think it's a great step. And again, I love the analogy to the mainframe. I lived during those days. I remember the monolithic, proprietary era, and then, you know, the OSI model kind of blew it open. But that OSI stack never went full stack; it only stopped at TCP/IP. So, I think the same thing's going on here.
You see some scalability around it, to try to uncouple it, free it.
>> Absolutely. And sustainability and accessibility, to make it run faster and make it run on any device that you want, by any developer. So, that's the tagline.
>> Luis Ceze, thanks for coming on. Professor.
>> Thank you.
>> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it.
>> Thank you very much. Thank you.
>> Congratulations, again. All right. OctoML here on theCUBE. Really important: uncoupling the machine learning from the hardware, specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John Furrier for theCUBE. Thanks for watching. More coverage after this short break.
>> Luis: Thank you. (gentle music)
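The instance-picking logic Luis describes in the interview (meet the latency SLA first, then choose on cost) can be boiled down to a few lines. This is an illustrative sketch of that decision rule, not OctoML's actual platform code; the instance names, latencies, and prices are made up for the example.

```python
# Illustrative sketch of "pick the cheapest instance that meets the SLA".
# All benchmark numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Benchmark:
    instance: str
    latency_ms: float     # measured on real hardware
    cost_per_hour: float  # hypothetical on-demand price

def pick_instance(benchmarks, sla_ms=50.0):
    ok = [b for b in benchmarks if b.latency_ms <= sla_ms]
    if not ok:
        raise ValueError("no instance meets the latency SLA")
    # Past the SLA, faster doesn't matter; cheapest wins.
    return min(ok, key=lambda b: b.cost_per_hour)

results = [
    Benchmark("c6i.xlarge", 42.0, 0.17),
    Benchmark("g4dn.xlarge", 11.0, 0.53),
    Benchmark("c6g.large", 71.0, 0.07),
]
print(pick_instance(results).instance)  # -> c6i.xlarge
```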
Massimo Re Ferre, AWS | DockerCon 2021
>> Mhm. Yes. Hello. Welcome back to theCUBE's coverage of DockerCon 2021 virtual. I'm John Furrier, your host of theCUBE. We're here with Massimo Re Ferre, principal technologist at AWS, Amazon Web Services. Massimo, thank you for coming on theCUBE, appreciate it.
>> Thank you. Thank you for having me.
>> Great to see you. Love this Amazon integration with Docker, want to get into that in a second. Been great to see the Amazon cloud-native integration working well. ECS, very popular. Every interview I've done at re:Invent, every year it gets better and better, more adoption every year. Tell us what's going on with Amazon ECS, because you have ECS Anywhere, and now that's become available.
>> Yeah, that's right, John. And, yeah, so customers have been appreciating the value and the simplicity of ECS for many years now. I mean, we launched ECS back in 2014, and we have seen great adoption of the product, and customers have always appreciated the fact that it was easy to operate and easy to use. This is a journey with ECS Anywhere that started a few years ago, actually. And we started this journey listening to customers that had particular requirements. I'd like to talk about, you know, the law of the land and the laws of physics, where customers wanted to go all-in into the cloud, but they did have this exception, that they needed to deal with applications that could not move to the cloud. So as I said, this journey started three years ago, when we launched Outposts. And Outposts is our managed infrastructure that customers can deploy in their own data centers. And we supported ECS on day one on Outposts. Having said that, there are lots of customers that came to us and said, we love Outposts, but there are certain applications and certain requirements, such as compliance, or simply the fact that we have assets that we need to reuse in our data center, that we want to use before we move into the cloud. So they were asking us: we love the simplicity of ECS, but we have to use gear that we have in our data center. That is when we started thinking about ECS Anywhere. So basically, the idea of ECS Anywhere is that you can use ECS, and appreciate the simplicity of using ECS, but using your customer-managed infrastructure as the data plane. Basically, what you can do is define your application within the ECS control plane and deploy those applications on customer-owned infrastructure. What that means, from a very practical perspective, is that you can deploy this application on your managed infrastructure ranging from Raspberry Pis, this is the demo that we showed at re:Invent when we announced ECS Anywhere, all the way up to bare metal servers. We don't really care about the infrastructure underneath; as long as the OS is supported, we're fine with that.
>> Okay, so let's take this to the next level. And actually, the big theme at DockerCon is developer experience, you know, that's what I kind of want to talk about, and obviously developer productivity and innovation have to go hand in hand. You don't want to stunt the innovation equation, which is cloud-native and scale, right? So how does the developer experience improve with Amazon ECS and Anywhere, now that I'm on premises or in the cloud? Can you take me through it? What are the improvements around ECS and the developer?
>> Yeah, I would argue that what ECS Anywhere solves is more the operational aspect, and the requirements that are more akin to what the operations team needs to meet. We're working very hard to improve the developer experience on top of ECS, beyond what we're doing with ECS Anywhere. So I'd like to step back a little bit and maybe tell a little bit of a story of why we're working on those things. So the customers, as I said before, continue to appreciate the simplicity and the ease of use of ECS. However, what we learned over the years is that as we added more features to ECS, we ended up leveraging more AWS services. An example would be load balancer integration, or Secrets Manager, or EFS, or other things like service discovery, which uses other AWS products underneath, like Cloud Map or Route 53. And what happened is that the end-user experience, the developer experience, became a little bit more complicated, because now customers appreciated the ease of use of these fully managed services; however, they were responsible for tying and wiring them all together in the application definition. So what we're working on to simplify this experience is tools that kind of abstract this verbosity that you get with ECS. An example is the CloudFormation template that a developer would need to use to deploy an application leveraging all of these features; it could end up being many hundreds of CloudFormation lines in the definition of the service. So we're working on new tools and new capabilities to make this experience better. Some of them are the CDK, the Copilot CLI, the AWS Copilot CLI; those are all instruments and technologies and tools that we're building to abstract that verbosity that I was alluding to, and this is also where the Docker Compose integration with ECS falls in.
>> Yeah, I was just gonna ask you about the Docker piece, because actually it's DockerCon; all the developers love containers, they love what they do. This is a native, you know, mindset of shifting left with security. How is the relationship with the Docker container ecosystem going with you guys? Can you take a minute to explain, for the folks here watching this event and participating in the community, the relationship with Docker containers specifically?
>> Yeah, absolutely. So basically, we started working with Docker many, many years ago. ECS was based on Docker technology when we launched it, and it's still using Docker technology. And last year we started to collaborate with Docker more closely, when Docker released the Docker Compose specification as an open-source project. So basically, Docker is trying to use the Docker Compose specification to create an infrastructure-agnostic way to deploy Docker applications, using those specifications on multiple infrastructures. As part of this journey, we worked with Docker to support ECS as a backend for the specification. Basically, what this means, from a very practical perspective, is that you can take an existing Docker Compose file, and Docker says that there are 650,000 Docker Compose files spread across GitHub and all source control systems over the world, and basically you can take those Docker Compose files, and compose up, and deploy transparently into an ECS target on AWS.
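To make that concrete, here is roughly what the workflow looked like with the Docker CLI's ECS integration as it existed around the time of this interview. The service definition is a made-up minimal example, and it assumes AWS credentials are already configured.

```yaml
# docker-compose.yml -- a minimal, hypothetical service definition
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
```

```shell
# Create a Docker context backed by Amazon ECS, then deploy the same
# Compose file to AWS; CloudFormation, Fargate tasks, and load balancing
# are provisioned behind the scenes.
docker context create ecs myecs
docker context use myecs
docker compose up
```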
>> So basically, if we go back to what I was alluding to before, the fact that a developer would need to author many hundreds of lines of CloudFormation template to be able to take their application and deploy it into the cloud: what they need to do now is author a new file, a file with a very clear and easy-to-use Docker Compose syntax, compose up, and deploy automatically on AWS, using ECS, Fargate, and many other AWS services on the backend.
>> And what's the expectation in your mind, as you guys look at the container service to Anywhere model, the on-premises and Outposts, what's the vision? Because that's, again, another question mark for me. It's like, okay, I get it, totally makes sense. But containers are showing up in mainstream enterprises, not just the hyperscalers. You guys have always been kind of the forward thinkers, but, you know, main street enterprise, I call it. They're picking up adoption of containers in a massive way. They're looking at cloud-native specifically as the place for modern application development, period. That's happening. What's the story? Say it again, because I want to make sure I get this right: ECS Anywhere, if I want to go on premises, hybrid, what's it mean for me?
>> Uh, this goes back to what I was saying at the beginning. So what we have been discussing here are mostly two orthogonal things, right? So there's the fact that we enable these big enterprises to meet their requirements, and meet their checkboxes sometimes, to be able to deploy outside of AWS when there is a need to do that. This could be for edge use cases, or for using gear that exists in the data center. This is what ECS Anywhere is trying to address. There is another, orthogonal discussion, which is developer experience, and that developer experience is being addressed by these additional tools. What I like to say is that CloudFormation is becoming a little bit like assembler, in a sense, right? It's becoming very low level, super powerful, but very low level, and we want to abstract and bring the experience to the next level and make it simple for developers to leverage the simplicity of some of these tools, including Docker Compose, and be able to deploy into the cloud and get all the benefits of the cloud: scalability, elasticity, and security.
>> I love the assembler analogy because, you think about it, a lot of the innovation has been kind of low-level, foundational, and if you start to see all the open-source activity and the customers, the tooling does matter. And I think that's where the ease of use comes in. So the simplicity totally makes sense. Can you give an example of some simplicity piece? Because I think, you know, you guys are looking at ECS as the cornerstone for simplicity. I get that. Can you give an example to walk us through a day in the life of, an example--
>> An example of simplicity? Yeah, simplicity in action, yeah. Well, one of the examples that I usually give: there is this notion of being serverless, and I think that there is a little bit of an obsession around serverless and trying to talk about serverless for so many things. When I talk about ECS, I like to use another moniker, that is, versionless. So to me, simplicity also means that I do not have to update my service, right?
So the way ECS works is that engineering in the service team keeps producing and delivering new features for ECS overnight, for customers to wake up in the morning and consume those features without having to deal with upgrades and updates. I think that this is a very key example of simplicity when it comes to ECS that is very hard to find in other solutions, whether they are on-prem or in the cloud.
>> That's a great example, and one of the big complaints I hear just anecdotally around the industry is, you know, the speed: the lines of business want the apps to move faster, and the iteration, with some craft obviously, with security and making sure things are buttoned up, but things get pulled back. It's almost slowed down, because the speed of the innovation is happening faster than the compliance of some sort of old governance model or code reviews, I-want-to-approve-everything. So there's a balance between making sure what's approved, whether security or some pipeline procedures and whatnot.
>> I cannot agree more with you. Yeah, no, it's absolutely true, because I think that we see this very interesting dichotomy, I would say, between startups moving super fast and enterprises trying to move fast but forced to move at their own speed. So when we deliver services based on, for example, open-source software that customers need to look after in terms of upgrading to the latest release, what we usually see is startups asking us, can you move faster? There is a new version of that software; can you enable us to deploy that version? And then on the other end of the spectrum, there are these big enterprises trying to move faster, but not so much, that are asking us, can you slow down? Can you slow down a little bit? Right, because I cannot keep that pace. So it's a very interesting time to be alive.
>> You know, one of the things that pops up in these conversations, when I talk to VPs of engineering at companies and enterprises, is that there's operational efficiency, you've got developer productivity, and you've got innovation, right? You've got the three kinds of things going on there, knobs, and they all have to turn up. People want more efficiency in the operations, they want more developer productivity, and more innovation. What's interesting is you start seeing, okay, it's not that easy. There's also team formation, and I know Andy Jassy kinda referred to this in his keynote at re:Invent last year, around thinking differently around your organization, but, you know, that could be applied to technologists too. So I'd love to get your thoughts while you're here. I know you blog about this and you tweet about this, but this is kind of like, okay, if these things are all going to be knobs we turn up, innovation, efficiency operationally, and developer productivity, what's the makeup of the team? Because some are saying you have an SRE embedded, you've got the platform engineering, you've got versionless, you've got serverless, all these things are going on, all goodness. But does that mean that the teams have to change? What are your thoughts on that? I want to get your perspective.
>> Yeah, no, absolutely. I think that there was a joke going around that as soon as you see a job like VP of DevOps, I mean, that is not going to work, right? Because these things need to be embedded into each team, right? There shouldn't be a DevOps team or anything; it should be just a way of working.
And I totally agree with you that these knobs need to go in sync, right? And you cannot just push too hard on innovation while not having other folks be able to, you know, keep that pace with you. And we're trying to help customers with multiple tools and services, to try to not only make the developer experience better but also help the people that are building these underlying platforms. Like, for example, Proton; AWS Proton is a good example of this, where we're focusing on helping these teams that are trying to build platforms, because they are not looking at themselves as being agile or very fast, but they're measured on being secure, being compliant, and being, you know, within the guardrails that a regulated enterprise needs to have. So we need to have all of these people, both organizationally as well as by providing tools and technologies that help them in their specific areas, succeed.
>> Yeah. And what's interesting about all this is that, you know, I think we're also having conversations, and again, you're starting to see things more clearly here at DockerCon. We saw some things at KubeCon, where the joke there, well, not a joke, but the observation, was that it's less about Kubernetes, which is now becoming boringly reliable, and more about cloud-native applications under the covers, with programmability. So as all this is going on, there truly is a flip of the script. You can actually re-engineer and refactor everything, not just re-platform your applications in IT, at once. Right now there's a window, whether it's security or whatever. Now that the containers and the Docker ecosystem and the container ecosystem and Kubernetes, you've got EKS and you've got ECS, Fargate, and all the stuff of goodness, companies can actually do this right now. They can actually change everything. This is a unique time. This window might close or certainly change. If you're not on it now, it's the same argument as the folks who got caught in the pandemic and weren't in the cloud, got flat-footed. So you're seeing that example: if you weren't in the cloud before the pandemic, you were probably losing during the pandemic; the ones that won were the ones already in the cloud. Now the same thing is true with cloud-native. If you're not getting into it now, you're probably gonna be on the wrong side of history. What's your reaction to that?
>> Yeah, no, I agree totally. I like to think about this, I usually talk about this, if I can step back a little bit. I think that in this industry, and I have gray hairs and I have seen lots of things, there have been two big democratization events in IT that occurred in the last 30 years. The first one was when PC technology was introduced, distributed computing from the mainframe era, and that was the first democratization step, right? So everyone had access to computers, so they could do things. If you fast forward to these days, what happened is that on top of that computer, whether that became a server or whatever, there is a very complex stack of technologies that allows you to develop and deploy your application, right? But the complexity of that stack of technology is daunting in some way, right? So it inhibits access, the democratic access to technology. So to me, this is what cloud enabled, right?
So the next step of democratization was the introduction of services that allow you to bypass that stack, which we call undifferentiated heavy lifting, because, you know, you don't get paid for managing, I don't know, an EMR server or whatever; you get paid for extracting value through application logic from that big stack. So I totally agree with you that we're in a unique position to enable everyone, with what we're building, to innovate a lot faster and in a more secure way.
>> Yeah, and, what comes out, I totally agree. And I think that's a great historical view, and let's bring this down to the present today and then bring this as the bridge to the future. If you're a developer, you could, and by the way, no matter whether you're programming infrastructure or just writing software or even just calling APIs and rolling your own, composing your services, it's programmable and it's just all accessible. So I think that's going to change, again back to the three knobs, developer productivity, or just people productivity, operational efficiency, which is scale, and then innovation, which is the business logic, where I think machine learning starts to come in, right? So if you can get the container thing going, you start tapping into that control plane. It's not so much just the data control plane; it's like a software control plane.
>> Yeah, no, absolutely. The fact that you can, I mean, as I said, I have gray hair, so I've seen a lot of things, and back in the day, the whole notion of being able to call an API and get 10 servers, for example, or today, 10 containers, it would be like, you know, almost a joke, right? So we spent a lot of time racking, and doing so much manual stuff that was so error-prone, because we usually talk about velocity and agility, but we rarely talk about, you know, the difficulties and the problems that doing things manually introduced in the process, the ways that you can get it wrong.
>> You know, it reminds me of this industry, and I'm like, finally, get off my lawn. In the old days, I walked to school with no shoes on in the snow; we had to build our own kernel and our own graphics libraries, and now they have all these tools. It's like, you're just an old, you know, coder. But joking aside, that experience, you're bringing up a point for the younger generation who have never loaded a Linux operating system before or done anything at that level. It's not so much old versus young; it's more of a systems thinking, as you said, distributed computing. If you look at all the action, it's essentially distributed computing with a new software paradigm, and it's a systems architecture. It's not so much software engineering, software developer, you know, this, that; it's just basically all engineering at this point, all software.
>> It is, it is very much indeed. It's all software; there is no other way to call it. I mean, we go back to talking about, you know, infrastructure as code, and everything is now code, or software, in a way. It's, yeah.
>> This is great to have you on. Congratulations on ECS Anywhere being available. It's great stuff. And great to see you, and great to have this conversation. Amazon Web Services, obviously, the world has gone supercloud. Now you have distributed computing with edge, IoT, exploding beautifully, which means a lot of new opportunities. So thanks for coming on.
>> Thank you very much for having me.
It was a pleasure. Okay,
>> theCUBE's coverage of DockerCon 2021 virtual. This is theCUBE. I'm John Furrier, your host. Thanks for watching.
Limor Fried, Adafruit, Saloni Garg, LNM Institute, & DeLisa Alexander, Red Hat | Red Hat Summit 2019
>> Announcer: Live from Boston, Massachusetts, it's theCUBE covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back to our coverage here on theCUBE of Red Hat Summit 2019. We're live in Boston right now, and I'm joined by a couple of award winning professionals. And we're looking forward to hearing what their story is because it's fascinating on both fronts. And also by DeLisa Alexander who has a great job title at Red Hat. Chief People Officer. I love that title. DeLisa, thanks for joining us. >> Thanks for having us. >> Also with us, Limor Fried who is the and founder and lead engineer of Adafruit and Saloni Garg who is an undgergrad student, third year student, at the LNM Institute of Technology. And that's in Jaipur, India. So Saloni, glad to have you with us. And Limor, a pleasure as well. >> Thank you. >> And you're all lit up. You've got things going on there, right? >> I'm glowing, we're gonna get all into that. >> We'll get into that later. First, let's talk about the award that, they're two women in open-source are our winners this year. On the community side, Limor won, on the academic side, Saloni won, so talk about the awards if you would, DeLisa. The process and really what you're trying to do with recognizing these kinds of achievements. >> Well, this is our fifth year for the Women in Open-Source Award. So after this period of time, I can tell you what we wanna do is make an impact by really fostering more diverse communities, particularly gender diverse in open-source. And so that's the whole goal. Five years into it, what we've discovered is that when you really focus on diversity and inclusion within a community, you actually can make an impact. And the thing that's so exciting this year is that our award winners are really evidence of that. >> So talk about the two categories then if you would please. You have community on one side, academics on the other. It appears to be pretty clear cut what you're hoping to achieve there by recognizing an active contributor, and then somebody who is in the wings and waiting for their moment. But go ahead and fill in a little bit about, >> Yeah, absolutely. >> Limor and Saloni too about, why are they here. >> Limor: Why am I here? >> Yes, well, really what we're trying to do is create role models for women and girls who would like to participate in technology but perhaps are not sure that that's the way that they can go. And they don't see people that are like them, so there's less a tendency to join into this type of community. So with the community award winner, we're looking at the professional who's been contributing to open-source for a period of time. And with our academic winner, we're looking to score more people who are in university to think about it. And, of course, the big idea is you'll all be looking at these women as people that will inspire you to potentially do more things with open-source and more things with technology. We've been hearing for many, many years that we definitely need to have more gender diversity in tech in general and in open-source. And Red Hat is kind of uniquely situated to focus on the open-source community, and so with our role as the open-source leader, we really feel like we need to make that commitment and to be able to foster that. >> Well, it makes perfect sense. Obviously. Great perfect sense. Saloni, if you would, let's talk first about your work. You've been involved in open-source for quite some time. 
I know you have a lot of really interesting projects that you're working on right now. We'll get to that in a bit, but just talk about, I guess, the attraction for you in terms of open-source and really kind of where that came from originally through your interest in STEM education. >> Okay, so when I first came to college, I was really influenced to contribute to open-source by my seniors. They had already been selected in programs like Google Summer of Code and Outreachy, so they actually felt empowered by open-source. So they encouraged me to join it too. I tried open-source, and I feel really, like, I'm a part of something bigger than myself. And I was helped greatly by my seniors, so I feel it's my duty to give it back to my juniors and to help them when they need it so that they can do wonders, yeah. >> Great. And Limor, for you, I know you founded the company. 100% female owned. You've got-- >> Yeah, 100% me. >> Yeah, right. 100% you. >> It's my fault. >> Right. Well, I wasn't going to blame you. I'll credit you instead. >> Yeah, that's our big thing. We wanna change "get blame" to "get credit." >> Right. It's all about credit. >> More positive. >> So 100 employees? Is that right? >> 100, 150, yep. >> Okay, talk a little bit about kind of the origin, the genesis of the company and where that came from, and then your connection on the open-source side. >> Well, I, yeah, so I grew up actually in Boston. So I've lived here a very long time. >> You said like a block from here. Two blocks. >> I used to live, actually, yes, in South Station nearby. I used to live by the Griffin Book line, and so Boston has a very strong open-source community, you know. Ephesoft is here. And, yeah, that's kind of the origins of a lot of this free software and open-source software community. And when I went to school, I ended up going to MIT, and open-source software and open-source technology is kind of part of, like, the genetics there. There's really no question about whether you would do it. It's kind of the default. People write code, you open-source it, you release it. There's a culture of collaboration. Scientists, engineers, students, researchers. All working together and sharing code. And when I was in school, I had to write a thesis. I really didn't wanna do it, and so instead, I started building, like, MP3 players and video games. Taking all the engineering that I was studying and, like, not doing the work I was supposed to be doing. But instead, I was having fun and building cool electronic parts, and I would publish these projects online. I had, like, a Media Lab web page, and I would publish, you know, here's all the chips and the schematics and the layout. And people sort of started coming up with the idea of open-source hardware. Let's take the philosophy of open-source software, where we release the source code. But, in here, you release CAD files, firmware, layouts, 3D models. And so I did that, and I was publishing here's how you make this, like, Lite-Brite toy for Burning Man or an MP3 player or a cell phone jammer. All these fun projects, and people would end up contacting me and saying, hey, these are really cool projects. I would like to build this project myself, but unlike software where you just, like, type in, like, make, config, and compile and all that. You actually have to buy parts, you have to get these physical things. And so they said, you know, could you sell me a kit, like a box, where we'd get it and take it home and be able to build it. And I was totally like, no, I'm busy.
I have to, like, not write this thesis. >> That's not what I do. >> But eventually, I did write the thesis. And then I was really stuck because I'm like, now what do I do? So I ended up selling kits. So I sold the synthesizer kits and such, and I did an art fellowship and stuff. And then, eventually, I was kind of like, this is, I was doing, you know, it's, you kind of fall into business by accident, because if you knew what you were getting into, you wouldn't do it, in my opinion. So I ended up sort of developing that, and that was 13 years ago. And now we have 4,000 products in the store, you know. >> 4,000 products? >> Yeah, I know. Ridiculous, right? That's a lot. >> Yeah, who's doing that inventory, right? >> Well, we have a pretty intense inventory system that I'd love to talk to you about, but it's kind of boring. >> I'll bet you do. Now, I was reading something about a Circuit Playground Express. >> Yes. >> Is that right? So is that what this is all about is-- >> Yes! I knew you'd ask, and that's why I wore this. >> So it's a, kind of, an exploratory circuit board of-- >> Yeah! It's open-source, open-source hardware, open-source software and firmware. And we had a lot of parents and teachers and educators and camp counselors come to us and say, we wanna teach physical computing. We wanna teach coding but with physical hardware because, you know, we're all going to be coders, right? No, I don't know. But, eventually, you're like, I'm typing on the screen. And you want to take that and you wanna make it physical. You wanna bring it out into the world where there's a wearable or a cosplay or assistive technology, or you wanna make video games that are, like, physical video games. And the problem that teachers had was the classrooms; a lot of these classrooms, they don't have a lot of money. So they said it has to be very low-cost. It has to be durable because these kids are, like, chewing on it and stuff, which is fun. And it also has to work on any computer, even extremely old computers. 'Cause a lot of these schools, they only have a budget every seven years to buy laptops. And so this actually becomes a very difficult technological problem. How do you design something that's $20 but can teach physical computing to anybody? From kids who are not even good at typing all the way to college students who wanna implement fast Fourier transforms. And so we designed this hardware. It's open-source, and it's cool 'cause people are, like, remixing it and making improvements to it. It's the open-source Circuit Playground, and I'm wearing it. And it's glowing, and I don't know. It's fun! It's got LEDs and sensors. And you can just alligator clip to it and make projects, and we've got schools from around the world learning how to code. And I think it's a much more fun experience than just typing at a computer. >> Absolutely. Yeah, Saloni, on your side of the fence, so obviously, in your education years if you will, not that we ever stop learning, but formally right now. But you're involved, among the many projects that you've been involved with, a smart vehicle. >> Yeah, I'm working on it. >> Project, right? So tell us a little bit about that and how open-source has come into play with what you're looking at in terms of, I assume, traffic and congestion and flows and those kinds of things. >> Yeah. So what we're working on is, basically, we'll be fitting cameras and Raspberry Pis on buses, college buses.
And then they'll do, like, lane detection and traffic signal violation detection, and will report to the assigned people if there's any breakage of law or any breakage of traffic signals. So that's, basically, what we are working on. And how open-source comes into play is that we actually knew nothing about OpenCV and all the technology that is behind all this. So I looked up some open-source projects that already had a lot of this groundwork, and I got to learn a lot about how things actually work on the code side. So that's how open-source actually helped me to make this project. >> And, ultimately, who do you report to on that? Or how is that data gonna become actionable or, I assume it can be. >> Yeah. >> At some point, right? I mean, who's your partner in that? Or who is the agency or the body that, you know, can most benefit from that? >> Yeah, so, currently, this is an academic project, and a classmate of mine has been working with me. And we are working under a faculty member. And so, basically, we have decided to expand this project and to use it as a government project. And the authority we'll be reporting to whenever there's a signal or law breakage is the traffic police department; they will be notified in case of any signal breakage. >> So if there's an uptick in speeding or red light running in Jaipur, we know who to blame. >> Yeah. >> Right? >> Shouldn't have run that red light. >> It's, Saloni, why'd you do that to them, right? All right, ladies, if you would. And I'm gonna end with DeLisa, but I'd like to hear your thoughts about each other. Just about, as you look at the role of women in tech and the diversity that Red Hat is trying to encourage, Limor, what have you seen in Saloni here over the last day, day and a half, that maybe you think will leave a lasting impression on you? >> I love Saloni's energy and her passion, and I can just, she has this emanating strength. I can just tell that nothing stops her from achieving what she wants. Like, she wants to, like, do this Raspberry Pi traffic camera. She's just gonna figure out what it takes to solve that problem. She's gonna use open-source software, hardware, whatever it takes. And she's just gonna achieve her goal. I totally sense that from her from the last few days we've been together. >> That's great. >> Thank you. >> Yeah! >> All right. Saloni, your turn. For Limor. >> What I have done is just a fraction of what she has been doing. She's, like, an inspiration. I look up to her, and I, also, I mean, I hope I start my own company someday. And she's really a role model and an inspiration for me. So yeah. >> Yeah, I think you've got a pretty good mentor there in that respect. And then, DeLisa, when you see young ladies like this, whose achievements are so impressive in their own right, what does that say to you about Red Hat, the direction of the program, and then the impact on young women that you're having? >> Well, the program has gotten so much more participation. So many people, 8,000 people actually voted to select our winners. And all of our finalists were so impressive. We have major contributors to open-source, and so, along with our finalists, our winners are people who are just role models. And I am just so impressed with them, and I think that every year, we're learning something different from each of the winners. And so, as they settle into a community, the things that they'll be able to mentor people on will just be exponentially increasing. And so it's really exciting.
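For readers curious what the kind of pipeline Saloni describes looks like in practice, here is a minimal sketch of a single lane-detection pass, assuming OpenCV's standard Canny edge detector and probabilistic Hough transform. The function name, region mask, and thresholds are illustrative assumptions, not her project's actual code, which isn't shown in this interview.

```python
# A minimal sketch of one lane-detection pass, assuming OpenCV and NumPy.
# Thresholds and the region-of-interest mask are illustrative defaults.
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return candidate lane-line segments from a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)            # binary edge map

    # Keep only the lower half of the frame, where the road usually is.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: straight segments that could be lane markings.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else []

# On a Raspberry Pi, frames would come from the onboard camera, e.g.:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# segments = detect_lane_lines(frame)
```

A real deployment would add camera calibration, perspective correction, and the signal-violation logic she mentions on top of this step.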
>> Fantastic. Well, thank you all. The three of you, the ladies. Congratulations on your recognition, your accomplishments. Well done. Safe travels back to New York and back to India as well, and I would look forward to hearing more about what you're up to down the road. I think this is not the last we're gonna hear from the two of you. >> Thank you for having us. >> And thank you for calling me a young lady. >> Absolutely. I mean, look at the source. Open-source, you might say. That was awful. All right, back with more Red Hat Summit 2019. We're live here on theCUBE in Boston. (gentle music)
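An aside on the hardware Limor describes wearing: the Circuit Playground Express is typically programmed in CircuitPython, and a first program fits in a few lines. This is a hedged sketch assuming Adafruit's bundled adafruit_circuitplayground library; the colors, buttons, and shake threshold are illustrative choices, not a specific lesson plan.

```python
# A minimal CircuitPython sketch for the Circuit Playground Express.
# Assumes Adafruit's adafruit_circuitplayground helper library; the colors
# and button behavior here are illustrative.
import time
from adafruit_circuitplayground import cp

cp.pixels.brightness = 0.2              # the ten onboard NeoPixels are bright

while True:
    if cp.button_a:                     # left button: glow green
        cp.pixels.fill((0, 255, 0))
    elif cp.button_b:                   # right button: glow blue
        cp.pixels.fill((0, 0, 255))
    elif cp.shake(shake_threshold=20):  # shake the board: flash red
        cp.pixels.fill((255, 0, 0))
    else:
        cp.pixels.fill((0, 0, 0))       # otherwise stay dark
    time.sleep(0.05)
```

The design constraints she lists, a $20 board that works on any computer, show up here: the code is short enough for a middle schooler to edit and immediately see the LEDs respond.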
Leigh Day, Ellie Galloway & Sara Chipps | Red Hat Summit 2018
(upbeat electronic music) >> Announcer: Live from San Francisco, it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hey, welcome back, everyone. This is theCUBE, we're live in San Francisco, California, here at Moscone West, Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE. We've got three great guests, exciting segment. Really looking at the future of computer programming, the youth in our generation, the young minds, and the award winners here at Red Hat Summit. Our three guests are Leigh Day, Vice President of Marketing and Communications at Red Hat. Ellie Galloway with Jewelbots, and Sara Chipps, CTO at Jewelbots. Thanks for spending the time and coming on. I really appreciate it. Love this story because I always, as a computer person, I always love getting nerdy, but now nerd is the new cool. So starting young and coding is not just for guys anymore, it's for everybody. So congratulations on your success. Take a minute to explain what's happened here, because the folks watching don't know what happened yesterday. You guys were featured as part of Open Source Stars. Leigh, talk about the story. >> So about three years ago, the Red Hat Marketing Communications Group decided that they needed a passion project, something that would make them feel more energized about coming to work and not just selling products, but telling genuine stories about people. We started our Open Source Stories films series, and that has turned into Open Source Stories Live as well. So yesterday we brought awesome stories, like Jewelbots, to our stage to tell the story of children and others getting involved in coding. And Ellie and Femmie on our stage, talking about how people should code for good, and we really love that message and applaud that. >> And coding is so social because it's fun. So talk about Jewelbots and what's happening here? So how did this get started? And then I'll go into some specific questions for the young future star here. (laughter) Sara, how did it all get started? >> Yeah, so Jewelbots got started out of a desire to make a product for young girls, to get them excited about coding. So we talked to about 200 girls and we asked them what was interesting to them, and over and over from them we heard that their friendships are really important to them. And so when we were talking to them about a bracelet that lights up when your friends are nearby and you can use it to send secret messages, they got really excited. And so that's what we built, and we made it open source so they would code it as well. >> How did it all get started? What was the motivation, what motivated you to take on this project? >> Good question. So I've been a software developer for seventeen years. I was five years into my career before I worked with another woman, and it was another five years after that before I worked with another one. So I really, you know, I love this career and I wanted to figure out a way to get more women excited about doing it. So, talking to my male peers, I heard from them that they started about middle school age, and so I wanted to find something for girls that would also inspire them in that way. >> That's awesome, thank you so much for doing that. I love the story, it's super important. Now, how did you get involved? You just loved programming? You wake up one day and say, hey, I love programming? How did you get involved?
Well first, me and my dad, my dad works for Microsoft, he helped me code a game in Unity, and so I love coding games so much that later he showed me Minecraft MakeCode. And so I got involved in that; by then I kind of knew how to code and everything, so I only asked my dad for help if I absolutely needed it. And then, since my dad knew Sara Chipps from Microsoft, he showed me Jewelbots one day when I got home from school, and I've been on my own programming since then. >> John: You having fun? >> I am. >> What's the favorite thing about coding that you like? >> I love solving problems, and so solving problems is probably my favorite part in coding. I solve a lot of problems and inventions, tiny ones, and just kind of figuring things out. >> Did you get all your friends involved? Did you spread it around to your friend group? >> I am getting some friends involved. On my YouTube channel I have someone I shared Jewelbots with and showed how to code, and yeah. And at school, at my next school, I am going to create a Jewelbots club, and I'm hoping I can get a lot of people to join. >> So is it fun, is Jewelbot fun? I mean, how does it work, how does the Jewelbot work? So I wear a bracelet and then it lights up? So how does the code work? Is it an IO sensor in the front end? How does it work? >> It works by Bluetooth. Do you mean friendship coding mode, or? >> Friendship coding mode. >> Okay, friendship coding mode. Yeah, you use Bluetooth for friendship coding mode. You pair Jewelbots together and it's pretty simple. You don't need a program, you can start right away without any program, and it already has a default on it, so yeah. >> Do you have an agreement with Snapchat yet? Because that would be a great geofence feature, if I had like a Jewelbot with Snapchat integration. >> You can communicate by vibrations, but there's not a Snapchat picture. >> Not yet, we'll make sure that we get that back and I'll get my daughter involved to jump in. How about the community aspect? I love the story, because what it does, it makes it fun. You don't want coding to be like eating spinach or, you know, taking out the trash or sweeping, you know, the floor up; you want to make it fun. Kids want to make it fun, and gaming is key. When did it start clicking with you, Sara? You know, when did it start getting momentum? >> Yeah, well I think one thing that we realized is that coding doesn't have to be a lonely activity. It doesn't have to be just one person sitting in a basement coding, it could be really anyone, and it's such a social thing, you know? All coders are self-taught and we all learn from each other, so having the ability to have a community that you can reach out to that are excited to help you and that kind of thing was a really important part of what we were building. >> So you guys were on stage... So tell about what happened here, 'cause folks didn't get to see, and they can see it online after on a replay. You guys are out on stage, did you do like a demo? Tell us what happened on stage. >> We had a whole afternoon session that was focused on showcasing collaboration, young people coding, STEM. We had a group from our co-op, alumni, come to the stage and talk about their experiences with Co.Lab, programming Raspberry Pis to take pictures. These are middle school girls; we've done programs with them all over the east coast. Then we had our CMO talk about his open-source experience.
We had the Women in Open Source Awards, and then Sara and Ellie came out and told the audience about Jewelbots, and it was just an opportunity to shine a light on their awesome project and to showcase young women doing great things. And showing women that they should have the confidence to code alongside men. >> Yeah, great program, how does someone get involved? How can someone get involved with Red Hat's Open Source Stories and your communities with Jewelbots. What can you guys share? Is there locations or a web app? Is there something you can get involved in? How does someone get involved? >> Well, at Red Hat, we have seven Open Source Stories films that people can go online and watch. But beyond that, there are 90 more ideas for an open-source story, and OpenSourceStories@RedHat.com is a way to contribute to that. We're always thinking about new ideas, taking contributions, and love to hear about these stories. >> Sara, how do I get involved in the Jewelbots? For anyone else watching who might be inspired by this awesomeness you guys have going on here. Great practice, I love how you're doing this. How do they get involved with what you're doing? >> So, if you have young girls in your life, Jewelbots.com, Amazon.com, Target.com is all where you can get Jewelbots. If you don't and you know some people that do, a lot of people have started hosting events around Jewelbots, so if people in your office might have daughters and they might be interested in something like that, that's something that we help people do, as well. >> That's great. Ellie, what's your thoughts on all this? This celebrity status you have? Your YouTube followers are going to go through the roof now. >> Yeah, since yesterday I've had over 75 new followers. >> John: Wow. >> So yeah, it's amazing. >> Can she say the name of her YouTube channel? >> Of course. >> EllieGJewelbots. >> EllieGJewelbots, we're going to promote it, make sure it's on the screen, guys, great program. I'm so excited for you, that's amazing, don't stop. It gets better, more fun every time. When you build cool stuff it's magical. And tell all your friends. Great stuff, thanks so much for doing this. Great program, thanks for coming on. >> Thanks for having us. >> Thanks for having us. It's theCUBE, live here. A really inspirational moment here; getting everyone started at a young age really kind of opens the aperture to all people. Inclusion and diversity, a really critical part of the community paying it forward. Of course, theCUBE's doing our part here. Be back with more live coverage after this short break. (upbeat electronic music)
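A brief technical aside on the friendship coding mode Ellie describes: the bracelet pairs over Bluetooth Low Energy and reacts when a paired friend is in range. Jewelbots themselves are programmed as Arduino-style firmware, so the Python sketch below, which uses the bleak BLE library on a desktop, is only an analogy for the scan-and-match step; the friend list, addresses, and names are invented for illustration.

```python
# An analogy for "light up when a friend is nearby": scan for BLE devices
# and match against a known list. Uses the bleak library (pip install bleak).
# The MAC addresses and names below are made up for illustration.
import asyncio
from bleak import BleakScanner

FRIENDS = {                          # hypothetical paired-friend addresses
    "C0:FF:EE:00:00:01": "Ada",
    "C0:FF:EE:00:00:02": "Grace",
}

async def scan_for_friends():
    # Discover nearby BLE advertisers for a few seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    nearby = [FRIENDS[d.address] for d in devices if d.address in FRIENDS]
    if nearby:
        print("Friends nearby, light up:", ", ".join(nearby))
    else:
        print("No friends in range, stay dark.")

asyncio.run(scan_for_friends())
```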
Dustin Kirkland, Canonical | AWS Summit 2017
>> Announcer: Live from Manhattan, it's theCube, covering AWS Summit, New York City, 2017. Brought to you by Amazon Web Services. >> Welcome back to the Big Apple as we continue our coverage here on theCube of AWS Summit 2017. We're at the Javits Center. We're in midtown. A lot of hustle and bustle outside and inside there, good buzz on the show floor with about 5,000 strong attending and some 20,000 registrants also for today's show. Along with Stu Miniman, I'm John Walls, and glad to have you here on theCube. And Dustin Kirkland now joins us. He's on the Ubuntu product and strategy side of things at Canonical, and Dustin, good to see you back on theCube. >> Thank you very much. >> You just threw a big number out at us when we were talking off camera. I'll let you take it from there, but it shows you about the presence, you might say, of Ubuntu and AWS, what that nexus is right now. >> Ubuntu easily leads as the operating system in Amazon. About 70%, seven zero, 70% of all instances running in Amazon right now are running Ubuntu. And that's actually despite the fact that Amazon have their own Amazon Linux, and there are other alternatives: Windows, RHEL, SUSE, Debian, Fedora. Ubuntu still represents seven out of 10 workloads in Amazon running right now. >> So, Dustin, maybe give us a little insight as to what kind of workloads you're seeing. How much of this was people that, Ubuntu has a great footprint everywhere and therefore it kind of moved there. And how much of it is new and interesting things, IoT and machine learning and everything like that, where you also have support. >> When you're talking about that many instances, it's quite a bit of both, right? So if you look at just EC2 and the two types of workloads, there are the long-running workloads. The workloads that are up for many months, years in some cases. I met a number of customers here this week that are running older versions of Ubuntu like 12.04, which are actually end of life, but as a customer of Canonical we continue providing security updates. So we have a product called Extended Security Maintenance. There's over a million instances of Ubuntu 12.04, which are already end of life, but Canonical can continue providing security updates, critical security updates. That's great for the long-running workloads. The other thing that we do for long-running workloads are kernel live patches. So we're able to actually fix vulnerabilities in the Linux kernel without rebooting, using entirely upstream and open source technology to do that. So for those workloads that stay up for months or years, the combination of Extended Security Maintenance, covering it for a very long time, and the kernel live patch, ensuring that you're able to patch those vulnerabilities without rebooting those systems, it's great for hosting providers and some enterprise workloads. Now on the flip side, you also see a lot of workloads that are spikey, right. Workloads that come and go in bursts. Maybe they run at night or in the morning or just whenever an event happens. We see a lot of Ubuntu running there. It's really, a lot of that is focused on data and machine learning, artificial intelligence workloads, that run in that sort of bursty manner. >> Okay, so it was interesting, when I hear you talk about some things that have been running for a bunch of years, and on the other side of the spectrum is serverless and the new machine learning stuff where it tends to be there, what's Canonical doing there?
What kind of exciting, any of the news, Macie, Glue, some of these other ones that came out, how much do those fit into the conversations you're having? >> Sure, they all really fit. When we talk about what we're doing to tune Ubuntu for those machine learning workloads, it really starts with the kernel. So we actually have an AWS-optimized Linux kernel. So we've taken the Ubuntu Linux kernel and we've tuned it, working with the Amazon kernel engineers, to ensure that we've carved out everything in that kernel that's not relevant inside of an Amazon data center and taken it out. And in doing so, we've actually made the kernel 15% smaller, which actually reduces the security footprint and the storage footprint of that kernel. And that means smaller downloads, smaller updates, and we've made it boot 30% faster. We've done that by adding support, turning on, configuring some parameters that enable virtualization or virtio drivers or specifically the Amazon drivers to work really well. We've also removed things like floppy disk drives and Bluetooth drivers, which you'll never find in a virtual machine in Amazon. And when you take all of those things in aggregate and you remove them from the kernel, you end up with a much smaller, better, more efficient package. So that's a great starting point. The other piece is we've ensured that the latest and greatest graphics adapters, the GPUs, GPGPUs from NVIDIA, that the experience on Ubuntu out of the box just works. It works really well, and well at scale. You'll find almost all machine learning workloads are drastically improved inside of GPGPU instances. And for the dollar, you're able to compute sometimes hundreds or thousands of times more efficiently than a pure-CPU type workload. >> You're talking about machine learning, but on the artificial intelligence side of life, a lot of conversation about that at the keynotes this morning. A lot of good services, whatever, again, your activity in that and where that's going, do you think, over the next 12, 16 months? >> Yes, so artificial intelligence is a really nice place where we see a lot of Ubuntu, mainly because of the nature of how AI is infiltrating our lives. It has these two sides. One side is at the edge, and those are really fundamentally connected devices. And for every one of those billions of devices out there, there are necessarily connections to an instance in the cloud somewhere. So if we take just one example, right, an autonomous vehicle. That vehicle is connected to the internet. Sometimes well, when you're at home, parked in the garage or parked at Whole Foods, right? But sometimes it's not. You're in the middle of the desert out in West Texas. That autonomous vehicle needs to have a lot of intelligence local to that vehicle. It gets downloaded opportunistically. And what gets downloaded are the results of that machine learning, the results of that artificial intelligence process. So we heard in the keynotes quite a bit about data modeling, right? Data modeling means putting a whole bunch of data into Amazon, which Amazon has made really easy to do with things like Snowball and so forth. Once the data is there, then the big GPGPU instances crunch that data, and the result is actually a very tight, tightly compressed bit of insight that then gets fed to devices.
So an autonomous vehicle that every single night gets a little bit better by tweaking its algorithms, when to brake, when to change lanes, when to make a left turn safely or a right turn safely, those are constantly being updated by all the data that we're feeding it. Now, why I said that's important from an Ubuntu perspective is that we find Ubuntu in both of those locations. So we opened this by saying that Ubuntu is the leading operating system inside of Amazon, representing 70% of those instances. Ubuntu is, across the board, right now in 100% of the autonomous vehicles that are running today. So Uber's autonomous vehicle, the Tesla vehicles, the Google vehicles, a number of others from other manufacturers are all running Ubuntu on the CPU. There's usually three CPUs in a smart car. The CPU that's running the autonomous driving engine is, across the board, running Ubuntu today. The fact that it's the same OS makes life quite nice for the developers. The developers who are writing that software that's crunching the numbers in the cloud and making the critical real-time decisions in the vehicle. >> You talk about autonomous vehicles, I mean, it's about a car in general, thousands of data points coming in, in continual real time. >> Dustin: Right. >> So it's just not autonomous -- >> Dustin: Right. >> operations, right? So are you working in that way, diagnostics, navigation, all those areas? >> Yes, so what we catch as headlines are a lot of the hobbyist projects, the fun stuff coming out of universities or the startup space. Drones and robots and vacuum cleaners, right? And there's a lot of Ubuntu running there, anything from Raspberry Pis to smart appliances at home. But actually, I think, really where those artificially intelligent systems are going to change our lives is in the industrial space. It's not the drone that some kids are flying around in the park, it's the drone that's surveying crops, that's coming to understand what areas of a field need more fertilizer or less water, right. And that's happening in an artificially intelligent way as smarter and smarter algorithms make their way onto those drones. It's less about Pandora and Spotify having to choose the right music for you when you're sitting in your car, and a lot more about every taxicab in the city taking data and analytics and understanding what's going on around them. It's a great way to detect traffic patterns, potentially threats of danger or something like that. That's far more industrial and less interesting than the fun stuff, you know, the fireworks that are shot off by a drone. >> Not nearly as sexy, right? It's not as much fun. >> But that's where the business is, you know. >> That's right. >> One of the things people have been looking at is how Amazon's really maturing their discussion of hybrid cloud. Now, you said that data centers, public cloud, edge devices, lots of mobile, we talked about IoT and everything, what do you see from customers, what do you think we're going to see from Amazon going forward to build these hybrid architectures, and how does that fit in to autonomous vehicles and the like? >> So in the keynote we saw a couple of organizations who were spotlighted as all-in on Amazon, and that's great. And actually almost all of those logos that are all-in on Amazon are all-in on Amazon on Ubuntu, and that's great. That's a very small number of logos compared to the number of organizations out there that are actually hybrid.
Hybrid is certainly a ramp to being all-in, but for quite a bit of the industry, that's the journey and the destination, too, in fact. There's always going to be some amount of compute that happens locally and some amount of compute that happens in the cloud. Ubuntu helps provide an important portability layer. Knowing something runs well on Ubuntu locally, it's going to run well on Ubuntu in Amazon, or vice versa. The fact that it runs well in Amazon, it will also run well on Ubuntu locally. Now we have a support -- >> Yeah, I was just curious, you talked about some of the optimization you made for AWS. >> Dustin: Right. >> Is that now finding its way into other environments or do we have a little bit of a fork? >> We do, it does find its way back into other environments. So, you know, the Amazon hypervisors are usually Xen-based, although there are some interesting other things coming from Amazon there. Typically what we find on-prem is usually more KVM or VMware based. Now, most of what goes into that virtual kernel that we build for Amazon actually applies to the virtual kernel that we build for Ubuntu that runs in Xen and VMware and KVM. There are some subtle differences, a few things that we've done very specifically for Amazon, but for the most part it's perfectly compatible all the way back to the virtual machines that you would run on-prem. >> Well, Dustin, always a pleasure, >> Yeah. >> to have you here on theCube. >> Thanks, John. >> You're welcome back any time. >> All right. >> We appreciate the time and wish you the best of luck here the rest of the day, too. >> Great. >> Good deal. >> Thank you. >> Glad to be with us. Dustin Kirkland from Canonical joining us here on theCube. Back with more from AWS Summit 2017 here in New York City right after this.
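The cloud-to-edge loop Kirkland describes, training on big GPU instances and then shipping a tightly compressed artifact down to the device, maps onto a familiar pattern. Below is a hedged sketch assuming TensorFlow and its Lite converter as one common toolchain; the model, the quantization choice, and the file path are illustrative and not any vendor's actual pipeline.

```python
# A sketch of the cloud-to-edge pattern: train a model on GPU instances,
# then export a small quantized artifact for the device. Assumes TensorFlow;
# the toy model and paths are illustrative only.
import tensorflow as tf

# 1. Train (in the cloud, on GPU instances; this is the bursty, expensive step).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(training_data, labels, epochs=...)   # crunch the uploaded data

# 2. Compress: convert and quantize to a small artifact for the edge device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
tflite_model = converter.convert()

# 3. Ship: this small blob is the "tightly compressed bit of insight" that
# the vehicle or appliance downloads opportunistically.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```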