
Eric Foellmer, Boston Dynamics | Amazon re:MARS 2022


 

(upbeat music) >> Okay, welcome back everyone. This is theCUBE's coverage of AWS re:MARS 2022. I'm John Furrier, host of theCUBE. We've got Eric Foellmer, vice president of marketing at Boston Dynamics. Famous for Spot. We all know it, we've seen the videos, zillions of views. Mega views all over the internet. The dog robot, it's famous. Rolls over, bounces up and down. I mean, how many TikTok videos are out there? Probably a ton. >> Oh, Spot is world famous (John laughs) at this point, right? With the dance videos, and all the application videos that we have out there, Spot has become world famous. >> Eric, thanks for joining us on theCUBE here at re:MARS. This show really is back. There was a pandemic hiatus there. And it's now part of the "re" shows: re:MARS, re:Inforce for security, and then re:Invent, the flagship show for AWS. But this show is different. It brings together a lot of disciplines. But it's converging on what we see as the next generation. Industrial space is a big poster child for that. Obviously in space, it's highly industrial, highly secure. Machine learning's powering all the devices. You guys have been in this, I mean, a leader, in the robotics area. What's this show about? What's really happening here? If you had to boil down the essence of the top story of what's happening here, what would it be? >> So the way that I look at this show is it really is a convergence of innovation. This is really just the cutting edge of the innovation that's happening throughout robotics, but throughout technology in general. And you know, part of this cultural shift will be to adopt these types of technologies in our everyday life. And I think if you ask any technology specialist here, or any innovator or entrepreneur, they'll tell you that they want their technologies to become ubiquitous in society, right? 
I mean, that's really what everyone is sort of driving towards from the perspective of- >> And we've got some company behind it. Look at this. >> Oh, there we go. >> All right. >> There's a- (Eric laughs) There's one of our Spots. >> We've got one of those back there. All right, sorry to interrupt, got a little distracted by the beautiful thing there. >> So they're literally walking around and literally engulfing the show. So when I look at the show, that's what I see. >> Let's see the picture of- >> I see the future of technology. >> Get a camera on our photo bomb going on here. Get some photo bomb action. (Eric chuckles) It's just super exciting because it really humanizes it, it makes you- Everyone loves dogs. And, you know, people would have more empathy if you kicked Spot than, you know, a human. Because there's so much empathy for just the innovation. But let's get into the innovation, because the IoT tech scene has been slow. Cloud computing: Amazon Web Services, the leading hyperscaler, dominated the back office, you know, data centers, all the servers, digital transformation. Now that's coming to the edge, where robotics is now in play. Space, material handling, devices for helping people who are sick or in healthcare. >> Eric: Mhm. >> So there's a whole surge of revolutionary, or transitionary, technologies coming. What's your take on that? >> So I think, you know, data has become the driving force behind technology innovation. And robotics are an enabler for the data collection that is going to drive IoT and Manufacturing 4.0 and other important edge-related and, you know, futuristic technology innovations, right? So the driver of all of that is data. And robots like Spot are collectors of data. So instead of trying to retrofit a manufacturing plant, you know, with 30-, 40-, 50-year-old equipment in some cases, with IoT sensors and, you know, fixed sensors throughout the network. 
We're bringing the sensors to the equipment, in the form of an agile mobile robot that brings that technology forward and is able to assess. >> So explain that a little slower for me. So the one method would be retrofitting all the devices, or the hardware currently installed. >> Eric: Sure. >> Versus almost like having a mobile unit next to it, kind of thing. Or- >> Right. So, I mean, if you're looking at antiquated equipment, which is what most, you know, manufacturing plants are running off of, it's not really practical or feasible to update them with fixed sensors, sensors that specifically take measurements from that machine. So we enable Spot with a variety of sensors: audio sensors to listen for audio anomalies, thermal detectors to look for thermal hotspots in equipment, or visual detectors, where it's reading analog gauges, that sort of thing. So by doing that, we are bringing the sensors to the machines. >> Yeah. >> And Spot's able to walk anywhere a human can walk throughout a manufacturing plant, to inspect the equipment, take that reading, and then, most importantly, upload that to the cloud, to the users. >> It's a service dog. >> You can apply some- >> It's a service dog. >> It really is. And it serves data for the understanding of how that equipment is operating. >> This is big agility for the customer. Get that data, agile. Talk about the cost impact of that alone. What would the alternative be versus, say, deploying that scenario? Because I'd imagine the time and cost would be huge. >> Well, think, you know, about how much manufacturing facilities put into predictive maintenance, being able to forecast when their equipment needs maintenance, but also when pieces of equipment are going to fail. Unexpected downtime is one of the biggest money drains of any manufacturing facility. 
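The inspection pattern Eric describes, a robot carrying sensors to the machines, flagging anomalous readings, and uploading the results, can be sketched in a few lines. Everything here (the machine names, sensor tolerances, and payload shape) is hypothetical, not Boston Dynamics' actual API:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    sensor: str      # e.g. "thermal", "audio", "gauge"
    value: float

# Hypothetical per-sensor normal operating ranges (low, high).
TOLERANCES = {
    "thermal": (10.0, 80.0),   # degrees C
    "gauge": (30.0, 60.0),     # PSI read off an analog gauge
}

def assess(reading: Reading) -> dict:
    """Build an upload-ready record, flagging out-of-range values."""
    low, high = TOLERANCES[reading.sensor]
    return {
        "machine": reading.machine_id,
        "sensor": reading.sensor,
        "value": reading.value,
        "anomaly": not (low <= reading.value <= high),
    }

# One simulated walk past two machines.
records = [
    assess(Reading("pump-7", "thermal", 95.2)),   # hot spot -> anomaly
    assess(Reading("valve-3", "gauge", 45.0)),    # within tolerance
]
print([r["anomaly"] for r in records])  # → [True, False]
```

In a real deployment, the flagged records would be what gets pushed to the cloud for the predictive-maintenance analysis discussed next.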
So the ability to get some insight into when that equipment is starting to perform less than optimally and starting to degrade, the ability to forecast that in advance, is massive. >> Well, I think you win on retrofit cost alone, never mind the downside scenarios of manufacturing problems. All right, let's zoom out. You guys have been pioneers for a long time. What's changed in your mind now versus just a few years ago? I mean, look at even 5, 10 years ago. The evolution, cost and capability. What's changed the most? >> Yeah, I think the accessibility of robots has really changed, and we're just at the beginning stages of that evolution. We really are. We're at the cusp right now of robots becoming much more ubiquitous in people's lives. And that's really our foundation as a company: we really want to bring robots to mankind for the good of humanity, right? So if you think about, you know, taking humans out of harm's way, or, you know, putting robots in situations where they're assessing damage to a building, for example, you're taking people out of harm's way and really standardizing what you're able to do with technology. So we see ourselves as really being at the very entry point of having not only robotics, but technology in general, become much more prevalent in people's lives. >> Yeah. >> I mean, you know, 30 years ago, did you ever think that you would have the power of a supercomputer in your pocket? Which also happens to allow you to talk to people, but it is so much more, right? So the power of a cell phone has changed our lives forever. >> A computer that happens to be a phone. You know, it's like, come on. >> Right. >> What's going on with that? >> That's almost secondary at this point. (John laughing) It really is. So, I mean, when you think about that transition, you know, I think we're at the cusp of that right now. We're at the beginning stages of it. 
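A toy version of the forecasting idea: fit the recent trend in one machine's readings and estimate how long until a failure threshold is crossed. The window size, threshold, and data are invented for illustration; real predictive-maintenance models are far more sophisticated:

```python
def hours_to_threshold(readings, threshold, window=4):
    """Estimate hours until `threshold` is crossed, using the average
    per-sample slope over the last `window` readings. Returns None
    when the trend is flat or improving."""
    recent = readings[-window:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    if slope <= 0:
        return None
    return (threshold - recent[-1]) / slope

# Hourly vibration amplitude (mm/s) creeping upward on one machine.
vibration = [2.0, 2.1, 2.1, 2.3, 2.6, 2.9, 3.2]
eta = hours_to_threshold(vibration, threshold=4.4)
print(round(eta, 1))  # → 4.0 (hours left at the current rate)
```

Even this crude linear extrapolation shows the value Eric is pointing at: a few hours of warning is enough to schedule maintenance instead of eating unexpected downtime.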
And it's really, it's an exciting time to be part of this, an entire industry. >> Before I get your views on integration and scale, because that's the next level and we're seeing a lot of action and growth: talk about the use cases. You've mentioned a few of them, like taking people out of harm's way. What have you guys seen as use cases within the Boston Dynamics customer base and/or your partner network? Ones that either you knew would happen, or ones that might have surprised you? >> Yeah. One of the biggest use cases for us right now is what we're demonstrating here at re:MARS, which is the ability to walk through a manufacturing plant and collect data off various pieces of equipment. Whether that's a pump or a gauge, or seeing whether a valve is open or closed, these are all simple, mundane tasks that manufacturers are having difficulty finding people to perform. So the ability for a robot to go over and do that, and standardize that process, is really valuable as companies are trying to collect that data in a consistent way. So that's one of the most prevalent use cases that we're seeing right now. And certainly also in cases where, you know, Spot is going into buildings that have been structurally damaged, or, you know, assessing situations where we don't want people to be in harm's way. >> John: Yeah. >> You know- >> Bomb scares, or any kind of situation with police, you know, threatening or dangerous situations. >> Sure. And fire departments as well. I mean, fire departments are becoming a huge, you know, a huge user of the robots themselves. The fire department in New York recently adopted some of our robots as well, for that purpose, for search and rescue applications. >> Yeah. Go in, go see what's in there. See what's around the corner. It gives a very tactical edge capability for, say, the firefighter or law enforcement. I see that- I imagine the military applications must be really insane. >> Sure. From a search and rescue perspective. 
Absolutely. I mean, Spot helps you put eyes on situations while allowing a human to operate at a safe distance. So it's of real value for protecting human life and making sure that people stay out of harm's way. >> Well, Eric, I really appreciate you coming on theCUBE and sharing your insight. One other question I'd like to ask, if you don't mind: one of the things I see next to your booth is the university piece. And then you see the Amazon, you know, material management, I don't know what to call it, but it's pretty impressive. And then I saw some of the demos in the keynotes, looking at the scale of synthetic data. It's just mind-blowing what's going on in manufacturing. Amazon is pretty state of the art. I'm sure they're a customer of yours already. But they look complex, these manufacturing sites. I mean, it looks like a maze. So how do you... I mean, I could see the consequences of something breaking being catastrophic, because it's so integrated. Is this where you guys see success, and how do these manufacturers deal with this? What's the... Is it like one big OS? >> Yeah, so the robots, because they're able to act independently, can traverse difficult terrain and collect data on their own. And then, you know, what happens to that data afterwards is really up to the manufacturer. It can be delivered via the cloud, or it can be delivered via the edge, you know, edge devices, and really that's where some of the exciting work is being done right now. Because that's where data can scale, and that's where robot deployments can scale as well, right? So instead of a single robot, now you have an operator deploying multiple robots: monitoring, controlling, and assessing the data from multiple robots throughout a facility. And it really helps to scale that investment. >> All right, final question for you. This is a personal question. Okay, I know- I saw your booth over there. 
And you have a big fan base. Spot's got a huge fan base. What are some of the crazy things that these nerd fans do? I mean, everyone gets selfies with Spot. They want to jump over the fence. I see "Don't touch the dog" signs everywhere. The fan base is off the charts. What are the crazy things people do to get access to it? There have probably been some theft attempts, probably, or selfies. Share some funny stories. >> I'll say this. My team is responsible for fielding a lot of the inbound inquiries that we get, much of which comes from the entertainment industry. And as you've seen, Spot has been featured in some really prominent, you know, entertainment pieces. You know, we were in that Super Bowl ad with Sam Adams. We were on Jimmy Kimmel, you know, during the Super Bowl time period. So the amount of entertainment... >> Value. >> Pitches. The amount of entertainment value is immeasurable. But the number of pitches that we turn down is staggering. And when you think about how most companies would probably pull out all the stops to be able to execute half the things that we're, just from a time perspective, from a resource perspective- >> Okay, so Spot's an A- >> not always able to do. >> So Spot's an A-lister, I get that. Is there a B-lister now? I mean, that sounds like there's a market developing for a Spot two. Is there a Spot two? A B player coming in? An understudy? >> So, I mean, Spot is always evolving. I think, you know, the physical stature that you see of Spot right now is where we're going to be in terms of the hardware, but we continue to move the robot forward. It becomes more and more advanced, and more and more capable of doing more and more things for people. So. >> All right. Well, we'll roll some B-roll on this, on theCUBE. Thanks for coming on theCUBE. Really appreciate it. Boston Dynamics here on theCUBE, famous for Spot. 
And the show's packed here at re:MARS, featuring, you know, robotics. It's a big feature of the hall, a set piece here on the show floor. And of course theCUBE's covering it. Thanks for watching. More coverage after the short break. I'm John Furrier, your host. (upbeat music)

Published Date: Jun 23, 2022



Luis Ceze, OctoML | Amazon re:MARS 2022


 

(upbeat music) >> Welcome back, everyone, to theCUBE's coverage, live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event: machine learning, automation, robotics, space, that's MARS. It's part of the "re" series of events; re:Invent's the big event at the end of the year, re:Inforce is security, and re:MARS is really the intersection of the future of space, industrial, and automation, which is very heavily DevOps and machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO and co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me on the show, John. >> So we've been following you guys. You're a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is, I would say, a small show relative to what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. I say sustainable because it means we're going to make it faster and more efficient, you know, using less human effort. And accessible to everyone: accessible to as many developers as possible, and also accessible on any device. So, we started from an open source project that began at the University of Washington, where I'm a professor, and several of the co-founders were PhD students there. We started with this open source project called Apache TVM, which actually had contributions and collaborations from Amazon and a bunch of other big tech companies. And it allows you to take a machine learning model and run it on any hardware: run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company, and the project's been around for about six years or so. The company is about three years old. 
And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware, cloud and edge. >> So was the thesis, when it first started, that you wanted to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? 'Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I can see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful, because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The abridged version here is that, clearly, machine learning has shown it can actually solve real, interesting, high-value problems. And where machine learning runs, in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for Tensor Virtual Machine, at that time people were just beginning to use GPUs for machine learning. We already saw that, with a bunch of machine learning models popping up, and CPUs and GPUs starting to be used for machine learning, it was clear there was an opportunity to run everywhere. >> And GPUs were coming fast. >> GPUs were coming, and a huge diversity of CPUs, GPUs, and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. So hardware vendors have their own specific stacks: Nvidia has its own software stack, and so do Intel and AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy the IBM OS and the IBM database, IBM applications, all tightly coupled. 
And if you wanted to use IBM software, you had to buy IBM hardware. So that's kind of what machine learning systems look like today. If you buy a certain big-name GPU, you've got to use their software. And even if you use their software, which is pretty good, you have to buy their GPUs, right? So, you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice: the ability to run the models where they best suit them, right? So that includes picking the best instance in the cloud, the one that's going to give you the right, you know, cost properties and performance properties. Or you might want to run it on the edge. You might run it on an accelerator. >> What year was that, roughly, when you were doing this? >> We started that project in 2015, 2016. >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking, like 2017 or so. >> Luis: Right. So that was the beginning of, okay, this is an opportunity. AWS, I don't think they had released some of the Nitro stuff that Hamilton was working on. So they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well, is by backing machine learning models with a bunch of hardware-specific libraries. And we were some of the first ones to say, like, you know what, let's take a compilation approach: take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization, right? But that was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, a worldview there. 
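The compilation-and-search approach Luis outlines, generating candidate implementations of an operator, measuring them on the real hardware, and keeping the fastest, can be illustrated with a toy autotuner. Apache TVM searches enormous schedule spaces guided by a learned cost model; this sketch just times a few blocked matrix-multiply variants, so the numbers and "schedules" are illustrative only:

```python
import time

def matmul_blocked(a, b, block):
    """Square matrix multiply with one tile size: one candidate 'schedule'."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for jj in range(0, n, block):
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + block, n)):
                            c[i][j] += aik * b[k][j]
    return c

def autotune(a, b, candidates):
    """Time each candidate tile size on this machine; keep the fastest."""
    timings = {}
    for block in candidates:
        start = time.perf_counter()
        matmul_blocked(a, b, block)
        timings[block] = time.perf_counter() - start
    return min(timings, key=timings.get)

n = 64
a = [[float(i + j) for j in range(n)] for i in range(n)]
b = [[float(i - j) for j in range(n)] for i in range(n)]
best = autotune(a, b, candidates=[8, 16, 32])
print("fastest tile size on this machine:", best)
```

The winning tile size depends on the cache hierarchy of whatever machine runs it, which is exactly the point: the best code for a given model is a property of the hardware, so the search has to happen per target.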
I mean, you had a real worldview that was logical when you compare it to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the years at speed. More chips are coming, you've got GPUs, and look what's going on in AWS. Wow! Now it's booming. Now I've got unlimited processors, I've got custom silicon, I've got it everywhere. >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and so on, and then hardware targets. So how do you navigate that? What we want here, our vision, is to say: people should focus on making the machine learning models do what they want them to do, solving a problem of high value to them, right? And the deployment should be completely automatic. Today, it's very, very manual to a large extent. So once you're serious about deploying a machine learning model, you've got to have a good understanding of where you're going to deploy it and how you're going to deploy it, and then, you know, pick out the right libraries and compilers. We automated the whole thing in our platform. This is why you see the tagline, the booth is right there, "Bringing DevOps agility for machine learning," because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, 'cause I'm looking at it live on camera. People can't see, but I use it in a couple of my interviews, because the word agility is very interesting. It's kind of the test of any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. 
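The framework-by-hardware cross product Luis describes can be pictured as a routing table that deployment automation consults instead of a human hand-picking libraries. The framework names below are real, but the toolchain labels and the flat-registry design are purely illustrative, not OctoML's API:

```python
# (framework, target) -> a toolchain that can lower the model.
# The route strings are invented labels for this sketch.
ROUTES = {
    ("pytorch", "x86-cpu"): "torchscript+llvm",
    ("pytorch", "nvidia-gpu"): "torch-tensorrt",
    ("tensorflow", "x86-cpu"): "xla-cpu",
    ("tensorflow", "arm-edge"): "tflite",
    ("onnx", "x86-cpu"): "onnxruntime",
}

def plan_deployment(framework: str, target: str) -> str:
    """Look up a compiler route, failing loudly for unsupported pairs."""
    try:
        return ROUTES[(framework, target)]
    except KeyError:
        raise ValueError(f"no route for {framework} on {target}") from None

print(plan_deployment("tensorflow", "arm-edge"))  # → tflite
```

Even this trivial table makes the scaling problem visible: every new framework or chip multiplies the number of routes someone has to build and maintain, which is the gap automated compilation aims to close.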
I talked to Pepsi here just before you came on; they have this large-scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems view of agility being the output, and removing dependencies, which I think is what you guys are trying to do. >> That's only part of what we do, right? So agility means a bunch of things. First, you know-- >> Yeah, explain. >> Today it takes a couple of months to get a model from when the model's ready to production. Why not turn that into two hours? Agile, literally, physically agile, in terms of wall-clock time, right? And then the other thing is giving you the flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give you the ability to take your model and, you know, get it compiled, get it optimized for any instance in the cloud, and automatically move it around. Today, that's not the case. You have to pick one instance, and that's what you do, and then you might auto-scale with that one instance. So we give you the agility of actually running and scaling the model the way you want, in the way that gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally: that scale being moving things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like, some days you want a large-scale deployment in the cloud, but you're going to have models on the edge for various reasons, because the speed of light is limited. We cannot make light faster. So, you know, that's physics you cannot change. There are privacy reasons: you want to keep data local, not send it around, and run the model locally. So anyway, it's about giving that flexibility. >> Let me jump in real quick. 
I want to ask this specific question, because you made me think of something. So we were just having a data mesh conversation, and one of the comments that's come out of a few of these data-as-code conversations is that data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to? But I can move a machine learning algorithm to the edge, 'cause it's costly to move data. I can move compute, everyone knows that. But now I can move machine learning anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way. They do not have any of the typical properties of code. Whenever you write a piece of code and run that code, you don't even think about what the CPU is. We don't think about where it runs, what kind of CPU it runs on, what kind of instance it runs on. But with a machine learning model, you do. So what we've done is create this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and then that function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, 'cause you're a professor, this is like a masterclass on theCUBE. Thank you for coming on, Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics. Spot, the dog, that's the diversity in hardware; it tends to be purpose-driven. 
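The "model as a regular function" idea can be sketched as a thin wrapper whose call site never changes when the execution target does. The class, its toy linear "weights," and the device names are all invented for illustration; a real system would compile the artifact per target rather than carry raw weights around:

```python
class PortableModel:
    def __init__(self, weights, device="cpu"):
        self.weights = weights   # stand-in for a compiled model artifact
        self.device = device     # e.g. "cpu", "cloud-gpu", "edge-arm"

    def to(self, device):
        """Retarget the same model; the caller's code never changes."""
        return PortableModel(self.weights, device)

    def __call__(self, features):
        # A trivial linear 'model'; the point is the call site, not the math.
        return sum(w * x for w, x in zip(self.weights, features))

model = PortableModel([0.5, -1.0, 2.0])
edge_model = model.to("edge-arm")

# Same function call, regardless of where it's deployed.
print(model([2.0, 1.0, 1.0]), edge_model([2.0, 1.0, 1.0]))  # → 2.0 2.0
```

Because the caller only ever sees a function, the surrounding CI/CD, monitoring, and rollback machinery from ordinary software development applies unchanged, which is the DevOps-agility claim in a nutshell.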
I've got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, by everyone I talk to, and in other communities, that open source is going to drive all software. That's a check. But the scale and integration are super important. And they're also recognizing that hardware is really about the software. They even said it on stage here: hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here's why, right. A lot of companies that were hardware-first, that thought about software too late, aren't making it, right? There's a large number of hardware companies, AI chip companies, that aren't making it, and probably some that won't make it, unfortunately, just because they started thinking about software too late. I'm glad to see a lot of the early ones. I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, is very flexible. So we see a lot of emerging chip companies, like SiMa.ai, which has been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps up to date with all the machine learning frameworks and models, and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs, 'cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting of PhDs out of the University of Washington, which is not too shabby a computer science department. But they want to make money. The VCs want to make money. >> Right. 
>> So you have to make money. So what's the pitch? What's the business model? >> Yeah, absolutely. >> Share with us what you're thinking there. >> Yeah. The value of using our solution is, first, shorter time to value for your model: from months to hours. Second, you shrink OpEx, because you don't need a specialized, expensive team. Talk about expensive: expensive engineers who understand both machine learning hardware and the software engineering to deploy models. You don't need those teams if you use this automated solution, right? So you reduce that. And also, in the process of actually taking a model and getting it specialized to the hardware, making it hardware-aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about a very significant reduction in cloud deployment costs. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities, right? So, that's the high-level value pitch. But how do we make money? Well, we charge for access to the platform, right? >> Usage. Consumption. >> Yeah, and value-based. So it's consumption- and value-based. It depends on the scale of the deployment. If you're going to deploy a machine learning model at a larger scale, chances are that it produces a lot of value, so then we'll capture some of that value in our pricing scale. >> So you have a direct sales force, then, to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So, the SaaS platform just launched now, so we've started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue-generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM, and hence on our platform. We're close with AMD as well, enabling AMD hardware on the platform. 
We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> They both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers, actually early customers who have been using the platform from the beginning, in the consumer electronics space in Japan, and, you know, self-driving car technology as well. As well as some AI-first companies whose core value, whose core business, comes from AI models. >> So, serious, serious customers. They've got deep tech chops. They're integrating; they see this as a strategic part of their architecture. >> That's what I call AI-native, exactly. But now we have several enterprise customers in line that we've been talking to. Of course, now that we've launched the platform, we've started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now, and we're going to work with them as early customers to go and refine it. >> So, do you sell to the little guys, like us? Would we be customers if we wanted to be? >> You could, absolutely, yeah. >> What would we have to do? Have machine learning folks on staff? >> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo. >> OctoML. >> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. It allows you to take your image and do style transfer. You know, you can take you and a pineapple and see what you look like with a pineapple texture. >> We've got a lot of transcript and video data. >> Right. Yeah. Right, exactly. 
So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you've got a lot of more serious, deeper customers. But you can serve anybody, is what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, that I want the audience to take away, is that there's a lot of progress being made in creating machine learning models. So, there's fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications, both end-user applications as well as enablers. So we see ourselves as an enabler of that, because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code, independent of other dependencies, is really amazing. It's so obvious now that you say it. What's the choices now?
Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have the model in the platform, you can actually see how this model runs on any instance of any cloud, by the way. So we support all three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run in, at most, 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care about is, is it going to run cheaply enough? So we can help you navigate. And we're also going to make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run it all on real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons. And we have the OctoML cloud, which is a bunch of racks with a bunch of Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too, that can actually run on the real hardware, and validate it, and test it out, so you can see that your model runs performantly and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model, and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try.
There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API endpoint in the cloud. You send an image and you get a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure, but have the same convenience as an API, you can use our service. So, today, chances are that if you have a model that you know you want to deploy, there might not be an API for it; we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning is a better description. 'Cause you're not providing the service. You're providing the service of deploying it, like DevOps, infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> 'Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. 'Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models that are going to solve business problems. I don't want these very smart human brains figuring out how to get this to actually run the right way. This should be automatic. That's why we use machine learning, for machine learning to solve that. >> Here's an idea for you. We should write a book called The Lean Machine Learning. 'Cause the lean startup was all about DevOps. >> Luis: We call it machine leaning. No, that's not going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, double down, we all know the history, what it turned out. That was a good value for developers. >> I really agree.
If you don't mind me building on that point. You know, something we see at OctoML, but we also see at Madrona as well: there's a trend towards best in breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say, with model monitoring to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainframe days. I lived during those days. I remember the monolithic, proprietary stacks, and then, you know, the OSI model kind of blew it open. But that OSI stack never went full stack; it only stopped at TCP/IP. So, I think the same thing's going on here. You see some scalability around it, to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility, to make it run faster and make it run on any device that you want, by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCUBE. Really important: uncoupling the machine learning from the hardware, specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John for theCUBE. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)
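The hardware-selection logic Luis describes — benchmark the model on real instances, keep only those under your latency budget, then pick the cheapest — can be sketched in a few lines. The instance names, latencies, and prices below are invented for illustration; a platform like the one discussed would measure them on live hardware.

```python
LATENCY_BUDGET_MS = 50.0  # the "at most 50 milliseconds" interactivity budget

# Hypothetical benchmark results: (instance name, measured latency in ms,
# on-demand price in dollars per hour). Real numbers would come from
# actually running the model on each instance.
benchmarks = [
    ("cloud-a.gpu-large",  12.0, 3.06),
    ("cloud-a.cpu-xlarge", 48.0, 0.34),
    ("cloud-b.gpu-small",  31.0, 0.90),
    ("cloud-b.cpu-large",  95.0, 0.17),  # fails the 50 ms budget
]

def cheapest_within_budget(results, budget_ms):
    """Keep instances that meet the latency budget, then pick the cheapest."""
    eligible = [r for r in results if r[1] <= budget_ms]
    if not eligible:
        raise ValueError("no instance meets the latency budget")
    return min(eligible, key=lambda r: r[2])

name, latency_ms, price = cheapest_within_budget(benchmarks, LATENCY_BUDGET_MS)
print(f"deploy on {name}: {latency_ms} ms at ${price}/hr")
# deploy on cloud-a.cpu-xlarge: 48.0 ms at $0.34/hr
```

Once the budget is met, any extra speed is irrelevant — which is why the cheap CPU instance beats the much faster GPU here.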

Published Date : Jun 24 2022


Andy Thurai, Constellation Research & Larry Carvalho, RobustCloud LLC


 

(upbeat music) >> Okay, welcome back everyone. CUBE's coverage of re:MARS, here in Las Vegas, in person. I'm John Furrier, host of theCUBE. This is the analyst panel wrap-up, analysis of the keynote, the show, the past one and a half days. We got two great guests here. We got Andy Thurai, Vice President, Principal Consultant, Constellation Research. Larry Carvalho, Principal Consultant at RobustCloud LLC. Congratulations on going out on your own. >> Thank you. >> Andy, great to see you. >> Great to see you as well. >> Guys, thanks for coming out. So this is the session where we break down and analyze, you guys are analysts, industry analysts, you go to all the shows, we see each other. You guys are analyzing the landscape. What does this show mean to you guys? 'Cause this is not obvious to the normal tech follower. The insiders see the confluence of robotics, space, automation and machine learning. Obviously, it's IoT, industrials, it's a bunch of things. But there's some dots to connect. Let's start with you, Larry. What do you see here happening at this show? >> So you got to see how Amazon started, right? When AWS started. When AWS started, it primarily took the compute, storage, networking of Amazon.com and put it out as a cloud service, as a service, and started selling the heck out of it. This is a stage later: now that Amazon.com has done a lot of physical activity, using AIML and robotics, et cetera, it's now the second phase of innovation, which goes beyond digital transformation of back-office processes to the transformation of physical processes, where people are now actually delivering remotely, and it's an amazing area. >> So back office is the IT data center kind of vibe. >> Yeah. >> You're saying front end, industrial life. >> Yes. >> Life as we know it. >> Right, right. I mean, I just stopped at a booth here, and they have something that helps anybody who's stuck in the house, who cannot move around.
But with Alexa, they can order some water to be brought to them wherever they are in the house, where they're stuck in their bed. But look at the innovation that's going on there, right at the edge. So I think those are... >> John: And you got the Lunar, got the sex appeal of the space, Lunar Outpost interview, >> Yes. >> those guys. They got a rover on Mars. They're going to be colonizing the moon. >> Yes. >> I made a joke, I'm like, "Well, I left a part back on earth, I'll be right back." (Larry and Andy laugh) >> You can't drive back to the office. So a lot of challenges. Andy, what's your take on the show? Give us your analysis. What's the vibe, what's your analysis so far? >> It's a great show. So, as Larry was saying, one of the things was that when Amazon started, right? They were more about cloud computing. Which means they tried to commoditize more of the data center components, the compute components. So that was working really well for what I call a compute economy, right? >> John: Mm hmm. >> And I call the newer economy more of an AIML-based data economy. So when you move from a compute economy into a data economy, there are things that come to the forefront that never existed before, were never popular before. Things like your AIML model creation, model training, model movement, model inferencing, all of the above, right? And then of course the robotics has come a long way since then. And then some of what they do at the store, or the charging, the whole nine yards. So, the whole concept of all of these components, when you put them in re:Invent, such a big show, it was getting lost. So that's why they didn't have it for a couple of years. They had it one year. And now all of a sudden they woke up and said, "You know what? We got to do this!" >> John: Yeah. >> To bring out these critical components that they have, that are ripe and mature for the world as the next step. So that's why- I think it's pretty good stuff.
And some of the robotics things I saw in there, like one of them I posted on my Twitter, it's about the robot dog sniffing out the robot rover, which I thought was pretty hilarious. (All laugh) >> Yeah, this is the thing. You're seeing, like, the pandemic put everything on hold after the last re:Mars, and then the whole world was upside down. But a lot of stuff pulled forward. You saw the call center stuff booming. You saw the Zoomification of our workplace. And I think a lot of people got to the realization that this hybrid steady-state's here. And so, okay. That settles that. But the digital transformation of actual physical work? >> Andy: Yeah. >> Location, the walk-in-and-out store right over here, we've seen that, the ghost store in Seattle. We've all been there. In fact, I was kind of challenged, try to steal something. I'm like, okay- (Larry laughs) I'm pulling all my best New Jersey moves on everyone. You know? >> Andy: You'll get charged for it. >> I couldn't get away with it. Two double packs, drop it, it's smart as hell. Can't beat the system. But, you bring that to where the AI, machine learning, and the robotics meet: robots. I mean, we had robots here on theCUBE. So, I think this robotics piece is a huge IoT play, 'cause we've been covering industrial IoT for how many years, guys? And you know what's going on there. Huge cyber threats. >> Mm hmm. >> Huge challenges, old antiquated OT technology. So I see a confluence, a collision, between that, OT getting decimated, to your point. And so, do you guys see that? I mean, am I just kind of seeing a mirage? >> I don't see it'll get decimated, it'll get replaced with a newer- >> John: Dave would call me out on that. (Larry laughs) >> Decimated- >> Microsoft's going to get killed. >> I think it's going to have to be reworked. And just right now, if you want to do anything on a shop floor, you have to have a physical wire connected to it.
Now you think about 5G coming in, and without a wire, you get minute details, you get low latency, high bandwidth. And the possibilities are endless at the edge. And I think with AWS, they got Outposts, they got Snowcone. >> John: There's a threat to them at the edge. Outpost is not doing well. You talk to anyone out there, it's like, you can't find success stories. >> Larry: Yeah. >> I'm going to get hammered by Amazon people, "Oh, why are you saying that?" You know, EKS, for example, with serverless is kicking ass too. So, I mean, I'm not saying Outpost was the wrong answer; it was right at the time, what, four years ago when that came out? >> Yeah. >> Okay, so, but that doesn't mean it's just theirs. You got Dell Technologies wanting some edge action. >> Yeah. >> So does HPE. >> Yes. >> So you got a competitive edge situation. >> I agree with that, and I think that's definitely not Amazon's strong point, but like everything, they try to make it easy to use. >> John: Yeah. >> You know, you look at the AIML and they got Canvas. So Canvas says, hey, anybody can do AIML. If they can do that for the physical robotic processes, or even, like, with Outpost and Snowcone, that'll be good. I don't think they're there yet, and they don't have the presence in the market, >> John: Yeah. >> like HPE and, >> John: Well, let me ask you guys this question, because I think this brings up the next point. Will the best technology win or will the best solution win? Because if cloud's a platform and all software's open source, which you can make those assumptions, you then say, hey, they got this killer robotics thing going on with Artemis and Moonshot, they're trying to colonize the moon, but oh, they discovered a killer way to solve a big problem. Does something fall out of this kind of re:Mars environment, that cracks the code and radically changes and disrupts the IoT game? That's my open question. I don't know the answer.
I'd love to get your take on what might be possible, what wild cards are out there around disrupting the edge. >> So, one thing I see is, when IoT came into the world of play, when you're digitizing the physical world, it's IoT that does the digitalization part of that, actually, right? >> But then it has its own set of problems. >> John: Yeah. >> You're talking about installing sensors everywhere, right? And not only installing your own sensors, but also installing competitor sensors. So in a given square footage, how many sensors can you accommodate? So there are physical limitations and liabilities of bandwidth and networking, all of that. >> John: And integration. >> As well. >> John: Your point. >> Right? So when that became an issue, this is where I was talking to the robotics guys here, a couple of companies, and one of the use cases they were talking about, which I thought was pretty cool, is, rather than going the sensor route, you go the robot route. So if you have, say, a factory that you want to map out, you put as many sensors on your robot, whatever that is, and then you make it go around, map the whole thing, and then you also do surveillance, the whole nine yards. So, you can either have fixed sensors or you can have moving sensors. So you can have three or four robots. So initially, when I was asking them about the price of it, when they were saying about a hundred thousand dollars, I was like, "Who would buy that?" (John and Larry laugh) >> When they then explained that this is the use case, oh, that makes sense, because if you had to install sensors across an entire factory floor, you're talking about millions of dollars. >> John: Yeah. >> But if you do the movable sensors in this way, it's a lot cheaper. >> John: Yeah, yeah. >> So it's based on your use case: what are your use cases? What are you trying to achieve? >> The general purpose is over. >> Yeah.
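The fixed-versus-mobile sensor trade-off Andy walks through can be put in back-of-the-envelope numbers. Every figure below — floor size, coverage per sensor point, installed cost per point, number of robots — is an invented assumption for illustration; only the roughly $100,000 robot price comes from the conversation.

```python
FLOOR_SQFT = 200_000          # assumed factory floor size
SQFT_PER_FIXED_SENSOR = 500   # assumed coverage per fixed sensor point
FIXED_SENSOR_COST = 7_500     # assumed installed cost per point (sensor + wiring)
ROBOT_COST = 100_000          # the ~$100k mobile robot price from the conversation
ROBOTS_NEEDED = 3             # "three or four robots" patrolling the floor

# Fixed route: one wired sensor point per coverage cell across the whole floor.
fixed_total = (FLOOR_SQFT // SQFT_PER_FIXED_SENSOR) * FIXED_SENSOR_COST

# Robot route: a few moving platforms carry the sensors everywhere.
mobile_total = ROBOTS_NEEDED * ROBOT_COST

print(f"fixed sensors: ${fixed_total:,}")   # fixed sensors: $3,000,000
print(f"mobile robots: ${mobile_total:,}")  # mobile robots: $300,000
```

Under these assumptions the "millions of dollars" fixed install versus a few hundred-thousand-dollar robots is roughly an order-of-magnitude gap, which is the point being made.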
>> Which you're getting at, and the enablement, this is again, this is the cloud-scale open question- >> Yep. >> it's, okay, the differentiation isn't going to be open source software. That's open. >> It's going to be in the, how you configure it. >> Yes. >> What workflows you might have, the data streams. >> I think, John, you're bringing up a very good point about general purpose versus special purpose. Yesterday Zoox was on the stage, and when they talked about their vehicle, it's made just for self-driving. You walk around in Vegas, over here, you see a bunch of old-fashioned cars, whether they're Ford or GM- >> and they put all these devices around it, but you're still driving the same car. >> John: Yeah, exactly. >> You can retrofit those, but I don't think that kind of IoT is going to work. But if you redo the whole thing, we are going to see a significant change in how IoT delivers value, all the way from the industrial to home, to healthcare, mining, agriculture; it's going to have to be redone. I'll go back to the OT question. There are some OT guys, I know Rockwell and Siemens, some of them are innovating faster. The ones who innovate faster, to keep up with the IT side as well as the ML/AI models, are going to be the winners on that one. >> John: Yeah, I agree. Andy, your thoughts on manufacturing, you brought up the sensor thing. Robotics ultimately is, at the end of the day, an opportunity there. Obviously machine learning, we know what that does. As we move into these more autonomous builds, what does that look like? And is Amazon positioned well there? Obviously they have big manufacturing. Some are saying that they might want to get out of that business too, that Jassy's evaluating that, some are saying. So, where does this all lead for that robotics manufacturing lifestyle, walk in, grab my food?
'Cause it's all robotics and AI at the end of the day: I got sensors, I got cameras, I got non-humans moving and lifting heavy stuff; fixing up the moon will be done by robots, not humans. So it's all coming. What's your analysis? >> Well, so, the point about robotics is how far it has come; it is unbelievable, right? Couple of examples. One was, I was just talking to somebody, explaining to them, see that robot dog over there, the Boston Dynamics one- >> John: Yeah. >> climbing up and down the stairs. >> Larry: Yeah. >> That's more like the dinosaur movie door-opening scene. (John and Larry laugh) It's like that for me, because of the coordination; it's able to go walk up and down, that's unbelievable. But okay, it does that, and then there was also another video which is going viral on the internet. This guy kicks the dog, the robot dog, and then it falls down and it gets back up, and the sentiment that people were feeling for the dog, (Larry laughs) you can't, it's a robot, but people, it just comes at that level- >> John: Empathy, for a non-human. >> Yeah. >> But you see him, hey you, get off my lawn, you know? It's like, where are we? >> It has come to that level, that people are able to kind of not look at that as a robot, but as more like a functioning, almost pet-level, human-level being. >> John: Yeah. >> And you saw the human-like walking robot there as well. But to an extent, in my view, they are all still in an experimentation, innovation phase. It hasn't made it in industrial terms yet. >> John: Yeah, not yet, it's coming. >> But, the problem- >> John: It's coming fast. That's what I'm trying to figure out: where do you guys see Amazon and the industry, relative to the fantasy becoming reality- >> Right. >> of space and Mars, which is, it's intoxicating, let's face it. People love this. The nerds are all here. The geeks are all here. It's a celebration. James Hamilton's here- >> Yep.
>> trying to get him on theCUBE. And he's here as a civilian. Jeff Barr, same thing. I'm here, not for Amazon, I bought a ticket. No, you didn't buy a ticket. (Larry laughs) >> I'm going to check on that. But, he's geeking out. >> Yeah. >> They're here because they want to be here. >> Yeah. >> Not because they have to work here. >> Well, I mean, the thing is, the innovation velocity has increased, because, in the past, remember, the smaller companies couldn't innovate because they didn't have the platform. Now compute is a platform available at the scale you want, AI is available at the scale you want. Every one of them is available at the scale you want. So if you have an idea, it's easy to innovate. The innovation velocity is high. But where I see most of the companies failing, whether startup or big company, is that they don't find the appropriate use case to solve, and then don't sell it to the right people who would buy it. So if you don't find the right use case, or don't sell the right value proposition to the actual buyer, >> John: Mm hmm. >> then why are you here? What are you doing? (John laughs) I mean, you're not just an invention, >> John: Eh, yeah. >> like a telephone kind of thing. >> Now, let's get into the next talk track. I want to get your thoughts on the experience here at re:Mars. Obviously AWS and the Amazon people kind of combined effort between their teams. The event team does a great job. I thought the event, personally, was first class. The coffee didn't come in late today, I was complaining about that, (Larry laughs) >> people complaining out there, at CUBE reviews. But world class, high bar on the quality of the event. But you guys were involved in the analyst program. You've been through the walkthrough, some of the briefings. I couldn't do that 'cause I'm doing theCUBE interviews. What did you guys learn? What were some of the key takeaways, impressions? Amazon's putting all new teams together, it seems, on the analyst relations. >> Larry: Yeah.
>> They got their mojo booming. They got three shows now: re:Mars, re:Inforce, re:Invent. >> Andy: Yeah. >> Which we'll be at, theCUBE, at all three. Now we got that coverage going, what's it like? What was the experience like? Did you feel it was good? Where do they need to improve? How would you grade the Amazon team? >> I think they did a great job over here in just bringing all the physical elements of the show. Even on the stage, where they had robots in there. It made it real, and it's not just fake stuff. And every, or most of the booths out there are actually having- >> John: High quality demos.
Some are going to win and lose in this downturn, but still, the scale that's needed is massive. >> So you got data growing so much, you were talking earlier about the growth of data and they were talking about the growth. That is a big pie and the pie can be shared by a lot of folks. I don't think- >> John: And snowflake pays AWS, remember that? >> Right, I get it. (John laughs) >> I get it. But they got very unique capabilities, just like Netflix has very unique capabilities. >> John: Yeah. >> They also pay AWS. >> John: Yeah. >> Right? But they're competing on prime. So I really think the cooperation is going to be there. >> John: Yeah. >> The pie is so big >> John: Yeah. >> that there's not going to be losers, but everybody could be winners. >> John: I'd be interested to follow up with you guys after next time we have an event together, we'll get you back on and figure out how do you measure this transitions? You went to IDC, so they had all kinds of ways to measure shipments. >> Larry: Yep. >> Even Gartner had fumbled for years, the Magic Quadrant on IaaS and PaaS when they had the market share. (Larry laughs) And then they finally bundled PaaS and IaaS together after years of my suggesting, thank you very much Gartner. (Larry laughs) But that just performs as the landscape changes so does the scoreboard. >> Yep. >> Right so, how do you measure who's winning and who's losing? How can we be critical of Amazon so they can get better? I mean, Andy Jassy always said to me, and Adam Salassi same way, we want to hear how bad we're doing so we can get better. >> Yeah. >> So they're open-minded to feedback. I mean, not (beep) posting on them, but they're open to critical feedback. What do you guys, what feedback would you give Amazon? Are they winning? I see them number one clearly over Azure, by miles. And even though Azure's kicking ass and taking names, getting back in the game, Microsoft's still behind, by a long ways, in some areas. >> Andy: Yes. In some ways. 
>> So, the scoreboard's changing. What's your thoughts on that? >> So, look, I mean, at the end of the day, when it comes to compute, right, Amazon is a clear winner. I mean, there are others who are catching up to it, but still, they are the established leader. And it comes with its own advantages, because when you're trying to do innovation, when you're trying to do anything else, whether it's data collection, we were talking about the data sensors, the amount of data they are collecting, whether it's the store, that self-serving store, or other innovation projects, what they have going on. The storage, compute, and processing of that requires a ton of compute. And they have that advantage with them. And, as I mentioned in one of my articles, when it comes to AIML and data programs, there are the rich and there are the poor. And the rich always get richer, because they have one leg up already. >> John: Yeah. >> I mean, the amount of model training they have done, the billion- or trillion-parameter models, the fine-tuning of the model training and everything. They could do it faster. >> John: Yeah. >> Which means they have a leg up to begin with. So unless you are given an opportunity as a smaller, mid-size company to compete with them at the same level, you're going to start at a negative level to begin with. You have a lot of catching up to do. So, the other thing about Amazon is that, when it comes to a lot of areas, they admit that they have to improve in certain areas, and they're open and willing to listen to people. >> Where are you, let's get critical. Let's do some critical analysis. Where does Amazon Web Services need to get better? In your opinion, what criticism would you, in a constructive way, share? >> I think on the open source side, they need to be more proactive. They are already, but they got to get even better than what they are. They got to engage with the community.
They got to be able to talk on the open source side: hey, what are we doing? Maybe on the hardware side, can they do some open-sourcing of that? They got Graviton. They got a lot of stuff. Will they be able to share the wealth with other folks, other than just being on an Amazon site, on the edge with their partners? >> John: Got it. >> If they can now take that, like you said, compute, with what they have, with a very end-to-end solution, the full stack, and if they can extend it, that's going to be really beneficial for them. >> Awesome. Andy, final word here. >> So one area where I think they could improve, which would be a game changer, would be: right now, if you look at all of their solutions, if you look at the way they suggest implementation, the innovations, everything that comes out, comes across very techy-oriented. The persona is very techy-oriented. Very rarely are their solutions built for the business audience or the decision makers. So if I'm, say, an analyst, a business analyst rather, if I want to build a model, and then I want to deploy that, or do some sort of application, mobile application, or what have you, it's a little bit hard. It's more techy-oriented. >> John: Yeah, yeah. >> So, if they could appeal to, or build, a higher-level abstraction of how to build and deploy applications for business users, or even build something industry-specific, that's where a lot of the legacy companies succeeded. >> John: Yeah. >> Go after manufacturing specific, or education. >> Well, we coined the term 'Supercloud' last re:Invent, and that's what we see. And Jerry Chen at Greylock calls it Castles in the Cloud; you can create these moats >> Yep. >> on top of the CapEx >> Yep. >> of Amazon. >> Exactly. >> And ride their back. >> Yep. >> And the difference in what you're paying and what you're charging, if you're good, like a Snowflake or a Mongo. I mean, Mongo's just as big, if not bigger, on Amazon than Snowflake is. 
'Cause they use a lot of compute. No one turns off their database. (John laughs) >> Snowflake's a little bit different, a little nuanced point, but this is the new thing. You see Goldman Sachs, you got Capital One. They're building their own kind of, I call them sub-clouds, but Dave Vellante says it's a Supercloud. And that essentially is the model. And then once you have a Supercloud, you say, great, I'm going to make sure it works on Azure and Google. >> Andy: Yep. >> And Alibaba, if I have to. So, we're kind of seeing a playbook. >> Andy: Mm hmm. >> But you can't get it wrong, 'cause it scales. >> Larry: Yeah, yeah. >> You can't scale the wrong answer. >> Andy: Yeah. >> So that seems to be what I'm watching: who gets it right? Product-market fit. Then if they roll it out to the cloud, then it becomes a Supercloud, and that's pure product-market fit. So I think that's something that I've seen some people trying to figure out. And then, are you a supplier to the Superclouds, like a Dell? Or do you become an enabler? >> Andy: Yeah. >> You know, what does Dell Technologies do? >> Larry: Yeah. >> I mean, how do the box movers compete? >> Larry: The whole thing is now hybrid, and you're going to have to see, just as you said. (Larry laughs) >> John: Hybrid's a steady-state. I don't need to. >> Andy: I mean, >> By the way, we're (indistinct), we can't get the chips, 'cause Broadcom and Apple bought 'em all. (Larry laughs) I mean, there's a huge chip problem going on. >> Yes. I agree. >> Right now. >> I agree. >> I mean, all these problems, when you abstract to a much higher level, a lot of those problems go away, because you don't care about what they're using underneath, as long as you deliver my solution. >> Larry: Yes. >> Yeah, it could be significantly, or a little bit, faster than what it used to be, but at the end of the day, are you solving my specific use case? >> John: Yeah. >> Then I'm willing to wait a little bit longer. >> John: Yeah. 
Time's on our side, and now they're getting the right answers. Larry, Andy, thanks for coming on. This great analyst session turned into more of a podcast vibe, but you know what? (Larry laughs) It's chill here at re:MARS. Thanks for coming on, and we unpacked a lot. Thanks for sharing. >> Both: Thank you. >> Appreciate it. We'll get you back on. We'll get you in the rotation. We'll take it virtual. Do a panel, do some panels around this. >> Larry: Absolutely. >> Andy: Oh, this is not virtual, this is physical. >> No, we're live right now! (all laugh) We get back to Palo Alto. You guys are influencers. Thanks for coming on. You guys are moving the market, congratulations. Take a minute, a quick minute each, to plug any work you're doing, for the people watching. Larry, what are you working on? Andy, you go after Larry with what you're working on. >> Yeah. So since I started my company, RobustCloud, since I left IDC about a year ago, I'm focused on edge computing, cloud-native technologies, and Low Code/No Code. And basically I help companies put their business value together. >> All right, Andy, what are you working on? >> I do a lot of work in the AI/ML areas. Particularly, the last few of my reports are in the AIOps incident management and MLOps areas: how to generally improve your operations. >> John: Got it, yeah. >> In other words, how do you use AI/ML to improve your IT operations? How do you use IT Ops to improve your AI/ML efficiency? So those are the- >> John: The real hardcore business transformation. >> Yep. >> All right. Guys, thanks so much for coming on the analyst session. We do the keynote review, breaking down re:MARS after day two. We got a full day tomorrow. I'm John Furrier with theCUBE. See you next time. (pleasant music)

Published Date : Jun 24 2022


Rick Gouin, Winslow Technology Group | WTG Transform 2018


 

>> Announcer: From Boston, Massachusetts, it's theCUBE, covering WTG Transform 2018. Brought to you by Winslow Technology Group. >> You're watching theCUBE's coverage of WTG Transform 2018. Happy to welcome back to the program, fresh off the keynote stage where he discussed the specter of clouds, Rick Gouin, who is the chief technology officer of Winslow Technology Group. Rick, great to talk to you. >> Thanks for having me. >> All right, yeah, thank you for having us here. So we're talking about this whole cloud thing, and you and I have been talking about this for a couple of years. Give us your viewpoint. You talk to a lot of customers; we can talk about architecture, but, you know, the average customer, when they hear "cloud," you know, there's some puffy things up in the sky, but what does it mean to them? >> Sure, yeah. So I think one of the things that we're advocating, as it relates to sort of starting that cloud journey, is to do some homework ahead of time, make data-driven decisions. And we don't want you, as our, you know, customer base, to get into a situation where you're kind of backed into a corner, right? Where you move something, and you decide you need to bring it back, or anything like that. So we're a big advocate of, you know, running some analytics and making some intelligent decisions. You know, try and start with that low-hanging fruit, where you can kind of ease your way in, and the stuff that doesn't require re-platforming, and, you know, get your toes in the sand a little bit before you wade all the way out there. >> Yes. So if I step back for a second, one of the things I liked in your keynote is, so many times we think about technology, it's like, oh well, it's a new server, or it's, you know, something, I swipe a credit card and I go use the cloud. Cloud, really, we need to think about the operating model; it's the policies and the people that are as important, if not more important, than the, okay, what's the price per CPU, and things like that. >> Right, right. Yeah, yeah. And one of the things that we talk about a lot is that when we're talking about cloud, we're not talking about a place. We're not talking about who owns it. We're not talking about any particular public cloud provider. We're talking about a way of doing business, a way of bringing your services to your internal customers, and a way of kind of transforming your IT infrastructure to more efficiently consume those resources, right? And that's a change in operating model. That's a change in sort of a way of thinking, not just from, you know, this whole cloud thing, but also towards delivering IT more as a service. >> Yeah, and you spent a lot of time talking about applications, which I really like, because I'm an infrastructure guy by background. When we talked about virtualization, when we talked about converged and hyper-converged, a lot of times we're talking about, you know, boxes and cabling and networking and things like that. The role of infrastructure is to run my applications; the applications run my business, right? That's the big theme we've been hearing for years: you know, IT, your role isn't to be this thing off on the side. And, you know, dollars and headcount and all that are important, but if you don't serve the ultimate business and what they need to do to keep us running, we're all out of business, right? >> Right. Yeah, this whole transformation is all about aligning, right, those business requirements with IT, and starting to deliver services that are tailored towards what the business needs, as opposed to what I can offer, what my capabilities are, right? Those need to be more in sync, and that's what this whole operating model is all about, right? It's aligning those services to the business, and creating the infrastructure so that the business can consume it more easily. >> Yeah, and you gave some really good pointers. I want you to give us the, you know, your customers' view. Because when I heard things like, oh well, you know, let's say I'm using a public cloud, well, I need to understand how availability zones work and how I spread things out. Which, you know, if I'm used to, you know, HA on VMware or, you know, your hypervisor of choice, some of those things I got used to because things worked; they were built for the enterprise. Now it's, you know, well, it's distributed, but you need to think about things from that application level a little bit more, right? >> Yeah, and so that's something that we're trying to educate our customer base on: as we move forward, and as we start to move workloads into various, you know, clouds, public, private, what have you, we have to start considering some of those availability aspects that today we don't even think about, right? Almost everybody who is still sitting in that traditional infrastructure, they're all having their availability provided probably at the hardware level. They have, you know, multiple controllers and clusters and all this stuff, so they drop an application into their environment and it's already going to have pretty good availability. As we move forward, we have to start pushing that availability up the stack and thinking about it more at the application level. And so when we're deploying workloads into different cloud environments, we may be responsible for providing our own high availability, and that's something that, in some cases, requires a fair amount of expertise to, you know, get that architecture right, so that we do have the same level of high availability out in these cloud environments that we have in our on-prem infrastructure. >> All right, so Rick, inside your customers, you know, who are the people you're talking to that kind of get this? You know, we lived through the transformation of, like, well, you know, the storage guy was doing this thing; we need to kind of have the virtualization person own more. You know, "cloud architect" has been a title that's been, you know, expanding quite a lot over the last few years. Who are you getting at the table? Who's making these architectural decisions when you're working with your users? >> Yeah, so we feel like it's something that we have to get the entire team on board with. It's something where it might be an initiative that we start to address with the CIOs and the IT directors, but it's important to get the entire organization's IT staff on board with the transition, because each one is going to have a part to play in sort of moving forward into that IT-as-a-service sort of an organization. >> Great. So, you know, when I look at some of the things that WTG is doing, you know, obviously, you know, Dell EMC, Nutanix, VMware, your biggest partners. You know, what's kind of, you know, the big push theme today from the majority of your customers, and what are some of the, you know, more advanced customers getting excited about? >> Yeah, so I think, you know, you listed off those partners, and when we look at them, a common theme there is adding this built-in sort of cloud interoperability, connectivity, and feature set. So when we're thinking about all the characteristics that we look for in a cloud operating model, we're seeing things like self-service portals, things like, you know, the ability to manage multiple tenants, and things like this. And so what we're seeing across all those partners is more and more of those features coming as parts of the infrastructure solutions, and that's reducing the burden on our customers to be able to deploy something that, you know, operates in that cloud, sort of IT-as-a-service offering. And so, you know, some of these customers are getting really excited about that capability to, right out of the box, deploy a self-service portal, deliver these capabilities straight to their internal customers, without having to do a bunch of development or, you know, build complicated systems to deliver them. So it's the self-service portals, it's the built-in cloud connectivity to be able to archive things and send DR out to, you know, third-party service providers. Those are some of the things that our customers who are on this journey, and, you know, maybe they started last year or the year before, and they're moving forward, those are the sorts of things that they're starting to point out. >> You know, one of the big challenges when we talk about this rather dispersed world we're moving towards... you spent some time talking about SaaS. Absolutely, SaaS is the biggest piece of, you know, what you'd call public cloud. Some of it doesn't live in one of the big clouds, or can live in lots of places. Data, right? Data protection and security are something that, no matter where I go, I need to worry about. There's no way, actually, in your definition, you're like, oh, if I do SaaS, I don't need to worry about the data. No, no, no. >> Great. Well, I think you took somebody's slide there, but, you know, there are some people that mistakenly say, oh well, I ran on a PaaS, I don't need to worry about security. No, you do. Containers, any of these things. Data protection, my data, and, you know, security, I need to worry about that everywhere, and that brings a whole new set of challenges. >> Yeah, and, you know, you make a good point, because, for example, on the security side of things, it continues to be just as much of a concern as it ever was, but it's an entirely different way to think about it. You know, likewise with data protection: it's just as important as it ever was, but it's an entirely different way to think about it. One of the things, though, that I thought was really interesting about security is that when I'm talking to these CIOs and IT directors across our customer base, in the past, you know, if I rewind this thing three years, they would say, I can't go to the cloud because of security, right? Now, you know, we're a little bit more mature in our cloud understanding, and starting to, you know, transform a little bit, and they now list that as one of the reasons they want to move to the cloud. And I think that was one of the most startling sort of realizations, that shift. >> Yeah, absolutely. We actually did some surveys; there was a big survey we were attached with called the Future of Cloud Computing, and you're right: if I hadn't dipped my toe in, I was worried about it, but once I got there, I realized, I kind of looked inside and said, oh my gosh, what have I been doing? An interesting analogy I've paired sometimes is, you know, the autonomous vehicles and things like that: I'm worried about the self-driving, or even the braking, or things like that being challenging. Have you looked at most drivers? Most people, you know, oh my gosh, they're checking their texts, they're, you know, doing all sorts of stuff there. It is a bit of a mind flip in how you think of these things. Doesn't mean it changes overnight, or that there's a silver bullet, night and day, but, you know, it's some of these viewpoints that we need to change, and think a little differently. >> Yeah, yeah, I think that's a great analogy. (laughs) All right, Rick, what's exciting you these days? You're the CTO, you know, you're here, there's, you know, the Boston area. Love it if you've got, you know, anything about, you know, cool things in the area, or just cool tech in general. >> Yeah, I think, you know, and I addressed a lot of this in my keynote earlier today, but I'm really high on an analytical approach to a hybrid cloud. I want to start to get customers thinking about, you know, how we can make this a transition, as opposed to, you know, just jumping right in the deep end. It doesn't have to be this big, jarring event as we sort of transform. This is something where we can take baby steps and start to move ourselves forward. And so, you know, we're getting really excited about those technologies that allow us to integrate our existing infrastructure with various other, you know, cloud services, whether they be, you know, platform, infrastructure, and software offerings. Things that allow us to take the investments that we already have and, you know, sort of integrate with and make use of these cloud services that we know can deliver value to our organization. That's what we're most excited about: you know, getting more out of what we have. >> Yeah, you mentioned analytics. I mean, here in Boston, you had in the opening video, there were some of the, I think it's the Boston Dynamics robots, you know, right across the river here. >> Yeah. >> In the area. When I talk to people, like in the storage world, we talk about intelligence, and their eyes light up, because we've been talking about intelligent storage for decades, but no, really, now, with all the cool technologies, we can really put this in here. And it's not about, you know, getting rid of the admins; it's about really supercharging them, and being able to deal with, you know, we've got way more data, we've got way more devices, we've got way more things I'm going to have to do. So, you know, we need some help with all of these machines, to be able to pair the machines with the people, to make them do their jobs better. >> Yep, yep, couldn't agree more. >> All right, Rick, a pleasure to catch up with you, and thanks again. Thanks so much for having us here. Be sure to check out theCUBE.net for all of the content here and all the shows. We'll be back with lots more coverage. Thanks for watching theCUBE.

Published Date : Jun 15 2018
