Dr Matt Wood, AWS | AWS Summit NYC 2018
Live from New York, it's theCUBE, covering AWS Summit New York 2018. Brought to you by Amazon Web Services and its ecosystem partners.

Hello and welcome back to live CUBE coverage in New York City for the AWS, Amazon Web Services, Summit 2018. I'm John Furrier with Jeff Frick, here at theCUBE. Our next guest is Dr. Matt Wood, General Manager of Artificial Intelligence with Amazon Web Services. A CUBE alumni, been so busy for the past year, and been on theCUBE many times. Thanks for coming back, appreciate you spending the time. So the promotions keep on coming: you're now General Manager of the AI group. AI operations, AI automation, machine learning, a lot of big categories of new things developing, and you guys have really taken AI and machine learning to a whole new level. It's one of the key value propositions you now have, not just for the large enterprise but down to startups and developers. So congratulations, and what's the update?

Well, the update is that this morning in the keynote I was lucky enough to introduce some new capabilities across our platform when it comes to machine learning. Our mission is to take machine learning and make it available to all developers. We joke internally that we just want to make machine learning boring, we want to make it vanilla: it's just another tool in the tool chest of any developer and any data scientist. We've done this before, this idea of taking technology that has traditionally only been within reach of a very small number of well-funded organizations and making it as broadly distributed as possible. We've done that pretty successfully with compute, storage, databases, analytics, and data warehousing, and we want to do the exact same thing for machine learning. To do that, we had to build an entirely new stack, and we think of that stack in three different tiers.

The bottom tier is really for academics, researchers, and data scientists. We provide a wide range of frameworks, the open source programming libraries that developers and data scientists use to build neural networks and intelligence. They are things like TensorFlow, Apache MXNet, and PyTorch, and they're very technical: you can build arbitrarily sophisticated networks with them.

Those are mostly open source, right?

Mostly open source, that's right. We contribute a lot of our work back to MXNet, but we also contribute to PyTorch and to TensorFlow, and there are big, healthy open source projects growing up around all of these popular frameworks, plus more like Keras, Gluon, and Horovod. So that's a key area for researchers and academics. The next level up, we have machine learning platforms. This is for developers and data scientists who have data they store in the cloud, or that they want to move to the cloud quickly, and who want to use that data for modeling, to build custom machine learning models. Here we try to remove as much of the undifferentiated heavy lifting associated with doing that as possible, and this is really where SageMaker fits in. SageMaker allows developers to quickly build, train, optimize, and host their machine learning models. And then at the top tier we have a set of AI services, which are for application developers that don't want to get into the weeds; they just want to get up and running really quickly. So today we announced four new services across that middle tier and that top tier.

For SageMaker, we're very pleased to introduce a new streaming data protocol which allows you to take data straight from S3 and pump it straight into your algorithm, straight onto the compute infrastructure. What that means is you no longer have to copy data from S3 onto your compute infrastructure in order to start training. You just take away that step and stream it right on there. It's an approach we use inside SageMaker for a lot of our built-in algorithms, and it significantly increases the speed of the algorithm and, of course, significantly decreases the cost of running the training, because you pay by the second, so any second you can save is money off for the customer.

And it also helps the machine learn more?

That's right, you can put more data through it. You're no longer constrained by the amount of disk space, you're not even constrained by the amount of memory on the instance; you can just pump terabyte after terabyte after terabyte. We also talked about a new customer of ours in the keynote this morning, Snap, who are routinely training on over 100 terabytes of image data using SageMaker. The ability to pump in lots of data is one of the keys to building successful machine learning applications, so we brought that capability to everybody that's using TensorFlow. Now you can take your TensorFlow model, bring it to SageMaker, do a little bit of wiring, click a button, and just start streaming your data to your TensorFlow model.
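As a rough illustration of what that wiring looks like, here is a minimal sketch using the SageMaker Python SDK with Pipe input mode so training data is streamed from S3 rather than downloaded first. The bucket, role ARN, and entry-point script are placeholders, and exact parameter names vary between SDK versions.

```python
# Minimal sketch: stream training data from S3 into a SageMaker training job
# using Pipe input mode instead of copying the full dataset to local disk.
# The bucket, role ARN, and training script below are hypothetical.
import sagemaker
from sagemaker.tensorflow import TensorFlow

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = TensorFlow(
    entry_point="train.py",          # your TensorFlow training script
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.11",
    py_version="py39",
    input_mode="Pipe",               # stream records from S3 as the job runs
    sagemaker_session=session,
)

# The dataset stays in S3 and is fed to the algorithm while it trains.
estimator.fit({"train": "s3://my-example-bucket/training-data/"})
```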
What's the impact for the developer, on time and speed?

It is the ability to pump more data, and it is the decrease in the time it takes to start training, but most importantly it decreases the overall training time. You'll see between a 10 and 25 percent decrease in training time, so you can train more models in the same amount of time, or you can just decrease the cost. It's a completely different way of thinking about how to train over large amounts of data. We were doing it internally, and now we're making it available for everybody through SageMaker. That's the first thing.

The second thing we're adding is the ability to batch process in SageMaker. SageMaker used to be great at real-time predictions, but there are a lot of use cases where you don't want to make a one-off prediction; you want to predict hundreds or thousands or even millions of things all at once. Let's say you've got all of your sales information at the end of the month and you want to use it to make a forecast for the next month. You don't need to do that in real time, you need to do it once and then place the order. So we added batch transforms to SageMaker: you can pull in all of that data, large amounts of data, batch process it within a fully automated environment, and then spin down the infrastructure when you're done. It's a very simple API; anyone that uses a Lambda function can take advantage of it. Again, it dramatically decreases the overhead and makes it so much easier for everybody to take advantage of machine learning.
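For context, a minimal sketch of that batch workflow with the SageMaker Python SDK is below. The model name, bucket, and file paths are placeholders; the point is simply that you point a transform job at data in S3, SageMaker provisions the instances, runs inference over every record, and tears the infrastructure down when it finishes.

```python
# Minimal sketch: run a SageMaker batch transform job over month-end data in S3
# instead of standing up a real-time endpoint. Names and paths are placeholders.
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name="monthly-sales-forecast-model",   # a model already created in SageMaker
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-example-bucket/forecasts/output/",
)

# SageMaker spins up the instances, scores every record, writes results to the
# output path, and shuts the infrastructure down automatically afterwards.
transformer.transform(
    data="s3://my-example-bucket/forecasts/input/sales-2018-07.csv",
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()
```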
And then at the top layer, we have new capabilities for our AI services. We announced 12 new language pairs for our translation service, and we announced a new transcription capability which allows us to take multi-channel audio, such as might be recorded here, but more commonly in contact centers. Just like you have a left channel and a right channel for stereo, contact centers often record the agent and the customer on the same track. Today you can pass that through our Transcribe service for long-form speech: it will split it into the channels, automatically transcribe them, analyze all the timestamps, and create a single transcript. From there you can see what was being talked about, you can check the topics automatically using Comprehend, or you can check compliance: did the agent say the words they have to say for compliance reasons at some point during the conversation? That's a material new capability.
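A minimal sketch of that pipeline with boto3 is below. The bucket, job name, and required compliance phrase are placeholders, and a production system would react to a job-completion notification rather than polling; Comprehend's synchronous APIs also cap the text size, so a long call may need to be chunked.

```python
# Minimal sketch: transcribe a two-channel contact-center recording with
# channel identification, then inspect topics and a compliance phrase.
# Bucket, job name, and the required phrase are placeholders.
import time
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

job_name = "support-call-2018-07-17"
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={"MediaFileUri": "s3://my-example-bucket/calls/support-call.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    Settings={"ChannelIdentification": True},   # split agent and customer channels
)

# Poll until the job finishes (a notification-driven flow is better in practice).
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
    if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

# Download the transcript JSON from the URI in the job result (omitted here),
# then work with the merged, time-aligned text.
transcript_text = "..."  # placeholder for the transcript pulled from that JSON

# Surface what the call was about.
phrases = comprehend.detect_key_phrases(Text=transcript_text, LanguageCode="en")

# Simple compliance check: did the agent say the required disclosure?
required = "this call may be recorded"
compliant = required in transcript_text.lower()
```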
What are the top services being used? Obviously there's Comprehend, Transcribe, and a variety of others; you guys have put a lot of stuff out there. What are the top sellers, the top usage, as a proxy for uptake?

We see a ton of adoption across all of these areas, but where a lot of the momentum is growing right now is SageMaker. Formula One racing just chose AWS and SageMaker as their machine learning platform, and the National Football League and Major League Baseball today announced they are extending their relationships and strategic partnerships with AWS around machine learning. All of these groups are using the data that streams out of these races and these games, whether that's the video, the telemetry of the cars, or the telemetry of the players, and they're pumping it through SageMaker to drive more engaging experiences for their viewers.

OK, so streaming this data into SageMaker is key. It can do video?

Yeah, just get it all in. All of it. We love data.

I'd love to follow up on that. So the question is: when will SageMaker overtake Aurora as the fastest growing product in the history of Amazon? Because I predicted at re:Invent that SageMaker would. Is it looking good right now? On paper you guys say it's growing, but give us an indicator.

Well, we don't break out revenue per service, but I'll say this: the same excitement that I see for SageMaker now, and the same opportunity and the same momentum, really remind me of AWS ten years ago. It's the same sort of transformative, democratizing approach, and it really engages builders. Excitement levels are super high in general right now, but I see the same level of enthusiasm and movement, and builders are building with it.

So what's this toy you have here? I know we don't have a lot of time, but you've brought a little prop.

This is DeepLens, the world's first deep learning enabled wireless video camera. We announced it and launched it at re:Invent 2017, and I can hold it up to the camera: it's a cute little device, we modeled it after WALL-E, the Pixar movie. There's an HD video camera on the front, and in the base we have an incredibly powerful custom piece of machine learning hardware, so it can process over a billion machine learning operations per second. You take the video in real time, send it to the GPU on board, and just start processing the stream in real time. That's kind of interesting, but the real value, and why we designed it, is that we wanted to find a way for developers to get literally hands-on with machine learning. Builders are lifelong learners: they love to learn, they have an insatiable appetite for new information and new technologies, and the way they learn is by experimenting. They spin this flywheel where you try something out, it works, you fiddle with it, it stops working, you learn a little bit more, and you go around and around. That's been tried and tested for developers for four decades. The challenge with machine learning is that doing that is still very difficult: you need labeled data, you need to understand the algorithms, it's just hard to do. But with DeepLens you can get up and running in ten minutes. It's connected back to the cloud and hooked up to SageMaker, and you can deploy a pre-built model down onto the device in ten minutes to do object detection. We do some wacky visual effects with neural style transfer, and we do hot dog and not hot dog detection, of course. But the real value comes in that you can take any of those models, tear them apart in SageMaker, start fiddling around with them, and then immediately deploy them back down onto the camera. Every developer has things on their desk that they can detect, pens and cups and people, whatever it is, so they can very quickly spin this flywheel where they're experimenting, changing, succeeding, failing, and just going round and round.
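To give a feel for what runs on the camera, here is a minimal sketch of the kind of inference loop a DeepLens project deploys to the device, in the style of the published DeepLens samples. The awscam module exists only on the camera itself, and the model path, input size, and label map below are placeholders; exact helper signatures may differ from this sketch.

```python
# Minimal sketch of an on-device DeepLens inference loop (runs on the camera
# as a deployed Lambda function). Model path, input size, and labels are
# placeholders; signatures follow the published DeepLens sample projects.
import cv2
import awscam

MODEL_PATH = "/opt/awscam/artifacts/my-object-detection-model.xml"  # hypothetical
INPUT_SIZE = 300                               # width/height the model expects
LABELS = {1: "pen", 2: "cup", 3: "person"}     # placeholder label map

model = awscam.Model(MODEL_PATH, {"GPU": 1})   # run on the on-board GPU

while True:
    ret, frame = awscam.getLastFrame()         # grab the latest camera frame
    if not ret:
        continue
    resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))
    raw = model.doInference(resized)           # local, real-time inference
    detections = model.parseResult("ssd", raw)["ssd"]
    for obj in detections:
        if obj["prob"] > 0.5:
            print(LABELS.get(obj["label"], "unknown"), obj["prob"])
```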
That's for developers, your target audience?

Yes, exactly.

And what are some of the things that have come out of it? Have you seen anything cool?

Yes. It has been incredibly gratifying, and really humbling, to see developers with no machine learning experience take this out of the box and build some really wonderful projects. One really good example is exercise detection: when you're doing a workout, they built a model which detects the exercise you're doing and then counts the reps of the weights you're lifting. We saw skeletal mapping, so you could map a person in 3D space using a simple camera. We saw security features, where you could put this on your door and it would send you a text message if it didn't recognize who was in front of the door. We saw one which was amazing, which would read books aloud to kids: you hold up the book, it detects the text, extracts the text, sends the text to Polly, and then it speaks it aloud for the kids. So there are games, educational tools, and little security gizmos. One group even trained a dog detection model which detected individual breeds, plugged it into an enormous power pack, and took it to the local dog park so they could test it out. All of this from a cold start, with no machine learning experience.

Are you having fun?

Yes, absolutely. One of the great things about machine learning is that you don't just get to work in one area: you get to work in Formula One and sports, you get to work in healthcare, you get to work in retail.

Every CTO is going to love this. Chief toy officer! Chief toy officer, I love it. So I've got to ask you, what's new in your world? GM of AI, artificial intelligence: what does that mean? Just quickly explain it for our audience. Is it all the software? What specifically are you overseeing, what's your purview within the realm of AWS?

That's a totally fair question. My purview is that I run the products for deep learning, machine learning, and artificial intelligence across the AWS machine learning team. I have a lot of fingers in a lot of pies: I get involved in the new products we're going to go build, I get involved in helping grow usage of existing products, I get to do a lot of invention, and I spend a ton of time with customers, but overall I work with the rest of the team on setting the technical and product strategy for machine learning at AWS.

What are your top priorities this year? Adoption, uptake, new product introductions? You guys don't stop.

Well, we do think we need to keep on introducing more and more things.

Any high ground that you want to take? What's the vision?

The vision is genuinely to continue to make it as easy as possible for developers to use machine learning. I can't overstate the importance, or the challenge. We're not at the point where you can just pull down some Python code and figure it out. We don't have a JVM for machine learning; there are very few developer tools, debuggers, or visualizers. It's still very hard. If you think of it in computing terms, we're still working in assembly language in machine learning. So there's a wealth of opportunity ahead of us, and the responsibility I feel very strongly is to continually improve that stack and continually bring new capabilities to more developers.

Cloud has been disrupting IT operations, AIOps as they're calling it in Silicon Valley and on the venture circuit, and AutoML, automatic machine learning, has been kicked around as a term. You've got to train the machines with something, and data seems to be it. What strikes me about this, compared to storage or compute or some of the core Amazon foundational products, is that those were just better ways to do something that already existed. This is not a better way to do something that already exists; this is democratization at the start of the process, the application of machine learning and artificial intelligence to a plethora of applications and use cases. That is fundamentally different.

Totally agree.

It's a step up in terms of putting the power in the hands of the people.

It's an area which is very fast moving and very fast growing, but what's funny is that it totally builds on top of the cloud. You really can't do machine learning in any meaningful production way unless you have a way that is cheap and easy to collect large amounts of data, and a way which allows you to pull down high-performance computation at any scale you need it. So through the cloud we've actually laid the foundations for machine learning going forward.

And other things are coming too? The cloud highlights the power it brings to these new capabilities.

Absolutely, and we get to build on them at AWS and at Amazon, just like our customers do. SageMaker runs on EC2; we wouldn't be able to do SageMaker without EC2. And in the fullness of time, we see that the usage of machine learning could be as big, if not bigger, than the whole of the rest of AWS combined. That's our aspiration.

Dr. Matt Wood, I wish we had more time, I love chatting with you. I'd love to do a whole other segment on what you're doing with customers. I know you guys have a great customer focus; as Andy always mentions when he's on theCUBE, you guys listen to customers. Maybe at re:Invent we'll circle back.

Sounds good.

Congratulations on your success, great to see you.

Thank you.
Dr. Matt Wood here in theCUBE, streaming all this data out to the Amazon cloud, which of course is where theCUBE hosts all of our stuff. It's theCUBE bringing you live action here in New York City, CUBE coverage of AWS Summit 2018 in Manhattan. We'll be back with more after this short break.