Stuti Deshpande, AWS | Smart Data Marketplaces
>> Announcer: From around the globe, it's theCUBE, with digital coverage of smart data marketplaces, brought to you by Io Tahoe. >> Hi everybody, this is Dave Vellante. And welcome back. We've been talking about smart data. We've been hearing Io Tahoe talk about putting data to work, and a key part of building great data outcomes is the Cloud, of course, and also Cloud native tooling. Stuti Deshpande is here. She's a partner solutions architect for Amazon Web Services and an expert in this area. Stuti, great to see you. Thanks so much for coming on theCUBE. >> Thank you so much for having me here. >> You're very welcome. So let's talk a little bit about Amazon. I mean, you have been on this machine learning journey for quite some time. Take us through how this whole evolution has occurred in technology over that period of time, since the Cloud really has been evolving. >> Amazon itself is an example of a company that has gone through a multi-year machine learning transformation to become the machine learning driven company that you see today. They have been improving on the original personalization models, using robotics in all the different fulfillment centers, developing a forecasting system to predict customer needs and iterating on it, and raising customer expectations on convenience, fast delivery, and speed, from developing natural language processing technology for end user interaction, to developing groundbreaking technology such as Prime Air drones to deliver packages to customers. So our goal at Amazon Web Services is to take this rich expertise and experience with machine learning technology across Amazon, and to work with thousands of customers and partners to put this powerful technology into the hands of developers and data engineers of all levels. >> Great. So, okay. So if I'm a customer or a partner of AWS, give me the sales pitch on why I should choose you for machine learning. What are the benefits that I'm going to get specifically from AWS? >> Well, there are three main reasons why partners choose us. First and foremost, we provide the broadest and the deepest set of machine learning and AI services and features for your business. The velocity at which we innovate is truly unmatched. Over the last year, we launched 200 different services and features. So not only is our pace accelerating, but we provide fully managed services to our customers and partners, who can easily build sophisticated AI driven applications and, utilizing those fully managed services, can build, train, and deploy machine learning models, which is both valuable and differentiating. Secondly, we can accelerate the adoption of machine learning. As I mentioned, for fully managed machine learning services we have Amazon SageMaker. SageMaker is a fully managed service that any developer of any level, or any data scientist, can utilize to build complex machine learning algorithms and models and deploy them at scale, with much less effort and at a much lower cost. Before SageMaker, it used to take so much time, expertise, and specialization to build all these extensive models, but with SageMaker you can literally build complex models within just days or weeks. To increase adoption, AWS has acceleration programs such as the ML Solutions Lab. And we also have education and training programs such as DeepRacer, which focuses on reinforcement learning, and Embark, which actually help organizations adopt machine learning very readily.
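To make the "build, train, and deploy" flow described above concrete, here is a minimal sketch using the SageMaker Python SDK. It is an editorial illustration, not something shown in the interview: the IAM role ARN, S3 bucket paths, algorithm choice, and hyperparameters are all placeholders you would replace with your own.

```python
# Hypothetical end-to-end example: train a built-in XGBoost model on data in S3,
# then host it behind a real-time endpoint. All names below are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.serializers import CSVSerializer

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.0-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# Launch a fully managed training job against CSV data staged in S3.
estimator.fit({"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")})

# Deploy the trained model to a managed HTTPS endpoint and send one test row.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    serializer=CSVSerializer(),
)
print(predictor.predict("0.5,1.2,3.4"))

predictor.delete_endpoint()  # clean up so the endpoint stops accruing cost
```

The same Estimator also accepts `use_spot_instances=True` (with a `max_wait` time), which is one way to use the managed Spot training that comes up later in the conversation as a cost-reduction option.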
And we also support three major frameworks, such as TensorFlow, PyTorch, and Apache MXNet, and we have separate teams who are dedicated to just focusing on these frameworks and improving the support of these frameworks for a wide variety of workloads. And finally, we provide the most comprehensive platform that is optimized for machine learning. So when you think about machine learning, you need to have a data store where you can store your training sets and your test sets, which is a highly reliable, highly scalable, and secure data store. Most of our customers want to store all of their data, and any kind of data, in a centralized repository that can be treated as the central source of truth, and in this case that would be the Amazon S3 data store, to build an end-to-end machine learning workflow. So we believe that we provide this capability of having the most comprehensive platform to build the machine learning workflow end to end. >> Great. Thank you for that. So my next question is, this is a complicated situation for a lot of customers. You know, having the technology is one thing, but adoption is sort of everything. So I wonder if you could paint a picture for us and help us understand how you're helping customers think about machine learning, thinking about that journey, and maybe give us the context of what the ecosystem looks like? >> Sure. If someone can put up the slide, I would like to provide a pictorial representation of how AWS envisions machine learning as a three-layer stack. Moving on to the next slide, I can talk about the bottom tier. The bottom tier, as you can see on this screen, is basically for advanced technologists, advanced data scientists, machine learning practitioners who work at the framework level. 90% of data scientists use multiple frameworks, because different frameworks are suited to different kinds of workloads. So at this layer, we provide support for all of the different types of frameworks. The bottom layer is really for the advanced scientists and developers who actually want to build, train, and deploy these machine learning models by themselves. Moving on to the next level, which is the middle layer: this layer is suited for non-experts as well. So here we have SageMaker, which provides a fully managed service where you can build, tune, train, and deploy your machine learning models at a very low cost, with very minimal effort, and at a higher scale. It removes all the complexity, heavy lifting, and guesswork from each stage of machine learning, and Amazon SageMaker has been a game changer there. Many of our customers are actually standardizing on top of Amazon SageMaker. And then, moving on to the next layer, which is the topmost layer: we call these AI services, because they mimic human cognition. These are services such as Amazon Rekognition, which is basically a deep learning service optimized for image and video analysis, and then we have Amazon Polly, which can do text to speech conversion, and so on and so forth. So these are the AI services that can be embedded into applications so that the end user, or the end customer, can build AI driven applications. >> Love it. Okay. So you've got the experts at the bottom with the frameworks, the hardcore data scientists; you kind of get the self-driving machine learning in the middle; and then you have all the ingredients. I'm like an AI chef or a machine learning chef.
I can pull in vision, speech, chatbots, fraud detection, and sort of compile my own solutions. That's cool. We hear a lot about SageMaker Studio. I wonder if you could tell us a little bit more, can we double click a little bit on SageMaker? That seems to be a pretty important component of that stack that you just showed us. >> I think that was an absolutely great summarization of all the different layers of the machine learning stack. So thank you for providing the gist of that. Of course, I'll be really happy to talk about Amazon SageMaker, because most of our customers are actually standardizing on top of SageMaker. We have spoken about how machine learning traditionally has so many complications, and it's a very complex, expensive, and iterative process, which makes it even harder because there are no integrated tools; if you do traditional machine learning development and deployment, there are no integrated tools for the entire workflow and deployment. And that is where SageMaker comes into the picture. SageMaker removes all the heavy lifting and complexity from each step of the machine learning workflow. It solves these challenges by providing all of the different components that are optimized for every stage of the workflow in one single tool set, so that models get to production faster, with much less effort, and at a lower cost. We really continue to add important capabilities to Amazon SageMaker. I think last year we announced more than 50 capabilities for SageMaker, improving its features and functionality, and I would love to call out a couple of those here. SageMaker Notebooks are one-click Jupyter notebooks that come along with EC2 instances, I'm sorry for using jargon here, Amazon Elastic Compute Cloud instances. So you just need a one-click deployment and you have the entire SageMaker notebook interface, along with the Elastic Compute instances, running, and that gives you faster time to production. If you are a data scientist or a data engineer who works extensively on machine learning, you must be aware that building training datasets is really complex. So there we have Amazon SageMaker Ground Truth, which is all about building machine learning training datasets, and which can reduce your labeling cost by 70%. And if you run machine learning models in general, there are workflows where you need to do inference. So there we have Amazon Elastic Inference, with which you can reduce cost by 75% by adding just a little GPU acceleration. Or you can reduce cost by using managed Spot training, utilizing EC2 Spot Instances. So there are multiple ways that you can reduce costs, and there are multiple ways you can improve and speed up your machine learning deployment and workflow. >> So one of the things I love about, I mean, I'm a Prime member, who isn't, right? I love to shop at Amazon. And what I like about it is the consumer experience. It kind of helps me find things that maybe I wasn't aware of, maybe based on other patterns that are going on in the buying community with people that are similar, if I want to find a good book. It always gives me great reviews and recommendations. So I'm wondering if that applies to sort of the tech world and machine learning. Are you seeing any patterns emerge across the various use cases? You have such scale. What can you tell us about that? >> Sure.
One of the patterns that we have seen all the time is building a scalable data layer for any kind of use case. So as I spoke about before, customers are really looking to put their data into a single repository where they have a single source of truth. Storing data, any kind of data, at any velocity, in a single source of truth actually helps them build models that run on that data and get useful insights out of it. So when you talk about an end-to-end workflow, using Amazon SageMaker along with a bigger, scalable analytical layer is actually what we have seen as one of the patterns, where they can perform analysis using Amazon SageMaker and build predictive models. To give an example, if you want to take a healthcare use case, they can build a predictive model that can minimize readmissions using Amazon SageMaker. So what I mean to say is, by not moving data around and by connecting different services to the same source of data, that's how you avoid creating copies of data, which is very crucial when you have training datasets and test datasets with Amazon SageMaker, and it is highly important to consider this. So the pattern that we have seen is to utilize a central repository of data, which could be Amazon S3 in this scenario, and a scalable analytical layer along with SageMaker. I would like to quote Intuit for a success story here. Using Amazon SageMaker, Intuit reduced its machine learning deployment time by 90%, I'm quoting here, from six months to one week. And if you think about the healthcare industry, there has been a shift from reactive to predictive care, so utilizing predictive models to accelerate research and the discovery of new drugs and new treatments. And we've also observed that nurses who were supported by AI tools increased their productivity by 50%. I would like to say that one of our customers is really diving deep into the AWS portfolio of machine learning and AI services, including Amazon Transcribe Medical, where they are able to provide insights so that their customers are getting benefits from them. Most of their customers are healthcare providers, and they are able to give them insights so that they can create more personalized and improved patient care. So there you have the end user benefits as well. One of the patterns that I can speak about, and which we have seen as well, is pairing a predictive model with real-time integration into healthcare records, which will actually help their healthcare provider customers with informed decision making and improving personalized patient care. >> That's a great example, several there. And I appreciate that. I mean, healthcare is one of those industries that is just so ripe for technology injection and transformation, and that is a great example of how the cloud has really enabled it. I mean, I'm talking about major changes in healthcare, with proactive versus reactive. We're talking about lower costs, better health, longer lives. It's really inspiring to see that evolve. We're going to watch it over the next several years. I wonder if we could close on the marketplace. I've had the pleasure of interviewing Dave McCann a number of times. He and his team have built just an awesome capability for Amazon and its ecosystem. What about the data products, whether it's SageMaker or other data products in the marketplace? What can you tell us? >> Sure. Both of these marketplace topics are interesting.
So let me first talk about the AWS Marketplace. On AWS Marketplace, you can browse and search for hundreds of machine learning algorithms and machine learning model packages in a broad range of categories, such as computer vision, text analysis, voice, image and video analysis, predictive models, and so on and so forth. And all of these models and algorithms can be deployed to a Jupyter notebook, which comes as part of the SageMaker platform. You can integrate all of these different models and algorithms into our fully managed service, which is Amazon SageMaker, through Jupyter notebooks, the SageMaker SDK, and even the command line as well. And this experience is available through the Marketplace catalog and APIs. So you get the same benefits as any other Marketplace product, such as seamless deployment and consolidated billing, for your machine learning algorithms and model packages. And this is really important, because these can be directly integrated into our SageMaker platform. And I don't want to leave out the data products as well. I'm really happy to quote one of the examples here: in the interest of current times, because we are in unprecedented times, we collaborated with our partners to provide some data products. One of them is a data hub from Tableau that gives you time series data of COVID-19 cases and deaths gathered from multiple trusted sources. And this is to provide better and more informed knowledge, so that everyone who is utilizing this product can make informed decisions and help the community in the end. >> I love it. I love this concept of being able to access the data, the algorithms, the tooling. And it's not just about the data, it's being able to do something with the data, and we've been talking about injecting intelligence into those data marketplaces. That's what we mean by smart data marketplaces. Stuti Deshpande, thanks so much for coming on theCUBE, sharing your knowledge, and telling us a little bit about AWS. It's been a pleasure having you. >> It's my pleasure too. Thank you so much for having me here. >> You're very welcome. And thank you for watching. Keep it right there. We will be right back right after this short break. (soft orchestral music)
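As an editorial aside on the Marketplace integration Stuti describes: once you have subscribed to a listing, its model package can typically be deployed through SageMaker like any other model. The sketch below is illustrative only; the ModelPackage ARN, role, endpoint name, instance type, and payload are placeholders, and the exact request and response formats depend on the specific listing.

```python
# Hypothetical example of deploying a subscribed AWS Marketplace model package
# with the SageMaker Python SDK, then invoking the resulting endpoint.
import boto3
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"                 # placeholder
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:111122223333:model-package/example-listing"   # placeholder
)

# Wrap the Marketplace listing as a deployable SageMaker model.
model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Host it behind a real-time endpoint.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="marketplace-model-demo",
)

# Invoke the endpoint with whatever payload format the listing documents.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="marketplace-model-demo",
    ContentType="text/csv",
    Body=b"0.5,1.2,3.4",
)
print(response["Body"].read())
```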
Swami Sivasubramanian, AWS | AWS Summit Online 2020
>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello everyone, welcome to this special CUBE interview. We are here at theCUBE Virtual covering AWS Summit Virtual Online. These are Amazon's Summits that they normally do all around the world; they're doing them now virtually. We are here with the Palo Alto COVID-19 quarantine crew getting all the interviews, here with a special guest, Vice President of Machine Learning, we have Swami, a CUBE Alumni, who's been involved in not only machine learning, but all of the major activity around AWS, around how machine learning's evolved, and all the services around machine learning workflows, from Transcribe to Rekognition, you name it. Swami, you've been at the helm for many years, and we've also chatted about that before. Welcome to the virtual CUBE covering AWS Summit. >> Hey, pleasure to be here, John. >> Great to see you. I know times are tough. Everything okay at Amazon? You guys are certainly cloud scaled, not too unfamiliar with working remotely. You do a lot of travel, but what's it like now for you guys right now? >> We're actually doing well. We are working hard to make sure we continue to serve our customers. We had taken measures to prepare, and we are confident that we will be able to meet customer demand for capacity during this time. We're also helping customers react quickly and nimbly to the current challenges, with various examples from amazing startups working in this area reorganizing themselves to serve customers. We can talk about that in a moment. >> Large scale, you guys have done a great job, and it's been fun watching and chronicling the journey of AWS as it now goes to a whole 'nother level, with the post-pandemic world expecting even more surge in everything from VPNs to workspaces, you name it, and all these workloads are going to be under a lot of pressure to deliver more and more value. You've been at the heart of one of the key areas, which is the tooling and the scale around machine learning workflows. And this is where customers are really trying to figure out, what are the adequate tools? How do my teams effectively deploy machine learning? Because now, more than ever, the data is going to start flowing in as virtualization, if you will, of life is happening. We're going to be in a hybrid world with life. We're going to be online most of the time. And I think COVID-19 has proven that with this new trajectory of virtualization and virtual work, applications are going to have to flex, and adjust, and scale, and be reinvented. This is a key thing. What's going on with machine learning, what's new? Tell us what you guys are doing right now. >> Yeah, in AWS we offer the broadest-- (poor audio capture obscures speech) All the way from expert practitioners, we offer our frameworks and infrastructure layer, with support for all popular frameworks, from TensorFlow, Apache MXNet, and PyTorch, (poor audio capture obscures speech) to custom chips like AWS Inferentia.
And then, for aspiring ML developers who want to build their own custom machine learning models, we offer SageMaker, which is our end-to-end machine learning service that makes it easy for customers to build, train, tune, and debug machine learning models. It is one of our fastest growing machine learning services, and many startups and enterprises are starting to standardize their machine learning building on it. And then, the final tier is geared towards application developers, who do not want to go into model building and just want an easy API to add capabilities like transcription, voice recognition, and so forth. And I wanted to talk about one of the new capabilities we are about to launch, an enterprise search service called Kendra, and-- >> So actually, just from a news standpoint, that's GA now, that's being announced at the Summit. >> Yeah. >> That was a big hit at re:Invent, Kendra. >> Yeah. >> A lot of buzz! It's available. >> Yep, so I'm excited to say that Kendra is our new machine learning powered, highly accurate enterprise search service that has been made generally available. And if you look at what Kendra is, we have actually reimagined the traditional enterprise search service, which has historically been an underserved market segment, so to speak. If you look at it, on the public search front, the web search front, it is a relatively well-served area, whereas enterprise search has been an area where, with data in the enterprise, there are a huge number of data silos spread across file systems, SharePoint, or Salesforce, or various other areas. And with a traditional search index, even simple questions, like when does the IT desk open, or what is the security policy, and so forth, these kinds of things have historically been hard for people to find within an enterprise, let alone if I'm actually in a material science company, or so forth, like what 3M was trying to do: enable collaboration among researchers spread across the world, to search their experiment archives and so forth. It has been super hard for them to do these things, and this is one of those areas where Kendra enables something new. Kendra is a deep learning powered search service for enterprises, which breaks down data silos and actually collects data across various sources, all the way from S3, or a file system, or SharePoint, and various other data sources, and uses state-of-the-art NLP techniques to actually index them. And then, you can query using natural language queries, such as, when does my IT desk open, and the answer, it won't just give you a bunch of random links, right? It'll tell you it opens at 8:30 a.m. in the morning. >> Yeah. >> Or, what is the cashback return for my corporate credit card? It won't give you a long list of links related to it. Instead, it'll give you the answer: 2%. So it's that highly accurate. (poor audio capture obscures speech) >> People who have been in the enterprise search or data business know how hard this is. And it's been a super hard problem in the old guard models, because databases were limited to schemas and whatnot. Now, you have a data-driven world, and this becomes interesting.
I think the big takeaway I took away from Kendra was not only the new kind of discovery and navigation that's possible, in terms of low latency and getting relevant content, but it's really the under-the-covers impact, and I'd like to get your perspective on this, because this has been an active conversation inside the community at cloud scale, which is that data silos have been a problem. People have built these data silos, and they really talk about breaking them down, but it's really hard; there are legacy problems, and applications that are tied to them. How do I break my silos down? Or how do I leverage existing silos? So I think you guys really solve a problem here around data silos and scale. >> Yeah. >> So talk about the data silos. And then, I'm going to follow up and get your take on the kind of size of data, megabytes, petabytes, I mean, talk about data silos and the scale behind it. >> Perfect. So if you look at how to actually set up something like a Kendra search cluster, even as simply as from your Management Console in AWS, you'll be able to point Kendra to various data sources, such as Amazon S3, or SharePoint, or Salesforce, and various others, and say, these are the kinds of data I want to index. Kendra automatically pulls in this data, indexes it using its deep learning and NLP models, and then automatically builds a corpus. Then I, as a user of the search index, can actually start querying it using natural language and don't have to worry about where it comes from. Kendra takes care of things like access control, and it uses finely-tuned machine learning algorithms under the hood to understand the context of a natural language query and return the most relevant results. I'll give real-world examples of some of the customers in the field who are using Kendra. For instance, if you take a look at 3M, 3M is using Kendra to support its material science R&D by enabling natural language search of their expansive repositories of past research documents that may be relevant to a new product. Imagine what this does for a company like 3M. Instead of researchers who are spread around the world repeating the same experiments on material research over and over again, now their engineers and researchers, everybody, can quickly search through documents, and they can innovate faster instead of trying to literally reinvent the wheel all the time. So it is better acceleration to the market. Even while we are in this situation, one of the interesting pieces of work you might be interested in is from the Semantic Scholar team at the Allen Institute for AI, which recently opened up a repository of scientific research called the COVID-19 Open Research Dataset. These are expert research articles. (poor audio capture obscures speech) And now the index is using Kendra, and it helps scientists, academics, and technologists quickly find information in a sea of scientific literature. So you can even ask questions like, "Hey, how different is convalescent plasma treatment compared to a vaccine?" And for various questions like that, Kendra automatically understands the context and gets the summary answer to these questions for the customers. And this is one of the things where, when we talk about breaking down the data silos, it takes care of getting at the data and putting it in a central location, understanding the context behind each of these documents, and then also being able to quickly answer the queries of customers using simple natural language as well.
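As a concrete illustration of the setup Swami just walked through (create an index, point it at data silos, then query in natural language), here is a minimal sketch using boto3. It is not from the interview: the role ARNs, bucket name, and example question are placeholders, and in practice the index creation and data source sync must finish before queries return results.

```python
# Hypothetical Kendra walkthrough: index an S3 bucket of documents, then ask a
# natural language question. All ARNs and names below are placeholders.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# 1) Create an index (asynchronous; it must reach ACTIVE before it can be used).
index_id = kendra.create_index(
    Name="enterprise-search-demo",
    RoleArn="arn:aws:iam::123456789012:role/KendraIndexRole",  # placeholder role
)["Id"]

# 2) Attach a data source, here an S3 bucket acting as one of the "silos".
kendra.create_data_source(
    IndexId=index_id,
    Name="policy-documents",
    Type="S3",
    RoleArn="arn:aws:iam::123456789012:role/KendraDataSourceRole",  # placeholder role
    Configuration={"S3Configuration": {"BucketName": "example-policy-docs"}},
)

# 3) After the data source has synced, query the index in natural language.
response = kendra.query(
    IndexId=index_id,
    QueryText="When does the IT help desk open?",
)
for item in response["ResultItems"]:
    # ANSWER items carry extracted answer text rather than just a document link.
    if item["Type"] == "ANSWER":
        print(item["DocumentExcerpt"]["Text"])
```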
>> So what's the scale? Talk about the scale behind this. What are the scale numbers? What are you guys seeing? I see you guys always do a good job of running a great announcement and then following up with general availability, which means I know you've got some customers using it. What are we talking about in terms of scale? Petabytes? Can you give some insight into the kind of data scale you're talking about here? >> So the nice thing about Kendra is that it is easily, linearly scalable. I, as a developer, can keep adding more and more data, and it linearly scales to whatever scale our customers want. That is one of the underpinnings of the Kendra search engine. This is where, if you look at customers like PricewaterhouseCoopers, they are using Kendra to power their regulatory application, to help customers search through regulatory information quickly and easily. So instead of sifting through hundreds of pages of documents manually to answer certain questions, Kendra now allows them to answer natural language questions. I'll give another example, which speaks to the scale. Baker Tilly, a leading advisory, tax, and assurance firm, is using Kendra to index documents. Compared to a traditional SharePoint-based full-text search, they are now using Kendra to quickly search product manuals and so forth, and they're able to get answers up to 10x faster. Look at the kind of impact Kendra has: being able to index vast amounts of data in a linearly scalable fashion, keep adding data on the order of terabytes and keep going, and being able to search 10x faster than a traditional keyword-search-based algorithm is actually a big deal for these customers. They're very excited. >> So what is the main problem that you're solving with Kendra? What's the use case? If I'm the customer, what's my problem that you're solving? Is it just response to data, whether it's a call center, or support, or is it an app? I mean, what's the main focus that you guys came out with? What was the vector of the problem that you're solving here? >> So when we talked to customers before we started building Kendra, one of the things that constantly came back to us was that they wanted the same ease of use and ability to search that they have on the world wide web, as consumers like us, when searching within an enterprise. It can be in the form of an internal search, to search within HR documents or internal wiki pages and so forth, or it can be to search internal technical documentation or the public documentation to help the contact centers, or it can be external search in terms of customer support and so forth, or to enable collaboration by sharing knowledge bases and so forth. So we really dissected each of these: why is this a problem? Why is it not being solved by traditional search techniques? One of the things that became obvious was that, unlike the external world, where web pages are linked easily with a very well-defined structure, the internal world within an enterprise is very messy. The documents are put in SharePoint, or in a file system, or in a storage service like S3, or in Salesforce, or Box, or various other things. And what customers really wanted was a system which knows how to actually pull the data from these various data silos, still understand the access controls behind them, and enforce them in the search.
And then, understand the real data behind it, and not just do simple keyword search, so that we can build a remarkable search service that really answers queries in natural language. And this has been the premise of Kendra, and this is what has started to resonate with our customers. I talked about some of the other examples, even in areas like contact centers. For instance, Magellan Health is using Kendra for its contact centers. They are able to seamlessly tie member, provider, or client-specific information with other inside information about health care for its agents, so that they can quickly resolve the call. Or it can be used internally to do things like external search as well. So, a very satisfied client. >> So you guys took the basic concept of discovery and navigation, which is the consumer web, find what you're looking for as fast as possible, but also took advantage of building intelligence around understanding all the nuances and configuration, schemas, access, under the covers, and allowing things to be discovered in a new way. So you basically make data discoverable, and then provide an interface. >> Yeah. >> For discovery and navigation. So it's a broad use case, then. >> Right, yeah, that sounds about right, except we did one thing more. We didn't just do discovery and make it easy for people to find the information while they are sifting through terabytes or hundreds of terabytes of internal documentation. One of the things that happens is that sometimes throwing hundreds of links to these documents at people is not good enough. For instance, if I'm actually trying to find out what the ALS marker is in a healthcare setting, for a particular research project, then I don't want to sift through thousands of links. Instead, I want to be able to correctly pinpoint which document contains the answer. So that is the final element, which is to really understand the context behind each and every document using natural language processing techniques, so that you not only discover the information that is relevant, but you also get highly accurate, precise answers to some of your questions. >> Well, that's great stuff, big fan. I was really liking the announcement of Kendra. Congratulations on the GA of that. We'll make some room on our CUBE Virtual site for your team to put more Kendra information up. I think it's fascinating. I think that's going to be the beginning of how the world changes, certainly with voice activation and API-based applications integrating this in. I just see a ton of activity, and this is going to have a lot of headroom. So I appreciate that. The other thing I want to get to while I have you here is the news around augmented artificial intelligence, which has been brought out as well. >> Yeah. >> So the GA of that is out. You guys are GA-ing everything, which is right on track with your cadence of AWS launches, I'd say. What is this about? Give us the headline story. What's the main thing to pay attention to with the GA? What have you learned? What's the learning curve, what are the results? >> So the augmented artificial intelligence service, I call it A2I, the Amazon A2I service, we made it generally available. And it is a very unique service that makes it easy for developers to augment machine learning predictions with human intelligence. And this, historically, has been a very challenging problem.
Let me take a step back and explain the general idea behind it. If you look at any developer building a machine learning application, there are use cases where even 99% accuracy in machine learning is not going to be good enough to directly use that result as the response back to the customer. Instead, you want to be able to augment that with human intelligence, to make sure: hey, if my machine learning model is returning a prediction and my confidence for this prediction is less than 70%, I would like it to be augmented with human intelligence. A2I makes it super easy for customers, for developers actually, to use a human reviewer workflow that comes in between. I can send it either to the public pool using Mechanical Turk, where we have more than 500,000 Turkers, or I can use a private workforce or a vendor workforce. And A2I seamlessly integrates with our Textract and Rekognition services, or with SageMaker custom models. So now, for instance, NHS has integrated A2I with Textract, and they are building these document processing workflows. In the areas where the machine learning model confidence is not as high, they are able to augment that with their human reviewer workflows, so that they can actually build a highly accurate document processing workflow as well. So this, we think, is a powerful capability. >> So this really kind of gets to what I've been feeling in some of the stuff we worked with you guys on, our machine learning piece. It's hard for companies to hire machine learning people. This has been a real challenge. So I like this idea of human augmentation, because humans and machines have to have that relationship, and if you build good abstraction layers, and you abstract away the complexity, which is what you guys do, and that's the vision of cloud, then you're going to need to have that relationship solidified. So at what point do you think we're going to be ready for theCUBE team, or any customer, that doesn't have, or can't find, a machine learning person? Or may not want to pay the wages that are required? I mean, it's hard to find a machine learning engineer, and when does the data science piece come in, with visualization, the spectrum of pure computer science, math, machine learning guru, to full end user productivity? Machine learning is where you guys are doing a lot of work. Can you just share your opinion on that evolution of where we are on that? Because people want to get to the point where they don't have to hire machine learning folks. >> Yeah. >> And have that kind of support too. >> If you look at the history of technology, I actually always believe that many of these highly disruptive technologies started out available only to experts, and then they quickly go through the cycles where they become almost commonplace. I'll give an example with something totally outside the IT space. Let's take photography. I think, more than probably 150 years ago, the first professional camera was invented, and it took like three to four years of training to actually take a really good picture. And there were only a very few expert photographers in the world. And then, fast forward to where we are now: even my five-year-old daughter takes very good portraits, and actually gives them as a gift to her mom for Mother's Day. Now, if you look at Instagram, everyone is a professional photographer. I kind of think the same thing is about to happen, and it will happen, in machine learning too.
Compared to 2012, when there were very few deep learning experts who could really build these amazing applications, now we are starting to see tens of thousands of customers using machine learning in production on AWS, not just proofs of concept but in production. And this number is rapidly growing. I'll give one example. Internally at Amazon, to help our entire company transform and make machine learning a natural part of the business, six years ago we started a Machine Learning University. Since then, we have been training all our engineers with machine learning courses in this ML University, and a year ago we actually made this coursework available through our Training and Certification platform in AWS, and within 48 hours more than 100,000 people registered. Think about it, that's a big all-time record. That's why I always like to believe that developers are eager to learn, they're very hungry to pick up new technology, and I wouldn't be surprised if, four or five years from now, machine learning just becomes a normal feature of the app, the same way databases are, and it becomes less special. If that day happens, then I would see my job as done. >> Well, you've got a lot more work to do, because I know from the conversations I've been having around this COVID-19 pandemic that there's general consensus and validation that the future got pulled forward. What used to be an inside industry conversation that we used to have around machine learning, and some of the visions that you're talking about, has been accelerated by the pace of the new cloud scale. Now that people recognize the virtual world, and are experiencing it firsthand globally, there is going to be an acceleration of applications. So we believe there's going to be a Cambrian explosion of new applications that have to reimagine and reinvent some of the plumbing and abstractions in cloud to deliver new experiences, because the expectations have changed. And I think one of the things we're seeing is that machine learning combined with cloud scale will create a whole new trajectory, a Cambrian explosion of applications. So this has kind of been validated. What's your reaction to that? I mean, do you see something similar? What are some of the things that you're seeing as we come into this world, this virtualization of our lives? It's every vertical, it's not one vertical anymore that's maybe moving faster. I think everyone sees the impact. They see where the gaps are in this new reality. What are your thoughts? >> Yeah, if you look at the history of machine learning, specifically around deep learning, the technology is really not new, especially because the early deep learning papers were probably written almost 30 years ago. So why didn't we see deep learning take off sooner? It is because, historically, deep learning technologies have been hungry for compute resources and hungry for huge amounts of data. And then the abstractions were not easy enough. As you rightfully pointed out, cloud has come in and made it super easy to get access to huge amounts of compute and huge amounts of data, and you can literally pay by the hour or by the minute.
And with new tools being made available to developers, like SageMaker and all the AI services we are talking about now, there is an explosion of options that are easy for developers to use, and we are starting to see a huge amount of innovation pop up. And unlike traditional disruptive technologies, which you usually see take hold in one or two industry segments and then cross the chasm and go mainstream, with machine learning we are starting to see traction in almost every industry segment, all the way from the financial sector, where fintech companies like Intuit are using it to forecast call center volume and do personalization. In the health care sector, companies like Aidoc are using computer vision to assist radiologists. And we are seeing it in areas like the public sector: NASA has partnered with AWS to use machine learning to do anomaly detection, algorithms to detect solar flares in space. And yeah, examples are plenty. It is because machine learning has now become so commonplace that almost every industry segment and every CIO is already looking at how they can reimagine and reinvent their business and make their customer experience better, powered by machine learning, in the same way Amazon asked itself eight or ten years ago. So, very exciting. >> Well, you guys continue to do the work, and I agree it's not just machine learning by itself, it's the integration and the perfect storm of elements that have come together at this time. Although it's pretty disastrous, I think ultimately we're going to come out of this on a whole 'nother trajectory. Creativity will emerge. You're going to start seeing those builders thinking, "Okay hey, I got to get out there. I can deliver, solve the gaps we've exposed. Solve the problems, create new expectations, new experiences." I think it's going to be great for software developers. I think it's going to change the computer science field, and it's really bringing in the lifestyle aspect of things. Applications have to have a recognition of this convergence, this virtualization of life. >> Yeah. >> The applications are going to have to have that. And remember, virtualization helped Amazon form the cloud. Maybe we'll get some new kinds of virtualization, Swami. (laughs) Thanks for coming on, really appreciate it. Always great to see you. Thanks for taking the time. >> Okay, great to see you, John, also. Thank you, thanks again. >> We're here with Swami, the Vice President of Machine Learning at AWS, a theCUBE alumni who has been on before, sharing his insights around this virtualization at this online event, the AWS Summit, that we're covering with the Virtual CUBE. As we go forward, more important than ever, the data is going to be important: searching it, finding it, and more importantly, having humans use it to build applications. So theCUBE coverage continues, for AWS Summit Virtual Online, I'm John Furrier, thanks for watching. (enlightening music)
Adam Burden, Accenture | Accenture Executive Summit at AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE! Covering the AWS Executive Summit. Brought to you by Accenture. >> Welcome everyone to theCUBE's live coverage of the Accenture Executive Summit, here at the Venetian as part of the AWS re:Invent show. I'm your host, Rebecca Knight. We're joined by Adam Burden, he is the chief software engineer at Accenture. Thank you so much for coming back on theCUBE, Adam. >> It's great to be here again, Rebecca, thanks a lot for inviting me. >> So I want to talk to you about some research that you conducted about the future, about future systems. We're going to get into what future systems are in a little bit, but I first want to hear about this research itself. What was the genesis of it, and what were you trying to understand? >> It was really interesting. First of all, we actually followed the scientific method for this, starting with a real hypothesis and then conducting a really big research study to find out, was that hypothesis true? And what we were trying to understand is this: we see this thing called an innovation achievement gap at many of our clients, where they're investing heavily in new disruptive technologies, but they're not seeing the benefit from it that they expect, while their peers often are. And why is that? We thought that was really important to understand for our clients who are trying to compete in the digital era. >> So you had this hypothesis, so what did you go in thinking? >> First of all, we went in and said we believe there are a number of barriers out there that are really preventing people from embracing and adapting in the digital age in the right way. A lot of it has to do with what I call the inertia of legacy, or the handicap of legacy. The way that they used to build systems, the methods, can be a really serious drawback, like if they're using waterfall techniques. Maybe their legacy systems, for example, are not really open; they don't provide the ability to interface with them properly. Another great example of the challenge of legacy systems is that they're built in a more monolithic nature, and because they're built in that fashion, it's really hard to maintain them in an Agile way, with lots of different teams working on components, because they need them all to be assembled together at once. So it forces you into a release schedule which can be months or even years long, and that type of speed just doesn't work in the digital age, so it's holding them back. That's some of the diagnostic that we went into this research study with, saying these are the challenges that are out there. >> So before we get talking about the results, I want you to just define for us what these future systems are. >> Great, and this is where we were really trying to say that we think it's time for a hard reset around a lot of the way that business systems and applications are built today. And the reason we believe that is that there are so many examples of very large enterprises that really should be dominating their industry, where small startups have come in and disrupted them, things that you would think should never have happened. The democratization of technology, the introduction of cloud, et cetera, the capabilities that AWS is talking to us about here at this conference, that's what's enabling them to do it.
But enterprises have so many advantages, the wealth of data that they've got, the enormous investment capacity, and other things, so how is that possible? And we really believe a lot of it comes down to the way that they're using, and the way they're embracing, these future systems. There are three characteristics of these things that we look at. First, we say that they're boundaryless, and they really break down the traditional stack of IT, so that it's more open and it's able to connect with services outside of the enterprise, and they embrace the way that that works. So the traditional layers of application and data, compute and storage, those are really going away, and everything's becoming code and much more componentized. Another one is adaptable. I'm a really big believer in this space, because I've seen so many things come in that just make you rethink the way that you may have built some things in the past, so that might be blockchain, or it could be DevOps or other things. Are there ways to build systems that are much more flexible and evolutionary in nature, so they don't have to be completely disrupted and changed in order to embrace some new technology? So adaptable is another one. And the third one is radically human. This is my favorite one, I think, if I had to pick one. It's about building systems for people, rather than making people fit around the technology that you're using. In fact, I'll give you an example: that keyboard right in front of you today, do you know when that keyboard was designed? >> Rebecca: Oh my god, when? >> 1887, or thereabouts, the 1880s. And basically, that keyboard was designed to slow you down, to keep you from typing too fast. And that was because people were typesetting newspapers, and they were crossing the little bars in their typewriters. Yet today, what's the date today, 2019, we're still using that, right? Isn't it time for us to have more of a radically human approach to technology, and instead of having people design themselves around how technology works, have the technology best designed for them? So taking better advantage of artificial intelligence, maybe making AI the new UI, those types of things are really going to change it, and we think that future systems will exhibit this key characteristic of radically human in the way that they're built and organized. >> Okay, so I like it. Adaptable, and boundaryless, and radically human. So how did you go about this survey, and then what did you find? >> Okay, so first, this was the single biggest survey of enterprise systems that Accenture's ever conducted. We surveyed more than 8300 companies, at the c-level, across 20 industries and 20 different geographies. The survey was looking at more than 100 data points from each one of them, as well as other demographic data; we collected 1.6 million pieces of data about this. We ran machine learning on the data to find patterns that surprised us, and we looked at the data in terms of our hypothesis to say, what is it about these future systems, are there some companies that are starting to do things in this boundaryless, adaptable, and radically human space that we could learn something from? And we found some really interesting things. So when I dug into the data, maybe the biggest headline out of it was this: the companies that have begun to adopt or use these future systems types of approaches, we'll call them the top 10% of this group.
Their revenues are growing at twice the speed of anyone else in their peer group. So think about that: if their revenues are growing faster, and everything else about their peers is the same, they're competitors, they're in the same geography, even the same industry, but the revenues of this group are growing faster, isn't that great evidence that adopting these characteristics of future systems is super important to business performance? It's a huge difference. >> Right, so that's compelling to me. So what are they doing differently, this 10% of companies, how are they leading the pack? >> Yeah, so it boils down to a couple of key things that they're really doing differently. I'll start by saying that instead of just looking at things as applications, they look at them more as systems of interconnected solutions, and they are treating components in a way that allows them to reassemble things in different and unique ways much faster than others can do. Sometimes they're using API solutions, and a lot of times they're using functions outside of their enterprise to do that, and it's giving them remarkable flexibility. Another thing is the methods, the way that they build systems and what they're embracing, but it goes beyond just using Agile, it's almost like a different culture altogether. I think about some clients that I've visited that really are getting this right, and the way that they look at failure, for example, as success. The conservative nature of a lot of enterprises as it pertains to technology, to carefully study it before they invest, before they move forward, is holding them back. Maybe that paid dividends for a long time when things were done in a much more waterfall nature, but in the digital age, you can't afford to take that kind of time to embrace or to try and leverage new technologies. I think another one that really stands out for me, too, is the breadth of disruptive technologies that they tried. It wasn't just that they experimented with the things that worked; they've experimented with a lot of things that maybe haven't produced the kind of results or outcomes that conventional wisdom said they were going to. Augmented reality is a good example, right? I think it's taken time for augmented reality to really start producing value in the enterprise, but it's been around for a while now. We found that the leaders had all experimented with augmented reality. It didn't necessarily mean that they'd adopted it and begun to use it, but that was actually something that separated them from the laggards, what a surprise, right? Because you would have thought, "Okay, well maybe the leaders are just smarter, they only choose the things that are really going to make a difference." But it's the fact that they were trying lots of different things, and they weren't afraid to experiment, that really made a difference for them. >> And not afraid to fail, too, as you said. >> Or maybe shelve it and say, "Not quite ready yet. Maybe in a few years we'll get there." So I thought that was fascinating, and it really helped us confirm that there are definitely things that these leaders are doing differently than the laggards, and it goes beyond just their adoption of future systems, it's the way that they were building them too, and the culture that they've embraced as a result.
>> So we had a dizzying number of announcements on the main stage this morning from Andy Jassy, so many different mainframe and legacy migrations, so many different areas that AWS is moving into and starting new services. How does what you heard today from Andy Jassy translate to the research that you're doing? >> Well, it's actually great, and I think it's a great microcosm of what is truly different about these leaders and laggards. All of them, in some way, have said, "We're adopting cloud." Okay, great, everybody's doing cloud, all 8300 companies; I can't think of one that said they were doing nothing with cloud. They were doing something with SaaS, or maybe they've got public cloud or others. But here's the difference, here's the difference. When the leaders do cloud, they think about it differently. The laggards look at cloud as a cheaper data center. They say, "Okay, we can just move our compute and storage into cloud, great, awesome." The leaders look at cloud as an innovation catalyst. They're taking advantage of the cloud native services, the things Andy was talking about today, Fraud Detector, private VPNs, all of the things that he was introducing and describing today; they can't wait to get their hands on that capability. And it's more than that, though, because you could do this on-premise, but it's too expensive and it takes too long to do that. When you've got a cloud service provider that's making things like Rekognition or SageMaker available at your fingertips, to do amazing things with artificial intelligence, that is what an innovation catalyst is all about, and the leaders are taking advantage of that at every turn, and that's why they can do things so fast. >> So for the 90% that are not in this leading category, it sounds as though it will require a real change in mindset. What's your advice to help these laggards improve? >> Yeah, so I would say it really boils down to two things I would give them. If you're in that laggard category, first of all, you can definitely move out of it, and the other thing is that you're in strange company. There are digital natives, the most successful born-in-the-cloud kind of companies, that have this problem too, so it's kind of surprising, right? You wouldn't expect that, but that's definitely the case, and we see lots of examples of it. The good news, though, is that you can move from A to B, and I would say it starts with doing two things. The first is embracing more fast and flexible technologies. The things that I really like to see companies embrace, or the things that we observed the leaders doing in this research, are looking at Agile at scale, embracing product-based operating models, and doing things like DevOps that allow them to increase automation in the way that they're building and deploying systems. That type of change is a significant adjustment to the way that you think about technology and how quickly it can be deployed for use, and if you look at these born-in-the-cloud digital companies that are succeeding in this space, that's the way that they do it; it really is part of the secret sauce. So that's one thing, embracing these solutions that make them fast and flexible. And the other one gets back to what I was describing earlier about cloud: recognize that cloud is an innovation catalyst. It is not going to be successful for you to think about cloud as just a cheaper data center.
It might very well be lower cost for you to do that, but if you're not taking advantage of the cloud native services, whether that's AWS databases like Aurora, or the new features that they introduced around low latency application development, those are the things that will really allow you to do stuff much faster than you could have ever imagined on-premise. So I'd start there, if I were a company that's one of those laggards, and then I'd look at, what is my blueprint for future systems, and how do I embrace those characteristics of boundaryless, adaptable, and radically human. >> Cloud as an innovation engine, I love it. Adam, thank you so much for coming back on theCUBE, it was a pleasure. >> It's great to be here, Rebecca, thank you again for inviting me. >> I'm Rebecca Knight, stay tuned for more of theCUBE's live coverage from the Accenture Executive Summit. (techno music)
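As a small, hypothetical illustration of the point Adam makes about managed services like SageMaker being "at your fingertips," here is a minimal sketch of invoking an already-deployed SageMaker endpoint from a few lines of code. The endpoint name and the payload are placeholders and assumptions, not anything from the interview or the research.

```python
# Hypothetical sketch: calling a managed SageMaker inference endpoint instead of
# standing up ML infrastructure on-premise. Endpoint name and payload are placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="churn-predictor",   # placeholder name for an already-deployed model
    ContentType="text/csv",
    Body="42,0,1,129.1,3",            # placeholder feature vector
)

prediction = response["Body"].read().decode("utf-8")
print("Model prediction:", prediction)
```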
Shayn Hawthorne, AWS | AWS re:Invent 2018
>> Live, from Las Vegas, it's theCUBE covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hey, welcome back everyone. theCUBE live here in Las Vegas for AWS re:Invent. I'm John Furrier with my co-host, Dave Vellante. Day three of wall-to-wall coverage, holding our voices together, excited for our next guest, Shayn Hawthorne, general manager at AWS for the exciting project around Ground Station, a partnership with Lockheed Martin. Really kind of outside the box, announced on Tuesday, not at the keynote, but this is a forward-thinking, real project in which satellites can be provisioned like cloud computing resources. Totally innovative, and it will change the nature of edge computing, feeding connectivity to anything. So, thanks for joining us. >> Thank you guys for having me. You're right, my voice is going out this week too. We've been doing a lot of talking. (John laughs) >> Great service. This is really compelling, 'cause it changes the nature of the network. You can feed connectivity, 'cause power and connectivity drive everything. Power, you've got battery. Connectivity, you've got satellite. Totally obvious now that you look at it, but not before this. Where did it come from? How did it all start? >> You know, it came from listening to our customers. Our customers have been talking with us, and they had a number of challenges in getting the data off of their satellites and down to the ground. So we listened to these customers, and we listened to the challenges they were experiencing: getting their data to the ground, having access to ground stations, having the ability at the network level to move the data around the world quickly to where they wanted to process it, and then also having the complex business process logic and other things that were required to help them run their satellite downlinks and uplinks. And then finally, the ability to actually have AWS services right there where the data comes down into the cloud, so that you could do great things with that data within milliseconds of it hitting the ground. >> So it's essentially satellite as a service with a back end data capability, data ingestion, analytics, and management capability. How'd that idea come about? I mean, it just underscores the scale of AWS, and I'm thinking about other things that you might be able to do. Where'd the idea come from? How was it germinated? >> Well, actually, let me just say one thing: we actually would call it Ground Station as a service. It's the Ground Station on the surface of the Earth that communicates with the satellite. It allows us to get the data off the satellite or send commands up to it. And so, like I was saying, we came up with the idea by talking to our customers. I think this is an incredible part of working at Amazon, because we actually follow through with our leadership principles. We worked backwards from the customer. We actually put together a press release and a frequently asked questions document, a PR/FAQ, in the traditional six page format. We started working it through our leadership, and it got all the way to the point that Andy and the senior leadership team within AWS made the decision that they were going to support our idea and the concept and the architecture that we had come up with to meet these customers' requirements. We were able to get to that by about March of 2018. By the end of March, Andy had even had us go in and talk with Jeff.
He gave us the thumbs up as well, and after six months, we've already procured 24 antennas. We've already built two Ground Stations in the United States, and we've downlinked hundreds of contacts with satellites, bringing Earth imagery down and other test data to prove that this system works and get it ready for preview. >> It's unbelievable, because you're basically taking the principles of AWS, which is eliminating the heavy lifting, and applying that to building Ground Stations, presumably, right? So, the infrastructure that you're building out, do you have partners that you're working with, are there critical players there that are enabling this? >> Yeah, it's really neat. We've actually had some really great partnerships, both in helping us build AWS Ground Station, as well as partners that helped us learn what the customers need. Let me tell you, first off, about the partnership that we've had with Lockheed Martin to develop a new, innovative antenna system that will collaboratively come together with the parabolic reflectors that AWS Ground Station uses. They've been working on this really neat idea that gives them the ability to downlink data all over the entire United States in a very resilient way, which means that if some of their Ground Station antennas in Verge don't work, due to man-made reasons or due to natural occurrences, then we're actually able to use the rest of the network to still continue to downlink data. And then we complementarily bring in AWS Astra for certain types of downlinks, and also to provide uplink commanding to other satellites. The other partnership we've worked on was with the actual customers who are going to use AWS Ground Station, like DigitalGlobe, Black Sky, Capella SAR, HawkEye 360, who all provided valuable input to us about exactly what they need in a Ground Station. They need the ability to rapidly downlink data, and they need the ability to pay by the minute, so that they are actually able to use variable expense to pay for satellite downlinks instead of the capital expense of going out and building it. And by doing that, we're able to offer them a product that's 80% cheaper than if they'd had to go out and build a complete network similar to what we built. And they're able to, like I said before, access great AWS services like Rekognition or SageMaker, so that they can make sense of the data that they bring down to Earth. >> It's a big idea, and I'm just sort of curious as to how, and if, you validated it. How'd you increase the probability that it was actually going to, you know, deliver a business return? Can you talk about that process? >> Well, we were really focused on validating that we could meet customer challenges and really give them the data securely and reliably, with great redundancy. So we validated, first off, by building our antennas and the Ground Stations and the preview software. We finished over a month and a half ago, and we've been rigorously testing it with our customer partners and letting them validate that the information we've provided back to them was 100% as good as what they would have received on their own network. We tested it out, and we've actually got a number of pictures and images downloaded over at our kiosk that were all brought in on AWS Ground Station, and it's a superb product over there. >> So Shayn, how does it work? You write this press release, this working backwards document, describe that process. Was that process new to you?
How did you find it? Was it a useful process? Obviously it was, 'cause you got the outcome you were looking for, but talk a little bit more about that approach. >> Yeah, it's actually very cool. I've only been at AWS for a year and a half, and so I would say that my experience at AWS so far completely validates working backwards from customers. We were turned on to the idea by talking to our customers and the challenges they described. I started doing analysis after the job was assigned to me by Dave Nolton, my boss, and I started putting together the first draft of our PR/FAQ and engaging with customers immediately. Believe it or not, we went through 28 iterations of the PR/FAQ before we even got to Andy. Everybody in our organization took part in helping to make it better, adding in, asking hard questions, and ensuring that we were really thinking this idea through and that we were obsessing over the customer. And then after we got to Andy and got that approved, it probably went through another 28 iterations before we got to Jeff. And then we went through talking with him. He asked additional hard questions to make sure that we were doing the right thing for the customer and that we were putting together the right kind of product. And finally, we've been iterating on it ever since, until we launched it a couple of days ago. >> Sounds like you were iterating, raising the bar, and it resonated with customers. >> Totally. And even as part of getting out of it-- >> That's Amazon's language of love. >> And then your engineering resources, you know, if people are asking you hard questions, you obviously need engineering folks to validate that it's doable. At what point do you get that engineering resource, how does that all work? >> Well, it's neat. In my division, the Region Services Division, we actually supported it completely from within the division, all the way until we got approval from Andy. And then we actually went in and started hiring very good skills. To show you what kind of incredible people we have at Amazon, we only had to hire about 10% of the space expertise from outside of the company. We were actually able to bring together 80-90% of the needed skills to build AWS Ground Station from people who've been working at Amazon.com and AWS. And we came together, we really learned quickly, we iterated, failed fast, put things together, changed it. And we were able to deliver the product on time, whole cloth, made from our own expertise. >> So just to summarize, from idea to actual, we're going to do this, how long did that take? >> I'd say that took about three months. From idea to making a decision, three months. From decision to having a preview product that we could launch at re:Invent, six months. >> That's unbelievable. >> It is. >> If you think about something of this scope. >> And it was a joy. I mean, it was incredible to be a part of something like this. It was the best work I've ever done in my life. >> Yeah, space is fun. >> It is. >> Shayn, thanks for coming on theCUBE, sharing your story and insight, we love this. We're going to keep following it. And we're going to see you guys at the Public Sector Summits, and all the events you guys are at, so, looking forward to seeing and provisioning some satellites. >> I'm looking forward to showing you what we do next. So thank you for having me. >> Great. We'll get a sneak peek. >> Congratulations. >> This is theCUBE here in Las Vegas, we'll be back with more coverage after this short break. (futuristic music)
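To make concrete the idea of provisioning satellite contacts "like cloud computing resources" and paying by the minute, here is a hypothetical sketch using the boto3 interface AWS Ground Station later made generally available (the service was still in preview at the time of this interview). The region, mission profile ARN, ground station name, and contact times are placeholders.

```python
# Hypothetical sketch: reserving a satellite downlink contact with AWS Ground Station.
# ARNs, station name, region, and times below are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

gs = boto3.client("groundstation", region_name="us-east-2")

# Discover which satellites are onboarded to the account.
satellites = gs.list_satellites()["satellites"]
print("Onboarded satellites:", [s["noradSatelliteID"] for s in satellites])

# Reserve a downlink contact window; antenna time is billed per minute.
start = datetime.now(timezone.utc) + timedelta(hours=6)
gs.reserve_contact(
    missionProfileArn="arn:aws:groundstation:us-east-2:123456789012:mission-profile/example",  # placeholder
    satelliteArn=satellites[0]["satelliteArn"],
    groundStation="Ohio 1",                 # placeholder ground-station name
    startTime=start,
    endTime=start + timedelta(minutes=10),
)
```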
Erica Windisch, IOpipe - CloudNOW Awards 2017
>> Lisa: I'm Lisa Martin with theCUBE. We're on the ground at Google for the 6th Annual CloudNOW Top Women in Cloud Awards. Very excited to be joined by award winner and CUBE alumni Erica Windisch, founder and CTO of IOpipe. Welcome back to theCUBE, Erica. >> Erica: Thank you. >> Great to have you here, and congratulations on being one of the top women in Cloud. >> Yeah, of course. >> Tell me, when you heard that you were being recognized, what did that mean to you and where you are in your career? >> Well, oh gosh, I mean, it was really big for me. I actually wasn't really expecting it. I think I was nominated and I totally forgot. I think somebody had mentioned to me that they were nominating me, and I had no idea about it. I totally forgot about it. But I mean, for me it's just so validating, because, well, one, I've done a lot of interesting things in Cloud and in tech, but I've never really gotten a lot of recognition for that. And also, just recognition, I mean, to be quite honest, I'm transgender. So the fact that I was recognized as a woman, Top Ten Women in Cloud Computing, was extra important and special for me. >> Oh, that's awesome. So tell me about your path to being where you are now. Were you always interested in computers and technology, or is that something that you kind of zigzagged your way to? >> Yeah, well, it was one of these things where I guess I had some interest. When I was a child, we had BASIC exercises printed in our math books, but our teachers never went over them. So I got kind of interested, and I would read through those little addendums in my math books, and I would start teaching myself BASIC. And I picked up a Commodore 64, and it didn't work, and I taught myself more BASIC with those manuals. I just had these little tiny introductions to technology and just taught myself everything, eventually using a high school job to buy myself books and just teaching myself from those books. Managed to grab Linux on some floppy disks, installed it, and tried to figure out how to use it. But I didn't really have a lot of mentors or anything that I could really follow. At best there were other kids at school who were into computers, and I just wanted to try and do what they were doing, or do better than they were doing. >> I love that, self-taught, you knew you liked this and you were not afraid to try, "Hey, let me teach myself." That's really inspiring, Erica. >> Yeah. >> So, speaking of inspiring, tell me about the IOpipe story. You're a Techstars company, so tell us a little bit about Techstars, and what that investment in IOpipe means. >> Yeah, so, I guess I first started IOpipe two years ago. And I found the co-founder, Adam Johnson, who joined me. And we applied for Techstars, got in, and that was the first validation that we had from outside of ourselves and maybe one angel investor at that time. And that was a really big deal, because it really helped accelerate us, gave us validation, allowed us to make the first hire, and they also taught us a lot about how to refine our elevator pitch and how to raise money effectively. And then we ended up raising money, of course. So at the end of Techstars we had a lot of visibility, and that helped us raise a two and a half million dollar seed round. >> Wow, so a really good launching pad for you. >> Yes, yeah. >> That's fantastic.
So tell us a little bit more about the technology. I know that there's AWS Lambda, we just got back from re:Invent last week, so tell us a little bit more about exactly what you guys do. >> Oh yeah, so what we do is we provide a service that allows developers to get better insights into their applications. They get observability into the application running in Lambda, as well as debugging and profiling tools. So you can actually get profiling data out of your Lambda, load that into Google DevTools, get flame graphs, and dig deep into which function called which function inside of each invocation, so for every Lambda invocation you can really dig down and see what's happening. We have things like custom metrics and alerts for that. So, for instance, we built this bot. I built it in two days. It's a Slack bot that, if you put an image in Slack, will run it through Amazon Rekognition and describe the objects in it. So, for instance, if you have visually impaired members of your team, they can find out what was in the images that people pasted. I built it in only two days, and I could use our tool to extract, let's say, how many objects were found in that image, or whether or not a specific object was found in that image, and then we can create alerts around those, do searches based on those, and get statistics out of our product on the data that was extracted from those images. So that was really cool, and we actually announced that feature, the profiling feature, at Midnight Madness at re:Invent, so it was like the opening ceremony for re:Invent. It was just us, Andy Jassy and Shaquille O'Neal. >> Lisa: What? >> Yeah, and we launched our product, and we did the demo of this Slack bot, and it was a lot of fun. >> Wow! So you were there last week, then? >> I was there, we were there last week. Myself, my co-founder, and one of our engineers were up there, and we were the first non-AWS speakers in the entire arena; it was really amazing. >> Wow, amazing. Congratulations. >> Thank you. >> So with all the cool announcements that came out last week on Lambda, Serverless, even new features that were announced for Rekognition, how does that either change the game or maybe kind of ignite the fire under you guys even a little bit more? >> Well, I think one of the biggest announcements relative to us was Cloud9. And we knew that this was going to happen, Amazon acquired them a year ago, a year and a half ago, but they finally launched it. And they really doubled down on providing a much better experience for developers on Lambda, to make it easier for developers to really build and ship and run that code on Lambda, which provides a much tighter experience for them so that they can onboard into things like IOpipe more easily. So that was really exciting, because I think that's really going to help with the adoption of Lambda. And some of the other features, like Alexa for Business, are really interesting. Again, a lot of Alexa apps are built on top of Lambda, so all of these are going to provide value to my own company, because we can tell you things like, "Well, how are your users interacting with those Alexa skills?" But I think it's just generally exciting, because there are just so many really cool things, I mean, I don't know how many things they announced at this re:Invent that were just really amazing.
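Before the conversation turns to Fargate below, here is a hypothetical sketch of the core of the Slack bot Erica describes: hand the bytes of an image to Amazon Rekognition and turn the detected labels into a short description. The Slack event handling and the IOpipe instrumentation are omitted, and the local file used as input is just a stand-in.

```python
# Hypothetical sketch of the image-describing core of the Slack bot described above.
# The Slack plumbing is omitted; a local file stands in for an image posted to a channel.
import boto3

rekognition = boto3.client("rekognition")

def describe_image(image_bytes: bytes) -> str:
    """Return a short, human-readable description of the objects in an image."""
    response = rekognition.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=5,
        MinConfidence=75.0,
    )
    labels = [label["Name"] for label in response["Labels"]]
    return "I see: " + ", ".join(labels) if labels else "I couldn't recognize anything."

if __name__ == "__main__":
    with open("example.jpg", "rb") as f:   # stand-in for an image pulled from Slack
        print(describe_image(f.read()))
```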
Another one I really loved was Fargate, because, I mean, I came from Docker, I used to be a maintainer of the Docker engine, and something that I was pushing for at that time in OpenStack and other projects was the idea of containers completely as a service, without the VM management side of things, because with ECS you had to manage virtual machines, and I was like, "Well, I don't want to manage virtual machines, I just want Amazon to give me containers." So I was really excited that they finally launched Fargate to offer that. >> So the last question in our last couple of minutes here: tell me about the culture and the team that you lead at IOpipe. You were saying before, you know, when you were a kid you were really self-taught and very inspired by your own desire to learn, but tell me a little bit about the people that work for you and how you help inspire them. >> Oh gosh, well, I think first of all, we are, right now we're nine people. I would say about four or five of us are under-represented minorities in tech in one way or another. It's really been fantastic that we've been able to have that level of diversity and inclusion. I think part of that is that we started very diverse. You know, a lot of companies will say, well, one of their problems with not having enough diversity is that they hire within their networks. Well, we hire within our networks, but we started very diverse in the first place. So that organic growth was very natural and very diverse for us, whereas that kind of organic, network-driven growth can be problematic if you don't start in a very diverse place. So I think that's been really great, and I think that the fact that we have that level of diversity and inclusion with our employees is kind of inspiring, because a lot of workplaces in tech just aren't like that. It's really hard to find, and granted, we're only nine right now. I would really hope that we can keep that up, and I would like to actually make our workforce even more diverse than it is today. But yeah, I don't know, I just think it's fantastic, and I want what we're doing to be a role model and an inspiration to other companies, to say, "Yes, you can do this." And also to the people in the workforce: yes, you can be a woman in tech; yes, you can be trans in tech; yes, you can be non-binary in tech. I am binary, but we have non-binary people on staff. And, I don't know, I hope that's inspiring to people, and also, myself being a transgender founder, I maybe know one or two other people who are transgender founders, it's very uncommon. And I hope that also is an inspiration for people. >> Well, I think so. Speaking for myself, I find you very inspiring. You seem to be someone that's really known for thinking, "I'm not afraid of anything. I'm just going to try it. Starting a company, I'm going to try it." And it sounds like you guys are very purposefully building a culture that's very inclusive, and so I think that, as well as your recognition as one of the Top Women in Cloud, be proud of that, Erica. That's awesome. >> Thank you. >> And you got to meet Shaquille O'Neal? >> I got to meet Shaquille O'Neal, yeah. >> I've got to see the photo. (laughs) >> Yeah. >> Well, thank you so much, Erica, for joining us back on theCUBE. Congratulations on the award, and we look forward to seeing the exciting things that you do in the future. >> Okay great, thank you. >> I'm Lisa Martin on the ground with theCUBE at Google for the CloudNOW Top Women in Cloud Awards. Thanks for watching, bye for now.
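As a closing illustration of the Fargate idea Erica mentions, running a container as a task without managing any virtual machines, here is a hypothetical sketch using the ECS API with the Fargate launch type. The cluster, task definition, subnet, and security group are placeholders.

```python
# Hypothetical sketch: running a container on Fargate, with no EC2 instances to manage.
# Cluster, task definition, subnet, and security group values are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="example-cluster",                  # placeholder cluster name
    launchType="FARGATE",                       # serverless containers, no VM fleet
    taskDefinition="image-describer:1",         # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],      # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```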