Jonathan Ballon, Intel | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Oh, welcome back to theCUBE. Continuing coverage here from AWS re:Invent, as we start to wind down our coverage here on the second day. We'll be here tomorrow as well, live on theCUBE, bringing you interviews from Hall D at the Sands Expo. Along with Justin Warren, I'm John Walls, and we're joined by Jonathan Ballon, who's the Vice President of the Internet of Things at Intel. Jonathan, thank you for being with us today. Good to see you. >> Thanks for having me guys. >> All right, interesting announcement today, and last year it was all about DeepLens. This year it's about DeepRacer. Tell us about that. >> What we're really trying to do is make AI accessible to developers and democratize various AI tools. Last year it was about computer vision. The DeepLens camera was a way for developers to very inexpensively get a hold of a camera, the first deep-learning-enabled, cloud-connected camera, so that they could start experimenting and see what they could do with that type of device. This year we took the camera and we put it in a car, and we thought, what could they do if we add mobility to the equation? And specifically, we wanted to introduce a relatively obscure form of AI called reinforcement learning. Historically this has been an area of AI that hasn't really been accessible to most developers, because they haven't had the compute resources at their disposal, or the scale to do it. And so now, what we've done is we've built a car, and a set of tools that help the car run. >> And it's a little miniature car, right? I mean it's a scale. >> It's 1/18th scale, it's an RC car. It's four-wheel drive, four-wheel steering. It's got GPS, it's got two batteries. One that runs the car itself, one that runs the compute platform and the camera. It's got expansion capabilities.
We've got plans for next year of how we can turbo-charge the car. >> I love it. >> Right now it's baby steps, so to speak, and basically giving the developer the chance to write a reinforcement learning model, an algorithm that helps them to determine what is the optimum way that this car can move around a track. But you're not telling the car what the optimum way is, you're letting the car figure it out on its own. And that's really the key to reinforcement learning: you don't need a large, pre-labeled dataset to begin with. You're actually letting, in this case, a device figure it out for itself, and this becomes very powerful as a tool when you think about it being applied to various industries, or various use-cases, where we don't know the answer today, but we can allow vast amounts of computing resources to run a reinforcement model over and over, perhaps millions of times, until they find the optimum solution. >> So how do you, I mean that's a lot of input, right? That's a crazy number of variables. So, how do you do that? How do you, in this case, provide a car with all the multiple variables that will come into play, how fast it goes, which direction it goes, on different axes and all those things, to make its own determinations? And how will that then translate to a real specific case in the workplace? >> Well, I mean the obvious parallel is of course autonomous driving. AWS had Formula One on stage today during Andy Jassy's keynote, that's also an Intel customer, and what Formula One does is they have the fastest cars in the world, and they have over 120 sensors on that car that are bringing in over a million pieces of data per second.
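An aside on the reinforcement learning loop Jonathan describes: the reward-driven, no-dataset idea can be shown in a few lines of Python. This is an illustrative toy, not the DeepRacer stack; the five-cell "track," the reward values, and the hyperparameters are all invented for the example.

```python
import random

# Toy track: states 0..4 in a line; the "car" starts at 0 and the
# goal is state 4. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    """Environment: returns (next_state, reward). Reward only at the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
for _ in range(200):  # episodes: pure trial and error, no labeled dataset
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON else \
            (0 if Q[s][0] > Q[s][1] else 1)
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# The greedy policy the car "figured out on its own": 1 = "right" in every state.
policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)
```

Nothing told the agent the track layout; the reward signal alone shaped the policy, which is the point Jonathan is making about DeepRacer.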
Being able to process that vast amount of data that quickly, which includes a variety of data types, audio data, visual data, and being able to use that to inform decisions in close to real time, requires very powerful compute resources. And those resources exist both in the cloud as well as close to the source of the data itself, at the edge, in the physical environment. >> So, tell us a bit about the software that's involved here, 'cause people think of Intel, and some people don't know about the software heritage that Intel has. The Intel inside isn't just the hardware chips that are there; there's a lot of software that goes into this. So, what's the Intel angle here on the software that powers this kind of distributed learning? >> Absolutely, software is a very important part of any AI architecture, and for us there's been a tremendous amount of investment. It's almost, perhaps, equal investment in software as in hardware. In the case of what we announced today with DeepRacer and AWS, there are some toolkits that allow developers to better harness the compute resources on the car itself. Two things specifically: one is we have a tool called RL Coach, or Reinforcement Learning Coach, that is integrated into SageMaker, AWS' machine learning toolkit, and that allows them to get better performance in the cloud from the data that's coming off their model. And then we also have a toolkit called OpenVINO. It's not about drinking wine. >> Oh darn. >> Alright. >> Open means it's an open-source contribution that we made to the industry.
Vino, V-I-N-O, is Visual Inference and Neural network Optimization, and this is a powerful tool, because so much of AI is about harnessing compute resources efficiently, and as more and more of the data that we bring into our compute environments is actually generated in the physical world, it's really important to be able to do that in a cost-effective and power-efficient way. OpenVINO allows developers to actually isolate individual cores or an integrated GPU on a CPU without knowing anything about hardware architecture, and it allows them then to apply different applications, or different algorithms, or inference workloads very efficiently onto that compute architecture, but it's abstracted away from any knowledge of that. So, it's really designed for an application developer, who maybe is working with a data scientist that's built a neural network in a framework like TensorFlow, or ONNX, or PyTorch, any tool that they're already comfortable with, to abstract away from the silicon and optimize their model onto this hardware platform, so it delivers orders of magnitude better performance than what you would get from a more traditional GPU approach. >> Yeah, and that kind of decision making about understanding chip architectures to be able to optimize how that works, that's some deep magic really. The amount of understanding that you would need to have to do that as a human is enormous, but as a developer, I don't know anything about chip architectures. So it sounds like, and it's a thing that we've been hearing over the last couple of days, these tools allow developers to have essentially superpowers, so you become an augmented intelligence yourself. Rather than just giving everything to an artificial intelligence, these tools actually augment the human intelligence and allow you to do things that you wouldn't otherwise be able to do. >> And that's, I think, the key to getting mass market adoption of some of these AI implementations.
So, for the last four or five years, since ImageNet solved the image recognition problem and we got greater accuracy from computer models than we do from our own human eyes, AI was really limited to academia, or large IT tech companies, or proof-of-concepts. It didn't really scale into these production environments. But what we've seen over the last couple of years is really a democratization of AI by companies like AWS and Intel that are making tools available to developers, so they don't need to know how to code in Python to optimize a compute module, or they don't need to, in many cases, understand the fundamental underlying architectures. They can focus on whatever business problem they're trying to solve, or whatever AI use-case it is that they're working on. >> I know you talked about DeepLens last year, and now we've got DeepRacer this year, and you've got the contest going on throughout this coming year with DeepRacer, and we're going to have a big race at AWS re:Invent 2019. So what's next? I mean, what are you thinking about conceptually to, I guess, build on what you've already started there? >> Well, I can't reveal what next year's, >> Well that I understand >> Project will be. >> But generally speaking. >> But what I can tell you is what's available today in these DeepRacer cars is a level playing field. Everyone's getting the same car and they have essentially the same tool sets, but I've got a couple of pro-tips for your viewers if they want to win some of these AWS Summits that are going to be around the world in 2019. Two pro-tips: one is they can leverage the OpenVINO toolkit to get much higher inference performance from what's already on that car. So, I encourage them to work with OpenVINO.
It's integrated into SageMaker, so that they have easy access to it if they're an AWS developer. But also we're going to allow an expansion, almost an accelerator of the car itself, by being able to plug in an Intel Neural Compute Stick. We just released the second version of this stick. It's a USB form factor. It's got a Movidius Myriad X vision processing unit inside. This year's version is eight times more powerful than last year's version, and when they plug it into the car, all of that inference workload, all of those images and information that's coming off those sensors, will be put onto the VPU, allowing all the CPU and GPU resources to be used for other activities. It's going to allow that car to go at turbo speed. >> To really cook. >> Yeah. (laughing) >> Alright, so now you know, you have no excuse, right? I mean Jonathan has shared the secret sauce, although I still think when you said OpenVINO you got Justin really excited. >> It is vino time. >> It is five o'clock actually. >> Alright, thank you for being with us. >> Thanks for having me guys. >> And good luck with DeepRacer for the coming year. >> Thank you. >> It looks like a really, really fun project. We're back with more, here at AWS re:Invent on theCUBE, live in Las Vegas. (rhythmic digital music)
Mark Shuttleworth, Canonical | OpenStack Summit 2018
(soft electronic music) >> Announcer: Live from Vancouver, Canada, it's theCUBE. Covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back, I'm Stu Miniman, here with my cohost John Troyer, and you're watching theCUBE's exclusive coverage of OpenStack Summit 2018 in Vancouver. Happy to welcome you back to the program, off the keynote stage this morning, Mark Shuttleworth, the founder of Canonical. Thank you so much for joining us. >> Stu, thanks for the invitation. >> Alright, so you've been involved in this OpenStack stuff for quite a bit. >> Right, since the beginning. >> I remember three years ago we were down in the other hall talking about the maturity of the platform. I think three years ago, it was like this container thing was kind of new, and the basic infrastructure stuff was starting to get, in a nice term, boring. Because that meant we could go about business while the buzz was on this cool new thing: we're going to kill Amazon, kill VMware, whatever else people with a misconceived notion thought. So bring us forward to where we are in 2018. What are you hearing from customers as you look at OpenStack and this community? >> Well, I think you pretty much called it. OpenStack very much now is about solving a real business problem, which is the automation of the data center and the cost parity of private data centers with public data centers. So I think we're at a time now where people understand the public cloud is a really good thing. It's great that you have these giant companies dueling it out to deliver better quality infrastructure at a better price. But then at the same time, having your own private infrastructure that runs cost-effectively is important. And OpenStack really is the only approach to that that exists today.
And it's important to us that the conversation is increasingly about what we think really matters, which is the economics of owning it, the economics of running it, and how people can essentially keep that in line with what they get from the public cloud providers. >> Yeah, one of the barometers I use for vendors these days is, in this multi-cloud world, where do you sit? Do you play with the HyperScalers? Are you a public cloud denier? Or, like most people, are you somewhere in-between? In your keynote this morning, you were talking a bit about all of the HyperScalers that use your products as well as-- >> Ubuntu is at the heart of all of the major public cloud operations at multiple levels. So we see them as great drivers of innovation, great drivers of exposure of Ubuntu into the enterprise. We're still, by far, the number one platform used in public cloud by enterprises. It's hard to argue that public cloud is just test and dev now. It really, really isn't, and so most of that is still Ubuntu. And now we're seeing that pendulum swing: all of those best practices, that consumption of Ubuntu, that understanding of what a leaner, meaner Enterprise Linux looks like. Bringing that back to the data center is exciting. For us, it's an opportunity to help enterprises rethink the data center to make it fully automated from the ground up. OpenStack is part of that, Kubernetes is part of that, and now the cherry on top is really AI, where people understand they have to be able to do it on public cloud, on private infrastructure and at the Edge. >> Mark, I wanted to talk about open source. Marketing open source, for a minute. We are obviously here, we're part of an open source community. Open source, de facto, has won the cloud technology stack wars. So there's one way of selling OpenStack where you pound on open a lot. >> I'm always a bit nervous about projects that put "open" in the name.
It sounds like they're sort of trying to gloss over something, or wash over something, or prove a point. They shouldn't have to. >> There's the philosophy of open source, which certainly has to stay there, right, because that's what drove the innovation. But I was kind of impressed that on the stage today, you talked about the benefits. You didn't just say, well, it's open. You said, well, we're facilitating these benefits: speed to market, cost, et cetera. Can you talk about your approach, Canonical's approach, to talking about this open source product in terms of its benefits? >> Sure, look, open source is a license. Under that license, there's room for a huge spectrum of interests and opinions and approaches. And I'd say that I certainly see an enormous amount of value in what I would call the passion-based open source story. Now, OpenStack is not that. It's too big, too complicated, to be one person's deep passion. It really isn't. But there's still a ton of innovation that happens in our world, across the full spectrum of what we see with open source, which is really experts trying to do something beautiful and elegant. And I still think that's really important in open source. You also have a new kind of dimension, which is almost like industrial trench warfare with open source. Which is huge organizations leveraging, effectively, their ability to get something widespread, widely adopted, quickly and efficiently, by essentially publishing it as open source. And often, people get confused between these two ends of the spectrum. There's a bunch in between. What I like about OpenStack is that I think it's over the industrial trench warfare phase. You know, you just don't see a ton of people showing up here to throw parties and prove to everyone how cool they are. They've moved on to other open source projects.
The people who are here are people who essentially have the real problem of: I want to automate my data center, I want to have, essentially, a cloud that runs cost-effectively in my data center that I can use as part of a multi-cloud strategy. And so now I think we're into that sort of more mature place with OpenStack. We're not either sort of artisan or craftsman oriented, nor are we guns-blazing brand oriented. It's kind of now just solving the problems. >> Mark, there's still some nay-sayers out in the marketplace. Either they say that this never matured, and there's a certain analyst firm that put out a report a couple of months ago that kind of denigrated what's happening here. And then there's others that, as you said, are off chasing that next big wave of open source. What are you hearing from your customers? You've got a good footprint around the globe. >> So that report is nonsense, for a start. They're always wrong, right. If they're hyping something, they're wrong, and if they're dissing something then they're usually wrong too. >> Stu: They have a cycle for that, I believe. (chuckling) >> Exactly. Selling gold at the barroom. Here's how I see it. I think that enterprises have a real problem, which is how do they create private cloud infrastructure. OpenStack had a real problem in that it had too many opinions, too many promises. Essentially a governing structure, not a leadership structure. Our position on this has always been: focus on the stuff that is really necessary. There was a ton of nonsense in OpenStack, and that stuff is all failing. And so what? It was never essential to the mission. The mission is: stand up a data center in an automated way, provide it, essentially, as resources, as a service to everybody who you think is authorized to be there, and segment and operate that efficiently. There's only a small part of OpenStack that was ever really focused on that. That's the stuff that's succeeding, that's the stuff we deliver.
That's the stuff we think very carefully about how to automate, so that, essentially, anybody can consume it at reasonable prices. Now, we have learned that it's better for us to do the operations, almost. It's better for us actually to take it to people as a solution: say look, explain your requirements to us, then let us architect that cloud with you, then let us build that cloud, then let us operate that cloud. Until it's all stable and the economics are good; then you can take over. I think what we have seen is that if you ask every single different company to build OpenStack, they will make a bunch of mistakes and then they'll say OpenStack is the problem. OpenStack's not the problem. Because we do it again and again and again, because we do it in many different data centers, because we do it with many different industries, we're able to essentially put it on rails. When you consume OpenStack that way, it's super cheap. These aren't my numbers; analysts have studied the costs of public infrastructure, the cost of the established, incumbent enterprise virtualization solutions, and so on. And they found that when you consume OpenStack from Canonical it is much, much cheaper than any of your other options in your own private data center. And I think that's a success that OpenStack should be proud of. >> Alright, you've always done a good job of poking at some of the discussions happening in the industry. I wouldn't say I was surprised, but you were highlighting AI as something that was showing a lot of promise. People have been a little hot and cold depending on what part of the market you're at. Tell us about AI, and I'd love to hear your thoughts in general on Kubernetes, Serverless, and some of those new trends that are out there. >> Sure, the big problem with data science was always finding the right person to ask the right question.
So you could get all the data in the world in a data lake, but now you have to hire somebody who instinctively has to ask the right question that you can test out of that data. And that's a really hard problem. What machine learning does is kind of invert the problem. It says, well, why don't we put all that data through a pattern matching system, and then we'll end up with something that reflects the underlying patterns, even if we don't know what they are. Now, we can essentially say: if you saw this, what would you expect? And that turns out to be a very powerful way to deal with huge amounts of data that, previously, you had to kind of have this magical intuition to get to the bottom of. So I think machine learning is real, it's valuable in almost every industry, and the challenges now are really about standardizing underlying operations so that the people who focus on the business problems can, essentially, use them. So that's really what I wanted to show today: us working with, in that case it was Google, but you can generalize that. To standardize the experience for an institution who wants to hire developers, have them effectively build machine-driven models, and then put those into production. There's a bunch of stuff I didn't show that's interesting. For example, you really want to take the learnings from machine learning and you want to put those at the Edge. You want to react to what's happening as close to where it's happening as possible. So there's a bunch of stuff that we're working on with various companies. It's all about taking that AI outcome right to the Edge, to IoT, to Edge Cloud, but we don't have time to get into all of that today. >> Yeah, and Ubuntu is at the Edge, on the mobile platform. >> So we're in a great position in that we're on the Cloud. Now you see what we're doing in the data center for enterprises, effectively recrafting the data center as a much leaner, more automated machine.
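An aside on the inversion Mark describes above, from "ask the right question" to "if you saw this, what would you expect?": it can be illustrated with the simplest possible learner. The data points and labels below are invented for the example, and real systems use neural networks rather than one-nearest-neighbor, but the question-and-answer flow is the same.

```python
# Past observations poured into the "pattern matcher": invented toy data.
observations = [
    ((1.0, 1.0), "normal"),
    ((1.2, 0.9), "normal"),
    ((8.0, 7.5), "anomaly"),
    ((7.8, 8.1), "anomaly"),
]

def expect(seen):
    """Answer "what would you expect?" by matching the closest past pattern
    (one-nearest-neighbor by squared Euclidean distance)."""
    def sq_dist(point):
        return sum((a - b) ** 2 for a, b in zip(point, seen))
    _, label = min(observations, key=lambda ob: sq_dist(ob[0]))
    return label

print(expect((1.1, 1.0)))  # -> normal
print(expect((8.2, 7.7)))  # -> anomaly
```

Nobody wrote a rule defining "anomaly"; the answer falls out of the stored patterns, which is the inversion Mark is pointing at.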
Really driving down the cost of the data center. And yes, we're on the higher-end things. We're never going to be on the light bulb. We're a full general-purpose operating system. But you can run Ubuntu on a $10 board now, and that means that people are taking it everywhere. Amazon, for example, put Ubuntu on the DeepLens, so that's a great example of AI at the edge. It's super exciting. >> So the Kubernetes, Serverless-type applications, what's your thinking around there? >> Serverless is a lovely way to think about the flow of code in a distributed system. It's a really nice way to solve certain problems. What we haven't yet seen is a Serverless framework that you can port. We've seen great Serverless experiences being built inside the various public clouds, but there's nothing consistent about them. Everything that you invest in a particular place is very useful there, but you can't imagine taking that anywhere else. I think that's fine. >> Stu: Today it's primarily Lambda. >> And I think the other clouds have done a credible job of getting there quickly. But kudos to Amazon for kind of pioneering that. I do think we'll see generalized Serverless; it just doesn't exist at the moment, and as soon as it does we'll be itching to get it into people's hands. >> Okay, yeah? >> Well, I just wanted to pull out something that you had said in case people miss it: you talked about managed OpenStack. And that, I think, managed Kubernetes has been a trend over the last year. Managed OpenStack now. Has been trans-- >> With these complex pieces of infrastructure, you could easily drown in learning it all, and if you're only ever going to do one, maybe it makes sense to have somebody else do it for a while. You can always take it over later. So we're unusual in that we will essentially stand up something complex like an OpenStack or a Kubernetes, operate it as long as people want, and then train them to take over.
So we're not exclusively managed and we're not exclusively arms-length. We're happy to start the one way and then hand over. >> I think that's an important development, though, that's been developing as the systems get more complicated. One UNIX admin needs a whole new skill set, or broader skill set, now that we're orchestrating a whole cloud, so I think that's great. And that's interesting. Anything else you're looking forward to, in terms of operational models? I guess we've said Ubuntu everywhere, from the edge to the center, and now managed as well. Anything else we're looking at, in terms of what operators should be looking at? >> Well, I think it just is going to stay sort of murky for a while, simply because each different group inside a large institution has a boundary of their authority, and to them, that's the edge. (chuckling) And so the term is heavily overloaded. But I would say, ultimately, there are a couple of underlying problems that have to be solved, and if you look at the reference architectures that the various large institutions are putting out, they all show you how they're trying to attack these patterns using Ubuntu. One is physical provisioning. The one thing that's true with every Edge deployment is there are no humans there. So you can't kind of Band-Aid over the idea that when something breaks you need to completely be able to reset it from the ground up. So MAAS, Metal as a Service, shows up in the reference architectures from AT&T and from SoftBank and from Deutsche Telekom and a bunch of others, because it solves their problem. It's the smallest piece of software you can use to take one server or 10 servers or 100 servers and just reflash them with Windows or CentOS or whatever you need. That's one thing. The other thing that I think is consistently true in all these different Edge Cloud permutations or combinations is that overhead's really toxic. If you need three nodes of overhead for a hundred-node OpenStack, it's 3%.
For a thousand-node OpenStack, it's .3%. It's nothing, you won't notice it. If you need three nodes of OpenStack for a nine-node Edge Cloud, well then that's 30% of your infrastructure costs. So really thinking through how to get the overhead down is kind of a key for us. And in all the projects with telcos in particular that we're working on, that's really what we bring: that underlying understanding and some of those really lightweight tools to solve those problems. On top of that, they're all different, right. Kubernetes here, LXD there, OpenStack on the next one. AI everywhere. But those two problems, I think, are the consistent things we see as a pattern in the Edge. >> Alright, so Mark, last question I have for you. Company update. So last year we talked a little bit about focusing, where the company's going, talked a bit about the business model, and you said to me, "Developers should never have to pay for anything." It's the governance people and everything like that. Give us the company update, everything from rumors of, hey, maybe you're IPO-ing, to what's happening. What can you share? >> Right, so the twin areas of focus: IoT and cloud infrastructure. IoT continues to be an area of R and D for us, so we're still essentially underwriting an IoT investment. I'm very excited about that. I think it's the right thing to be doing at the moment. I think IoT is the next wave, effectively, and we're in a special position. We really can get down, both economically and operationally, into that sort of small-edge kind of scenario. Cloud, for us, is a growth story. I talked a little bit about taking Ubuntu and Canonical into the finance sector. In one year, we closed deals with 20% of the top 20 banks in the world to build Ubuntu-based open infrastructure. That's a huge shift from the traditional dependence exclusively on VMware and Red Hat. Now, suddenly, Ubuntu's in there, Canonical's in there.
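Stepping back to the overhead arithmetic a few lines up, since it is the crux of Mark's edge argument: the same three control-plane nodes shrink to noise on a big cloud and dominate a small one. A quick sketch of his numbers; note that he rounds the nine-node case to 30%, while the plain workload-node ratio comes out near 33%.

```python
def overhead_pct(control_nodes, workload_nodes):
    """Control-plane nodes as a percentage of the workload nodes they manage."""
    return 100.0 * control_nodes / workload_nodes

# Mark's examples: 3 nodes of overhead against clouds of different sizes.
print(overhead_pct(3, 100))   # 3.0  -> negligible for a hundred-node cloud
print(overhead_pct(3, 1000))  # 0.3  -> invisible at a thousand nodes
print(overhead_pct(3, 9))     # ~33  -> roughly the "30%" that sinks a nine-node edge cloud
```

The fixed numerator is the whole point: the only lever at the edge is shrinking the control plane itself, which is why he calls overhead "toxic" there.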
I think everybody understands that telcos really love Ubuntu, and so that continues to grow for us. Commercially, we're expanding both in EMEA and here in the Americas. I won't talk more about our corporate plans, other than to say I see no reason for us to scramble to cover any other areas. I think cloud infrastructure and IoT is plenty for one company. For me, it's a privilege to combine that kind of business with what happens in the Ubuntu community. I'm still very passionate about the fact that we enable people to consume free software and innovate. And we do that without any friction. We don't have an enterprise version of Ubuntu. We don't need an enterprise version of Ubuntu; the whole thing's enterprise. Even if you're a one-person startup. >> Mark Shuttleworth, always a pleasure to catch up. Thank you so much for joining us. >> Mark: Thank you, Stu. >> For John Troyer, I'm Stu Miniman. Back with lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (soft electronic music)
Swami Sivasubramanian, AWS | AWS re:Invent 2017
>> Announcer: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2017. Presented by AWS, Intel and our ecosystem of partners. >> Hey, welcome back everyone. We're live here in Las Vegas. It's theCUBE's exclusive coverage of AWS, Amazon Web Services re:Invent 2017, Amazon Web Services' annual conference, 45,000 people here. Five years in a row for theCUBE, and we're going to be continuing to cover years and decades after, it's on a tear. I'm John Furrier, my co-host Stu Miniman. Exciting stuff, one of the biggest themes here is AI, IoT, data, Deep Learning, DeepLens, all the stuff that's been really trending has been really popular at the show. And the person behind that at Amazon is Swami. He's the Vice President of Machine Learning at AWS, among other things, Deep Learning and data. Welcome to theCUBE. >> Stu: Good to see you. >> Excited to be here. >> Thanks for coming on. You're the star of the show. Your team put out some great announcements, congratulations. We're seeing new abstraction layers, complexity going away. You guys have made it easy to do voice, Machine Learning, all that great stuff. >> Swami: Yeah. >> What are you most excited about, so many good things? Can you pick a child? >> I don't want to pick my favorite child among all my children. Our goal is to actually put Machine Learning capabilities in the hands of all developers and data scientists. That's why, I mean, we want to actually provide different kinds of capabilities, right from, like, ML developers who want to build their own Machine Learning models. That's where SageMaker comes in, an end-to-end platform that lets people build, train and deploy these models in a one-click fashion. It supports all popular Deep Learning frameworks. It can be TensorFlow, MXNet or PyTorch. We also not only help train but automatically tune, where we use Machine Learning for Machine Learning to build these things. It's very powerful.
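The build, train and deploy "in a one-click fashion" flow Swami describes can be pictured as a three-step pipeline. The sketch below is a purely local mock of that shape; the class and method names are illustrative assumptions, not the real SageMaker SDK, and the toy centroid "training" stands in for a real framework like TensorFlow or MXNet running on managed infrastructure:

```python
# Local mock of a build -> train -> deploy pipeline, illustrating the
# workflow shape described above. Not the real SageMaker API.

class MockEstimator:
    """Toy stand-in for a managed training job."""

    def __init__(self):
        self.weights = None

    def fit(self, examples):
        # "Training": compute a per-class centroid from labeled points
        # (a trivial model standing in for a real deep-learning framework).
        totals, counts = {}, {}
        for features, label in examples:
            acc = totals.setdefault(label, [0.0] * len(features))
            for i, value in enumerate(features):
                acc[i] += value
            counts[label] = counts.get(label, 0) + 1
        self.weights = {
            label: [v / counts[label] for v in acc]
            for label, acc in totals.items()
        }
        return self

    def deploy(self):
        # "Deployment": return a callable that plays the role of a
        # hosted inference endpoint.
        def endpoint(features):
            def dist(center):
                return sum((a - b) ** 2 for a, b in zip(features, center))
            return min(self.weights, key=lambda lbl: dist(self.weights[lbl]))
        return endpoint

endpoint = MockEstimator().fit(
    [([0.0, 0.0], "not_hotdog"), ([1.0, 1.0], "hotdog")]
).deploy()
print(endpoint([0.9, 0.8]))  # -> hotdog
```

In the managed flow the same three calls map onto remote training hardware and a hosted HTTPS endpoint; the point here is only the shape of the developer experience.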
The other thing we're excited about is the API services that you talked about, the new abstraction layer for app developers who do not want to know anything about Machine Learning but want to transcribe their audio to convert from speech to text, or translate it or understand the text, or analyze videos. The other thing, coming from academia, where I'm excited about is I want to teach developers and students Machine Learning in a fun fashion, where they should be excited about Machine Learning. It's such a transformative capability. That's why actually we built a device meant for Machine Learning in a hands-on fashion that's called DeepLens. We have developers right here at re:Invent where, from the time they take to unbox to actually building a computer vision application, like Hotdog or Not Hotdog, they can do it in less than 10 minutes. It's an amazing time to be a developer. >> John: Yeah. >> Stu: Oh my God, Swami. I've had so many friends that have sat through that session. First of all, the people that sit through it, they get like a kit. >> Swami: That's awesome. >> Stu: They're super excited. Last year it was the Echo Dot and everybody with new skills. This year, DeepLens definitely seems to be the one that all the geeks are playing with, really programming stuff. There's a bunch of other things here, but definitely some huge buzz and excitement. >> That's awesome, glad to hear. >> Talk about the culture at Amazon. Because I know, in covering you guys for so many years and now being intimate with a lot of the developers in your teams, you guys just don't launch products, you actually listen to customers. You brought up Machine Learning for developers. What specifically jumped out at you from talking to customers around making it easier? It was too hard, was it, or it was confined to hardcore math driven data scientists? Was it just the thirst and desire for Machine Learning? Or you're just doing this for side benefits, it's like a philanthropy project?
>> No, in Amazon we don't build technology because it's cool. We build technology because that's what our customers want. Like 90 to 95% of our roadmap is influenced by listening to customers. The other 5 to 10% is us reading between the lines. One of the things I actually ... When I started playing with Machine Learning, having built a bunch of database, storage and analytics products, when I started getting into Deep Learning and various things, I realized there's a transformative capability in these technologies. It was too hard for developers to use it in a day to day fashion, because these models are too hard to build and train. We had to find the right level of abstraction. That's why we actually think of it as a multi-layered strategy where we cater to expert practitioners and data scientists. For them we have SageMaker. Then for app developers who do not want to know anything about Machine Learning, they say, "I'll give you an audio file, transcribe it for me," or "I'll give you text, get me insights or translate it." For them we actually provide simple to use API services, so that they can actually get going without having to know anything about what is TensorFlow or PyTorch. >> TensorFlow got a lot of attention, because that really engaged the developer community in the current Machine Learning, because we're like, "Oh wow, this is cool." >> Swami: Yeah. >> Then it got, I won't say hard to use, but it was high end. Are you guys responding to TensorFlow in particular or you're responding to other forces? What was the driver? >> In Amazon we have been using Machine Learning for like 20 years. Since the year of like 1995 we have been leveraging Machine Learning for the recommendation engine, fulfillment centers where we use robots to pick packages, and then Alexa of course and Amazon Go. One of the things we actually hear is, while frameworks like TensorFlow, MXNet or PyTorch are cool, it is just too hard for developers to make use of them.
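The "I'll give you an audio file, transcribe it for me" tier Swami describes boils down to a single job-submission API call. The helper below only builds the request locally; the field names follow Amazon Transcribe's StartTranscriptionJob shape, but treat them as an assumption to verify against the boto3 documentation rather than a definitive reference:

```python
# Sketch of the request an app developer would submit to a managed
# transcription service. Field names modeled on Amazon Transcribe's
# StartTranscriptionJob; verify against the boto3 docs before use.

def build_transcribe_request(job_name, s3_uri, language="en-US"):
    if not s3_uri.startswith("s3://"):
        raise ValueError("media must live in S3")
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": s3_uri},
        # Infer the container format from the file extension.
        "MediaFormat": s3_uri.rsplit(".", 1)[-1],
        "LanguageCode": language,
    }

# A real caller would hand this dict to
# boto3.client("transcribe").start_transcription_job(**request).
request = build_transcribe_request("demo-job", "s3://my-bucket/talk.wav")
print(request["MediaFormat"])  # -> wav
```

The point of the abstraction layer is exactly this: the developer supplies a pointer to the audio and gets text back, with no TensorFlow or PyTorch anywhere in sight.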
We actually don't mind if our users use Caffe or TensorFlow. We want them to be successful, where they take it from idea to production. And when we talked to developers, this process took anywhere from 6 to 18 months, and it should not be this hard. We wanted to do what AWS did to the IT industry for compute, storage and databases. We want to do the same for Machine Learning, by making it really easy to get started and consume it as a utility. That was our intent. >> Swami, I wonder if you can tell us. We've been talking for years about the flywheel of customers for Amazon. What are the economies of scale that you get for the data that you have there? I think of all the training of all the Machine Learning, the developers. How can you leverage the economies of scale that Amazon has in all those kind of environments? >> When you look at Machine Learning, Machine Learning tends to be mostly the icing on the cake. Even when we talk to the expert professors who are the top 10 scientists in the world, the data that goes into the Machine Learning is going to be the determining factor for how good it is, in terms of how well you train it and so forth. This is where data scientists keep saying the breadth of storage and database and analytics offerings that exist really matters for them to build highly accurate models. When you talk about not just the data, but actually the underlying database technology and storage technology, it really is important. S3 is the world's most powerful data lake that exists, that is highly secure, reliable, scalable and cost effective. We really wanted to make sure that customers who store high resolution satellite imagery on S3 and Glacier can leverage ML capabilities in a really easy one-click fashion. That's important. >> I got to ask you about the roadmap, because you say customers are having input on that. I would agree with you that that would be true, because you guys have a track record there.
But I got to put the dots that I'm connecting in my mind right now forward by saying, you guys ... And telegraphing here, certainly heard Werner say it and Andy, data is key, and opening up that data. And we're seeing New Relic here, Sumo Logic. They're sharing anonymous data from usage, workloads, really instructive. Data is instructive for the marketplace, but you got to feed the models on the data. The question for you is, you guys get so much data. It's really a systems management dream, it's an application performance dream. You got more use case data. Are you going to open that up, and what's the vision behind it? >> Actually we already have. If you look at X-Ray, a service that we launched last year, that is one of the coolest capabilities. Even I am a developer during the weekends, when I code. Being able to dive into specific capabilities, so one of the performance insights is where is the bottleneck. It's so important that actually we are able to do things like X-raying into an application. We are just getting started. The Cloud transformed how we are building applications. Now with Machine Learning, what is going to happen is we can even do various things like ... what is going to be the bottleneck on what kind of datasets. It's just going to be such an amazing time. >> You can literally reimagine applications that were once dominant with all the data you have, if you opened it up and then let me bring my data in. Then that will open up a bigger aperture of data. Wouldn't that make the Machine Learning and then AI more effective? >> Actually, you already can do similar things with Lex. Lex, think of it as an automatic speech recognition and natural language understanding service where we are pre-trained on our data.
But then to customize it for your own chat bots or voice applications, you can actually add your own intents and several things, and we customize the underlying Deep Learning model specific to your data. You're leveraging the amount of data that we have trained on, in addition to specifically tuning for yours. It's only going to get better and better, to your point. >> It's going to happen, it's already happening. >> It's already happening, yeah. >> Swami, great slate of announcements on the Machine Learning side. We're seeing the products get all updated. I'm wondering if you can talk to us a little bit about the human side of things. Because we've seen a lot of focus, right, it's not just these tools but it's the tools and the people putting those together. How is Amazon going to help the data scientists, help retrain, help them get ready to be able to leverage and work even better with all these tools? >> Machine Learning, we have seen some amazing usage of how developers are using Machine Learning. For example, Marinus Analytics is a non-profit organization whose goal is to fight human trafficking. They use Rekognition, which is our image processing service, to actually identify persons of interest and victims so that they can notify law enforcement officers. Or the Royal National Institute of Blind People. They actually are using text to speech to generate audio books for the visually impaired. I'm really excited about all the innovative applications that we can do to simply improve our everyday lives using Machine Learning, and it's such early days. >> Swami, the innovation is endless in my mind. But I want to get two thoughts from you, one startup and one practitioner. Because we've heard here in theCUBE, people come here and saying, "I can do so much more now. "I've got my EMR, it's so awesome. "I can do this solving problem." Obviously making it easy to use is super cool, that's one. I want to get your thoughts on where that goes next. And two, startups.
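The "add your own intents" customization Swami describes for Lex can be pictured as mapping utterances onto named intents. The toy resolver below uses word overlap purely to show the input and output shape; the real service resolves intents with a trained deep-learning NLU model, not keyword matching, so everything here is an illustrative assumption:

```python
# Toy intent resolution: map an utterance to the best-matching intent.
# Real Lex uses a trained NLU model; this only illustrates the shape.

def resolve_intent(utterance, intents):
    """intents: {intent_name: [sample utterances]} -> intent name or None."""
    words = set(utterance.lower().split())

    def overlap(samples):
        # Best word overlap between the utterance and any sample phrase.
        return max(len(words & set(s.lower().split())) for s in samples)

    best = max(intents, key=lambda name: overlap(intents[name]))
    return best if overlap(intents[best]) > 0 else None

intents = {
    "OrderPizza": ["i want a pizza", "order a large pizza"],
    "TrackOrder": ["where is my order", "track my delivery"],
}
print(resolve_intent("please track my pizza delivery", intents))  # -> TrackOrder
```

The developer's contribution is only the intent names and sample utterances; tuning the underlying model to that data is the part the service handles.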
We're seeing a lot of startups retooling on Cloud economics, I call it post-2013. >> Swami: Yeah. >> They don't need a lot of money, they can hit critical mass. They can get product-market fit earlier. They can get economic value quicker. So they're changing the dynamics. But the worry is, how do I leverage the benefit of Amazon? Because we know Amazon is going to grow and all Clouds grow, and just for you guys. How do I play with Amazon? Where is the white space? How do I engage, do I just ...? Once I'm on the platform, how do I become the next New Relic or Splunk? How can I grow my marketplace and differentiate? Because Amazon might come out with something similar. How do I stay in that cadence of growth, even as a startup? >> If you see, in AWS we have tens of thousands of partners of course, right from ISVs, SIs and whatnot. The software industry is an amazing industry where it's not like a winner-take-all market. For example, in the document management space, even though we have S3 and WorkDocs, it doesn't mean Dropbox and Box are not successful either, and so forth. What we provide in AWS is the same infrastructure for any startup or for my team, even though I built probably many of the underlying infrastructure. Nowadays for my AI team, it's literally like a startup, except I probably stay in an AWS building, but otherwise I don't get any internal APIs, it's the same API access to S3. >> John: It's a level playing field. >> It's a level playing field. >> By the way, everyone should know, he wrote DynamoDB. As an intern or was that ...? (Swami laughs) And then SQS, rockstar techy here, so it's great to have. You're what we call a tech athlete. Great to have you on. No white space, just go for it. >> Innovation is the key. The key thing, what we have seen with amazing startups who have done exceptionally well is they intently listen to customers and innovate and really look for what matters for their customers and go for it. >> The biggest buzz of the show from your group.
What's your biggest buzz from the show here? DeepLens? >> DeepLens has been ... Our idea was to actually come up with a fun way to learn Machine Learning. Machine Learning, it used to be, even until recently, actually as recently as last week, an intimidating thing for developers to learn, even though it's all the buzz. It's not really straightforward for developers to use it. We thought, "Hey, what is a fun way for developers "to get engaged and build Machine Learning?" That's why we actually conceived DeepLens, so that you can actually build fun applications. I talked about Hotdog, Not Hotdog. I'm personally going to be building what I call a Bear Cam. Because I live in the suburbs of Seattle, where we actually have bears visiting our backyard, digging through our trash. I want to actually have DeepLens with a pre-trained model that I'm going to train to detect bears, so that it sends me a message through SQS and SNS and I get a text. >> Here's an idea we want to do, maybe your team can build it for us. CUBE Cam, we put the DeepLens here and then as anyone goes by, if they're a Twitter follower of theCUBE they can send me a message. (John and Swami laughing) Swami, great stuff. Deep Learning again, more goodness coming. >> Swami: That's awesome. >> What are you most excited about? >> In Amazon we have a phrase called, "It's Day One." Even though we are a 22-year-old company, I jokingly tell my team that, "It's day one for us, "except we just woke up and we haven't even "had a cup of coffee yet." We have just scratched the surface with Machine Learning, there is so much stuff to do. I'm super excited about this space. >> Your goals for this year is what? What's your goals? >> Our goals for this year were to put Machine Learning capabilities in the hands of all developers of all skill levels. I think we have done pretty well so far I think. >> Well, congratulations Swami here on theCUBE.
Vice president of Machine Learning and a lot more, all those applications that were announced Wednesday along with the Deep Learning and the AI and the DeepLens, all part of his innovative team here at Amazon. Changing the game, and theCUBE is doing our part bringing data to you, video and more coverage. Go to Siliconangle.com for all the stories, Wikibon.com for research and of course theCUBE.net. I'm John Furrier and Stu Miniman. Thanks for watching, we'll be right back.
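Swami's Bear Cam sketches a common edge pattern: the on-device model emits (label, confidence) pairs per frame, and a detection above a threshold triggers a notification through SQS/SNS. A minimal sketch of the alert logic follows; the helper only builds the payload, and in the real pipeline it would be published with boto3's SNS publish (omitted here so the sketch stays self-contained):

```python
# Bear Cam alert logic: turn per-frame detections into a notification
# payload. Publishing via SNS/SQS (boto3) is deliberately left out.

def bear_alert(detections, threshold=0.8):
    """detections: [(label, confidence), ...] -> alert dict or None."""
    hits = [
        conf for label, conf in detections
        if label == "bear" and conf >= threshold
    ]
    if not hits:
        return None
    return {
        "Subject": "Bear Cam alert",
        "Message": f"Bear detected in the backyard (confidence {max(hits):.2f})",
    }

frame = [("trash_can", 0.97), ("bear", 0.91), ("dog", 0.40)]
alert = bear_alert(frame)
print(alert["Message"])  # -> Bear detected in the backyard (confidence 0.91)
```

Thresholding on-device is what keeps the camera from texting you about every raccoon; only confident bear detections ever leave the house.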
Miles Kingston, Intel | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2017 presented by AWS, Intel and our ecosystem of partners. >> Hello and welcome back. Live, here is theCUBE's exclusive coverage in Las Vegas. 45,000 people attending Amazon Web Services' AWS re:Invent 2017. I'm John Furrier with Lisa Martin. Our next guest is Miles Kingston, he is the General Manager of the Smart Home Group at Intel Corporation. Miles, it's great to have you. >> Thank you so much for having me here, I'm really happy to be here. >> Welcome to theCUBE Alumni Club. First time on. All the benefits you get as being an Alumni is to come back again. >> Can't wait, I'll be here next year, for sure. >> Certainly, you are running a new business for Intel, I'd like to get some details on that, because smart homes. We were at the Samsung Developer Conference, we saw the smart fridge, the smart living room. So we're starting to see this become a reality; at CES, for the last 10 years, it's been the smart living room. So finally, with cloud and all of the computing power, it's arrived, or has it? >> I believe we're almost there. I think the technology has finally advanced enough and there is so much data available now that you have this combination of technology that can analyze all of this data and truly start doing some of the artificial intelligence that will help you make your home smarter.
I think I saw a recent statistic that by 2022, 55% of US households are expected to have a smart speaker type device in their home. So that's a massive percentage. So I think, if you look in the industry, connected home and smart home, they're often used synonymously. We personally look at it as an evolution. And so what I mean by that is, today, we think the home is extremely connected. If I talk about my house, and I'm a total geek about this stuff, I've got 60 devices connected to an access point, I've got another 60 devices connected to an IOT hub. My home does not feel very smart. It's crazy connected, I can turn lights on and off, sprinklers on and off, it's not yet smart. What we're really focused on at Intel, is accelerating that transition for your home to truly become a smart home and not just a connected home. >> And software is a key part of it, and I've seen developers attack this area very nicely. At the same time, the surface area with these smart homes raises security issues, hackers. 'Cause with WiFi, you can run a process on these, they are computers. So how does security fit into all of this? >> Yeah, security is huge and so at Intel we're focused on four technology pillars, which we'll get through during this discussion. One of the first ones is connectivity, and we actually have technology that goes into a WiFi access point, the actual silicon. It's optimized for many clients to be in the home, and also, we've partnered with companies, like McAfee, on security software that will sit on top of that. That will actually manage all of the connected devices in your home, as that extra layer of security. So we fundamentally agree that the security is paramount. >> One of the things that I saw on the website says Intel is taking a radically different approach based on proactive research into ways to increase smart home adoption. What makes Intel's approach radically different? >> Yeah, so I'm glad that you asked that.
We've spent years going into thousands of consumers' homes in North America, Western Europe, China, etc., to truly understand some of the pain points they were experiencing. From that, we basically gave all this information to our architects and we really synthesized it into what areas we need to advance technology to enable some of these richer use cases. So we're really working on those foundational building blocks, and so those four ones I mentioned earlier: connectivity, that one is paramount. You know, if you want to add 35 to 100 devices in your home, you better make sure they're all connected, all the time, and that you've got good bandwidth between them. The second technology was voice, and it's not just voice in one place in your home, it's voice throughout your home. You don't want to have to run to the kitchen to turn your bedroom lights on. And then, vision. You know, making sure your home has the ability to see more. It could be cameras, could be motion sensors, it could be vision sensors. And then this last one is this local intelligence. This artificial intelligence. So the unique approach that Intel is taking is across all of our assets. In the data center, in our artificial intelligence organization, in our new technology organization, our IOT organization, in our client computing group. We're taking all of these assets and investing them in those four pillars and really delivering unique solutions, and there's actually a couple of them that have been on display this week so far. >> How about DeepLens? That certainly was an awesome keynote point, and the device that Andy introduced is essentially a wireless device that basically has machine learning and AI in it. And that is awesome, because it's also an IOT device, it's got so much versatility to it. What's behind that? Can you give some color to DeepLens? What does it mean for people?
We partnered with Amazon at AWS on that for quite some time. So, just a reminder to everybody, that is the first Deep Learning enabled wireless camera. And what we're helped do in that, is it's got an Intel Atom processor inside that actually runs the vision processing workload. We also contributed a Deep Learning toolkit, kind of a software middleware layer, and we've also got the Intel Compute Library for deep neural networks. So basically, a lot of preconfigured algorithms that developers can use. The bigger thing, though, is when I talked about those four technology pillars; the vision pillar, as well as the artificial intelligence pillar, this is a proof point of exactly that. Running an instance of the AWS service on a local device in the home to do this computer vision. >> When will that device be available? And what's the price point? Can we get our hands on one? And how are people going to be getting this? >> Yeah, so what was announced during the keynote today is that there are actually some Deep Learning workshops today, here at re:Invent where they're actually being given away, and then actually as soon as the announcement was made during the keynote today, they're actually available for pre-order on Amazon.com right now. I'm not actually sure on the shipping date on Amazon, but anybody can go and check. >> Jeff Frick, go get one of those quickly. Order it, put my credit card down. >> Miles: Yes, please do. >> Well, that's super exciting and now, where's the impact in that? Because it seems like it could be a great IOT device. It seems like it would be a fun consumer device. Where do you guys see the use cases for these developing? >> So the reason I'm excited about this one, is I fundamentally believe that vision is going to enable some richer use cases. 
The only way we're going to get those though, is if you get these brilliant developers getting their hands on the hardware, with someone like Amazon, who's made all of the machine learning, and the cloud and all of the pieces easier. It's now going to make it very easy for thousands, ideally, hundreds of thousands of developers to start working on this, so they can enable these new use cases. >> The pace of innovation that AWS has set, it's palpable here, we hear it, we feel it. This is a relatively new business unit for Intel. You announced this, about a year ago at re:Invent 2016? Are you trying to match the accelerated pace of innovation that AWS has? And what do you see going on in the next 12 months? Where do you think we'll be 12 months from now? >> Yeah, so I think we're definitely trying to be a fantastic technology partner for Amazon. One of the things we have since last re:Invent is we announced we were going to do some reference designs and developer kits to help get Alexa everywhere. So during this trade show, actually, we are holding, I can't remember the exact number, but many workshops, where we are providing the participants with a Speech Enabling Developer toolkit. And basically, what this is, is it's got an Intel platform, with Intel's dual DSP on it, a microarray, and it's paired with Raspberry Pi. So basically, this will allow anybody who already makes a product, it will allow them to easily integrate Alexa into that product with Intel inside. Which is perfect for us. >> So obviously, we're super excited, we love the cloud. I'm kind of a fanboy of the cloud, being a developer in my old days, but the resources that you get out of the cloud are amazing. But now when you start looking at these devices like DeepLens, the possibilities are limitless. So it's really interesting. The question I have for you is, you know, we had Tom Siebel on earlier, pioneer, invented the CRM category. 
He's now the CEO of C3 IOT, and I asked him, why are you doing a startup, you're a billionaire. You're rich, you don't need to do it. He goes, "I'm a computer guy, I love doing this." He's an entrepreneur at heart. But he said something interesting, he said that the two waves that he surfs, they call him a big time surfer, he's hanging 10 on the waves, are IOT and AI. This is an opportunity for you guys to reimagine the smart home. How important are the IOT trend and the AI trend for really doing it right with the smart home, and whatever we're calling it? There's an opportunity there. How are you guys viewing that vision? What progress points have you identified at Intel, to kind of, check? >> Completely agree. For me, AI really is the key turning point here. 'Cause even just talking about connected versus smart, the thing that makes it smart is the ability to learn and think for itself. And the reason we have focused on those technology pillars is we believe that by adding voice everywhere in the home, and the listening capability, as well as adding the vision capability, you're going to enable all of this rich new data, which you have to have some of these AI tools to make any sense of, and when you get to video, you absolutely have to have some amount of it locally. So that's either for bandwidth reasons, for latency reasons, for privacy reasons; like some of the examples that were given in the keynote today, you just want to keep that stuff locally. >> And having policy running on it, you know, access points are interesting, they give you connectivity, but these are computers, so if someone gets malware in the home, they can run a full threaded process on these machines. Sometimes you might not want that. You want to be able to control that. >> Yes, absolutely. We really do believe that the wireless access point in the home is one of the greatest areas where you can add additional security in the home and protect all of the devices.
>> So you mentioned, I think, 120 different devices in your home that are connected. How far away do you think your home is from going from connected to smart? What's that timeline like? >> You know what, honestly, I think a lot of the hardware is already there. And the example I will give is, and I'm not just saying this because I'm here, but I actually do have 15 Echos in my house, because I do want to be able to control all of the infrastructure everywhere in the home. I do believe that in the future those devices will be listening for anomalies, like glass breaking, a dog barking, a baby crying, and I believe the hardware we have today is very capable of doing that. Similarly, I think that a lot of the cameras today are trained, whenever they see motion, to do certain things and to start recording. I think that use case is going to evolve over time as well, so I truly believe that we are probably two years away from really seeing, with some of the existing infrastructure, truly being able to enable some smarter home use cases. >> The renaissance going on, the creativity is going to be amazing. I'm looking at a tweet that Bert Latimar, from our team, made on our last interview with the Washington County Sheriff's Office, a customer of Amazon that pays $6 a month for getting all the mugshots. He goes, "I'm gonna use DeepLens for things like recognizing scars and tattoos." Because now they have to take pictures when someone comes in as a criminal, but now with DeepLens, they can program it to look for tattoos. >> Yeah, absolutely. And if you see things like the Ring doorbell today, they have that neighborhood application of it, so you can actually share within your local neighborhood: if somebody had a package stolen, they can post a picture of that person. And even just security cameras, my house, it feels like Fort Knox sometimes, I've got so many security cameras.
It used to be, every time there was a windstorm, I got 25 alerts on my phone because a branch was blowing. Now I have security cameras that actually can do facial recognition and say, your son is home, your daughter is home, your wife is home. >> So are all the houses going to have a little sign that says, "Protected by Alexa and Intel and DeepLens"? >> Don't you dare, exactly. (laughs) >> Lisa: And no sneaking out for the kids. >> Yes, exactly. >> Alright, so real quick to end the segment, summarize and share: what is the Intel relationship with Amazon Web Services? Talk about the partnership. >> It's a great relationship. We've been partnering with Amazon for over a decade, starting with AWS. Over the last couple of years, we've started working closely with them on their first-party products. So, many of you have seen the Echo Show and the Echo Look, which have Intel inside. There's also a RealSense camera in the Look. We've now enabled the Speech Enabling Developer Kit, which is meant to help get Alexa everywhere, running on Intel. We've now done DeepLens, which is a great example of local artificial intelligence. Paired with all the work we've done with them in the cloud, it really is, I would say, a partnership that extends all the way from the very edge device in the home to the cloud. >> Miles, thanks for coming. Miles Kingston with Intel, General Manager of the Smart Home Group, a new business unit at Intel, really reimagining the future for people's lives. I think this is a great case where technology can actually help people, rather than making anything more complicated. Which, we all know, if we have access points and kids gaming, it can be a problem. It's theCUBE, live here in Las Vegas. 45,000 people here at Amazon re:Invent. Five years ago, our first show, only 7,000. Now, what amazing growth. Thanks so much for coming out. Lisa Martin and John Furrier here, reporting from theCUBE. More coverage after this short break. (light music)
Chris Adzima, Washington County Sheriff | AWS re:Invent
>> Announcer: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2017. Presented by AWS, Intel and our ecosystem of partners. >> Hey, welcome back everyone. Live here, this is theCUBE in Las Vegas for AWS, Amazon Web Services, re:Invent 2017. Our 5th year covering the event. Wall-to-wall coverage. Three days, this is our day two. 45,000 people here. Developers and business connecting together this year. Big show. Amazon continues its growth. I'm John Furrier, with my co-host Justin Warren. Our next guest is from the Washington County Sheriff's Office, using Amazon Rekognition: Chris Adzima, who is the Senior Information Systems Analyst at the Washington County Sheriff's Office. Welcome to theCUBE. >> Nice to have you. >> Happy to be here. >> So, Chris, tons of cool stuff we saw on stage today. You know they've had Polly and Lex out for a while. But you're gonna start to see some of these multimedia services around. Human identification, transcription; Rekognition's been out for a while. With the power of the cloud, you can start rollin' out some pretty cool services. You have one of 'em, talk about your solution and what you guys are doing with it. >> Sure. About last year, when Rekognition was announced, I wanted to provide our deputies at the Sheriff's Office with a way to identify people based on videos that we get from either surveillance or eyewitnesses. So, I looked into Rekognition and decided that we should give it a try by sending all of our booking photos, or mugshots, up to the cloud for it to be indexed. So, that's what I did. I indexed all of them, about 300,000 booking photos we have from the last 10 years, and put that into a Rekognition collection.
And now I can use the simple tools that AWS gives me to search against that index for any new image that we get in, either from surveillance or an eyewitness, allowing us to get an identification within seconds, as opposed to having to go through all 700 employees at the Sheriff's Office for the chance that they might have known the person. >> So the old way was essentially grab the footage, and then do the old mugshot kinda scan manually, right? >> Yeah, manually. It wasn't in a book, it was on a website, but essentially, yeah, you had to-- >> I made my point, it sucks. It's hard as hell. >> It's very difficult, very difficult. >> You see on TV all the magic pictures goin' on and the facial recognition, you see it in the movies and stuff. How close are we to that right now in terms of that capability? >> Well, as far as facial recognition goes, it all depends on the data that you have at your fingertips. Right now I have booking photos, so I can identify people with a very high level of certainty if they've been in our jail. If they haven't been in our jail, I obviously don't have much of a chance of identifying them. So, what you see on TV, where it's like, we looked through all the DMV records, we looked through all of the people on the street and all this stuff; we're pretty far off from that, because nobody has a catalog of all those images. >> You need to incorporate all the pictures, all the data. >> Yeah, but when you have the data, it's very simple. >> Right, and it's a lot like scanning for fingerprints. It's like, people would have seen that. You know, you have a fingerprint that you've collected from the crime scene-- >> Chris: Exactly. >> We see it on NCIS or something, where you scan through all of that. So, it's pretty similar to that. >> Yeah, it's similar to that, or DNA, or anything like that. If you have the data set, it's very easy to search for those people. >> Yeah. >> So, faces are no different.
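The workflow Chris describes (create a collection, index every booking photo, then search any new image against that index) can be sketched with the AWS SDK for Python. This is a minimal illustrative sketch, not the Sheriff's Office code: the collection name, the S3 layout, and the 90 percent similarity threshold are my assumptions, and the boto3 Rekognition client is passed in as a parameter so the helpers stay self-contained without AWS credentials.

```python
# Sketch of the mugshot face-search workflow (illustrative names throughout).

def index_booking_photos(rekognition, bucket, photo_keys,
                         collection_id="booking-photos"):
    """Index booking photos stored in S3 into a Rekognition collection.

    `rekognition` is assumed to be a boto3 Rekognition client,
    e.g. boto3.client("rekognition").
    """
    rekognition.create_collection(CollectionId=collection_id)
    for key in photo_keys:
        rekognition.index_faces(
            CollectionId=collection_id,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            # ExternalImageId ties each indexed face back to its mugshot.
            ExternalImageId=key.replace("/", "-"),
        )


def likely_matches(search_response, min_similarity=90.0):
    """Extract (external_image_id, similarity) pairs from a
    search_faces_by_image response, strongest match first."""
    matches = [
        (m["Face"]["ExternalImageId"], m["Similarity"])
        for m in search_response.get("FaceMatches", [])
        if m["Similarity"] >= min_similarity
    ]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)
```

A new surveillance still would then go through Rekognition's `search_faces_by_image` call, and `likely_matches` turns its response into a ranked list of candidate mugshots, which is the identification-within-seconds result described above.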
>> So, how long did it take you to get up and running? Did you have to ingest the photos? How did you do that? >> So... >> John: They're on a website, so you had 'em in digital already. >> From never knowing anything about Amazon Web Services to a fully functional prototype of this product took me 30 days. >> John: Wow. >> I had the photos uploaded and the ability to actually run the searches via the API in three. So, extremely easy. Extremely easy. >> So, given the success that you've had with that particular product, are there other services at AWS that you're looking into? That say, hey, that would actually be really useful for us? >> Yes, a couple that were announced today. First off, Rekognition for video. Something that we have a problem with, and I'm hoping Rekognition for video is going to help with, is when you have a surveillance camera, people are moving all the time. Therefore, trying to get a screenshot is going to get a blurry image. We're not getting good results with low light or low frame rate. But Rekognition for video is gonna be able to take that movement and still look at the face. Hopefully we're gonna be able to get a better facial identification that way. >> Justin: Okay. >> Another thing that I want to look into is this DeepLens they just announced today. >> John: Awesome. >> That looks extremely promising in the way of me being able to teach it things that we need. A great example of what I would use this for is when an inmate comes in, we take pictures of scars, marks and tattoos. That way, we have a database of all the scars, marks and tattoos on somebody. In case they recommit a crime and our eyewitness says, "They had a skull tattoo on their chest," we can then look through all of the people that have a skull tattoo and say, "These are our list of possible suspects." The problem with that is that you may enter somebody in as a skull, and you may enter it in as crossbones. Somebody else might put an accidental "I" in there.
So it's very hard to do a text search against that. But if Rekognition were to come through, or, it wouldn't be Rekognition in this case, if whatever model I built with the DeepLens came through and said, this is a skull and this is the word we use, then I'd be able to index all of those images and quickly pull them up, so we wouldn't even need a picture. We would just need to know, from an eyewitness, that there was a skull on that person's chest. >> John: We had a guest on yesterday from Thorn, which Intel is doing AI for good with, and they use, essentially, and they didn't say Craigslist, but trying to look for women who were being sold for prostitution, and exploited children and whatnot. And it's all machine learning, and some natural language processing. When you look at the SageMaker announcement, that looks promising, 'cause they're gonna, as I say, try to democratize the heavy lifting around all of this, you know, voodoo machine learning. Which, I mean, if you're totally a computer science geek and that's all you do, yeah, you could probably master machine learning. But if you're a practitioner, you're just whipping it up. >> Well, yeah, and that's a good example. Because I am not a data scientist. I have no idea how this stuff works in the back end. But being able to utilize, stand on the shoulders of these giants, so to speak, is allowing people like me. A, I only have seven people on my team to devote to this kind of thing. We don't have a lot of resources. We wouldn't be able to get a data scientist. But opening this stuff up to us allows us to build these things, like this facial recognition and other things based on machine learning. And ultimately keep our citizens safe through the work that AWS does in getting this to us. >> Justin: Yeah, and we've been saying at a couple of different interviews so far that humans don't scale. So these tools give the humans that you do have a lot more leverage to get things done.
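The tattoo-lookup problem described here, where free-text entries like "skull" versus "crossbones" (or a stray typo) defeat a plain text search, is at heart a label-normalization problem. Below is a minimal sketch of that idea in Python; the synonym table and helper names are hypothetical illustrations, not anything from the Sheriff's Office system or a DeepLens model.

```python
from collections import defaultdict

# Hypothetical synonym table mapping the words deputies might type to one
# canonical label. A real deployment (or labels emitted by a trained
# vision model) would need a much larger table.
CANONICAL = {
    "skull": "skull",
    "crossbones": "skull",
    "skull and crossbones": "skull",
}


def canonical_label(free_text):
    """Normalize a free-text tattoo description to a canonical label."""
    key = free_text.strip().lower()
    return CANONICAL.get(key, key)


def build_tattoo_index(records):
    """records: iterable of (booking_id, free_text_label) pairs.
    Returns a dict mapping canonical label -> sorted booking ids."""
    index = defaultdict(set)
    for booking_id, label in records:
        index[canonical_label(label)].add(booking_id)
    return {label: sorted(ids) for label, ids in index.items()}


def possible_suspects(index, eyewitness_description):
    """Look up bookings whose recorded tattoo matches the description."""
    return index.get(canonical_label(eyewitness_description), [])
```

With this in place, the eyewitness description "skull" also surfaces the record that was entered as "crossbones", which is the list-of-possible-suspects query described above.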
So, we were talking just before we started recording that these are tools that assist the humans. You're not replacing the humans with machines that just go, oh, we're gonna cede all decision-making to you. This is just another tool, like being able to fingerprint people and search that. It's one more way of doing the standard policing that you are already doing. >> Exactly, and the tool that I've already created, and any tool I create after that, doesn't ever look to replace our deputies or our detectives. We give them things so that they don't have to do the things like flipping through that book for hours upon hours. They can be out in the field, following the leads, keeping the community safe and apprehending these criminals. >> Do they have on-body cameras too? >> Not yet. We are currently looking into body cameras. >> John: That's a trend. They're gonna be instrumented basically like warriors: fully loaded, you know, cameras. >> I tend not to think of it like that. Only because, again, that's a tool that we use. Not to, you know, be that land warrior, so to speak. But more of a-- >> Documentation, I mean, you see 'em on cars when people get pulled over. >> Exactly. >> You've got the evidence. >> It's documentation, just like anything else. It's just that one more tool that helps that deputy, that detective, that police officer get a better idea of the entire situation. >> Maybe I shouldn't have said war. Maybe I'm just into the Twitch culture, where they're all geared up with all the gear. Okay, so next question for you is, what's your vibe on the show? Obviously you have great experience working with Amazon. You're a success story, because you're trying to get a job done, you got some tools and, >> Right. >> making it happen. What's your take this year? What's your vibe of the show? >> I'm really excited about a lot of stuff I'm seeing at the show. A lot of the announcements seemed like they were almost geared towards me.
And I know they weren't, obviously, but it really felt like announcement after announcement were these things that I'm wanting to go home and immediately start to play with. Anywhere from the stuff that was in the machine learning, to the new elastic containers that they are announcing, to the new Lambda functions that they're talking about. I mean, just all over the board. I'm very excited for all these new things that I get to go home and play with. >> What do you think, Justin? What's your take on the vibe of the show? >> I find that it's an interesting show. I'm finding it a little different than what I was expecting. This is my first time here at AWS re:Invent. I go to a lot of other trade shows, and I was expecting more of a developer show. Like, I'm going to KubeCon next week, and that's full of people with spiky hair, and pink shoes, and craziness. >> John: That's the area, by the way. >> Oh, that's the area, right. It's a bit more casual than some of the other, more businessy sort of conferences. I mean, here I am, wearing a jacket. So I don't feel completely out of place here, but it does feel like it's that blending of business and use cases and the things that you actually get done with it, as well as there being people who have the tools that they want to go and build amazing new things with. >> Chris: Right, right, yeah. >> So it's a nice blend, I think. >> Yeah, I've found that it definitely doesn't feel like any other developer conference I've been to. But being in the public sector, I tend to go to the more business-suit conferences. >> John: This is like total developer for you, from a public sector perspective. >> From where I'm coming from, this is very laid back. And extremely... >> Oh yeah. >> But at the same time, it's very much a mixture. Like you said, you see executives mingling with the developers, talking about things-- >> John: You're a good example, I think, of Amazon.
First of all, the builder thing in the area is supposed to be pretty cool. I was told to go there last night. People came back; it was very much builder, kind of maker culture. They're doing prototypes, it was very developer-oriented. But the public sector, I'm astonished by Amazon's success there, because the stuff is easy and low-cost to get into. And public sector is not known for its agility. >> Chris: No. >> I mean, it's music to your ears, right? I mean, if you're in the public sector, you're like, "What? Now I can get it done?" >> Very much so. And one thing I love to share about our solution is the price, right? Because I spend $6 a month for my AWS bill. Right? >> John: Wow. >> That's extremely easy to sell to taxpayers, right? It's extremely easy to sell to the higher-ups in government, to say, I'm gonna tinker around with this, but even if we solve one crime, we've already seen a return on our investment above and beyond what we expected. >> Yeah. >> No brainer, no brainer. Chris, thanks so much for sharing your story. We really appreciate it. Congratulations on your success, and keep in touch with theCUBE. Welcome to theCUBE Alumni Club. >> Alright. >> John: Thanks for coming out. It's theCUBE here at Amazon re:Invent, bringing you all the action, all of the success stories, all of the analysis. I'm John Furrier with theCUBE. More live coverage after this short break. (upbeat music)