Dr. Matt Wood, AWS | AWS Summit SF 2022
(gentle melody) >> Welcome back to theCUBE's live coverage of AWS Summit in San Francisco, California. Events are back. AWS Summit in New York City this summer, theCUBE will be there as well. Check us out there. I'm glad to have events back. It's great to have everyone here. I'm John Furrier, host of theCUBE. Dr. Matt Wood is with me, CUBE alumni, now VP of the Business Analytics Division of AWS. Matt, great to see you. >> Thank you, John. It's great to be here. I appreciate it. >> I always call you Dr. Matt Wood because Andy Jassy always said, "Dr. Matt Wood," when we'd introduce you in the arena. (Matt laughs) >> Matt: The one and only. >> The one and only, Dr. Matt Wood. >> In joke, I love it. (laughs) >> Andy style. (Matt laughs) I think you had walk up music too. >> Yes, we all have our own personalized walk up music. >> So talk about your new role, not a new role, but you're running the analytics business for AWS. What does that consist of right now? >> Sure. So I've got what I consider to be one of the best jobs in the world. I get to work with our customers and the teams at AWS to build the analytics services that millions of our customers use to slice, dice, pivot, and better understand their data, look at how they can use that data for reporting, looking backwards. And also look at how they can use that data looking forward, so predictive analytics and machine learning. So whether it is slicing and dicing in the lower level of Hadoop and the big data engines, or whether you're doing ETL with Glue, or whether you're visualizing the data in QuickSight or building your models in SageMaker, I've got my fingers in a lot of pies. >> One of the benefits of having CUBE coverage with AWS since 2013 is watching the progression. You were on theCUBE that first year we were at re:Invent in 2013, and look at how machine learning just exploded onto the scene. You were involved in that from day one. It's still day one, as you guys say. What's the big thing now?
Look at just what happened. Machine learning comes in and then a slew of services come in. You've got SageMaker, which became a hot seller right out of the gate. The database stuff was kicking butt. So all this is now booming. That was a real generational changeover for databases. What's your perspective on how that's evolved? >> I think it's a really good point. I totally agree. I think for machine learning, there's sort of a renaissance in machine learning and the application of machine learning. Machine learning as a technology has been around for 50 years, let's say. But to do machine learning right, you need a lot of data. The data needs to be high quality. You need a lot of compute to be able to train those models, and you have to be able to evaluate what those models mean as you apply them to real world problems. And so the cloud really removed a lot of the constraints. Finally, customers had all of the data that they needed. We gave them services to be able to label that data in a high quality way. There's all the compute you need to be able to train the models. And away you go. And so the cloud really enabled this renaissance with machine learning. And we're seeing honestly a similar renaissance with data and analytics. If you look back five to ten years, analytics was something you did in batch: your data warehouse ran an analysis to do reconciliation at the end of the month, and that was it. (John laughs) And so that's when you needed it. But today, if your Redshift cluster isn't available, Uber drivers don't turn up, DoorDash deliveries don't get made. Analytics is now central to virtually every business, and it is central to virtually every business's digital transformation.
And being able to take that data from a variety of sources, be able to query it with high performance, to be able to actually then start to augment that data with real information, which usually comes from technical experts and domain experts, to form wisdom and information from raw data. That's kind of what most organizations are trying to do when they go through this analytics journey. >> It's interesting. Dave Vellante and I always talk on theCUBE about the future. And you look back, the things we were talking about six years ago are actually happening now. And it's not a hyped-up statement to say digital transformation is actually happening now. And there are also times when we bang our fists on the table saying, "I really think this is so important." And Dave says, "John, you're going to die on that hill." (Matt laughs) And so I'm excited that this year, for the first time, I didn't die on that hill. I've been saying- >> Do all right. >> Data as code is the next infrastructure as code. And Dave's like, "What do you mean by that?" We're talking about how data gets... And it's happening. So we just had an event on our AWS startups.com site, a showcase for startups, and the theme was data as code. And interesting new trends are emerging really clearly, like the role of a data engineer, right? Like an SRE, what an SRE did for cloud, you have a new data engineering role, because developer onboarding is massively increasing, exponentially, new developers. Data scientists are growing, but the pipelining and managing and engineering as a system, almost like an operating system. >> Kind of as a discipline. >> So what's your reaction to that, about this data engineer, data as code? Because if you have horizontally scalable data, you've got to be open, and that's hard (laughs), okay? And you've got to silo the data that needs to be siloed for compliance reasons. So there's a big policy piece around that.
So what's your reaction to data as code and the data engineering phenomenon? >> It's a really good point. I think with any technology project inside of an organization, success with analytics or machine learning is kind of 50% technology and 50% cultural. And you often have domain experts. Those could be physicians or drug design experts, or they could be financial experts or whoever they might be, who have deep domain expertise, and then you've got technical implementation teams. And there's kind of a natural, often repulsive force. I don't mean that rudely, but they just don't talk the same language. And so the more complex a domain and the more complex the technology, the stronger that repulsive force. And it can become very difficult for domain experts to work closely with the technical experts to be able to actually get business decisions made. And so what data engineering does, and data engineering is in some cases a team, or it can be a role that you play, is really allowing those two disciplines to speak the same language. You can think of it as plumbing, but I think of it as a bridge. It's a bridge between the technical implementation and the domain experts, and that requires a very disparate range of skills. You've got to understand statistics, you've got to understand the implementation, you've got to understand the data, you've got to understand the domain. And if you can put all of that together, that data engineering discipline can be incredibly transformative for an organization, because it builds the bridge between those two groups. >> I was advising some young computer science students at the sophomore, junior level just a couple of weeks ago, and I told them I would ask someone at Amazon this question. So I'll ask you, >> Matt: Okay. >> since you've been in the middle of it for years. They were asking me, and I was trying to mentor them on how do you become a data engineer, from a practical standpoint?
Courseware, projects to work on, how to think, not just coding Python, because everyone's coding in Python, but what else can they do? So I was trying to help them. I didn't really know the answer myself. I was just trying to kind of help figure it out with them. So what is the answer, in your opinion, or the thoughts around advice to young students who want to be data engineers? Because data scientist is pretty clear on what that is. You use tools, you make visualizations, you manage data, you get answers and insights and then apply that to the business. That's an application. That's not standing up a stack or managing the infrastructure. So what does that coding look like? What would your advice be to folks getting into a data engineering role? >> Yeah, I think if you believe what I said earlier about 50% technology, 50% culture, the number one technology to learn as a data engineer is the tools in the cloud which allow you to aggregate data from virtually any source into something which is incrementally more valuable for the organization. That's really what data engineering is all about. It's about taking from multiple sources. Some people call them silos, but silos indicates that the storage is kind of fungible or undifferentiated. That's really not the case. Success requires you to have really purpose built, well crafted, high performance, low cost engines for all of your data. So understanding those tools and understanding how to use them, that's probably the most important technical piece. Python and programming and statistics go along with that, I think. And then the most important cultural part, I think, is just curiosity. As a data engineer, you want to have a natural curiosity that drives you to seek the truth inside an organization, seek the truth of a particular problem, and to be able to engage, because you're probably going to make some choices as you go through your career about which domain you end up in.
Maybe you're really passionate about healthcare, or you're really passionate about transportation or media, whatever it might be. And you can allow that to drive a certain amount of curiosity. But within those roles, the domains are so broad that you've got to allow your curiosity to develop and lead you to ask the right questions and engage in the right way with your teams, because you can have all the technical skills in the world. But if you're not able to help the teams truth-seek through that curiosity, you simply won't be successful. >> We just had a guest, a 20-year-old founder, Johnny Dallas, who was 16 when he worked at Amazon. Youngest engineer- >> Johnny Dallas is a great name, by the way. (both chuckle) >> It's his real name. It sounds like a football player. >> That's awesome. >> Rock star. Johnny CUBE, it's me. But he's young, and he was saying... His advice was just do projects. >> Matt: And get hands on. Yeah. >> And I was saying, hey, I came from the old days where you'd stand stuff up and you hung onto the assets, because you didn't want to kill the project after you'd spent all this money. And he's like, yeah, with cloud you can shut it down. If you do a project that's not working, and you get bad data, or no one's adopting it, or you don't like it anymore, you shut it down and do something else. >> Yeah, totally. >> Instantly abandon it and move on to something new. That's a progression. >> Totally! The blast radius of decisions is just way reduced. We talk a lot about how in the old world, trying to find the resources and get the funding was like, all right, I want to try out this kind of random idea that could be a big deal for the organization. I need $50 million and a new data center. You're not going to get anywhere. >> And you do a proposal, working backwards documents, all kinds of stuff. >> All that sort of stuff. >> Jump through hoops. >> So all of that is gone.
But we sometimes forget that a big part of that is just the prototyping and the experimentation and the limited blast radius in terms of cost, and honestly, the most important thing is time, just being able to jump in there, fingers on keyboards, and try this stuff out. And that's part of the reason we have so many services at AWS: when you get into AWS, we want the whole toolbox to be available to every developer. And so as your ideas develop, you may want to jump from data that you have that's already in a database to doing real-time data. And then you have the tools there. And when you want to get into real-time data, you don't just have Kinesis, you have real-time analytics, and you can run SQL against that data. The capabilities and the breadth really matter when it comes to prototyping. >> That's the culture piece, because what was once a dysfunctional behavior, I'm going to go off the reservation and try something behind my boss's back, is now a side hustle or fun project. So for fun, you can just code something. >> Yeah, totally. I remember my first Hadoop projects. I found almost literally a decommissioned set of servers in the data center that no one was using. They were super old. They were about to be literally turned off. And I managed to convince the team to leave them on for me for another month. And I installed Hadoop on them and got them going. It just seems crazy to me now that I had to go and convince anybody not to turn those servers off. But that's what it was like when you- >> That's when you came up with Elastic MapReduce, because you said this is too hard, we've got to make it easier. >> Basically yes. (John laughs) I was installing Hadoop version Beta 9.9 or whatever. It was like, this is really hard. >> We've got to make it simpler. All right, good stuff. I love the walk down memory lane. And also your advice. Great stuff. I think culture is huge.
That's why I liked Adam Selipsky's keynote at re:Invent, where he talked about Pathfinders and trailblazers, because that's a blast radius impact when you can actually have innovation organically come from anywhere. That's totally cool. >> Matt: Totally cool. >> All right, let's get into the product. Serverless has been hot. We hear a lot that EKS is hot. Containers are booming. Kubernetes is getting adopted; still a lot of work to do there. Cloud native developers are booming. Serverless, Lambda. How does that impact the analytics piece? Can you share the hot products around how that translates? >> Absolutely, yeah. >> Aurora, SageMaker. >> Yeah, if you look at kind of the evolution and what customers are asking for, they don't just want low cost. They don't just want this broad set of services. They don't just want those services to have deep capabilities. They want those services to have as low an operating cost over time as possible. So we built a lot of muscle, a lot of services, around getting up and running and experimenting and prototyping, turning things on and turning them off. And that's all great. But actually, in most projects you really only start something once and then stop something once, and maybe there's an hour in between or maybe there's a year. But the real expense in terms of time and operations and complexity is sometimes in that running cost. And so we've heard very loudly and clearly from customers that running cost is just undifferentiated to them. And they want to spend more time on their work. And in analytics, that is slicing the data, pivoting the data, combining the data, labeling the data, training their models, running inference against their models, and less time doing the operational pieces. >> Is that why the service focuses there? >> Yeah, absolutely. It dramatically reduces the skill required to run these workloads at any scale.
And it dramatically reduces the undifferentiated heavy lifting, because you get to focus more of the time that you would have spent on the operations on the actual work that you want to get done. And so if you look at something like Redshift Serverless, which we launched at re:Invent, we have a lot of customers that want to run the cluster, and they want to get into the weeds where there is benefit. We have a lot of customers that say there's no benefit for me, I just want to do the analytics. So you run the operational piece, you're the experts. We run 60 million instance startups every single day. We do this a lot. >> John: Exactly. >> We understand the operations- >> I just want the answers. Come on. >> So just give me the answers, or just give me the notebook, or just give me the inference prediction. Today, for example, we announced Serverless Inference. So now once you've trained your machine learning model, you just run a few lines of code or click a few buttons, and then you've got an inference endpoint that you do not have to manage. And whether you're doing one query against that endpoint per hour or you're doing 10 million, we'll just scale it on the back end. >> I know we've not got a lot of time left, but I want to get your reaction on this. One of the things about the data lakes not being data swamps, from what I've been reporting and hearing from customers, is that they want to retrain their machine learning algorithms. They need that data, they need the real-time data, and they need the time-series data. Even though the time has passed, they've got to store it in the data lake. So now the data lake's main function is reusing the data to actually retrain, so it works properly. So a lot of post mortems turn into actual business improvements, to make the machine learning smarter, faster. Do you see it the same way? >> Yeah, I think it's really interesting >> Or is that just...
>> No, I think it's totally interesting, because it's convenient to kind of think of analytics as a very clear progression from point A to point B. But really, you're navigating terrain for which you do not have a map, and you need a lot of help to navigate that terrain. And so having these services in place, not having to run the operations of those services, being able to have those services be secure and well governed. And we added PII detection today. It's something you can do automatically, to be able to use any unstructured data and run queries against that unstructured data. So today we added text queries. So you can scan a badge, for example, and say, well, what's the name on this badge? And you don't have to identify where it is. We'll do all of that work for you. It's more like a branch than it is just a normal A to B path, a linear path. And that includes loops backwards. And sometimes you've got to get the results and use those to make improvements further upstream. And sometimes when you're downstream, it will be like, "Ah, I remember that." And you come back and bring it all together. >> Awesome. >> So it's a wonderful world for sure. >> Dr. Matt, we're here in theCUBE. Just take the last word and give us the update while you're here: what's the big news happening that you're announcing here at Summit in San Francisco, California, and an update on the Business Analytics group? >> Yeah, we did a lot of announcements in the keynote. I encourage everyone to take a look at that, this morning with Swami. One of the ones I'm most excited about is the opportunity to be able to take dashboards, visualizations. We're all used to using these things. We see them in our business intelligence tools, all over the place.
However, what we've heard from customers is, yes, I want those analytics, I want that visualization, I want it to be up to date, but I don't actually want to have to go from the tools where I'm actually doing my work to another separate tool to be able to look at that information. And so today we announced 1-click public embedding for QuickSight dashboards. So today, literally as easily as embedding a YouTube video, you can take a dashboard that you've built inside QuickSight, cut and paste the HTML, paste it into your application, and that's it. That's all you have to do. It takes seconds.
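The cut-and-paste embed Matt describes can be pictured with a short sketch. This is illustrative only: the embed URL below is a made-up placeholder, and the real one comes from the 1-click public embed option in the QuickSight console; the helper just wraps it in the iframe HTML you would paste into an application page.

```python
# Hedged sketch of the QuickSight "cut and paste" embed described above.
# The URL is a hypothetical placeholder, not a real dashboard link.

def embed_snippet(embed_url, width=960, height=720):
    """Return the iframe HTML you would paste into an application page."""
    return (
        f'<iframe src="{embed_url}" '
        f'width="{width}" height="{height}" frameborder="0"></iframe>'
    )

url = "https://quicksight.example.com/embed/dashboards/EXAMPLE-ID"
print(embed_snippet(url))
```

Since the snippet is plain HTML, it inherits the host page's layout, which is why the dashboard can be restyled and branded to match the application, as Matt notes below.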
Bill Pearson, Intel | CUBE Conversation, August 2020
>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with our leaders all around the world, this is a CUBE Conversation. >> Welcome back everybody. Jeff Frick here with theCUBE. We are in our Palo Alto studios today. We're still getting through COVID; thankfully media was deemed a necessary industry, so we've been able to come in and keep a small COVID crew, but we can still reach out to the community, and through the magic of the internet and cameras on laptops, we can reach out and touch base with our friends. So we're excited to have somebody who's talking about and working on kind of the next big edge, the next big cutting thing going on in technology. And that's the Internet of Things. You've heard about it, the Industrial Internet of Things. There's a lot of different words for it. But the foundation of it is this company, Intel. We're happy to have joining us Bill Pearson. He is the Vice President of Internet of Things, often said as IoT, for Intel. Bill, great to see you. >> Same, Jeff. Nice to be here.
But the thing that's so interesting about the "things" part of the Internet of Things, even though people are things too, is that the scale and the pace of data coming off kind of machine-generated activity versus people-generated activity is orders of magnitude higher in terms of the frequency, the variety, and all of your classic big data themes. So that's a very different challenge than, you know, the kind of growth of data that we had before and the types of data, 'cause it's really gone kind of exponential across every single vector. >> Absolutely, it has. I mean, we've seen estimates that data is going to increase by about five times as much as it is today over just the next couple of years. So it's exponential, as you said. >> Right. The other thing that's happened is Cloud. And so, you know, kind of breaking the mold of the old world, where all the compute was either in your minicomputer or data center or mainframe or on your laptop. Now, you know, with Cloud and instant connectivity, it opens up a lot of different opportunities. So now we're coming to the edge and the Internet of Things. So when you look at kind of edge and Internet of Things now folding into this ecosystem, you know, what are some of the tremendous benefits that we can get by leveraging those things that we couldn't with kind of the old infrastructure and our old way of gathering and storing and acting on data? >> Yeah. So one of the things we're doing today with the edge is really bringing the compute much closer to where all the data is being generated. So these sensors and devices are generating tons and tons of data, and for a variety of reasons, we can't send it somewhere else to get processed. You know, there may be latency requirements for that control loop that you're running in your factory, or there are bandwidth constraints that you have, or there are just security or privacy reasons to keep it onsite.
And so you've got to process a lot of this data onsite, and some estimates say maybe half of the data is going to remain onsite. And when you look at that, you know, that's where you need compute. And so the edge is all about taking compute, bringing it to where the data is, and then being able to use the intelligence, the AI and analytics, to make sense of that data and take actions in real time. >> Right, right. But it's a complicated situation, right? 'Cause depending on where that edge is, what the device is: does it have power? Does it not have power? Does it have good connectivity? Does it not have good connectivity? Does it even have the ability to run those types of algorithms, or does it have to send it to some interim step, even if it doesn't have, you know, the ability to send it all the way back to the Cloud or all the way back to the data center, for latency reasons? So as you kind of slice and dice all these pieces of the chain, where do you see the great opportunity for Intel? Where's a good kind of sweet spot where you can start to bring in some compute horsepower and some algorithmic processing and actually do things, between just the itty-bitty sensor at the itty-bitty end of the chain versus the data center that's way, way upstream and far, far away? >> Yeah. Our business is really high performance compute, and it's this idea of taking all of these workloads and bringing them in to this high performance compute to be able to run multiple software defined workloads on single boxes, to be able to then process and analyze and store all that data that's being created at the edge, and do it in a high performance way. And whether that's a retail smart shelf, for example, where we can do realtime inventory on that shelf as things are coming and going, or whether it's a factory and somebody's doing, you know, real time defect detection of something moving across their textile line.
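The onsite-processing pattern Bill describes, keeping raw readings local and sending only what matters upstream, can be sketched in a few lines. This is an illustrative sketch, not a specific Intel or AWS API: the sensor values, threshold, and payload shape are all assumptions made up for the example.

```python
# Illustrative edge-node sketch: raw readings stay local, and only a compact
# summary plus out-of-range values leave the site, which is how bandwidth and
# latency constraints are typically handled. All names/values are hypothetical.

def summarize_batch(readings, threshold=90.0):
    """Reduce a batch of raw sensor readings to a small upstream payload."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),                 # how many raw readings we saw
        "mean": sum(readings) / len(readings),  # local aggregate
        "max": max(readings),
        "anomalies": anomalies,                 # only these need attention upstream
    }

raw = [71.2, 68.5, 95.3, 70.1]  # e.g. temperatures from a line sensor
payload = summarize_batch(raw)
print(payload["count"], payload["max"], payload["anomalies"])
```

The control loop acting on `anomalies` can run locally with low latency, while the small summary dict is all that crosses the network, which is the trade-off the conversation is describing.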
So all of that comes down to being able to have the compute horsepower to make sense of the data and do something with it. >> Right, right. So you wouldn't necessarily, like in your shelf example, that the compute might be done there at the local store or some aggregation point, beyond just that actual, you know, sensor that's underneath that one box of Tide, if you will. >> Absolutely. Yeah, you could have that on-prem, a big box that does multiple shelves, for example. >> Okay, great. So there's a great example, and you guys have the software development kit, you have a lot of resources for developers, and one of the case studies that I just wanted to highlight before we jump into the dev side was, I think, Audi was the customer. And it really illustrates a point that we talked about a lot in kind of the big data meme, which is, you know, people used to take action on a sample of data after the fact. And I think in this case we're talking about running 1,000 cars a day through this factory. They're doing so many welds, 5 million welds a day, and they would pull one at the end of the day, sample a couple of welds, and ask: did we have a good day or not? Versus what they're doing now with your technology, which is actually testing each and every weld as it's being welded, based on data that's coming off the welding machine, and they're inspecting every single weld. So I just love it; you've been at this for a long time. When you talk to customers about what is possible from a business point of view, when you go from after the fact with a sample of data to in real time with all the data, how does that completely change your view and ability to react to your business? >> Yeah. I mean, it makes people able to make better decisions in real time.
You know, as you've got cameras on things like textile manufacturing or footwear manufacturing, or even these realtime inventory examples you mentioned, people can make decisions in real time about how to stock that shelf, what to order, what to pull off the line, am I getting a good product or not? And this has really changed; as you said, we don't have to go back and sample anymore. You can tell right now, as that part is passing through your manufacturing line, or as that item is sitting on your shelf, what's happening to it. It's really incredible. >> So let's talk about developers. So you've got a lot of resources available for developers, and everyone knows Intel, obviously, historically in PCs and data centers. And you would do what they call design wins back when I was there, many moons ago, right? You'd try to get a design win and then, you know, they're going to put your microprocessors and a bunch of other components in a device. When you're trying to work with kind of cutting edge developers in new fields and new areas, this feels like a much more direct touch to the actual people building the applications than to the people that are really just designing the systems of which Intel becomes a core part. I wonder if you could talk about, you know, the role of developers and really Intel's outreach to developers, and how you're trying to help them, you know, kind of move forward in this new crazy world. >> Yeah, developers are essential to our business. They're essential to IoT. Developers, as you said, create the applications that are going to really make the business possible. And so we know the value of developers and want to make sure that they have the tools and resources that they need to use our products most effectively.
We've done some things around the OpenVINO toolkit as an example, to really try and simplify and democratize AI applications so that more developers can take advantage of this and, you know, take the ambitions that they have to do something really interesting for their business, and then go put it into action. And our whole purpose is making sure we can actually accomplish that. >> Right. So let's talk about OpenVINO. It's an interesting topic. So I actually found out what OpenVINO means: Open Visual Inference and Neural network Optimization toolkit. So it's a lot about computer vision. And, you know, computer vision is an interesting early AI application that I think a lot of people are familiar with through Google Photos or other things where, you know, suddenly they're putting together little highlight movies for you, or they're pulling together all the photos of a particular person or a particular place. So computer vision is pretty interesting, and inference is a special subset of AI. So I wonder, you know, you guys are fully behind OpenVINO. Where do you see the opportunities in vision? What are some of the instances that you're seeing with the developers out there doing innovative things around computer vision? >> Yeah, there's a whole variety of use cases with computer vision. You know, one that we talked about earlier here was looking at defect detection. There's a company that we work with that has a 360 degree view; they use cameras all around their manufacturing line. And from there, they taught it what a good part looks like, and using inference and OpenVINO, they can tell when a bad part goes through or there's a defect in their line, and they can go and pull that and make corrections as needed. We've also seen, you know, use cases like smart shopping, where there's point of sale fraud detection. That is, you know, is the item being scanned the same as the item that is actually going through the line?
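A hedged sketch of the defect-detection idea just described: compare each part's feature vector against a known-good reference and pull anything that drifts too far. The reference vector, threshold, and names below are illustrative stand-ins; a real line would run a trained model through OpenVINO's inference engine on camera frames instead.

```python
import math

# Reference features learned from known-good parts (values are invented).
GOOD_PART = [0.9, 0.1, 0.5]

def is_defective(features, threshold=0.3):
    """Flag a part whose features drift too far from the known-good reference."""
    return math.dist(features, GOOD_PART) > threshold

line = [
    [0.90, 0.10, 0.50],   # nominal part
    [0.88, 0.12, 0.52],   # small variation, still fine
    [0.20, 0.80, 0.90],   # way off: pull this one from the line
]
print([is_defective(f) for f in line])  # [False, False, True]
```

The same "distance from known-good" framing is what lets the system catch a bad part the moment it passes the camera, rather than after the fact.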
And so we can be much smarter about understanding retail. One example that I saw was a customer who was trying to detect if it was vodka or potatoes that was being scanned in an automated checkout system. And again, using cameras and OpenVINO, they can tell the difference. >> We haven't talked about natural language processing yet; we're still sticking with computer vision. I know one of the areas you're interested in, and it's only going to increase in importance, is education. Especially with what's going on, I keep waiting for someone to start rolling out some national, you know, best practice education courses for kindergartners and third graders and sixth graders. And you know, all these poor teachers that are learning to teach on the fly from home. You guys are doing a lot of work in education. I wonder if you can share; I think you're doing some work with Udacity. What are you doing? Where do you see the opportunity to apply some of this AI and IoT in education? >> Yeah, we launched the Nanodegree with Udacity, and it's all about OpenVINO and Edge AI, and the idea is, again, get more developers educated on this technology: take a leader like Udacity, partner with them to make the coursework available, and get more developers understanding, using and building things using Edge AI. And so we partnered with them as part of their million developer goal. We're trying to get as many developers as possible through that. >> Okay. And I would be remiss if we talked about IoT and I didn't throw 5G into the conversation. So 5G is a really big deal. I know Intel has put a ton of resources behind it and has been talking about it for a long, long time. You know, I think the huge value in 5G is a lot around IoT, as opposed to my handset going faster, which is funny, given that they're actually releasing 5G handsets out there.
But when you look at 5G combined with the other capabilities in IoT, again, how do you see 5G being this kind of step function in the ability to do real-time analysis and make real-time business decisions? >> Well, I think it brings more connectivity, certainly, and bandwidth, and reduces latency. But the cool thing about it is when you look at the applications of it; you know, we talked about factories. A lot of those factories may want to have private 5G networks running inside that factory, running all the machines or robots or things in there. And so, you know, it brings capabilities that actually make a difference in the world of IoT and the things that developers are trying to build. >> That's great. So before I let you go, you've been at this for a while. You've been at Intel for a while. You've seen a lot of big sweeping changes kind of come through the industry. You know, as you sit back with a little bit of perspective, and it's funny, even IoT, like you said, you've been talking about it for five years, and 5G, we've been waiting for it, but the waves keep coming, right? That's kind of the fun of being in this business. As you sit there where you are today, you know, kind of looking forward to the next couple of years, maybe four or five years, you know, what has just surprised you beyond compare, and what are you still kind of surprised is still a little bit lagging, where you would have expected to see a little bit more progress at this point? >> You know, to me the incredible thing about the computing industry is just the insatiable demand that the world has for compute. It seems like our customers always come up with more and more uses for this compute power. You know, as we've talked about, data and the exponential growth of data, and now we need to process and analyze and store that data.
It's impressive to see developers just constantly thinking about new ways to apply their craft and, you know, new ways to use all that available computing power. And, you know, I'm delighted, 'cause I've been at this for a while, as you said, and I just see this continuing to go as far as the eye can see. >> Yeah, yeah. I think you're right. There's no shortage of opportunity. I mean, the data explosion is kind of funny. The data has always been there, we just weren't keeping track of it before. And the other thing is that, as I look at your Internet of Things kind of toolkit, you guys have such a broad portfolio now, where a lot of times people think of Intel pretty much as a CPU company, but as you mentioned, you've got FPGAs and VPUs and vision solutions, such a stretch of applications. Intel has really done a good job in terms of broadening the portfolio to go after, you know, this disparate, kind of sharded, if you will, set of compute applications that have very different demands in terms of power and bandwidth and crunching utilization to technical (indistinct). >> Yeah. Absolutely. The various compute architectures are really there to help our customers with their needs, whether it's high performance or low power, or a mixture of both. Being able to use all of those heterogeneous architectures with a tool like OpenVINO, so you can program once, write once, and then run your application across any of those architectures, helps simplify the life of our developers, but also gives them the compute performance the way that they need it. >> Alright Bill, well keep at it. Thank you for all your hard work. And hopefully it won't be five years before we're checking in to see how far this IoT thing is going. >> Hopefully not, thanks Jeff. >> Alright Bill. Thanks a lot. He's Bill, I'm Jeff. You're watching theCUBE. Thanks for watching, we'll see you next time. (upbeat music)
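Bill's closing point, program once and run across any of those architectures, is essentially a device-abstraction layer. The sketch below is a toy stand-in with an invented backend table; OpenVINO expresses the same idea by selecting a device name such as CPU, GPU, or MYRIAD at compile time while the application code stays unchanged.

```python
# Toy device-abstraction layer: the same application code runs on any backend.
BACKENDS = {
    "CPU":  lambda xs: [x * 2 for x in xs],   # general-purpose path
    "FPGA": lambda xs: [x * 2 for x in xs],   # same math, different silicon
    "VPU":  lambda xs: [x * 2 for x in xs],   # low-power vision path
}

def run_model(inputs, device="CPU"):
    """Application code never changes; only the device name does."""
    if device not in BACKENDS:
        raise ValueError(f"unknown device: {device}")
    return BACKENDS[device](inputs)

print(run_model([1, 2, 3], device="CPU"))   # [2, 4, 6]
print(run_model([1, 2, 3], device="VPU"))   # [2, 4, 6], identical results
```

The design choice being illustrated is that heterogeneity lives behind one narrow interface, so targeting an FPGA or VPU is a configuration change rather than a rewrite.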
Making Artificial Intelligence Real With Dell & VMware
>>Artificial intelligence. The words are full of possibility. Yet to many it may seem complex, expensive and hard to know where to get started. How do you make AI real for your business? At Dell Technologies, we see AI enhancing business, enriching lives and improving the world. Dell Technologies is dedicated to making AI easy, so more people can use it to make a real difference. So you can adopt and run AI anywhere with your current skill sets, with AI solutions powered by PowerEdge servers and made portable across hybrid multi-clouds with VMware. Plus, solve I/O bottlenecks with breakthrough performance delivered by Dell EMC Ready Solutions for HPC storage and Data Accelerator. And enjoy automated, effortless management with OpenManage systems management, so you can keep business insights flowing across a multi-cloud environment. With an AI portfolio that spans from workstations to supercomputers, Dell Technologies can help you get started with AI easily and grow seamlessly. AI has the potential to profoundly change our lives. With Dell Technologies, AI is easy to adopt, easy to manage and easy to scale. And there's nothing artificial about that. >>From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Hi, I'm Stu Miniman, and welcome to this special launch with our friends at Dell Technologies. We're gonna be talking about AI and the reality of making artificial intelligence real. Happy to welcome to the program two of our CUBE alumni: Ravi Pendekanti, senior vice president of server product management, and Thierry Pellegrino, vice president, data-centric workloads and solutions in high performance computing, both with Dell Technologies. Thank you both for joining. >>Thanks, Stu. >>So, you know, in the industry we watch,
You know, AI has been this huge buzzword, but one of the things I've actually liked about what I see when I listen to the vendor community talking about AI, versus what I saw too much in the big data world, is, you know, it used to be, you know, oh, there was the opportunity, and data is so important. Yes, that's real, but it was a very wonky conversation, and the promise and the translation of what it meant to the real world didn't necessarily always connect, and we saw many of the big data solutions, you know, fail over time. With AI, and I've seen this in the messaging from Dell, we're talking about, you know, the business outcomes, in general, overall in IT, but, you know, how AI is helping make things real. So maybe we can start there before the product announcements and things we're gonna get into. But Ravi and Thierry, talk to us a little bit about, you know, the customers that you've been seeing and the impact that AI is having on their business. >>Sure, Stu, I'll take a jab at it. A couple of things. For example, if you start looking at, uh, you know, the autonomous vehicles industry or the manufacturing industry, where people are building better tools for anything they need to do in their manufacturing. For example, uh, this is a good example of where (indistinct) you've got Xeon (indistinct). Now it is using our whole product suite, right from the hardware and software, to do multiple iterations of ensuring that the software and the hardware come together pretty seamlessly, and more importantly, ingesting, you know, probably tens of petabytes of data to ensure that we've got the right training and algorithms in place. So that's a great example of how we are helping some of our customers today, in ensuring that we can really make AI real, in terms of moving away from just a modeling scenario to something that customers are able to use today.
>>Well, if I can add one more: Eni, one of our core partners, more than just a customer, in Italy in the energy sector, has been really, really driving innovation with us. We just deployed a pretty large 8,000-accelerator cluster with them, which is the largest commercial cluster in the world. And what they're focusing on is the digital transformation and the development of energy sources. And it's really important in this day and age. You know, the planet, it's not getting younger, and we have to be really careful about the types of energy that we utilize to do what we do every day, and they put a lot of innovation in. We've helped set up the right solution for them, and we'll talk some more about what they've done with that cluster later during our chat, but it is one of the examples that is tangible, with the deployment that is being used to help there. >>Great. Well, we love starting with some of the customer stories. Really glad we're gonna be able to share some of those, you know, actually hear from some of the customers a little bit later in this launch. But, Ravi, you know, maybe give us a little bit as to what you're hearing from customers, you know, the overall climate in AI. You know, obviously, you know, so many challenges facing, you know, people today. But, you know, specifically around AI, what are some of the hurdles that they might need to overcome to be able to make AI real? >>I think there are two important pieces I can point to. Number one, as much as we talk about AI and machine learning, one of the biggest challenges that customers have today is ensuring that they have the right amount and the right quality of data to go out and do the analytics piece. Because if you don't, it's garbage in, garbage out. So one of the biggest challenges our customers have today is ensuring that they have the most pristine data to go back on, and that takes quite a bit of an effort. Number two.
A lot of times, I think one of the challenges they also have is having the right skill set to go out and have the execution phase of the AI, you know, work done. And I think those are the two big challenges we hear of. And that doesn't seem to be changing in the very near term, given the very fact that Forbes recently had an article that said that less than 15% of our customers probably are using AI and machine learning today. So that talks to the challenges and the opportunities ahead of us. >>All right. So, Ravi, give us the news. Tell us the updates from Dell Technologies, how you're helping customers with AI today. >>Going back to one of the challenges, as I mentioned, which is not having the right skill set: one of the things we are doing at Dell Technologies is making sure that we provide them not just the product but also the Ready Solutions that we're working on, for example, with Thierry and his team. We're also working on validated designs and things called reference architectures. The whole idea behind this is we want to take the guesswork out for our customers and actually go ahead and show them things that we have already tested, to ensure that the integration is right and the right sizing attributes are there, so they know exactly the kind of product they would pick up, and not worry about the time and the resources needed to get to that particular solution. So those are probably the two biggest things we're doing to help our customers make the right decision and execute seamlessly and on time. >>Excellent. So, Thierry, maybe give us a little bit of a broader look as to, you know, Dell's participation in the overall ecosystem when it comes to what's happening in AI, and, you know, why is this a unique time for what's happening in the industry? >>Yeah, I mean, I think we all live it.
I mean, I'm right here in my home, and I'm trying to ensure that the business continues to operate, and it's important to make sure that we're also there for our customers, right? The fight against COVID-19 is changing what's happening around the quarantines, etcetera. So Dell, as a participant not only in AI but in the world that we live in on enabling AI, is also a participant in all of the communities. So we've recently joined the COVID-19 High Performance Computing Consortium. We also made a lot of resources available to researchers and scientists leveraging AI in order to make progress towards a cure and potentially a vaccine against COVID-19. As examples, we have our own supercomputers in the lab here in Austin, Texas, and we've given access to some of our partners. TGen is one example. At the beginning of our chat I mentioned Eni. Not only did they deploy the cluster with us earlier this year; when COVID-19 started hitting, they did what's the right thing to do for community and humanity: they made the resource available to scientists in Europe. And TACC, just down the road here, which has the largest academic supercomputer, which we deployed with them too, is doing exactly the same thing. So these are some real examples that are very timely, and it's happening right now; we hadn't planned for it, but we're there with our customers. The other piece, and this is probably going to be a trend, is that healthcare is going through an explosion of data, as you mentioned in the beginning. You're talking about 2,300 exabytes, about 3,000 times the content of the Library of Congress. It's incredible, and that data is useless on its own. I mean, it's great, we can put that on our great Isilon storage, but you can also see it as an opportunity to get business value out of it. That's going to require a lot more resources with AI. So a lot happening here.
That's really, if I can get into more of the science of it, because it's healthcare, because it's the industry, where we see now that our family members at VMware, part of the Dell Technologies portfolio, are getting even more relevance in the discussion. The industry is based on virtualization, and VMware is the number one virtualization solution for the industry. So now we're trying to weave the reality of the IT environment together with the new modes of AI and data science and HPC. So you will see that VMware just added a Kubernetes control plane to vSphere, and we're leveraging that to have a very flexible environment. On one side, we can do some data science; on the other side, we can go back to running some enterprise-class software on top of it. So this is great, and we're capitalizing on it with validated solutions and validated designs. And I think that's going to be adding a lot of power in the hands of our customers, always based on their feedback and what they ask for.
So, you know, bring us in its But what, You know, the engineering solution sets that are helping toe make this a reality >>of today. Yeah, and truly still you've been there. You've seen the engineers in the lab, and that's more than AI being real. That that is double real because we spend a lot of time analyzing workloads customer needs. We have a lot of PhD engineers in there, and what we're working on right now is kind of the next wave of HPC enablement Azaz. We all know the consumption model or the way that we want to have access to resources is evolving from something that is directly in front of us. 1 to 1 ratio to when virtualization became more prevalent. We had a one to many ratio on genes historically have been allocated on a per user. Or sometimes it is study modified view to have more than one user GP. But with the addition of big confusion to the VM our portfolio and be treated not being part of these fear. We're building up a GPU as a service solutions through a VM ware validated design that we are launching, and that's gonna give them flexibility. And the key here is flexibility. We have the ability, as you know, with the VM Ware environment, to bring in also some security, some flexibility through moving the workloads. And let's be honest with some ties into cloud models on, we have our own set of partners. We all know that the big players in the industry to But that's all about flexibility and giving our customers what they need and what they expect in the world. But really, >>Yeah, Ravi, I guess that brings us to ah, you know, one of the key pieces we need to look at here is how do we manage across all of these environments? Uh, and you know, how does AI fit into this whole discussion between what Dell and VM ware doing things like v Sphere, you know, put pulling in new workloads >>stew, actually a couple of things. 
So there's really nothing artificial about the real intelligence that comes through with all that foolish intelligence we're working out. And so one of the crucial things I think we need to, you know, ensure that we talk about is it's not just about the fact that it's a problem. So here are our stories there, but I think the crucial thing is we're looking at it from an end to end perspective from everything from ensuring that we have direct workstations, right servers, the storage, making sure that is well protected and all the way to working with an ecosystem of software renders. So first and foremost, that's the whole integration piece, making sure they realized people system. But more importantly, it's also ensuring that we help our customers by taking the guess work out again. I can't emphasize the fact that there are customers who are looking at different aliens off entry, for example, somebody will be looking at an F G. A. Everybody looking at GP use. API is probably, as you know, are great because they're price points and normal. Or should I say that our needs our lot lesser than the GP use? But on the flip side, there's a need for them to have a set of folks who can actually program right. It is why it's called the no programming programmable gate arrays of Saas fee programmable. My point being in all this, it's important that we actually provide dried end to end perspective, making sure that we're able to show the integration, show the value and also provide the options, because it's really not a cookie cutter approach of where you can take a particular solution and think that it will put the needs of every single customer. He doesn't even happen in the same industry, for that matter. So the flexibility that we provide all the way to the services is truly our attempt. At Dell Technologies, you get the entire gamut of solutions available for the customer to go out and pick and choose what says their needs the best. 
>>Alright, well, Ravi interior Thank you so much for the update. So we're gonna turn it over to actually hear from some of your customers. Talk about the power of ai. You're from their viewpoint, how real these solutions are becoming. Love the plan words there about, you know, enabling really artificial intelligence. Thanks so much for joining after the customers looking forward to the VM Ware discussion, we want to >>put robots into the world's dullest, deadliest and dirtiest jobs. We think that if we can have machines doing the work that put people at risk than we can allow people to do better work. Dell Technologies is the foundation for a lot of the >>work that we've done here. Every single piece of software that we developed is simulated dozens >>or hundreds of thousands of times. And having reliable compute infrastructure is critical for this. Yeah, yeah, A lot of technology has >>matured to actually do something really useful that can be used by non >>experts. We try to predict one system fails. We try to predict the >>business impatience things into images. On the end of the day, it's that >>now we have machines that learn how to speak a language from from zero. Yeah, everything >>we do really, at Epsilon centered around data and our ability >>to get the right message to >>the right person at the right >>time. We apply machine learning and artificial intelligence. So in real time you can adjust those campaigns to ensure that you're getting the most optimized message theme. >>It is a joint venture between Well, cars on the Amir are your progress is automated driving on Advanced Driver Assistance Systems Centre is really based on safety on how we can actually make lives better for you. Typically gets warned on distracted in cars. If you can take those kind of situations away, it will bring the accidents down about 70 to 80%. So what I appreciate it with Dell Technologies is the overall solution that they have to live in being able to deliver the full package. 
That has been a major differentiator compared to your competitors. >>Yeah. Yeah, alright, welcome back to help us dig into this discussion and happy to welcome to the program Chris Facade. He is the senior vice president and general manager of the B sphere business and just Simon, chief technologist for the High performance computing group, both of them with VM ware. Gentlemen, thanks so much for joining. Thank >>you for having us. >>All right, Krish. When vm Ware made the bit fusion acquisition. Everybody was looking the You know what this will do for space Force? GPU is we're talking about things like AI and ML. So bring us up to speed. As to you know, the news today is the what being worth doing with fusion. Yeah. >>Today we have a big announcement. I'm excited to announce that, you know, we're taking the next big step in the AI ML and more than application strategy. With the launch off bit fusion, we're just now being fully integrated with VCF. They're in black home, and we'll be releasing this very shortly to the market. As you said when we acquire institution A year ago, we had a showcase that's capable days as part of the animal event. And at that time we laid out a strategy that part of our institution as the cornerstone off our capabilities in the black home in the Iot space. Since then, we have had many customers take a look at the technology and we have had feedback from them as well as from partners and analysts. And the feedback has been tremendous. >>Excellent. Well, Chris, what does this then mean for customers? You know What's the value proposition that diffusion brings the VC? Yeah, >>if you look at our customers, they are in the midst of a big ah journey in digital transformation. And basically, what that means is customers are building a ton of applications and most of those applications some kind of data analytics or machine learning embedded in it. 
And what this is doing is that in the harbor and infrastructure industry, this is driving a lot of innovation. So you see the advent off a lot off specialized? Absolutely. There's custom a six FPs. And of course, the views being used to accelerate the special algorithms that these AI ml type applications need. And unfortunately, customer environment. Most of these specialized accelerators uh um bare metal kind of set up, but they're not taking advantage off optimization and everything that it brings to that. Also, with fusion launched today, we are essentially doing the accelerator space. What we need to compute several years ago and that is essentially bringing organization to the accelerators. But we take it one step further, which is, you know, we use the customers the ability to pull these accelerators and essentially going to be couple it from the server so you can have a pool of these accelerators sitting in the network. And customers are able to then target their workloads and share the accelerators get better utilization by a lot of past improvements and, in essence, have a smaller pool that they can use for a whole bunch of different applications across the enterprise. That is a huge angle for our customers. And that's the tremendous positive feedback that we get getting both from customers as well. >>Excellent. Well, I'm glad we've got Josh here to dig into some of the thesis before we get to you. They got Chris. Uh, part of this announcement is the partnership of VM Ware in Dell. So tell us about what the partnership is in the solutions for for this long. Yeah. >>We have been working with the Dell in the in the AI and ML space for a long time. We have ah, good partnership there. This just takes the partnership to the next level and we will have ah, execution solution. Support in some of the key. I am el targeted words like the sea for 1 40 the r 7 40 Those are the centers that would be partnering with them on and providing solutions. >>Excellent. 
Eso John. You know, we've watched for a long time. You know, various technologies. Oh, it's not a fit for virtualized environment. And then, you know, VM Ware does does what it does. Make sure you know, performance is there. And make sure all the options there bring us inside a little bit. You know what this solution means for leveraging GPS? Yeah. So actually, before I before us, answer that question. Let me say that the the fusion acquisition and the diffusion technology fits into a larger strategy at VM Ware around AI and ML. That I think matches pretty nicely the overall Dell strategy as well, in the sense that we are really focused on delivering AI ml capabilities or the ability for our customers to run their am ai and ml workloads from edge before the cloud. And that means running it on CPU or running it on hardware accelerators like like G fuse. Whatever is really required by the customer in this specific case, we're quite excited about using technology as it really allows us. As Chris was describing to extend our capabilities especially in the deep learning space where GPU accelerators are critically important. And so what this technology really brings to the table is the ability to, as Chris was outlining, to pull those resources those hardware resource together and then allow organizations to drive up the utilization of those GP Resource is through that pooling and also increase the degree of sharing that we support that supported for the customer. Okay, Jeff, take us in a little bit further as how you know the mechanisms of diffusion work. Sure, Yeah, that's a great question. So think of it this way. There there is a client component that we're using a server component. The server component is running on a machine that actually has the physical GPU is installed in it. The client machine, which is running the bit fusion client software, is where the user of the data scientist is actually running their machine machine learning application. 
But there's no GPU actually in that host. And what is happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls that are being made by that machine learning application and remoting those calls over to the Bitfusion server, and then injecting them into the local GPU on the server. So it's actually, you know, we call it API remoting, the ability to remote these API calls, but it's actually much more sophisticated than that. There are a lot of underlying capabilities that are being deployed in terms of optimization to take maximum advantage of the networking link that sits between the client machine and the server machine. But given all of that, once we've done it with Bitfusion, it's now possible for the data scientist to either consume multiple GPUs, a single GPU, or even fractional GPUs across that interconnect using the Bitfusion technology. >>Okay, maybe it would help illustrate some of these technologies if you've got a couple of customer examples. >>Sure, so one example would be a retail customer I'm thinking of. Actually, it's, ah, a grocery chain that is deploying, ah, a large number of video cameras into their stores in order to do things like, um, watch for pilfering, uh, identify when store shelves could be restocked, and even looking for cases where, for example, maybe a customer has fallen down in an aisle and someone needs to go and help. Those multiple video streams, and then the multiple applications that are being run that are consuming the data from those video streams and doing analytics and ML on them, would be perfectly suited for this type of environment, where you would like to be able to have these multiple independent applications running, but have them be able to efficiently share the hardware resources of the GPUs.
Another example would be retailers who are deploying ML at checkout registers to help reduce fraud: customers who are buying, buying things with, uh, fake barcodes, for example. So in that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer to do is have a pool of resources that each inference operation that's occurring within each one of those checkout lines could then consume collectively. Those would be two examples of the use of this kind of pooling technology. >>Okay, great. So, Josh, last question for you: is this technology only for GPUs, or anything else? Can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? >>Yeah. So currently, the target is specifically NVIDIA GPUs with CUDA. The team, actually even prior to the acquisition, had done some work on enablement of FPGAs, and also had done some work on OpenCL, which is a more open standard for device access. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs. The domain-specific ASICs that Krish was referring to earlier will roll out over time. But we are starting with the NVIDIA GPU, which totally makes sense, since that is the primary hardware acceleration for deep learning currently. >>Excellent. Well, Josh and Krish, thank you so much for the updates. To the audience: if you're watching this live, please jump into the crowd chat and ask your questions on this page. If you're watching this on demand, you can also go to crowdchat dot net slash make AI real to be able to see the conversation that we had. Thanks so much for joining. >>Thank you very much. >>Thank you. Managing your data center requires around-the-clock attention. Dell EMC OpenManage Mobile enables IT administrators to monitor data center issues and respond rapidly to unexpected events anytime, anywhere.
OpenManage Mobile provides a wealth of features within a comprehensive user interface, including server configuration, push notifications, remote desktop, augmented reality, and more. The latest release features an updated user interface, Power and Thermal Policy Review, Emergency Power Reduction, and internal storage monitoring. Download OpenManage Mobile today.
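The accelerator pooling and fractional sharing Krish describes in this segment can be illustrated with a toy allocator. The class and numbers below are invented for illustration only (this is not a Bitfusion API); "fractional" here is just bookkeeping over each device's remaining capacity.

```python
# Toy sketch of pooled accelerators shared by many workloads -- invented
# names, not Bitfusion's implementation. Each device exposes a capacity of
# 1.0; apps claim whole or fractional shares, raising overall utilization.
class AcceleratorPool:
    def __init__(self, num_gpus):
        # free capacity per device, as a fraction (1.0 = whole GPU free)
        self.free = [1.0] * num_gpus

    def allocate(self, fraction):
        """Claim `fraction` of one device (0 < fraction <= 1.0); returns a
        device index, or None if no device has enough headroom."""
        for device, capacity in enumerate(self.free):
            if capacity >= fraction - 1e-9:       # tolerate float error
                self.free[device] = round(capacity - fraction, 6)
                return device
        return None

    def release(self, device, fraction):
        # Return a share to the pool so other workloads can claim it.
        self.free[device] = round(self.free[device] + fraction, 6)

pool = AcceleratorPool(num_gpus=2)
a = pool.allocate(0.5)    # checkout-lane app 1 takes half of GPU 0
b = pool.allocate(0.5)    # app 2 shares the same device
c = pool.allocate(1.0)    # a heavier job takes GPU 1 whole
print(a, b, c)            # 0 0 1
```

Real schedulers add placement policy, isolation, and reclamation, but the utilization win is the one described above: many small consumers sharing a smaller pool of devices.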
SUMMARY :
Highlights from a Dell Technologies and VMware discussion on making AI real in the enterprise: the skill sets and reliable compute infrastructure that AI/ML workloads demand; the vSphere 7 launch with Bitfusion integrated, which virtualizes hardware accelerators and pools them in the network so they can be decoupled from the server and shared across applications; and the Dell partnership delivering solutions on AI/ML-targeted servers such as the C4140 and R740. The segment closes with a Dell EMC OpenManage Mobile spot: monitoring data center issues and responding rapidly to unexpected events anytime, anywhere, with features including Power and Thermal Policy Review.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Library of Congress | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Robbie | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
today | DATE | 0.99+ |
John | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
Ravi | PERSON | 0.99+ |
Krish Prasad | PERSON | 0.99+ |
Two | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
VM Ware | ORGANIZATION | 0.99+ |
Rob | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Krish | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
less than 15% | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
tens of petabytes | QUANTITY | 0.99+ |
90 | QUANTITY | 0.99+ |
Andi | PERSON | 0.99+ |
first | QUANTITY | 0.98+ |
19 examples | QUANTITY | 0.98+ |
Austin, Texas | LOCATION | 0.98+ |
Epsilon | ORGANIZATION | 0.98+ |
two important pieces | QUANTITY | 0.98+ |
two big challenges | QUANTITY | 0.98+ |
Forbes | ORGANIZATION | 0.98+ |
Simon | PERSON | 0.98+ |
one example | QUANTITY | 0.98+ |
about 3000 times | QUANTITY | 0.97+ |
VMware | ORGANIZATION | 0.97+ |
Cube Studios | ORGANIZATION | 0.97+ |
more than one user | QUANTITY | 0.97+ |
C4140 | OTHER | 0.97+ |
8000 accelerator | QUANTITY | 0.96+ |
several years ago | DATE | 0.96+ |
Advanced Driver Assistance Systems Centre | ORGANIZATION | 0.96+ |
VMware | ORGANIZATION | 0.95+ |
A year ago | DATE | 0.95+ |
six FPs | QUANTITY | 0.95+ |
Krish Prasad & Josh Simons, VMware | Enabling Real Artificial Intelligence
>>From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. Alright, welcome back. To help us dig into this discussion, happy to welcome to the program Krish Prasad, senior vice president and general manager of the vSphere business, and Josh Simons, chief technologist for the high performance computing group. Both of them with VMware. Gentlemen, thanks so much for joining. >>Thank you for having us. >>All right, Krish. When VMware made the Bitfusion acquisition, everybody was looking at, you know, what this would do for this space: GPUs, we're talking about things like AI and ML. So bring us up to speed as to, you know, the news today and what VMware is doing with Bitfusion. >>Yeah. Today we have a big announcement. I'm excited to announce that, you know, we're taking the next big step in the AI/ML and modern application strategy, with the launch of Bitfusion, which is now being fully integrated with the vSphere 7 platform, and we'll be releasing this very shortly to the market. As you said, when we acquired Bitfusion a year ago, we had showcased its capabilities as part of the VMworld event. And at that time we laid out a strategy to position Bitfusion as the cornerstone of our capabilities in the platform in the AI space. Since then, we have had many customers take a look at the technology, and we have had feedback from them as well as from partners and analysts. And the feedback has been tremendous. >>Excellent. Well, Krish, what does this then mean for customers? You know, what's the value proposition that Bitfusion brings to vSphere? >>Yeah, if you look at our customers, they are in the midst of a big, ah, journey in digital transformation. And basically, what that means is customers are building a ton of applications, and most of those applications have some kind of data analytics or machine learning embedded in it.
And what this is doing is that in the hardware and infrastructure industry, this is driving a lot of innovation. So you see the advent of a lot of specialized accelerators: custom ASICs, FPGAs. And of course, GPUs being used to accelerate the special algorithms that these AI/ML type applications need. And, um, unfortunately, in customer environments, most of these specialized accelerators run in a bare metal kind of setup. So they're not taking advantage of virtualization and everything that it brings. With Bitfusion, launched today, we are essentially doing for the accelerator space what we did for compute several years ago. And that is, um, essentially bringing virtualization to the accelerators. But we take it one step further, which is, you know, we give the customers the ability to pool these accelerators and essentially decouple them from the server, so you can have a pool of these accelerators sitting in the network, and customers are able to then target their workloads and share the accelerators, get better utilization, drive a lot of cost improvements and, in essence, have a smaller pool that they can use for a whole bunch of different applications across the enterprise. That is a huge enabler for our customers. And that's the tremendous positive feedback that we are getting both from customers as well as partners. >>Excellent. Well, I'm glad we've got Josh here to dig into some of the pieces, but before we get to you, Josh: Krish, part of this announcement is the partnership of VMware and Dell. So tell us about what the partnership is and the solutions for this launch. >>Yeah. We have been working with Dell in the AI and ML space for a long time. We have, ah, a good partnership there. This just takes the partnership to the next level, and we will have, ah, a solution supported on some of the key AI/ML-targeted servers like the C4140 and the R740. Those are the servers that we would be partnering with them on and providing solutions. >>Okay, Josh. Take us in a little bit further as to how, you know, the mechanisms of Bitfusion work. >>Yeah, that's a great question. So think of it this way. There is a client component and a server component. The server component is running on a machine that actually has the physical GPUs installed in it. The client machine, which is running the Bitfusion client software, is where the user, the data scientist, is actually running their machine learning application. But there's no GPU actually in that host. And what is happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls that are being made by that machine learning application and remoting those calls over to the Bitfusion server, and then injecting them into the local GPU on the server. So it's actually, you know, we call it API remoting, the ability to remote these API calls, but it's actually much more sophisticated than that. There are a lot of underlying capabilities that are being deployed in terms of optimization to take maximum advantage of the networking link that sits between the client machine and the server machine. But given all of that, once we've done it with Bitfusion, it's now possible for the data scientist to either consume multiple GPUs, a single GPU, or even fractional GPUs across that interconnect using the Bitfusion technology. >>Okay, maybe it would help illustrate some of these technologies if you've got a couple of customer examples. >>Yeah, sure. So one example would be a retail customer I'm thinking of.
Actually, it's, ah, a grocery chain that is deploying, ah, a large number of video cameras into their stores in order to do things like, um, watch for pilfering, uh, identify when store shelves could be restocked, and even look for cases where, for example, maybe a customer has fallen down in an aisle and someone needs to go and help. Those multiple video streams, and the multiple applications that are being run that are consuming the data from those video streams and doing analytics and ML on them, would be perfectly suited for this type of environment, where you would like to be able to have these multiple independent applications running, but have them be able to efficiently share the hardware resources of the GPUs. Another example would be retailers who are deploying ML at checkout registers to help reduce fraud: customers who are buying, buying things with, uh, fake barcodes, for example. So in that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer to do is have a pool of resources that each inference operation that's occurring within each one of those checkout lines could then consume collectively. Those would be two examples of the use of this pooling technology. >>Okay, great. So, Josh, last question for you: is this technology only for GPUs, or anything else? Can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? >>Yeah. So currently, the target is specifically NVIDIA GPUs with CUDA. Ah, the team, actually even prior to the acquisition, had done some work on enablement of FPGAs, and also had done some work on OpenCL, which is a more open standard for device access. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs. The domain-specific ASICs that Krish was referring to earlier will roll out over time, but we are starting with the NVIDIA GPU, which totally makes sense, since that is the primary hardware acceleration for deep learning currently. >>Excellent. Well, Krish and Josh, thank you so much for the updates. To the audience: if you're watching this live, please jump into the crowd chat and ask your questions on this page. If you're watching this on demand, you can also go to crowdchat dot net slash make AI real to be able to see the conversation that we had. Thanks so much for joining.
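The client/server mechanism Josh outlines, intercepting device-API calls on a GPU-less client and remoting them to a server that owns the physical GPU, can be sketched in miniature. This is a conceptual toy, not Bitfusion's implementation: there is no real CUDA here, and the "accelerator" is simulated with plain Python on the server side.

```python
# Toy sketch of the intercept-and-remote pattern: a client with no
# accelerator issues device-API calls through a proxy; the calls are
# serialized over the network to a server that owns the (simulated)
# device, which executes them and returns the results.
import json
import socket
import threading

def device_server(listener):
    # Server side: owns the "physical" accelerator (simulated here) and
    # executes whatever calls clients remote over to it.
    handlers = {
        "vector_add": lambda a, b: [x + y for x, y in zip(a, b)],
        "scale": lambda a, k: [x * k for x in a],
    }
    while True:
        conn, _ = listener.accept()
        with conn:
            chunks = []
            while chunk := conn.recv(4096):   # client half-close => EOF
                chunks.append(chunk)
            req = json.loads(b"".join(chunks).decode())
            result = handlers[req["api"]](*req["args"])
            conn.sendall(json.dumps({"result": result}).encode())

class RemoteDeviceProxy:
    # Client-side stub: looks like a local device API, but every call is
    # shipped across the network, mirroring CUDA-call interception.
    def __init__(self, host, port):
        self.host, self.port = host, port

    def call(self, api, *args):
        with socket.create_connection((self.host, self.port)) as sock:
            sock.sendall(json.dumps({"api": api, "args": list(args)}).encode())
            sock.shutdown(socket.SHUT_WR)     # signal end of request
            data = b"".join(iter(lambda: sock.recv(4096), b""))
        return json.loads(data.decode())["result"]

listener = socket.socket()
listener.bind(("127.0.0.1", 0))               # OS picks a free port
listener.listen()
threading.Thread(target=device_server, args=(listener,), daemon=True).start()

gpu = RemoteDeviceProxy("127.0.0.1", listener.getsockname()[1])
print(gpu.call("vector_add", [1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
print(gpu.call("scale", [1.5, 2.0], 4))                  # [6.0, 8.0]
```

The real system's sophistication lives in what this sketch omits: batching and pipelining calls, managing device memory remotely, and optimizing for the network link between client and server.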
SUMMARY :
From the Cube Studios in Palo Alto and Boston: Krish Prasad and Josh Simons of VMware discuss the launch of Bitfusion fully integrated with vSphere 7, which virtualizes hardware accelerators and pools them in the network so workloads can share whole, multiple, or even fractional GPUs; the Dell partnership delivering solutions on AI/ML-targeted servers; how Bitfusion intercepts and remotes CUDA calls from a client without a GPU to a server that has one; and a roadmap extending beyond NVIDIA GPUs and CUDA to FPGAs and domain-specific ASICs.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Josh | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Krish Prasad | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Krish | PERSON | 0.99+ |
V Sphere | ORGANIZATION | 0.99+ |
Today | DATE | 0.99+ |
Josh Simons | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Krish Prasad | PERSON | 0.99+ |
VM Ware | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Cube Studios | ORGANIZATION | 0.99+ |
Both | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two examples | QUANTITY | 0.99+ |
a year ago | DATE | 0.99+ |
today | DATE | 0.98+ |
each inference | QUANTITY | 0.97+ |
single | QUANTITY | 0.96+ |
one example | QUANTITY | 0.96+ |
six FPs | QUANTITY | 0.95+ |
each one | QUANTITY | 0.95+ |
VMware | ORGANIZATION | 0.95+ |
six | QUANTITY | 0.95+ |
Simon | PERSON | 0.91+ |
C4140 | OTHER | 0.9+ |
one step | QUANTITY | 0.86+ |
crowdchat | ORGANIZATION | 0.86+ |
V Sphere seven black home | COMMERCIAL_ITEM | 0.86+ |
several years ago | DATE | 0.84+ |
every single | QUANTITY | 0.7+ |
applications | QUANTITY | 0.7+ |
single GPU | QUANTITY | 0.69+ |
couple | QUANTITY | 0.64+ |
ton | QUANTITY | 0.59+ |
R740 | OTHER | 0.53+ |
CUDA | COMMERCIAL_ITEM | 0.43+ |
VM ware | ORGANIZATION | 0.41+ |
CUBE Insights from re:Invent 2018
(upbeat music) >> Live from Las Vegas, it's theCUBE covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Okay, welcome back everyone. Live coverage here in Las Vegas for Amazon re:Invent 2018. Day three, we're winding down: over 150 videos, and we'll have over 500 clips. Losing the voice. Dave Vellante, my co-host, and two analysts here with us to extract theCUBE insights: James Kobielus and David Floyer from Wikibon. Jim, you've been prolific on the blogs, SiliconANGLE.com, great stories. David, you've got some research. What's your take? Jim, you're all over what's going on in the news. What's the impact? >> Well I think what this year's re:Invent shows is that AWS is doubling down on A.I. If you look at the sheer range of innovative A.I. capabilities they've introduced into their portfolio, in terms of their announcements, it's really significant. A: they have optimized TensorFlow for their cloud. B: they now have an automated labeling capability, called Ground Truth, that leverages Mechanical Turk, which has been an Amazon capability for a while. They've also now got the industry's first what's called reinforcement learning plug-in to their data science tool chain, in this case SageMaker. Reinforcement learning is becoming so important for robotics, and gaming, and lots of other applications of A.I., and I'm just scratching the surface. So they've announced a lot of things, and David can discuss other things, but the depth of their A.I. investment shows that they've really got their fingers on what enterprises are doing, and will be doing, to differentiate themselves with this technology over the next five to ten years. >> What's an area that you see that people are getting? Clearly A.I. What areas are people missing that's compelling that you've observed here? >> When you say people are missing, you mean the general...? >> Journalists. >> Oh. >> Audience. There's so much news.
Yeah. Yeah. >> Where are the nuggets that are hidden in the news? (laughing) What are you seeing that people might not see that's different? >> Getting back to the point I was raising, which is that robotics is becoming a predominant application realm for A.I. Robotics, outside the laboratory, or outside of the industrial I.O.T.: robots are coming into everything, and there's a special type of A.I. you build into robots; reinforcement learning is a big part of it. So I think if you look at the journalists, they've missed what I've seen in the past couple of years: robotics and reinforcement learning are almost on the verge of being mainstream in the space, and AWS gets it. Just the depth of their investments. Like DeepRacer, that cute little autonomous vehicle that they rolled out here at this event: that just shows that they totally get it. That will be a huge growth sector. >> David Floyer, Outposts is their on-premises cloud. You've been calling this for I don't know how many years. >> (laughing) Three years. >> Three years? >> Yeah. What's the impact? >> And people said, no way, Floyer's wrong. (laughing) >> So you get vindication but... >> And people, in particular in AWS. (laughing) >> So you're right. So you're right, but is it going to be out in a year? >> Yeah, next in 2019. >> Will this thing actually make it to the market? And if it does, what is the impact? Who wins and who loses? >> Well let's start with, will it get to the market? Absolutely. It is Outposts; AWS Outposts is the name. It is taking AWS in the cloud and putting it on premise. The same APIs. The same services. It'll be eventually identical between the two. And that enormously increases the range, the reach, and the TAM that AWS can go after.
It is a major, major impact on the marketplace. It puts pressure on a whole number of people, the traditional vendors who are supplying that marketplace at the moment, and in my opinion it's going to be wildly successful. People have been waiting for that, wanting that, particularly in the enterprise market. The reasons for it are simple. Latency: low latency, you've got to have the data and the compute very close together. Moving data is very, very expensive over long distances. And the third one is many people want, or need, to have the data in certain places. So the combination is meeting the requirements; they've taken a long time to get there. I think it's going to be, however, wildly successful. It's going to be coming out in 2019. They'll have their alphas, their betas in the beginning of it. They'll have some announcements, probably about mid 2019. >> Who's threatened by this? Everybody? Cisco? HP? Dell? >> The integration of everything, storage, networking, compute, all in the same box, is obviously a threat to all suppliers within that. And they're going to have to adapt to that pretty strongly. It's going to be a declining market. Declining markets are good if you adapt properly; a lot of people make a lot of money from, like IBM, from mainframe. >> It's a huge threat to IBM. >> You're playing it safe. You're not naming names. (laughing) Okay, I'll rephrase. What's your prediction? >> What's my prediction on? >> Of the landscape after this is wildly successful. >> The landscape is that the alternative is going to be a much, much smaller pie, and only those that have volume, and only those that can adapt to that environment, are going to survive. >> Well, and let's name names. So who's threatened by this? Clearly Dell EMC is threatened by this. >> HP. >> HP, Nutanix, the VxRail guys, Lenovo is in there. Are they wiped out? No, but they have to respond. How do they respond? >> They have to respond, yeah. They have to have self service.
They have to have utility pricing. They have to connect to the cloud. So either they go hard after AWS, connecting to AWS, or they belly up to Microsoft. >> With Azure Stack, >> Microsoft Azure. That's clearly going to be their fallback place, so in a way, Microsoft with Azure Stack is also threatened by this, but in a way it's goodness for them, because the ecosystem is going to evolve to that. So listen, these guys don't just give up. >> No, no, I know. >> They're hard competitors, they're fighters. It's also to me a confirmation of Oracle's same-same strategy. On paper Oracle's got that down; they're executing on that, even though it's in a narrow Oracle world. So I think it does sort of indicate that that iPhone-for-the-enterprise strategy is actually quite viable. If I may jump in here, four things stood out to me. The satellite as a service was, to me, amazing. What's next? Amazon with scale, there's just so many opportunities for them. The Edge, if we have time. >> I was going to talk about the Edge. >> Love to talk about the Edge. The hybrid evolution, and Open Source. Amazon used to make it easy for the enterprise players to compete. They had limited sales and service capabilities, they had no Open Source give-back, they were hybrid deniers: everything's going to go into the public cloud. That's all changed. They're making it much, much more difficult for what they call the old guard to compete. >> So they're removing the objections? >> Yeah, they're removing those barriers, those objections. >> Awesome. Edge. >> Yeah, and to comment on one of the things you were talking about, which is the Edge: they have completely changed their approach to the Edge. They have put in Neo as part of SageMaker, which allows them to push out inference code, and they themselves are pointing out that inference code is 90% of all the compute, into... >> Not the training. >> Not the training, but the inference code after that, that's 90% of the compute.
They're pushing that into the devices at the Edge, all sorts of architectures. That's a major shift in mindset about that. >> Yeah, and in fact I was really impressed by Elastic Inference for the same reasons, because it very much is a validation of a trend I've been seeing in the A.I. space for the last several years, which is: you can increasingly build A.I. in your preferred visual, declarative environment with Python code, and the abstraction layers of the A.I. ecosystem have developed to a point where the ecosystem increasingly will auto-compile to TensorFlow, or MXNet, or PyTorch, and then from there further auto-compile your deployed trained model to the most efficient format for the Edge device, for the GPU, or wherever it's going to be executed. That's already a well-established trend. The fact that AWS has productized that, with this Elastic Inference in their cloud, shows that not only do they get that trend, they're going to push really hard. It's making sure that AWS becomes, in many ways, the hub of efficient inferencing for everybody.
>> And they're obviously inferencing, you get most value out of the data if you put that inferencing as close as you possibly can to that data, within a camera, is in the camera itself. >> And I eluded to it earlier, another key announcement from AWS here is, first of all the investment in Sage Maker itself is super impressive. In the year since they've introduced it, look at they've already added, they have that slide with all the feature enhancements, and new modules. Sage Maker Ground Truth, really important, the fully managed service for automating labeling of training datasets, using Mechanical Turk . The vast majority of the costs in a lot of A.I. initiatives involves human annotators of training data, and without human annotated training data you can't do supervised learning, which is the magic on a lot of A.I, AWS gets the fact that their customers want to automate that to the nth degree. Now they got that. >> We sound like Fam boys (laughing). >> That's going to be wildly popular. >> As we say, clean data makes good M.L., and good M.L. makes great A.I. >> Yeah. (laughing) >> So you don't want any dirty data out there. Cube, more coverage here. Cube insights panel, here in theCUBE at re:Invent. Stay with us for more after this short break. (upbeat music)
SUMMARY :
theCUBE analyst panel at AWS re:Invent 2018: AWS is doubling down on A.I., with SageMaker Ground Truth automating the labeling of training datasets via Mechanical Turk, reinforcement learning and robotics nearing the mainstream via DeepRacer, and Elastic Inference and SageMaker Neo pushing inference, which is 90% of the compute, out to edge devices. AWS Outposts brings the cloud on premises, expanding the range, reach, and TAM AWS can go after, and pressures traditional vendors such as Dell EMC, HP, Nutanix, and Lenovo to respond with self service, utility pricing, and cloud connectivity, or fall back to Microsoft Azure Stack.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
James Kobielus | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Jim | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
Three years | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
New Tanix | ORGANIZATION | 0.99+ |
over 500 clips | QUANTITY | 0.99+ |
over 150 videos | QUANTITY | 0.98+ |
mid 2019 | DATE | 0.98+ |
Windows | TITLE | 0.98+ |
a year | QUANTITY | 0.97+ |
third one | QUANTITY | 0.97+ |
Day three | QUANTITY | 0.96+ |
M.L. | PERSON | 0.96+ |
ten years | QUANTITY | 0.95+ |
VX | ORGANIZATION | 0.94+ |
Edge | TITLE | 0.94+ |
today | DATE | 0.93+ |
Azure Stack | TITLE | 0.92+ |
Sage Maker | TITLE | 0.92+ |
windows | TITLE | 0.91+ |
one | QUANTITY | 0.91+ |
Wikibon | ORGANIZATION | 0.9+ |
first | QUANTITY | 0.88+ |
Invent 2018 | EVENT | 0.88+ |
PyTorch | TITLE | 0.86+ |
TensorFlow | TITLE | 0.86+ |
MXNet | TITLE | 0.84+ |
Inference | ORGANIZATION | 0.83+ |
Siliconangle.com | OTHER | 0.83+ |
Amazon re:Invent 2018 | EVENT | 0.82+ |
five | QUANTITY | 0.79+ |
re:Invent 2018 | EVENT | 0.74+ |
re: | EVENT | 0.73+ |
Elastic | TITLE | 0.71+ |
past couple of years | DATE | 0.71+ |
One more quick point | QUANTITY | 0.67+ |
re | EVENT | 0.66+ |
Cube | ORGANIZATION | 0.66+ |