Chad Burton, Univ. of Pitt. & Jim Keller, NorthBay Solutions | AWS Public Sector Partner Awards 2020
>> Announcer: From around the globe, it's theCUBE, with digital coverage of the AWS Public Sector Partner Awards, brought to you by Amazon Web Services.

>> All right, welcome back to theCUBE's coverage here from Palo Alto, California, in our studio, with remote interviews during this time of COVID-19 with our quarantine crew. I'm John Furrier, your host of theCUBE, and we have here the award winners for the best EDU solution from NorthBay Solutions, Jim Keller, the president, and from Harvard Business Publishing and the University of Pittsburgh, Chad Burton, PhD and Data Privacy Officer of University of Pittsburgh IT. Thanks for coming on, gentlemen, appreciate it.

>> Thank you.

>> So, Jim, we'll start with you. What is the solution that you guys got the award for? And talk about how it all came about.

>> Yeah, thank you for asking, and it's been a pleasure working with Chad and the entire UPitt team. So as we entered this whole COVID situation, our team really got together and started to think about how we could help AWS customers continue their journey with AWS, but also appreciate the fact that everyone was virtual and budgets were very tight, but that nonetheless the priorities remained the same. So we devised a solution which we call jam sessions, AWS jam sessions, and the whole principle behind the notion is that many customers go through AWS training, and AWS has a number of other offerings, immersion days and boot camps and other things, but we felt it was really important that we brought forth a solution that enables customers to focus on a use case, but do it rapidly, in a very concentrated way, with our expert team. So we formulated what we call jam sessions, which are essentially very focused two-week engagements, rapid prototyping engagements. In the context of Chad and the UPitt team, it was around a data lake, and Chad will certainly speak to this in much more detail, but the whole notion here was: how does a customer get started? How does a customer prove the efficacy of AWS, prove that they can get data out of their on-premises systems, get it into AWS, make it accessible, in this case in the form of a data lake solution, and have the data be consumable? So we have an entire construct that we use, which includes structured education and virtual simultaneous rooms where development occurs with our joint rapid prototyping teams. We come back again and do learnings, and we do all of this in the construct of the agile framework. And ideally, by the time we're done with the two weeks, the customer achieves some success around the goal of the jam session. But more importantly, their team members have learned a lot about AWS with hands-on work, real work, learn by doing, if you will, and really marry those two concepts of education and doing, and come out of that with an opportunity to think about the next step in that journey, which in this case would be the implementation of a data lake in a full-scale project kind of initiative.
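(For readers curious what that "first mile" looks like in practice: a common pattern is to stage an on-premises extract in S3 and let an AWS Glue crawler catalog it so the data becomes discoverable and queryable, for example from Athena. The sketch below is an illustration of that pattern only; every name in it is a hypothetical placeholder, not a detail of the UPitt engagement.)

```python
# Hypothetical sketch of a data lake "first mile": land an on-premises extract
# in S3, then catalog it with a Glue crawler so analysts can query it.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# 1. Stage the raw extract in the lake's raw zone.
s3.upload_file(
    Filename="enrollment_extract.csv",             # hypothetical on-prem export
    Bucket="example-datalake-raw",                 # hypothetical bucket
    Key="student_records/enrollment_extract.csv",
)

# 2. Point a Glue crawler at the raw zone so the schema lands in the catalog.
glue.create_crawler(
    Name="raw-zone-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
    DatabaseName="datalake_raw",
    Targets={"S3Targets": [{"Path": "s3://example-datalake-raw/student_records/"}]},
)
glue.start_crawler(Name="raw-zone-crawler")
```

(Once the crawler runs, the extract shows up as a table that analysts can query without anyone provisioning hardware, which is the kind of quick proof point a two-week jam session is built to deliver.)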
>> Chad, talk about the relationship with NorthBay Solutions. Obviously you're a customer and you guys are partnering on this, so you're partnering, but they're also helping you. Talk about the relationship and how the interactions went.

>> Yeah, so I would say I faced the challenge that I think a lot of people in my role are faced with, where the demand for data is increasing, along with the demand for more variety of data, and I'm faced with a lot of aging on-premises hardware that I really don't want to invest any further in. So I know the cloud's in the future, but we are so new with the cloud that we don't even know what we don't know. So we had zeroed in on AWS, and I was talking with them and I made it very clear. I said, "Because of our inexperience, we have talented data engineers, but they don't have this type of experience. I'm confident they can learn." So what I was looking for was a partner who could help us not only prove this out, which I had high confidence we could, but help us identify where we need to be skilling up. You know, what gaps do we have? And AWS has so many different components that we also needed help zeroing in on which pieces we should really be paying attention to for our needs, and developing those skills. So we got introduced to NorthBay, and they introduced us to the idea of the jam session, which was perfect. It was really exactly what I was looking for. We made it very clear in the early conversations that this would be side-by-side development, that my priority was, of course, to meet our deliverables, but also for my team to learn how to use some of this and learn what they need to dive deeper into at the end of the engagement. I think that's how it got started, and then I think it was a very successful engagement after that.

>> Talk about the jam sessions, because I love this. First of all, this is in line with what we're seeing in the marketplace with rapid innovation, now more than ever with virtual workforces at home, given the situation. You know, rapid, agile innovation and rapid development are key. What is a jam session? What was the approach? Jim, you laid it out a little bit, but Chad, what's your take on the jam sessions? How does it all work?

>> I mean, it was great, because of the large team that NorthBay brought and the variety of skills they brought, and then they just had a playbook that worked. They broke us up into different groups, from the people who'd be making the data pipeline to the people who then would be consuming it to develop analytics projects. So that part worked really well, and yes, this rapid, iterative development. Right now, with our current process and our current tools, I have a hard time telling anybody how long it will take to get a new data source online and available to our data analysts and our data scientists, because it sometimes takes months, and nobody wants that answer, and I don't want to be giving that answer. So what we're really focused on is: how do we tighten up our process? How do we select the right tools so that we can say, "It will be two weeks from start to finish, and the data will be available"? So the engagement with NorthBay, the jam session scheduled like that, really helped us prove that once you have the skills and the right people, you can do this rapid development and bring more value to our business more quickly, which is really what it's all about for us.

>> Jim, I'll get your thoughts, because, you know, we see it time and time again with cloud use cases: when you've got smart people, certainly people who play with data and work with data, they're pretty savvy, right? They know the limitations, but when you get the cloud, it's like a car versus a horse, right? You've got to go from point A to point B, but again, faster is the key. How did you put this all together, and what were the key learnings?

>> Yeah, so John, a couple of things that are really important.
One is, as Chad mentioned, really smart people on the UPitt side who wanted to really learn and had a thirst for learning. And then couple that with the thing they're trying to learn in an actual use case that we're trying to jointly implement. A couple of things we've learned are really important. One is that although we have structure, a syllabus, and sort of a pattern of execution, we can never lose sight of the fact that every customer is different. Every team member is different. And in fact, Chad in this case had team members, some of whom had more skills on AWS than others, so we had to be sensitive to that. So what we did was use our general formula for the two weeks. Week one is very structured, focused on getting folks up to speed and normalized in terms of where they are in their education on AWS and the solution we're building, and then week two is really meant to mold the clay together and take the solution that we're trying to execute around and tailor it to the customer, so that we're addressing the specific needs, both from their team members' perspective and the institution's perspective in total. We've learned that starting the day together and ending the day with a recap is really important in terms of ensuring that everyone's on the same page, that they have commonality of knowledge, and that we're addressing any concerns. You know, with this stuff we move fast, right? Two weeks is not a long time to get a lot of rapid prototyping done, so if there is anxiety, or folks feel like they're falling behind, we want to make sure we know that and address it quickly, either that evening or the next morning, recalibrate, and then continue. The other thing we've learned, and Chad and the entire UPitt team did a phenomenal job with this, is that preparation really matters. So we have a preliminary set of activities that we work through with our customers to lay the foundation, so that on day one of the jam session, we're ready to go. And since we're doing this virtually, we don't have the luxury of being in a physical room and having time to get acclimated to the physical construct of organizing rooms and chairs and tables and all that. We're doing all of that virtually. So Chad and the team were tremendous in getting all the preparatory work done. Thinking about what's involved in a data lake, it's the data and security and access and the things our team needed to work through with their team. And the prescription, the formula that we use, really comes down to three critical things. One is that our team members have to be adept at educating, on a virtual whiteboard in this case. Secondly, we want to do side-by-side development. That's the whole goal, and we want team members to build trust and relationships side by side. And then thirdly, and importantly, we want to be able to do over-the-shoulder mentoring, so that as Chad's team members were executing, we could guide them as we went. And really, those three ingredients were key.

>> Chad, talk about the data lake and the outcome as you guys went through this. What were the results of the data lake? How did it all turn out?

>> Yeah, the result was great. It was exactly what we were looking for. The way I had structured the engagement, working with Jim to do this, is that I wanted to accomplish two things.
I wanted to, one, prove that we can do what we do today with a star schema mart model, which creates a lot of reports that are important to the business but doesn't really help us grow in our use of data. And there was a second component to it where I said, I want to show how we do something new and different that we can't do with our existing tools, so that I can go back to our executive leadership and say, "Hey, by investing in this, here are all the possibilities, and we've got proof that we can do it." So some natural language processing was one of those, and leveraging AWS Comprehend was key. And the idea here was, unfortunately it's not as relevant today with COVID, but there are events happening all around campus, and how do students find the right events for them? You know, they're all in the calendar. Well, with a bit of natural language processing using AWS Comprehend, we can link them to a student's major, so that we can then bubble these up to a student: "Hey, of all these thousands of events, here are the 10 you might be most interested in." We can't do that right now, but using these tools, using the skills that NorthBay helped us develop by working side by side, will help us get there.
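(The matching logic wasn't described on air, but the pattern Chad outlines could look something like the sketch below: extract key phrases from an event description with Amazon Comprehend and score them against each major. The event text, majors, and scoring heuristic are invented for illustration, not taken from the UPitt build.)

```python
# Hypothetical sketch: rank majors for a campus event using Amazon Comprehend
# key phrases. The heuristic is deliberately naive; a real build would add
# entities, synonyms, and student interest signals.
import boto3

comprehend = boto3.client("comprehend")

event_text = ("Guest lecture: applying machine learning to genomic data, "
              "hosted by the Department of Biology.")

resp = comprehend.detect_key_phrases(Text=event_text, LanguageCode="en")
phrases = [p["Text"].lower() for p in resp["KeyPhrases"]]

def relevance(major: str) -> int:
    # Count how many words of the major appear in any extracted key phrase.
    return sum(any(word in phrase for phrase in phrases)
               for word in major.lower().split())

majors = ["Computational Biology", "Computer Science", "Art History"]
for major in sorted(majors, key=relevance, reverse=True):
    print(major, relevance(major))
# Events whose descriptions score highest against a student's major would be
# the ones bubbled up in that student's personalized list.
```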
>> The beautiful thing with these jam sessions is that once you get some success, you go for the next one. This sounds like another jam session opportunity, to go in there and do the virtual version as the fall comes up and you have the new reality. And this is really what I like about the story: you guys did the jam session, first of all a great project, but right in the middle of this new shift to virtual, so it's very interesting. So I want to get your thoughts, Chad. As you guys looked at this, I mean, on any given Sunday this is a great project, right? You get people together, you go to the cloud, get more agile, get the proof points, show it, double down on it, playbook, check. But now you've got the virtual workforce. How did that all play out? Did anything surprise you? Any expectations that were met, or new things that came out of this? 'Cause this is something everyone is going through right now: how do I deal with COVID as it evolves, and then when I come out of it, I want to have a growth strategy, I want to have a team that's deploying and building. What's your take on that?

>> Yeah, it's a good question, and I was a little concerned about it at first, because when we had first begun conversations with NorthBay, we were planning on a little bit on site and a little bit virtual. Then of course COVID happened. Our campus closed, nobody was permitted to be there, and so we had to just pivot to a hundred percent virtual. I have to say, I didn't notice any problems with it. It didn't impede our progress. It didn't impede our communication. I think the playbook that NorthBay had just worked for that. Now, they may have had to adjust it, and Jim can certainly talk to that, but those morning stand-ups for each group that's working, the end-of-day report-outs, right? Those were the things I was joining in on. I wasn't involved throughout the day, but I wanted to check in at the end of the day to make sure things were moving along, and the communication and transparency that were provided were key. Because of that transparency and the kind of schedule NorthBay already had set up, we didn't have any problems having a fully virtual engagement. In fact, I would probably prefer to do virtual engagements moving forward, because we can cut down on travel costs for everybody.

>> You know, Jim, I want to get your thoughts on this, 'cause I think there's a huge point here, not just represented but illustrated by the success of the EDU solution you guys got the award for: in a way, COVID exposes all the people that have been relying on waterfall-based processes. You've got to be in a room and argue things out, or have meetings set up. It takes a lot of time, and when you have a virtual space and an agile process, yeah, you make some adjustments, but if you're already agile, it doesn't really impact you too much. Can you share your thoughts? Because you deployed this very successfully virtually.

>> Yeah, certainly. You know, the key is always preparation, and our team did a phenomenal job at making sure that we could deliver a virtual experience equal to, or better than, an on-site experience. But John, you're absolutely right. What it forces you to do is think about all the things that come naturally when you're in a physical room together but can't be taken for granted virtually. Even interpersonal relationships, how those are built, and the trust that's built. As much as this is a technical solution, and as much as the teams did really phenomenal AWS work, foundationally it all comes down to trust and, as Chad said, transparency. And it's often hard to build that into a virtual experience. So as part of that preparatory work I mentioned, we actually spend time doing that, and we spent time with Chad and other team members understanding each of their team members: their strengths, where they were in the education journey and the experiential journey, a little bit about them personally. So I think the reality in the short and near term is that everything's going to be virtual. NorthBay delivers many of its large-scale projects virtually now. We have a whole methodology around that, and it's proven; actually, it's made us better at what we do, quite frankly.

>> Yeah, it definitely puts the pressure on getting the job done and focusing on the creativity in the build-out. I want to ask you guys both the same question on this next round, because I think it's super important as people see the reality of cloud. The benefits have certainly been there, but you still have the mentality of "we have to do it ourselves," "not invented here," "it's a managed service," "it's security." There are plenty of objections. If you really want to avoid cloud, you can come up with something if you look for it. But the reality is that there are benefits. For the folks out there that are now being accelerated into the cloud because of COVID and other reasons, what's your advice to them? Why cloud? What's the bet? What comes out of making a good choice with the cloud? Chad, for the people sitting there going, "Okay, I've got to get my cloud mojo going," what's your advice to those folks watching this?

>> So I would say, and Jim knows this, we at Pitt have a big vision for data, a whole universe of data where everything is made available, and I can't estimate the demand for all of that yet, right? That's going to evolve over time, so if I'm trying to scale some physical hardware solution, I'm either going to under-scale it and not be able to deliver, or I'm going to invest too much money for the value I'm getting.
By moving to the cloud, what that enables me to do is just grow organically and make sure that our spend and the value we're getting from the use are always aligned. And then, of course, there are all the questions about scalability and extensibility, right? We can just keep growing, and if we're not seeing value in one area, we can just stop; we're no longer spending on that particular area, and we can direct that money to a different component of the cloud. So just not being locked into a huge, expensive product is really key, I think.

>> Jim, your thoughts on why cloud and why now? There are pretty obvious reasons, but what are the benefits for the naysayers sitting on the fence?

>> Yeah, it's a really important question, John, and I think Chad made a lot of important points. I think there are two others that become important. One is agility. Whether that's agility with respect to a competitive marketplace, agility in terms of retaining team members and staff in the highly competitive environment we all know we're in, particularly in the IT world, or agility from a cost perspective, agility is a theme that comes through over and over again. And as Chad rightfully said, most companies and organizations don't know the entirety of what they're facing or what the demands on their services are going to be, so agility is really key. The second one is that the notion has often been that you have to have it all figured out before you can start, and really, our mantra in the jam session was born from the opposite idea: start by doing. Pick a use case, pick a pain point, pick an area of frustration, whatever it might be, and just start the process. You'll learn as you go, and not everything is the right fit for cloud. There are some things where, for the right reasons, alternatives might be appropriate. But by and large, if you start by doing, and in fact, through a jam session, learn by doing, the enterprise will start to better understand what's most applicable to them, where they can get the best bang for the buck, if you will, and ultimately deliver on the value that IT is meant to deliver to the line of business, whatever that might be. Those two themes come through over and over. And thirdly, I'll just add speed: speed of transformation, speed of cost reduction, speed of feature rollout. You know, Chad has users begging for information and access to data, right? He and the team are sitting there trying to figure out how to give it to them quickly. So speed of execution with quality is really paramount as well these days.

>> Yeah, and Chad also mentioned scale, because scaling up is key, and again, getting the cloud muscles going for the teams and culture is critical, because that alignment of incentives is the critical point. So congratulations, gentlemen, on a great award, best EDU solution. Chad, while I have you here, I want to get your personal thoughts, with your industry expert PhD hat on, because one of the things we've been reporting on in the EDU space, higher ed and other areas, with people having different education policies, is that the new reality, with virtualized students, faculty, alumni and community, means the expectations and the data flows are different, right?
You had systems, legacy systems that people used, and this is a good opportunity to look at cloud to build a new abstraction layer and, again, create that alignment of what can we do development-wise, because I'm sure you're seeing new data flows coming in, and I'm sure this kind of thinking is going on: "Okay, as we go forward, how do we find out what classes to attend if they're not on site?" This is another jam session. So I see more and more things happening, pretty innovative, in your world. What's your take on all this?

>> My take? So when we did the pivot, we pivoted right after spring break to be virtual for our students, like a lot of universities did. And you learn a lot when you go through a crisis like that, and you find all the weaknesses. We had finished the engagement with NorthBay by that point, I think, or were in it, and seeing what our future state might look like, the way I envision it, I can now point to specific things and give specific examples of how we would have been able to respond more effectively when these new demands on data came up, when new data flows were being created very quickly, and point out the weaknesses of our current ecosystem and how the future state would be better. So that was really key, and this whole thing is an opportunity. It's really accelerated a lot of things that were already in the works, and that's why it's exciting. It's obviously very challenging, and at Pitt right now we're really trying to focus on how we have a safe campus environment, going with maximum flexibility and all the technology that's involved in that. And, you know, I've had more unique data requests come to my desk since COVID than in the previous five years.

>> New patterns, new opportunities to write software, and it's great to see you guys focused on that hierarchy of needs. I really appreciate it. I want to share with you a funny story, not funny, but an interesting story, because it highlights the creativity that's coming. I was riffing on Zoom with someone at a higher ed university out here in California. It wasn't official business, just riffing on the future, and I said, "Hey, wouldn't it be cool if you had an abstraction layer that leveraged Canvas, Zoom and Discord?" All the kids are on Discord if they're gamers. So you go, "Okay, why Discord? It's a hang space." It's connective tissue. "Well, how do you build notifications across the different silos?" You know, Canvas doesn't support certain things, and Canvas is the software most universities use, but that's a use case we were just riffing on, and that's the kind of ideation that's going to come out of these kinds of jam sessions. Are you guys having that kind of feeling too? I mean, how do you see this new ideation, this rapid prototyping? I think it's only going to get faster and more accelerated.

>> As Chad said, his requests are multiplying, I'm sure, and folks are not willing to wait. We're in a "hurry up, I want it now" mentality these days, with both college attendees and those of us who are trying to deliver on that promise. And I think, John, you're absolutely right, whether it be the fail-fast mantra, or whether it be: can we even make this work, right? Does it have legs? Is it even viable? Is it even cost-effective?
I can tell you that we do a lot of work in ed tech, and a lot of work in other industries as well, and what the courseware delivery companies and the infrastructure companies are all dealing with as a result of COVID is that they've all had to innovate. So we're being asked to challenge ourselves in ways we've never been asked to before, in terms of speed of execution and speed of deployment, because these folks need answers, you know, tomorrow, today, yesterday, not six months from now. So the, I'll use the word, legacy way of thinking is really not one that can be sustained or tolerated any longer, and I want Chad and others to be able to call us and say, "Hey, we need help, and we need it quickly. How can we work together side by side and go prove something? It may not be the most elegant, it may not be the most robust, but we need it tomorrow." And that's really the spirit of the whole notion of the jam session.

>> And new expectations mean new solutions. Chad, we'll give you the final word. Going forward, you're on this wave right now, you've got new things coming at you, and you're getting that foundation set. What's your mindset as you ride this wave?

>> I'm optimistic. It really is an exciting time to be in this role. With the progress we've made in calendar year 2020, despite the challenges we've been faced with, with COVID and budget issues, I'm optimistic. I love what I saw in the jam session. It just confirmed my belief that this is really the future for the University of Pittsburgh in order to fully realize our vision of maximizing the value of data.

>> Awesome! Best EDU solution award for AWS Public Sector. Congratulations to NorthBay Solutions, Jim Keller, president, and the University of Pittsburgh, Chad Burton. Thank you for coming on and sharing your story. Great insights, and again, the wave is here: new expectations, new solutions, the cloud is there, and you guys have a good approach. Congratulations on the jam session, thanks.

>> Thank you, John. Chad, pleasure, thank you.

>> Thank you.

>> See you soon.

>> This is theCUBE's coverage of the AWS Public Sector Partner Awards. I'm John Furrier, host of theCUBE. Thanks for watching. (bright music)
Wikibon Presents: Software is Eating the Edge | The Entangling of Big Data and IIoT
>> So as folks make their way over from Javits I'm going to give you the least interesting part of the evening, and that's my segment, in which I welcome you here, introduce myself, and lay out what we're going to do for the next couple of hours. So first off, thank you very much for coming. As all of you know, Wikibon is a part of SiliconANGLE, which also includes theCUBE, so if you look around, this is what we have been doing for the past couple of days here in theCUBE. We've been inviting some significant thought leaders from over at the show, and in incredibly expensive limousines driven them up the street to come on to theCUBE and spend time with us and talk about some of the things that are happening in the industry today that are especially important. We tore it down, and we're having this party tonight. So we want to thank you very much for coming and look forward to having more conversations with all of you. Now what are we going to talk about? Well, Wikibon is the research arm of SiliconANGLE. So we take data that comes out of theCUBE and other places and we incorporate it into our research. And we work very closely with large end users and large technology companies regarding how to make better decisions in this incredibly complex, incredibly important transformative world of digital business. What we're going to talk about tonight, and I've got a couple of my analysts assembled, and we're also going to have a panel, is this notion of software is eating the Edge. Now most of you have probably heard Marc Andreessen, the venture capitalist and developer, original developer of Netscape, many years ago talk about how software's eating the world. Well, if software is truly going to eat the world, it's going to take the big chunks, the big bites, at the Edge. That's where the actual action's going to be. And what we want to talk about specifically is the entangling of the internet of things, or the industrial internet of things, IIoT, with analytics. So that's what we're going to talk about over the course of the next couple of hours. To do that, I've already blown the schedule, that's on me, but I'm going to spend a couple minutes talking about what we regard as the essential digital business capabilities, which include analytics and Big Data and IIoT, and we'll explain, at least from our position, why those two things come together the way that they do. But I'm going to ask the august and revered Neil Raden, Wikibon analyst, to come on up and talk about harvesting value at the Edge. 'Cause there are some, not now Neil, when we're done, when I'm done. So I'm going to ask Neil to come on up and he's going to talk about harvesting value at the Edge. And then Jim Kobielus, another Wikibon analyst, will follow up with him. He'll talk specifically about how we're going to take that combination of analytics and Edge and turn it into the new types of systems and software that are going to sustain this significant transformation that's going on. And then after that, I'm going to invite Neil and Jim back up along with some other folks, and we're going to run a panel to talk about some of these issues and do a real question and answer. So the goal here, before we break for drinks, is to create a community feeling within the room. That includes smart people up here and smart people in the audience having a conversation, ultimately, about some of these significant changes, so please participate, and we look forward to talking about the rest of it.
All right, let's get going! What is digital business? One of the nice things about being an analyst is that you can reach back on people who were significantly smarter than you and build your points of view on the shoulders of those giants including Peter Drucker. Many years ago Peter Drucker made the observation that the purpose of business is to create and keep a customer. Not better shareholder value, not anything else. It is about creating and keeping your customer. Now you can argue with that, at the end of the day, if you don't have customers, you don't have a business. Now the observation that we've made, what we've added to that is that we've made the observation that the difference between business and digital business essentially is one thing. That's data. A digital business uses data to differentially create and keep customers. That's the only difference. If you think about the difference between taxi cab companies here in New York City, every cab that I've been in in the last three days has bothered me about Uber. The reason, the difference between Uber and a taxi cab company is data. That's the primary difference. Uber uses data as an asset. And we think this is the fundamental feature of digital business that everybody has to pay attention to. How is a business going to use data as an asset? Is the business using data as an asset? Is a business driving its engagement with customers, the role of its product et cetera using data? And if they are, they are becoming a more digital business. Now when you think about that, what we're really talking about is how are they going to put data to work? How are they going to take their customer data and their operational data and their financial data and any other kind of data and ultimately turn that into superior engagement or improved customer experience or more agile operations or increased automation? Those are the kinds of outcomes that we're talking about. But it is about putting data to work. That's fundamentally what we're trying to do within a digital business. Now that leads to an observation about the crucial strategic business capabilities that every business that aspires to be more digital or to be digital has to put in place. And I want to be clear. When I say strategic capabilities I mean something specific. When you talk about, for example technology architecture or information architecture there is this notion of what capabilities does your business need? Your business needs capabilities to pursue and achieve its mission. And in the digital business these are the capabilities that are now additive to this core question, ultimately of whether or not the company is a digital business. What are the three capabilities? One, you have to capture data. Not just do a good job of it, but better than your competition. You have to capture data better than your competition. In a way that is ultimately less intrusive on your markets and on your customers. That's in many respects, one of the first priorities of the internet of things and people. The idea of using sensors and related technologies to capture more data. Once you capture that data you have to turn it into value. You have to do something with it that creates business value so you can do a better job of engaging your markets and serving your customers. And that essentially is what we regard as the basis of Big Data. 
Including operations, including financial performance and everything else, but ultimately it's taking the data that's being captured and turning it into value within the business. The last point here is that once you have generated a model, or an insight or some other resource that you can act upon, you then have to act upon it in the real world. We call that systems of agency, the ability to enact based on data. Now I want to spend just a second talking about systems of agency 'cause we think it's an interesting concept, and it's something Jim Kobielus is going to talk about a little bit later. When we say systems of agency, what we're saying is increasingly machines are acting on behalf of a brand. Or systems, combinations of machines and people, are acting on behalf of the brand. And this whole notion of agency is the idea that ultimately these systems are now acting as the business's agent. They are at the front line of engaging customers. It's an extremely rich proposition that has subtle but crucial implications. For example, I was talking to a senior decision maker at a business today and they made a quick observation. They talked about how, on their way here to New York City, they had followed a woman who was going through security, opened up her suitcase and took out a bird. And then went through security with the bird. And the reason why I bring this up now is, as TSA was trying to figure out how exactly to deal with this, the bird started talking and repeating things that the woman had said, and many of those things, in fact, might have put her in jail. Now in this case the bird is not an agent of that woman. You can't put the woman in jail because of what the bird said. But increasingly we have to ask ourselves, as we ask machines to do more on our behalf, digital instrumentation and elements to do more on our behalf, it's going to have blowback and an impact on our brand if we don't do it well. I want to draw that forward a little bit because I suggest there's going to be a new lifecycle for data. And the way that we think about it is we have the internet or the Edge, which is comprised of things and, crucially, people, using sensors, whether they be small processors in control towers or whether they be phones that are tracking where we go, and this crucial element here is something that we call information transducers. Now a transducer in a traditional sense is something that takes energy from one form to another so that it can perform new types of work. By information transducer I essentially mean it takes information from one form to another so it can perform another type of work. This is a crucial feature of data. One of the beauties of data is that it can be used in multiple places at multiple times and not engender significant net new costs. It's one of the few assets you can say that about. So the concept of an information transducer is really important because it's the basis for a lot of transformations of data as data flies through organizations. So we end up with the transducers storing data in the form of analytics, machine learning, business operations, other types of things, and then it gets transduced back into the real world as we program the real world, turning into these systems of agency. So that's the new lifecycle. And increasingly, that's how we have to think about data flows. Capturing it, turning it into value and having it act on our behalf in front of markets.
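To make the lifecycle Peter just walked through concrete, here is a minimal sketch in Python of the capture, transduce, infer, act loop. Everything in it, the device ID, the helper names and the toy threshold rule, is a hypothetical illustration, not anything from the talk.

```python
# A toy edge loop: capture -> transduce -> infer -> act.
from dataclasses import dataclass
import random
import time

@dataclass
class Reading:
    device_id: str
    fahrenheit: float
    timestamp: float

def read_sensor(device_id: str) -> Reading:
    """Capture: pull a raw measurement from the environment."""
    return Reading(device_id, random.gauss(85.0, 10.0), time.time())

def transduce(reading: Reading) -> dict:
    """Transduce: reshape the raw signal into a form other systems can consume."""
    return {"device": reading.device_id,
            "celsius": round((reading.fahrenheit - 32) * 5 / 9, 2),
            "ts": reading.timestamp}

def infer(event: dict, threshold_c: float = 30.0) -> bool:
    """Turn data into value: a deliberately trivial stand-in for real analytics."""
    return event["celsius"] > threshold_c

def act(event: dict) -> None:
    """System of agency: the software acts on the business's behalf."""
    print(f"throttling {event['device']} at {event['celsius']} C")

for _ in range(3):                  # this loop would run at the Edge
    event = transduce(read_sensor("pump-7"))
    if infer(event):
        act(event)
```

The point is the shape, not the toy rule: data is captured once, transduced into a reusable form, and both the decision and the action happen at the point of engagement.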
That could have enormous implications for how ultimately money is spent over the next few years. So Wikibon does a significant amount of market research in addition to advising our large user customers. And that includes doing studies on cloud, public cloud, but also studies on what's happening within the analytics world. And if you take a look at it, what we basically see happening over the course of the next few years is significant investments in software and also services to get the word out. But we also expect there's going to be a lot of hardware. A significant amount of hardware that's ultimately sold within this space. And that's because of something that we call true private cloud. This concept of ultimately a business increasingly being designed and architected around the idea of data assets means that the reality, the physical realities of how data operates, how much it costs to store it or move it, the issues of latency, the issues of intellectual property protection, as well as things like the regulatory regimes that are being put in place to govern how data gets used in between locations, all of those factors are going to drive increased utilization of what we call true private cloud. On premises technologies that provide the cloud experience but act where the data naturally needs to be processed. I'll come a little bit more to that in a second. So we think that it's going to be a relatively balanced market, a lot of stuff is going to end up in the cloud, but as Neil and Jim will talk about, there's going to be an enormous amount of analytics that pulls an enormous amount of data out to the Edge 'cause that's where the action's going to be. Now one of the things I want to also reveal to you is we've done a fair amount of data, we've done a fair amount of research around this question of where or how will data guide decisions about infrastructure? And in particular the Edge is driving these conversations. So here is a piece of research that one of our cohorts at Wikibon did, David Floyer. Taking a look at IoT Edge cost comparisons over a three year period. And it showed on the left hand side an example where the sensor towers and other types of devices were streaming data back into a central location in a wind farm, a stylized wind farm example. Very, very expensive. Significant amounts of money end up being consumed, significant resources end up being consumed, by the cost of moving the data from one place to another. Now this is even assuming that latency does not become a problem. The second example that we looked at is if we kept more of that data at the Edge and processed it at the Edge. And literally it is an 85 plus percent cost reduction to keep more of the data at the Edge. Now that has enormous implications for how we think about big data, how we think about next generation architectures, et cetera. But it's these costs that are going to be so crucial to shaping the decisions that we make over the next two years about where we put hardware, where we put resources, what type of automation is possible, and what types of technology management have to be put in place. Ultimately we think it's going to lead to a structure, an architecture in the infrastructure as well as applications, that is informed more by moving the cloud to the data than moving the data to the cloud. That's kind of our fundamental proposition: the norm in the industry has been to think about moving all data up to the cloud because who wants to do IT? It's so much cheaper, look what Amazon can do.
Or what AWS can do. All true statements. Very, very important in many respects. But most businesses today are starting to rethink that simple proposition and asking themselves, do we have to move our business to the cloud, or can we move the cloud to the business? And increasingly what we see happening as we talk to our large customers about this is that the cloud is being extended out to the Edge; we're moving the cloud and cloud services out to the business. Because of economic reasons, intellectual property control reasons, regulatory reasons, security reasons, any number of other reasons. It's just a more natural way to deal with it. And of course, the most important reason is latency. So with that as a quick backdrop, if I may quickly summarize, we believe fundamentally that the difference today is that businesses are trying to understand how to use data as an asset. And that requires an investment in new sets of technology capabilities that are not cheap, not simple, and require significant thought, a lot of planning, a lot of change within IT and business organizations. How we capture data, how we turn it into value, and how we translate that into real world action through software. That's going to lead to a rethinking, ultimately, based on cost and other factors, about how we deploy infrastructure. How we use the cloud so that the data guides the activity, and not the choice of cloud supplier determines or limits what we can do with our data. And that's going to lead to this notion of true private cloud and elevate the role the Edge plays in analytics and all other architectures. So I hope that was perfectly clear. And now what I want to do is I want to bring up Neil Raden. Yes, now's the time Neil! So let me invite Neil up to spend some time talking about harvesting value at the Edge. Can you see his, all right. Got it. >> Oh boy. Hi everybody. Yeah, this is a really, this is a really big and complicated topic so I decided to just concentrate on something fairly simple, but I know that Peter mentioned customers. And he also had a picture of Peter Drucker. I had the pleasure in 1998 of interviewing Peter and photographing him. Peter Drucker, not this Peter. Because I'd started a magazine called Hired Brains. It was for consultants. And Peter said, Peter said a number of really interesting things to me, but one of them was his definition of a customer: someone who wrote you a check that didn't bounce. He was kind of a wag. He was! So anyway, he had to leave to do a video conference with Jack Welch and so I said to him, how do you charge Jack Welch to spend an hour on a video conference? And he said, you know, I have this theory that you should always charge your client enough that it hurts a little bit or they don't take you seriously. Well, I had the chance to talk to Jack's wife, Suzy Welch, recently and I told her that story and she said, "Oh he's full of it, Jack never paid a dime for those conferences!" (laughs) So anyway, all right, so let's talk about this. To me, the engineered things, like the hardware and the network and all these other standards and so forth, we haven't fully developed those yet, but they're coming. As far as I'm concerned, they're not the most interesting thing. The most interesting thing to me in Edge Analytics is what you're going to get out of it, what the result is going to be. Making sense of this data that's coming.
And while we're on data, something I've been thinking a lot about lately, because everybody I've talked to for the last three days just keeps talking to me about data: I have this feeling that data isn't actually quite real. That any data that we deal with is the result of some process that's captured it from something else that's actually real. In other words, it's a proxy. So it's not exactly perfect. And that's why we've always had these problems about customer A, customer A, customer A, what's their definition? What's the definition of this, that and the other thing? And with sensor data, I really have the feeling, when companies get, not you know, not companies, organizations get instrumented and start dealing with this kind of data, what they're going to find is that this is the first time, and I've been involved in analytics, I don't want to date myself, 'cause I know I look young, but the first, I've been dealing with analytics since 1975. And everything we've ever done in analytics has involved pulling data from some other system that was not designed for analytics. But if you think about sensor data, this is data that we're actually going to catch the first time. It's going to be ours! We're not going to get it from some other source. It's going to be the real deal, to the extent that it's the real deal. Now you may say, you know Neil, a sensor that's sending us information about oil pressure or temperature or something like that, how can you quarrel with that? Well, I can quarrel with it because I don't know if the sensor's doing it right. So we still don't know, even with that data, if it's right, but that's what we have to work with. Now, what does that really mean? It means that we have to be really careful with this data. It's ours, we have to take care of it. We don't get to reload it from source some other day. If we munge it up, it's gone forever. So that has very serious implications, but let me roll you back a little bit. The way I look at analytics is that it's come in three different eras. And we're entering into the third now. The first era was business intelligence. It was basically built and governed by IT, it was system-of-record kind of reporting. And as far as I can recall, it probably started around 1988, or at least that's the year that Howard Dresner claims to have invented the term. I'm not sure it's true. And things happened before 1988 that were sort of like BI, but '88 was when they really started coming out; that's when we saw BusinessObjects and Cognos and MicroStrategy and those kinds of things. The second generation just popped out on everybody else. We're all looking around at BI and we were saying, why isn't this working? Why are only five people in the organization using this? Why are we not getting value out of this massive license we bought? And along comes companies like Tableau doing data discovery, visualization, data prep, and Line of Business people are using this now. But it's still the same kind of data sources. It's moved out a little bit, but it still hasn't really hit the Big Data thing. Now we're in the third generation, so we not only have Big Data, which has come and hit us like a tsunami, but we're looking at smart discovery, we're looking at machine learning. We're looking at AI induced analytics workflows. And then all the natural language cousins. You know, natural language processing, natural language, what's? Oh, NLQ, natural language query. Natural language generation. Anybody here know what natural language generation is?
Yeah, so what you see now is you do some sort of analysis and that tool comes up and says this chart is about the following and it used the following data, and it's blah blah blah blah blah. I think it's kind of wordy and it's going to get refined some, but it's an interesting, it's an interesting thing to do. Now, the problem I see with Edge Analytics and IoT in general is that most of the canonical examples we talk about are pretty thin. I know we talk about autonomous cars, I hope to God we never have them, 'cause I'm a car guy. Fleet management, I think Qualcomm started fleet management in 1988, that is not a new application. Industrial controls. I seem to remember, I seem to remember Honeywell doing industrial controls at least in the 70s, and before that, I wasn't, I don't want to talk about what I was doing, but I definitely wasn't in this industry. So my feeling is we all need to sit down and think about this and get creative. Because the real value in Edge Analytics or IoT, whatever you want to call it, the real value is going to be figuring out something that's new or different. Creating a brand new business. Changing the way an operation happens in a company, right? And I think there's a lot of smart people out there and I think there's a million apps that we haven't even talked about, so if you as a vendor come to me and tell me how great your product is, please don't talk to me about autonomous cars or fleet management, 'cause I've heard about that, okay? Now, hardware and architecture are really not the most interesting thing. We fell into that trap with data warehousing. We've fallen into that trap with Big Data. We talk about speeds and feeds. Somebody said to me the other day, what's the narrative of this company? This is a technology provider. And I said, as far as I can tell, they don't have a narrative, they have some products and they compete in a space. And when they go to clients and the clients say, what's the value of your product? They don't have an answer for that. So we don't want to fall into this trap, okay? Because IoT is going to inform you in ways you've never even dreamed about. Unfortunately some of them are going to be really stinky, you know, they're going to be really bad. You're going to lose more of your privacy, it's going to get harder to get, I dunno, a mortgage for example, I dunno, maybe it'll be easier, but in any case, it's not going to all be good. So let's really think about what you want to do with this technology to do something that's really valuable. Cost takeout is not the place to justify an IoT project. Because number one, it's very expensive, and number two, it's a waste of the technology, because you should be looking at, you know the old numerator denominator thing? You should be looking at the numerators and forget about the denominators, because that's not what you do with IoT. And the other thing is you don't want to get overconfident. Actually this is good advice about anything, right? But in this case, I love this quote by Derek Sivers. He's a pretty funny guy. He said, "If more information was the answer, then we'd all be billionaires with perfect abs." I'm not sure what's on his wishlist, but you know, those aren't necessarily the two things I would think of, okay. Now, what I said about the data, I want to explain some more. Big Data Analytics, if you look at this graphic, it depicts it perfectly. It's a bunch of different stuff falling into the funnel. All right? It comes from other places, it's not original material.
And when it comes in, it's always used as second-hand data. Now what does that mean? That means that you have to figure out the semantics of this information and you have to find a way to put it together in a way that's useful to you, okay. That's Big Data. That's where we are. How is that different from IoT data? It's like I said, IoT is original. You can put it together any way you want because no one else has ever done that before. It's yours to construct, okay. You don't even have to transform it into a schema because you're creating the new application. But the most important thing is you have to take care of it, 'cause if you lose it, it's gone. It's the original data. It's the same way, in operational systems for a long long time we've always been concerned about backup and security and everything else. You better believe this is a problem. I know a lot of people think about streaming data, that we're going to look at it for a minute, and we're going to throw most of it away. Personally I don't think that's going to happen. I think it's all going to be saved, at least for a while. Now, the governance and security, oh, by the way, I don't know where you're going to find a presentation where somebody uses a newspaper clipping about Vladimir Lenin, but here it is, enjoy yourselves. I believe that when people think about governance and security today they're still thinking along the same grids that we thought about it all along. But this is very very different and again, I'm sorry I keep thrashing this around, but this is treasured data that has to be carefully taken care of. Now when I say governance, my experience has been over the years that governance is something that IT does to make everybody's lives miserable. But that's not what I mean by governance today. It means a comprehensive program to really secure the value of the data as an asset. And you need to think about this differently. Now the other thing is you may not get to think about it differently, because some of the stuff may end up being subject to regulation. And if the regulators start regulating some of this, then that'll take some of the degrees of freedom away from you in how you put this together, but you know, that's the way it works. Now, machine learning, I think I told somebody the other day that claims about machine learning in software products are as common as twisters in trailer parks. And a lot of it is not really what I'd call machine learning. But there's a lot of it around. And I think all of the open source machine learning and artificial intelligence that's popped up, it's great because all those math PhDs who work at Home Depot now have something to do when they go home at night and they construct this stuff. But if you're going to have machine learning at the Edge, here's the question: what kind of machine learning would you have at the Edge? As opposed to developing your models back at, say, the cloud, when you transmit the data there. The devices at the Edge are not very powerful. And they don't have a lot of memory. So you're only going to be able to do things that have been modeled or constructed somewhere else. But that's okay. Because machine learning algorithm development is actually slow and painful. So you really want the people who know how to do this working with gobs of data, creating models and testing them offline. And when you have something that works, you can put it there. Now there's one thing I want to talk about before I finish, and I think I'm almost finished.
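Neil's pattern here, do the slow, data-hungry model development offline and ship only the fitted artifact to the constrained device, can be sketched in a few lines. This assumes scikit-learn and joblib are available; the file name, the synthetic sensor history and the single fresh reading are all hypothetical.

```python
# Offline training, Edge scoring: a minimal sketch with scikit-learn.
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Offline, with gobs of data and real compute -----------------------
history = np.random.normal(loc=70.0, scale=5.0, size=(10_000, 1))
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)                  # the slow, painful part stays here
joblib.dump(model, "sensor_model.joblib")

# --- On the Edge device: load the artifact and score, no training ------
edge_model = joblib.load("sensor_model.joblib")
reading = np.array([[98.6]])        # one fresh sensor value
if edge_model.predict(reading)[0] == -1:   # -1 flags an anomaly
    print("anomaly: take action at the point of decision")
```

The asymmetry is the design point: fitting over ten thousand readings is expensive and stays offline, while scoring one reading on the device is cheap enough for low-power hardware.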
I wrote a book about 10 years ago about automated decision making, and the conclusion that I came up with was that little decisions add up, and that's good. But it also means you don't have to get them all right. But you don't want computers or software making decisions unattended if it involves human life, or frankly any life. Or the environment. So when you think about the applications that you can build using this architecture and this technology, think about the fact that you're not going to be doing air traffic control, you're not going to be monitoring crossing guards at the elementary school. You're going to be doing things that may seem fairly mundane. Managing machinery on the factory floor, I mean that may sound great, but really isn't that interesting. Managing wellheads, drilling for oil, well I mean, it's great to the extent that it doesn't cause wells to explode, but they don't usually explode. What it's usually used for is to drive the cost out of preventative maintenance. Not very interesting. So use your heads. Come up with really cool stuff. And any of you who are involved in Edge Analytics, the next time I talk to you I don't want to hear about the same five applications that everybody talks about. Let's hear about some new ones. So, in conclusion, I don't really have anything in conclusion except that Peter mentioned something about limousines bringing people up here. On Monday I was slogging up and down Park Avenue and Madison Avenue with my client and we were visiting all the hedge funds there because we were doing a project with them. And in the miserable weather I looked at him and I said, for godsake Paul, where's the black car? And he said, that was the 90s. (laughs) Thank you. So, Jim, up to you. (audience applauding) This is terrible, go that way, this was terrible coming that way. >> Woo, don't want to trip! And let's move to, there we go. Hi everybody, how ya doing? Thanks Neil, thanks Peter, those were great discussions. So I'm the third leg in this relay race here, talking about of course how software is eating the world. And focusing on the value of Edge Analytics in a lot of real world scenarios. Programming the real world to make the world a better place. So I will talk, I'll break it out analytically in terms of the research that Wikibon is doing in the area of the IoT, but specifically how AI intelligence is being embedded really into all material reality, potentially, at the Edge. In mobile applications and industrial IoT and smart appliances and self driving vehicles. I will break it out in terms of a reference architecture for understanding what functions are being pushed to the Edge, to hardware, to our phones and so forth, to drive various scenarios in terms of real world results. So I'll move apace here. So basically AI software, or AI microservices, are being infused into Edge hardware as we speak. What we see is more vendors of smartphones and other real world appliances and things like smart driving, self driving vehicles. What they're doing is they're instrumenting their products with computer vision and natural language processing, environmental awareness based on sensing and actuation, and those capabilities and inferences that these devices use to both provide human support for human users of these devices, as well as to enable varying degrees of autonomous operation. So what I'll be talking about is how AI is a foundation for data driven systems of agency of the sort that Peter is talking about.
Infusing data driven intelligence into everything, or potentially so. As more of this capability, all these algorithms for things like, ya know, doing real time predictions and classifications, anomaly detection and so forth, as this functionality gets diffused widely and becomes more commoditized, you'll see it burned into an ever-wider variety of hardware architectures, neuro synaptic chips, GPUs and so forth. So what I've got here in front of you is a sort of high level reference architecture that we're building up in our research at Wikibon. So AI, artificial intelligence, is a big term, a big paradigm, and I'm not going to unpack it completely. Of course we don't have oodles of time, so I'm going to take you fairly quickly through the high points. It's a driver for systems of agency. Programming the real world. Transducing digital inputs, the data, to analog real world results. Through the embedding of this capability in the IoT, but pushing more and more of it out to the Edge, with points of decision and action in real time. And there are four AI enabling capabilities that we're seeing that are absolutely critical to software being pushed to the Edge: sensing, actuation, inference and learning. Sensing and actuation, like Peter was describing, it's about capturing data from the environment within which a device or user is operating or moving. And then actuation is the fancy term for doing stuff, ya know, like industrial IoT, it's obviously machine control, but clearly, you know, with self driving vehicles it's steering a vehicle and avoiding crashing and so forth. Inference is the meat and potatoes, as it were, of AI. Analytics does inferences. It infers from the data the logic of the application. Predictive logic, correlations, classification, abstractions, differentiation, anomaly detection, recognizing faces and voices. We see that now with Apple, and the latest version of the iPhone is embedding face recognition as a core, as the core, multifactor authentication technique. Clearly that's a harbinger of what's going to be universal fairly soon, and that depends on AI. That depends on convolutional neural networks; that is some heavy hitting processing power that's necessary, and it's processing the data that's coming from your face. So that's critically important. So what we're looking at, then, is that AI software is taking root in hardware to power continuous agency. Getting stuff done. Powering decision support for human beings who have to take varying degrees of action in various environments. We don't necessarily want to let the car steer itself in all scenarios, we want some degree of override, for lots of good reasons. We want to protect life and limb, including our own. And just more data driven automation across the internet of things in the broadest sense. So unpacking this reference framework, what's happening is that AI driven intelligence is powering real time decisioning at the Edge. Real time local sensing from the data that it's capturing there; it's ingesting the data. Some, not all, of that data may be persistent at the Edge. Some, perhaps most of it, will be pushed into the cloud for other processing. When you have these highly complex algorithms that are doing AI deep learning, multilayer, to do a variety of anti-fraud and higher level, like narrative, auto-narrative roll-ups from various scenes that are unfolding.
A lot of this processing is going to begin to happen in the cloud, but a fair amount of the more narrowly scoped inferences that drive real time decision support at the point of action will be done on the device itself. Contextual actuation, so it's the sensor data that's captured by the device, along with other data that may be coming down in real time streams through the cloud, that will provide the broader contextual envelope of data needed to drive actuation, to drive various models and rules and so forth that are making stuff happen at the point of action, at the Edge. Continuous inference. What it all comes down to is that inference is what's going on inside the chips at the Edge device. And what we're seeing is a growing range of hardware architectures, GPUs, CPUs, FPGAs, ASICs, neuro synaptic chips of all sorts, playing in various combinations that are automating more and more very complex inference scenarios at the Edge. And not just individual devices; swarms of devices, like drones and so forth, are essentially an Edge unto themselves. You'll see these tiered hierarchies of Edge swarms that are playing and doing inferences of an ever more complex, dynamic nature. And much of this, the fundamental capabilities that are powering them all, will be burned into the hardware that powers them. And then adaptive learning. Now I use the term learning rather than training here; training is at the core of it. Training means everything in terms of the predictive fitness, or the fitness of your AI services for whatever task, predictions, classifications, face recognition, that you've built them for. But I use the term learning in a broader sense. It's what makes your inferences get better and better, more accurate over time: you're training them with fresh data in a supervised learning environment. But you can have reinforcement learning if you're doing, say, robotics and you don't have ground truth against which to train the data set. You know, there's maximize a reward function versus minimize a loss function, you know, the standard approach, the latter for supervised learning. There's also, of course, the issue, or not the issue, the approach of unsupervised learning with cluster analysis, critically important in a lot of real world scenarios. So Edge AI algorithms: clearly, deep learning, which is multilayered machine learning models that can do abstractions at higher and higher levels. Face recognition is a high level abstraction. Faces in a social environment is an even higher level of abstraction in terms of groups. Faces over time and bodies and gestures, doing various things in various environments, is an even higher level abstraction in terms of narratives that can be rolled up, are being rolled up, by deep learning capabilities of great sophistication. Convolutional neural networks for processing images, recurrent neural networks for processing time series. Generative adversarial networks for doing essentially what's called generative applications of all sorts, composing music, and a lot of it's being used for auto programming. These are all deep learning. There's a variety of other algorithm approaches I'm not going to bore you with here. Deep learning is essentially the enabler of the five senses of the IoT. Your phone's going to have, has a camera, it has a microphone, it has, of course, geolocation and navigation capabilities. It's environmentally aware, it's got an accelerometer and so forth embedded therein.
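Jim's split at the top of this passage, heavy processing in the cloud and narrowly scoped inference on the device, is the pattern most deep learning stacks follow today. Here is a minimal sketch, assuming TensorFlow and its Lite converter are available; the toy model, synthetic data and shapes are stand-ins, not anything from the talk.

```python
# Centralize the modeling, decentralize the inference: a TensorFlow sketch.
import numpy as np
import tensorflow as tf

# --- Cloud side: train a small classifier where data and compute live --
inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
x = np.random.rand(1000, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")   # toy labels
model.fit(x, y, epochs=2, verbose=0)          # the centralized grind

# Convert the trained network into a compact artifact for the device.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# --- Edge side: load the artifact and run inference only ---------------
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.random.rand(1, 4).astype("float32"))
interpreter.invoke()                          # decentralized execution
print("edge score:", float(interpreter.get_tensor(out["index"])[0][0]))
```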
The reason that your phone and all of these devices are getting scarily sentient is that they have the sensory modalities and the AI, the deep learning, that enables them to make environmentally correct decisions in a wider range of scenarios. So machine learning is the foundation of all of this, or rather, for deep learning, artificial neural networks are the foundation of that. But there are other approaches for machine learning I want to make you aware of, because support vector machines and these other established approaches for machine learning are not going away, but really what's driving the show now is deep learning, because it's scary effective. And so that's where most of the investment in AI is going these days, into deep learning. AI Edge platforms, tools and frameworks are just coming along like gangbusters. Much development of AI, of deep learning, happens in the context of your data lake. This is where you're storing your training data. This is the data that you use to build and test and validate your models. So we're seeing a deepening stack of Hadoop, and there's Kafka and Spark and so forth, that are driving the training (coughs) excuse me, of AI models that power all these Edge Analytics applications, so that lake will continue to broaden and deepen in terms of the scope and range of data sets and the range of AI modeling it supports. Data science is critically important in this scenario because the data scientists, the data science teams, the tools and techniques and flows of data science are the fundamental development paradigm, or discipline, or capability that's being leveraged to build and to train and to deploy and iterate all this AI that's being pushed to the Edge. So clearly data science is at the center; data scientists of an increasingly specialized nature are necessary to the realization of this value at the Edge. AI frameworks are coming along, you know, a mile a minute. TensorFlow, which is open source, most of these are open source, has achieved almost a defacto standard status, and I'm using the word defacto in air quotes. There's Theano and Keras and MXNet and CNTK and a variety of other ones. We're seeing a range of AI frameworks come to market, most open source. Most are supported by most of the major tool vendors as well. So at Wikibon we're definitely tracking that, and we plan to go deeper in our coverage of that space. And then next best action, which powers recommendation engines. I mean, next best action decision automation of the sort Neil's covered in a variety of contexts in his career is fundamentally important to Edge Analytics, to systems of agency, 'cause it's driving the process automation, decision automation, sort of the targeted recommendations that are made at the Edge to individual users, as well as process automation. That's absolutely necessary for self driving vehicles to do their jobs, and industrial IoT. So what we're seeing is more and more recommendation engine or recommender capabilities powered by ML and DL are going to the Edge, are already at the Edge, for a variety of applications. Edge AI capabilities, like I said, there's sensing. And sensing at the Edge is becoming ever more rich; mixed reality Edge modalities of all sorts are coming for augmented reality and so forth. We're just seeing a growth in the range of sensory modalities that are enabled, or filtered and analyzed, through AI that are being pushed to the Edge, into the chip sets.
Actuation, that's where robotics comes in. Robotics is coming into all aspects of our lives. And you know, it's brainless without AI, without deep learning and these capabilities. Inference, autonomous Edge decisioning. Like I said, there's a growing range of inferences that are being done at the Edge. And that's where it has to happen, 'cause that's the point of decision. Learning, training: much training, most training, will continue to be done in the cloud because it's very data intensive. It's a grind to train and optimize an AI algorithm to do its job. It's not something that you necessarily want to do or can do at the Edge on Edge devices, so the models that are built and trained in the cloud are pushed down through a dev ops process to the Edge, and that's the way it will work pretty much in most AI environments, Edge Analytics environments. You centralize the modeling, you decentralize the execution of the inference models. The training engines will be in the cloud. Edge AI applications. I'll just run you through sort of a core list of the ones that have come into, already come into, the mainstream at the Edge. Multifactor authentication: clearly the Apple announcement of face recognition is just a harbinger of the fact that that's coming to every device. Computer vision, speech recognition, NLP, digital assistants and chat bots powered by natural language processing and understanding, it's all AI powered. And it's becoming very mainstream. Emotion detection, face recognition, you know I could go on and on, but these are like the core things that everybody has access to, or will by 2020, in their core devices, mass market devices. Developers, designers and hardware engineers are coming together to pool their expertise to build and train not just the AI, but also the entire package of hardware and UX and the orchestration of real world business scenarios, or life scenarios, that all this embedded intelligence enables. And much of what they build in terms of AI will be containerized as microservices through Docker and orchestrated through Kubernetes as full cloud services in an increasingly distributed fabric. That's coming along very rapidly. We can see a fair amount of that already on display at Strata in terms of what the vendors are doing or announcing or who they're working with. The hardware itself, at the Edge: some data will be persistent, needs to be persistent, to drive inference. That's, you know, to drive a variety of different application scenarios that need some degree of historical data related to what the device in question happens to be sensing, or has sensed in the immediate past, or whatever. The hardware itself is geared towards both sensing and, increasingly, persistence and Edge driven actuation of real world results. The whole notion of drones and robotics being embedded into everything that we do, that's where that comes in. That has to be powered by low cost, low power commodity chip sets of various sorts. What we see right now in terms of chip sets is GPUs; Nvidia has gone real far, and GPUs have come along very fast in terms of powering inference engines, you know, like the Tesla cars and so forth. GPUs are in many ways the core hardware substrate for inference engines in DL so far. But to become a mass market phenomenon, it's got to get cheaper and lower powered and more commoditized, and so we see a fair number of CPUs being used as the hardware for Edge Analytics applications.
Some vendors are fairly big on FPGAs; I believe Microsoft has gone fairly far with FPGAs inside its DL strategy. ASICs, I mean, there's neuro synaptic chips, like IBM's got one. There's at least a few dozen vendors of neuro synaptic chips on the market, so at Wikibon we're going to track that market as it develops. And what we're seeing is a fair number of scenarios where it's a mixed environment, where you use one chip set architecture on the inference side of the Edge, and other chip set architectures that are driving the DL as processed in the cloud, playing together within a common architecture. And we see a fair number of DL environments where the actual training is done in the cloud on Spark using CPUs and parallelized in memory, but pushing TensorFlow models that might be trained through Spark down to the Edge, where the inferences are done in FPGAs and GPUs. Those kinds of mixed hardware scenarios are very, very likely to be standard going forward in lots of areas. So analytics at the Edge powering continuous results is what it's all about. The whole point is really not moving the data, it's putting the inference at the Edge and working from the data that's already captured and persistent there for the duration of whatever action or decision or result needs to be powered from the Edge. Like Neil said, cost takeout alone is not worth doing. Cost takeout alone is not the rationale for putting AI at the Edge. It's getting new stuff done, new kinds of things done, in an automated, consistent, intelligent, contextualized way to make our lives better and more productive. Security and governance are becoming more important. Governance of the models, governance of the data, governance in a dev ops context in terms of version controls over all those DL models that are built, that are trained, that are containerized and deployed. Continuous iteration and improvement of those to help them learn to make our lives better and easier. With that said, I'm going to hand it over now. It's five minutes after the hour. We're going to get going with the Influencer Panel, so what we'd like to do is I'll call Peter, and Peter's going to call our influencers. >> All right, am I live yet? Can you hear me? All right so, we've got, let me jump back in control here. We've got, again, the objective here is to have the community take on some things. And so what we want to do is I want to invite five other people up, Neil why don't you come on up as well. Start with Neil. You can sit here. On the far right hand side, Judith, Judith Hurwitz. >> Neil: I'm glad I'm on the left side. >> From the Hurwitz Group. >> From the Hurwitz Group. Jennifer Shin, who's affiliated with UC Berkeley. Jennifer are you here? >> She's here, Jennifer where are you? >> She was here a second ago. >> Neil: I saw her walk out, she may have, >> Peter: All right, she'll be back in a second. >> Here's Jennifer! >> Here's Jennifer! >> Neil: With 8 Path Solutions, right? >> Yep. >> Yeah, 8 Path Solutions. >> Just get my mic. >> Take your time Jen. >> Peter: All right, Stephanie McReynolds. Far left. And finally Joe Caserta, Joe come on up. >> Stephanie's with Alation. >> And to the left. So what I want to do is I want to start by having everybody just go around and introduce yourself quickly. Judith, why don't we start there. >> I'm Judith Hurwitz, I'm president of Hurwitz and Associates. We're an analyst research and thought leadership firm. I'm the co-author of eight books. Most recent is Cognitive Computing and Big Data Analytics.
I've been in the market for a couple of years now. >> Jennifer. >> Hi, my name's Jennifer Shin. I'm the founder and Chief Data Scientist of 8 Path Solutions LLC. We do data science, analytics and technology. We're actually about to do a big launch next month, with Box actually. >> We're apparent, are we having a, sorry Jennifer, are we having a problem with Jennifer's microphone? >> Man: Just turn it back on? >> Oh you have to turn it back on. >> It was on, oh sorry, can you hear me now? >> Yes! We can hear you now. >> Okay, I don't know how that turned back off, but okay. >> So you got to redo all that Jen. >> Okay, so my name's Jennifer Shin, I'm founder of 8 Path Solutions LLC, it's a data science, analytics and technology company. I founded it about six years ago. So we've been developing some really cool technology that we're going to be launching with Box next month. It's really exciting. And I have, I've been developing a lot of patents and some technology, as well as teaching at UC Berkeley as a lecturer in data science. >> You know Jim, you know Neil, Joe, you ready to go? >> Joe: Just broke my microphone. >> Joe's microphone is broken. >> Joe: Now it should be all right. >> Jim: Speak into Neil's. >> Joe: Hello, hello? >> I just feel not worthy in the presence of Joe Caserta. (several laughing) >> That's right, master of mics. If you can hear me, Joe Caserta, so yeah, I've been doing data technology solutions since 1986, almost as old as Neil here, but I've been doing specifically BI, data warehousing, business intelligence type of work since 1996. And I've been wholly dedicated to Big Data solutions and modern data engineering since 2009. Where should I be looking? >> Yeah I don't know, where is the camera? >> Yeah, and that's basically it. So my company was formed in 2001, it's called Caserta Concepts. We recently rebranded to only Caserta, 'cause what we do is way more than just concepts. So we conceptualize the stuff, we envision what the future brings, and we actually build it. And we help clients large and small who just want to be leaders in innovation using data, specifically to advance their business. >> Peter: And finally Stephanie McReynolds. >> I'm Stephanie McReynolds, I head product marketing as well as corporate marketing for a company called Alation. And we are a data catalog, so we help bring together not only a technical understanding of your data, but we curate that data with human knowledge, and use automated intelligence internally within the system to make recommendations about what data to use for decision making. And some of our customers, like the City of San Diego, a large automotive manufacturer working on self driving cars, and General Electric, use Alation to help power their solutions for IoT at the Edge. >> All right, so let's jump right into it. And again, if you have a question, raise your hand, and we'll do our best to get it to the floor. But what I want to do is I want to get seven questions in front of this group and have you guys discuss, slog, disagree, agree. Let's start here. What is the relationship between Big Data, AI and IoT? Now Wikibon's put forward its observation that data's being generated at the Edge, that action is being taken at the Edge, and then, increasingly, the software and other infrastructure architectures need to accommodate the realities of how data is going to work in these very complex systems. That's our perspective. Anybody, Judith, you want to start?
>> Yeah, so I think that if you look at AI, machine learning, all these different areas, you have to be able to have the data to learn from. Now when it comes to IoT, I think one of the issues we have to be careful about is not all data will be at the Edge. Not all data needs to be analyzed at the Edge. For example, if the light is green and that's good and it's supposed to be green, do you really have to constantly analyze the fact that the light is green? You actually only really want to be able to analyze and take action when there's an anomaly. Well, if it goes purple, that's actually a sign that something might explode, so that's where you want to make sure that you have the analytics at the Edge. Not for everything, but for the things where there is an anomaly and a change. >> Joe, how about from your perspective? >> For me I think the evolution of data is really becoming, eventually oxygen is just, I mean data's going to be the oxygen we breathe. It used to be very very reactive and there used to be like a latency. You do something, there's a behavior, there's an event, there's a transaction, and then you go record it and then you collect it, and then you can analyze it. And it was very very waterfallish, right? And then eventually we figured out how to put it back into the system. Or at least human beings interpret it to try to make the system better, and that has really been completely turned on its head. We don't do that anymore. Right now it's very, very synchronous, where as we're actually making these transactions, the machines, we don't really need, I mean human beings are involved a bit, but less and less and less. And it's just a reality, it may not be politically correct to say, but it's a reality that my phone in my pocket is following my behavior, and it knows, without telling a human being, what I'm doing. And it can actually help me do things, like get to where I want to go faster, depending on my preference, if I want to save money or save time or visit things along the way. And I think that's all integration of big data, streaming data, artificial intelligence, and I think the next thing that we're going to start seeing is the culmination of all of that. I actually, hopefully it'll be published soon, I just wrote an article for Forbes with the term ARBI, and ARBI is the integration of Augmented Reality and Business Intelligence. Where I think essentially we're going to see, you know, hold your phone up to Jim's face and it's going to recognize-- >> Peter: It's going to break. >> And it's going to say exactly, you know, what are the key metrics that we want to know about Jim. If he works on my sales force, what's his attainment of goal, what is-- >> Jim: Can it read my mind? >> Potentially, based on behavior patterns. >> Now I'm scared. >> I don't think Jim's buying it. >> It will, without a doubt, be able to predict what you've done in the past, and what you may, with some certain level of confidence, do again in the future, right? And is that mind reading? It's pretty close, right? >> Well, sometimes, I mean, mind reading is in the eye of the individual who wants to know. And if the machine appears to approximate what's going on in the person's head, sometimes you can't tell. So I guess, I guess we could call that the Turing machine test of the paranormal. >> Well, face recognition, micro gesture recognition, I mean facial gestures, people can do it.
Maybe not better than a coin toss, but if it can be seen visually and captured and analyzed, conceivably some degree of mind reading can be built in. I can see when somebody's angry looking at me so, that's a possibility. That's kind of a scary possibility in a surveillance society, potentially. >> Neil: Right, absolutely. >> Peter: Stephanie, what do you think? >> Well, I hear a world of it's the bots versus the humans being painted here and I think that, you know at Alation we have a very strong perspective on this and that is that the greatest impact, or the greatest results is going to be when humans figure out how to collaborate with the machines. And so yes, you want to get to the location more quickly, but the machine as in the bot isn't able to tell you exactly what to do and you're just going to blindly follow it. You need to train that machine, you need to have a partnership with that machine. So, a lot of the power, and I think this goes back to Judith's story is then what is the human decision making that can be augmented with data from the machine, but then the humans are actually training the training side and driving machines in the right direction. I think that's when we get true power out of some of these solutions so it's not just all about the technology. It's not all about the data or the AI, or the IoT, it's about how that empowers human systems to become smarter and more effective and more efficient. And I think we're playing that out in our technology in a certain way and I think organizations that are thinking along those lines with IoT are seeing more benefits immediately from those projects. >> So I think we have a general agreement of what kind of some of the things you talked about, IoT, crucial capturing information, and then having action being taken, AI being crucial to defining and refining the nature of the actions that are being taken, Big Data ultimately powering how a lot of that changes. Let's go to the next one. >> So actually I have something to add to that. So I think it makes sense, right, with IoT, why we have Big Data associated with it. If you think about what data is collected by IoT. We're talking about serial information, right? It's over time, it's going to grow exponentially just by definition, right, so every minute you collect a piece of information that means over time, it's going to keep growing, growing, growing as it accumulates. So that's one of the reasons why the IoT is so strongly associated with Big Data. And also why you need AI to be able to differentiate between one minute versus the next minute, right? Trying to find a better way rather than looking at all that information and manually picking out patterns. To have some automated process for being able to filter through that much data that's being collected. >> I want to point out though based on what you just said, Jennifer, I want to bring Neil in at this point, that this question of IoT now generating unprecedented levels of data does introduce this idea of the primary source. Historically what we've done within technology, or within IT certainly is we've taken stylized data. There is no such thing as a real world accounting thing. It is a human contrivance. And we stylize data and therefore it's relatively easy to be very precise on it. But when we start, as you noted, when we start measuring things with a tolerance down to thousandths of a millimeter, whatever that is, metric system, now we're still sometimes dealing with errors that we have to attend to.
So, the reality is we're not just dealing with stylized data, we're dealing with real data, and it's more, more frequent, but it also has special cases that we have to attend to as in terms of how we use it. What do you think, Neil? >> Well, I mean, I agree with that, I think I already said that, right. >> Yes you did, okay let's move on to the next one. >> Well it's a doppelganger, the digital twin doppelganger that's automatically created by the very fact that you're living and interacting and so forth and so on. It's going to accumulate regardless. Now that doppelganger may not be your agent, or might not be the foundation for your agent unless there's some other piece of logic like an interest graph that you build, a human being saying this is my broad set of interests, and so all of my agents out there in the IoT, you all need to be aware that when you make a decision on my behalf as my agent, this is what Jim would do. You know I mean there needs to be that kind of logic somewhere in this fabric to enable true agency. >> All right, so I'm going to start with you. Oh go ahead. >> I have a real short answer to this though. I think that Big Data provides the data and compute platform to make AI possible. For those of us who dipped our toes in the water in the 80s, we got clobbered because we didn't have the, we didn't have the facilities, we didn't have the resources to really do AI, we just kind of played around with it. And I think that the other thing about it is if you combine Big Data and AI and IoT, what you're going to see is people, a lot of the applications we develop now are very inward looking, we look at our organization, we look at our customers. We try to figure out how to sell more shoes to fashionable ladies, right? But with this technology, I think people can really expand what they're thinking about and what they model and come up with applications that are much more external. >> Actually what I would add to that is also it actually introduces being able to use engineering, right? Having engineers interested in the data. Because it's actually technical data that's collected, not just say preferences or information about people, but actual measurements that are being collected with IoT. So it's really interesting in the engineering space because it opens up a whole new world for the engineers to actually look at data and to actually combine both that hardware side as well as the data that's being collected from it. >> Well, Neil, you and I have talked about something, 'cause it's not just engineers. We have in the healthcare industry for example, which you know a fair amount about, there's this notion of empirical based management. And the idea that increasingly we have to be driven by data as a way of improving the way that managers do things, the way the managers collect or collaborate and ultimately collectively how they take action. So it's not just engineers, it's supposed to also inform business, what's actually happening in the healthcare world when we start thinking about some of this empirical based management, is it working? What are some of the barriers? >> It's not a function of technology. What happens in medicine and healthcare research is, I guess you can say it borders on fraud. (people chuckling) No, I'm not kidding. I know the New England Journal of Medicine a couple of years ago released a study and said that at least half their articles that they published turned out to be written, ghost-written, by pharmaceutical companies.
(man chuckling) Right, so I think the problem is that when you do a clinical study, the one that really killed me about 10 years ago was the women's health initiative. They spent $700 million gathering this data over 20 years. And when they released it they looked at all the wrong things deliberately, right? So I think that's a systemic-- >> I think you're bringing up a really important point that we haven't brought up yet, and that is is can you use Big Data and machine learning to begin to take the biases out? So if you let the, if you divorce your preconceived notions and your biases from the data and let the data lead you to the logic, you start to, I think get better over time, but it's going to take a while to get there because we do tend to gravitate towards our biases. >> I will share an anecdote. So I had some arm pain, and I had numbness in my thumb and pointer finger and I went to, excruciating pain, went to the hospital. So the doctor examined me, and he said you probably have a pinched nerve, he said, but I'm not exactly sure which nerve it would be, I'll be right back. And I kid you not, he went to a computer and he Googled it. (Neil laughs) And he came back because this little bit of information was something that could easily be looked up, right? Every nerve in your spine is connected to your different fingers so the pointer and the thumb just happens to be your C6, so he came back and said, it's your C6. (Neil mumbles) >> You know an interesting, I mean that's a good example. One of the issues with healthcare data is that the data set is not always shared across the entire research community, so by making Big Data accessible to everyone, you actually start a more rational conversation or debate on well what are the true insights-- >> If that conversation includes what Judith talked about, the actual model that you use to set priorities and make decisions about what's actually important. So it's not just about improving, this is the test. It's not just about improving your understanding of the wrong thing, it's also testing whether it's the right or wrong thing as well. >> That's right, to be able to test that you need to have humans in dialog with one another bringing different biases to the table to work through okay is there truth in this data? >> It's context and it's correlation and you can have a great correlation that's garbage. You know if you don't have the right context. >> Peter: So I want to, hold on Jim, I want to, >> It's exploratory. >> Hold on Jim, I want to take it to the next question 'cause I want to build off of what you talked about Stephanie and that is that this says something about what is the Edge. And our perspective is that the Edge is not just devices. That when we talk about the Edge, we're talking about human beings and the role that human beings are going to play both as sensors or carrying things with them, but also as actuators, actually taking action which is not a simple thing. So what do you guys think? What does the Edge mean to you? Joe, why don't you start? >> Well, I think it could be a combination of the two. And specifically when we talk about healthcare. So I believe in 2017 when we eat we don't know why we're eating, like I think we should absolutely by now be able to know exactly what is my protein level, what is my calcium level, what is my potassium level? And then find the foods to meet that. 
What have I depleted versus what I should have, and eat very very purposely and not by taste-- >> And it's amazing that red wine is always the answer. >> It is. (people laughing) And tequila, that helps too. >> Jim: You're a precision foodie is what you are. (several chuckle) >> There's no reason why we should not be able to know that right now, right? And when it comes to healthcare is, the biggest problem or challenge with healthcare is no matter how great of a technology you have, you can't, you can't, you can't manage what you can't measure. And you're really not allowed to use a lot of this data so you can't measure it, right? You can't do things very very scientifically right, in the healthcare world and I think regulation in the healthcare world is really burdening advancement in science. >> Peter: Any thoughts Jennifer? >> Yes, I teach statistics for data scientists, right, so you know we talk about a lot of these concepts. I think what makes these questions so difficult is you have to find a balance, right, a middle ground. For instance, in the case of are you being too biased through data, well you could say like we want to look at data only objectively, but then there are certain relationships that your data models might show that aren't actually a causal relationship. For instance, if there's an alien that came from space and saw earth, saw the people, everyone's carrying umbrellas right, and then it started to rain. That alien might think well, it's because they're carrying umbrellas that it's raining. Now we know from real world that that's actually not the way these things work. So if you look only at the data, that's the potential risk. That you'll start making associations or saying something's causal when it's actually not, right? So that's one of the, one of the I think big challenges. I think when it comes to looking also at things like healthcare data, right? Do you collect data about anything and everything? Does it mean that A, we need to collect all that data for the question we're looking at? Or that it's actually the best, more optimal way to be able to get to the answer? Meaning sometimes you can take some shortcuts in terms of what data you collect and still get the right answer and not have maybe that level of specificity that's going to cost you millions extra to be able to get. >> So Jennifer as a data scientist, I want to build upon what you just said. And that is, are we going to start to see methods and models emerge for how we actually solve some of these problems? So for example, we know how to build a system for stylized process like accounting or some elements of accounting. We have methods and models that lead to technology and actions and whatnot all the way down to that that system can be generated. We don't have the same notion to the same degree when we start talking about AI and some of these Big Datas. We have algorithms, we have technology. But are we going to start seeing, as a data scientist, repeatability and learning and how to think the problems through that's going to lead us to a more likely best or at least good result? >> So I think that's a bit of a tough question, right? Because part of it is, it's going to depend on how many of these researchers actually get exposed to real world scenarios, right? Research looks into all these papers, and you come up with all these models, but if it's never tested in a real world scenario, well, I mean we really can't validate that it works, right? 
So I think it is dependent on how much of this integration there's going to be between the research community and industry and how much investment there is. Funding is going to matter in this case. If there's no funding in the research side, then you'll see a lot of industry folk who feel very confident about their models, but again on the other side of course, if researchers don't validate those models then you really can't say for sure that it's actually more accurate, or it's more efficient. >> It's the issue of real world testing and experimentation, A/B testing, that's standard practice in many operationalized ML and AI implementations in the business world, but real world experimentation in the Edge analytics, what you're actually transducing is touching people's actual lives. Problem there is, like in healthcare and so forth, when you're experimenting with people's lives, somebody's going to die. I mean, in other words, that's a critical, in terms of causal analysis, you've got to tread lightly on operationalizing that kind of testing in the IoT when people's lives and health are at stake. >> We still give 'em placebos. So we still test 'em. All right so let's go to the next question. What are the hottest innovations in AI? Stephanie, I want to start with you as a company, someone at a company that's got kind of an interesting little thing happening. We start thinking about how do we better catalog data and represent it to a large number of people. What are some of the hottest innovations in AI as you see it? >> I think it's a little counterintuitive about what the hottest innovations are in AI, because we're at a spot in the industry where the most successful companies that are working with AI are actually incorporating them into solutions. So the best AI solutions are actually the products that you don't know there's AI operating underneath. But they're having a significant impact on business decision making or bringing a different type of application to the market and you know, I think there's a lot of investment that's going into AI tooling and tool sets for data scientists or researchers, but the more innovative companies are thinking through how do we really take AI and make it have an impact on business decision making and that means kind of hiding the AI from the business user. Because if you think a bot is making a decision instead of you, you're not going to partner with that bot very easily or very readily. I worked at, way at the start of my career, I worked in CRM when recommendation engines were all the rage online and also in call centers. And the hardest thing was to get a call center agent to actually read the script that the algorithm was presenting to them, that algorithm was 99% correct most of the time, but there was this human resistance to letting a computer tell you what to tell that customer on the other side even if it was more successful in the end. And so I think that the innovation in AI that's really going to push us forward is when humans feel like they can partner with these bots and they don't think of it as a bot, but they think about it as assisting their work and getting to a better result--
And I do think that eventually, for the average user, not for techies like me, but for the average user, I think keyboards are going to be a thing of the past. I think we're going to communicate with computers through voice and I think this is the very very beginning of that and it's an incredible innovation. >> Neil? >> Well, I think we all have myopia here. We're all thinking about commercial applications. Big, big things are happening with AI in the intelligence community, in military, the defense industry, in all sorts of things. Meteorology. And that's where, well, hopefully not on an everyday basis with military, you really see the effect of this. But I was involved in a project a couple of years ago where we were developing AI software to detect artillery pieces in terrain from satellite imagery. I don't have to tell you what country that was. I think you can probably figure that one out, right? But there are legions of people in many many companies that are involved in that industry. So if you're talking about the dollars spent on AI, I think the stuff that we do in our industries is probably fairly small. >> Well it reminds me of an application I actually thought was interesting about AI related to that, AI being applied to removing mines from war zones. >> Why not? >> Which is not a bad thing for a whole lot of people. Judith, what do you look at? >> So I'm looking at things like being able to have pre-trained data sets in specific solution areas. I think that that's something that's coming. Also the ability to, to really be able to have a machine assist you in selecting the right algorithms based on what your data looks like and the problems you're trying to solve. Some of the things that data scientists still spend a lot of their time on, but can be augmented with some, basically we have to move to levels of abstraction before this becomes truly ubiquitous across many different areas. >> Peter: Jennifer? >> So I'm going to say computer vision. >> Computer vision? >> Computer vision. So computer vision ranges from image recognition to be able to say what content is in the image. Is it a dog, is it a cat, is it a blueberry muffin? Like a sort of popular post out there where it's like a blueberry muffin versus like I think a chihuahua and then it compares the two. And can the AI really actually detect the difference, right? So I think that's really where a lot of people who are in this space of being in both the AI space as well as data science are looking to for the new innovations. I think, for instance, Cloud Vision, I think that's what Google still calls it. The Vision API they've released in beta allows you to actually use an API to send your image and then have it be recognized, right, by their API. There's another startup in New York called Clarifai that also does a similar thing as well as you know Amazon has their Rekognition platform as well. So I think in a, from images being able to detect what's in the content as well as from videos, being able to say things like how many people are entering a frame? How many people enter the store? Not having to actually go look at it and count it, but having a computer actually tally that information for you, right? >> There's actually an extra piece to that. So if I have a picture of a stop sign, and I'm an automated car, and is it a picture on the back of a bus of a stop sign, or is it a real stop sign? So that's going to be one of the complications. >> Doesn't matter to a New York City cab driver. How 'about you Jim?
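The label-detection call Jennifer describes looks roughly like this with Google's Python client — a sketch assuming the google-cloud-vision package and configured credentials; the filename is hypothetical:

```python
# Sketch of image labeling with Google Cloud Vision (assumes
# `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS set).
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("storefront.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Dog 0.97" or "Blueberry muffin 0.84"
    print(label.description, label.score)
```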
>> Probably not. (laughs) >> Hottest thing in AI is Generative Adversarial Networks, GANs, what's hot about that, well, I'll be very quick, most AI, most deep learning, machine learning is analytical, it's distilling or inferring insights from the data. Generative takes that same algorithmic basis but to build stuff. In other words, to create realistic looking photographs, to compose music, to build CAD/CAM models essentially that can be constructed on 3D printers. So GANs, a huge research focus all around the world, are increasingly used for natural language generation. In other words it's institutionalizing or having a foundation for nailing the Turing test every single time, building something with machines that looks like it was constructed by a human and doing it over and over again to fool humans. I mean you can imagine the fraud potential. But you can also imagine just the sheer, like it's going to shape the world, GANs. >> All right so I'm going to say one thing, and then we're going to ask if anybody in the audience has an idea. So the thing that I find interesting is traditional programs, or when you tell a machine to do something you don't need incentives. When you tell a human being something, you have to provide incentives. Like how do you get someone to actually read the text. And this whole question of elements within AI that incorporate incentives as a way of trying to guide human behavior is absolutely fascinating to me. Whether it's gamification, or even some things we're thinking about with blockchain and bitcoin and related types of stuff. To my mind that's going to have an enormous impact, some good, some bad. Anybody in the audience? I don't want to lose everybody here. What do you think sir? And I'll try to do my best to repeat it. Oh we have a mic. >> So my question's about, Okay, so the question's pretty much about what Stephanie's talking about which is human-in-the-loop training, right? I come from a computer vision background. That's the problem, we need millions of images trained, we need humans to do that. And that's like you know, the workforce is essentially people that aren't necessarily part of the AI community, they're people that are just able to use that data and analyze the data and label that data. That's something that I think is a big problem everyone in the computer vision industry at least faces. I was wondering-- >> So again, but the problem is that is the difficulty of methodologically bringing together people who understand it and people who, people who have domain expertise, people who have algorithm expertise, and working together? >> I think the expertise issue comes in healthcare, right? In healthcare you need experts to be labeling your images. With contextual information where essentially augmented reality applications coming in, you have ARKit and everything coming out, but there is a lack of context based intelligence. And all of that comes through training images, and all of that requires people to do it. And that's kind of like the foundational basis of AI coming forward is not necessarily an algorithm, right? It's how well is data labeled? Who's doing the labeling and how do we ensure that it happens? >> Great question. So for the panel. So if you think about it, a consultant talks about being on the bench. How much time are they going to have to spend on trying to develop additional business? How much time should we set aside for executives to help train some of the assistants?
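A minimal sketch of the generative adversarial setup Jim describes above, assuming PyTorch and toy one-dimensional data rather than photographs or text: a generator learns to produce samples a discriminator can no longer tell apart from the real distribution.

```python
# Toy GAN sketch (assumes PyTorch): G maps noise to 1-D samples,
# D scores how "real" a sample looks; the two are trained adversarially.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4.0 + 1.25 * torch.randn(64, 1)   # samples from the "real" distribution
    fake = G(torch.randn(64, 8))             # generated samples

    # Discriminator step: push D(real) toward 1, D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: make D score fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean of 4.
print(G(torch.randn(256, 8)).mean().item())
```

The adversarial loop is the same whether the samples are numbers, photographs, or sentences; only the network architectures change.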
>> I think the key, to think of the problem a different way, is that you could have people manually label data and that's one way to solve the problem. But you can also look at what is the natural workflow of that executive, or that individual? And is there a way to gather that context automatically using AI, right? And if you can do that, it's similar to what we do in our product, we observe how someone is analyzing the data and from those observations we can actually create the metadata that then trains the system in a particular direction. But you have to think about solving the problem differently of finding the workflow that then you can feed into, to make this labeling easy without the human really realizing that they're labeling the data. >> Peter: Anybody else? >> I'll just add to what Stephanie said, so in the IoT applications, all those sensory modalities, the computer vision, the speech recognition, all that, that's all potential training data. So it cross checks against all the other models that are processing all the other data coming from that device. So that the natural language process of understanding can be reality checked against the images that the person happens to be commenting upon, or the scene in which they're embedded, so yeah, the data's embedded-- >> I don't think we're, we're not at the stage yet where this is easy. It's going to take time before we do start doing the pre-training of some of these details so that it goes faster, but right now, there're not that many shortcuts. >> Go ahead Joe. >> Sorry so a couple things. So one is like, I was just caught up on your point about incentivizing programs to be more efficient like humans. You know Ethereum, that has this notion, which is blockchain, has this theory, this concept of gas. Where like as the process becomes more efficient it costs less to actually run, right? It costs less ether, right? So it actually is kind of, the machine is actually incentivized and you don't really know what it's going to cost until the machine processes it, right? So there is like some notion of that there. But as far as like vision, like training the machine for computer vision, I think it's through adoption and crowdsourcing, so as people start using it more they're going to be adding more pictures. Very very organically. And then the machines will be trained and right now it's a very small handful doing it, and it's very proactive by the Googles and the Facebooks and all of that. But as we start using it, as they start looking at my images and Jim's and Jen's images, it's going to keep getting smarter and smarter through adoption and through a very organic process. >> So Neil, let me ask you a question. Who owns the value that's generated as a consequence of all these people ultimately contributing their insight and intelligence into these systems? >> Well, to a certain extent the people who are contributing the insight own nothing because the systems collect their actions and the things they do and then that data doesn't belong to them, it belongs to whoever collected it or whoever's going to do something with it. But the other thing, getting back to the medical stuff. It's not enough to say that the systems, people will do the right thing, because a lot of them are not motivated to do the right thing. The whole grant thing, the whole oh my god I'm not going to go against the senior professor.
A lot of these, I knew a guy who was a doctor at the University of Pittsburgh and they were doing a clinical study on the tubes that they put in little kids' ears who have ear infections, right? And-- >> Google it! Who helps out? >> Anyway, I forget the exact thing, but he came out and said that the principal investigator lied when he made the presentation, that it should be this, I forget which way it went. He was fired from his position at Pittsburgh and he has never worked as a doctor again. 'Cause he went against the senior line of authority. He was-- >> Another question back here? >> Man: Yes, Mark Turner has a question. >> Not a question, just want to piggyback what you're saying about the transfixation of maybe in healthcare of black and white images and color images in the case of sonograms and ultrasound and mammograms, you see that happening using AI? You see that being, I mean it's already happening, do you see it moving forward in that kind of way? I mean, talk more about that, about you know, AI and black and white images being used and they can be transfixed, they can be made to color images so you can see things better, doctors can perform better operations. >> So I'm sorry, but could you summarize down? What's the question? Summarize it just, >> I had a lot of students, they're interested in the cross-pollination between AI and say the medical community as far as things like ultrasound and sonograms and mammograms and how you can literally take a black and white image and it can, using algorithms and stuff be made to color images that can help doctors better do the work that they've already been doing, just do it better. You touched on it for like 30 seconds. >> So how AI can be used to actually add information in a way that's not necessarily invasive but ultimately improves how someone might respond to it or use it, yes? Related? I've also got something to say about medical images in a second, any of you guys want to, go ahead Jennifer. >> Yeah, so for one thing, you know and it kind of goes back to what we were talking about before. When we look at for instance scans, like at some point I was looking at CT scans, right, for lung cancer nodules. In order for me, who I don't have a medical background, to identify where the nodule is, of course, a doctor actually had to go in and specify which slice of the scan had the nodule and where exactly it is, so it's on both the slice level as well as, within that 2D image, where it's located and the size of it. So the beauty of things like AI is that ultimately right now a radiologist has to look at every slice and actually identify this manually, right? The goal of course would be that one day we wouldn't have to have someone look at every slice, to like 300 slices usually, and be able to identify it in a much more automated way. And I think the reality is we're not going to get something where it's going to be 100%. And with anything we do in the real world it's always like a 95% chance of it being accurate. So I think it's finding that in between of where, what's the threshold that we want to use to be able to say that this is, definitively say a lung cancer nodule or not. I think the other thing to think about is in terms of how they're using other information, what they might use is, for instance, to say, based on other characteristics of the person's health, they might use that as sort of a grading, right? So you know, how dark or how light something is, identify maybe in that region, the prevalence of that specific variable.
So that's usually how they integrate that information into something that's already existing in the computer vision sense. I think that's, the difficulty with this of course, is being able to identify which variables were introduced into data that does exist. >> So I'll make two quick observations on this then I'll go to the next question. One is radiologists have historically been some of the highest paid physicians within the medical community partly because they don't have to be particularly clinical. They don't have to spend a lot of time with patients. They tend to spend time with doctors which means they can do a lot of work in a little bit of time, and charge a fair amount of money. As we start to introduce some of these technologies that allow us to from a machine standpoint actually make diagnoses based on those images, I find it fascinating that you now see television ads promoting the role that the radiologist plays in clinical medicine. It's kind of an interesting response. >> It's also disruptive as I'm seeing more and more studies showing that deep learning models processing images, ultrasounds and so forth are getting as accurate as many of the best radiologists. >> That's the point! >> Detecting cancer >> Now radiologists are saying oh look, we do this great thing in terms of interacting with the patients, never have because they're being dis-intermediated. The second thing that I'll note is one of my favorite examples of that if I got it right, is looking at the images, the deep space images that come out of Hubble. Where they're taking data from thousands, maybe even millions of images and combining it together in interesting ways you can actually see depth. You can actually move through to a very very small scale a system that's 150, well maybe that, can't be that much, maybe six billion light years away. Fascinating stuff. All right so let me go to the last question here, and then I'm going to close it down, then we can have something to drink. What are the hottest, oh I'm sorry, question? >> Yes, hi, my name's George, I'm with Blue Talon. You asked earlier there the question what's the hottest thing in the Edge and AI, I would say that it's security. It seems to me that before you can empower agency you need to be able to authorize what they can act on, how they can act on, who they can act on. So it seems if you're going to move from very distributed data at the Edge and analytics at the Edge, there has to be security similarly done at the Edge. And I saw (speaking faintly) slides that called out security as a key prerequisite and maybe Judith can comment, but I'm curious how security's going to evolve to meet this analytics at the Edge. >> Well, let me do that and I'll ask Jen to comment. The notion of agency is crucially important, slightly different from security, just so we're clear. And the basic idea here is historically folks have thought about moving data or they thought about moving application function, now we are thinking about moving authority. So as you said. That's not necessarily, that's not really a security question, but this has been a problem that's been in, of concern in a number of different domains. How do we move authority with the resources? And that's really what informs the whole agency process. But with that said, Jim. >> Yeah actually I'll, yeah, thank you for bringing up security so identity is the foundation of security. Strong identity, multifactor, face recognition, biometrics and so forth. 
Clearly AI, machine learning, deep learning are powering a new era of biometrics and you know it's behavioral metrics and so forth that's organic to people's use of devices and so forth. You know getting to the point that Peter was raising is important, agency! Systems of agency. Your agent, you have to, you as a human being should be vouching in a secure, tamper proof way, your identity should be vouching for the identity of some agent, physical or virtual that does stuff on your behalf. How can that, how should that be managed within this increasingly distributed IoT fabric? Well a lot of that's been worked out. It all ran through webs of trust, public key infrastructure, formats and you know SAML for single sign-on and so forth. It's all about assertion, strong assertions and vouching. I mean there's the whole workflows of things. Back in the ancient days when I was actually a PKI analyst three analyst firms ago, I got deep into all the guts of all those federation agreements, something like that has to be IoT scalable to enable systems of agency to be truly fluid. So we can vouch for our agents wherever they happen to be. We're going to keep on having as human beings agents all over creation, we're not even going to be aware of everywhere that our agents are, but our identity-- >> It's not just-- >> Our identity has to follow. >> But it's not just identity, it's also authorization and context. >> Permissioning, of course. >> So I may be the right person to do something yesterday, but I'm not authorized to do it in another context in another application. >> Role based permissioning, yeah. Or persona based. >> That's right. >> I agree. >> And obviously it's going to be interesting to see the role that blockchain or its follow on to the technology is going to play here. Okay so let me throw one more question out. What are the hottest applications of AI at the Edge? We've talked about a number of them, does anybody want to add something that hasn't been talked about? Or do you want to get a beer? (people laughing) Stephanie, you raised your hand first. >> I was going to go, I bring something mundane to the table actually because I think one of the most exciting innovations with IoT and AI are actually simple things like City of San Diego is rolling out 3200 automated street lights that will actually help you find a parking space, reduce the amount of emissions into the atmosphere, so it has some environmental change, positive environmental change impact. I mean, it's street lights, it's not like a, it's not medical industry, it doesn't look like a life changing innovation, and yet if we automate streetlights and we manage our energy better, and maybe they can flicker on and off if there's a parking space there for you, that's a significant impact on everyone's life. >> And dramatically suppress the impact of backseat driving! >> (laughs) Exactly. >> Joe what were you saying? >> I was just going to say you know there's already the technology out there where you can put a camera on a drone with machine learning and artificial intelligence within it, and it can look at buildings and determine whether there's rusty pipes and cracks in cement and leaky roofs and all of those things. And that's all based on artificial intelligence. And I think if you can do that, to be able to look at an x-ray and determine if there's a tumor there is not out of the realm of possibility, right? >> Neil? >> I agree with both of them, that's what I meant about external kind of applications.
Instead of figuring out what to sell our customers. Which is most what we hear. I just, I think all of those things are imminently doable. And boy street lights that help you find a parking place, that's brilliant, right? >> Simple! >> It improves your life more than, I dunno. Something I use on the internet recently, but I think it's great! That's, I'd like to see a thousand things like that. >> Peter: Jim? >> Yeah, building on what Stephanie and Neil were saying, it's ambient intelligence built into everything to enable fine grain microclimate awareness of all of us as human beings moving through the world. And enable reading of every microclimate in buildings. In other words, you know you have sensors on your body that are always detecting the heat, the humidity, the level of pollution or whatever in every environment that you're in or that you might be likely to move into fairly soon and either A can help give you guidance in real time about where to avoid, or give that environment guidance about how to adjust itself to your, like the lighting or whatever it might be to your specific requirements. And you know when you have a room like this, full of other human beings, there has to be some negotiated settlement. Some will find it too hot, some will find it too cold or whatever but I think that is fundamental in terms of reshaping the sheer quality of experience of most of our lived habitats on the planet potentially. That's really the Edge analytics application that depends on everybody having, being fully equipped with a personal area network of sensors that's communicating into the cloud. >> Jennifer? >> So I think, what's really interesting about it is being able to utilize the technology we do have, it's a lot cheaper now to have a lot of these ways of measuring that we didn't have before. And whether or not engineers can then leverage what we have as ways to measure things and then of course then you need people like data scientists to build the right model. So you can collect all this data, if you don't build the right model that identifies these patterns then all that data's just collected and it's just made a repository. So without having the models that supports patterns that are actually in the data, you're not going to find a better way of being able to find insights in the data itself. So I think what will be really interesting is to see how existing technology is leveraged, to collect data and then how that's actually modeled as well as to be able to see how technology's going to now develop from where it is now, to being able to either collect things more sensitively or in the case of say for instance if you're dealing with like how people move, whether we can build things that we can then use to measure how we move, right? Like how we move every day and then being able to model that in a way that is actually going to give us better insights in things like healthcare and just maybe even just our behaviors. >> Peter: Judith? >> So, I think we also have to look at it from a peer to peer perspective. So I may be able to get some data from one thing at the Edge, but then all those Edge devices, sensors or whatever, they all have to interact with each other because we don't live, we may, in our business lives, act in silos, but in the real world when you look at things like sensors and devices it's how they react with each other on a peer to peer basis. >> All right, before I invite John up, I want to say, I'll say what my thing is, and it's not the hottest. 
It's the one I hate the most. I hate AI-generated music. (people laughing) Hate it. All right, I want to thank all the panelists, every single person, some great commentary, great observations. I want to thank you very much. I want to thank everybody that joined. John, in a second you'll kind of announce who's the big winner. But the one thing I want to do is, is I was listening, I learned a lot from everybody, but I want to call out the one comment that I think we all need to remember, and I'm going to give you the award, Stephanie. And that is, increasingly we have to remember that the best AI is probably AI that we don't even know is working on our behalf. The same flip side of that is all of us have to be very cognizant of the idea that AI is acting on our behalf and we may not know it. So, John why don't you come on up. Who won the, whatever it's called, the raffle? >> You won. >> Thank you! >> How 'about a round of applause for the great panel. (audience applauding) Okay, we have put the business cards in the basket, we're going to have that brought up. We're going to have two raffle gifts, some nice Bose headsets and speaker, Bluetooth speaker. Got to wait for that. I just want to say thank you for coming and for the folks watching, this is our fifth year doing our own event called Big Data NYC which is really an extension of the landscape beyond the Big Data world that's Cloud and AI and IoT and other great things happening, and great experts and influencers and analysts here. Thanks for sharing your opinion. Really appreciate you taking the time to come out and share your data and your knowledge, appreciate it. Thank you. Where's the? >> Sam's right in front of you. >> There's the thing, okay. Got to be present to win. We saw some people sneaking out the back door to go to a dinner. >> First prize first. >> Okay first prize is the Bose headset. >> Bluetooth and noise canceling. >> I won't look, Sam you got to hold it down, I can see the cards. >> All right. >> Stephanie you won! (Stephanie laughing) Okay, Sawny Cox, Sawny Allie Cox? (audience applauding) Yay look at that! He's here! The bar's open so help yourself, but we got one more. >> Congratulations. Picture right here. >> Hold that I saw you. Wake up a little bit. Okay, all right. Next one is, my kids love this. This is great, great for the beach, great for everything, portable speaker, great gift. >> What is it? >> Portable speaker. >> It is a portable speaker, it's pretty awesome. >> Oh you grabbed mine. >> Oh that's one of our guys. >> (laughing) But who was it? >> Can't be related! Ava, Ava, Ava. Okay Gene Penesko (audience applauding) Hey! He came in! All right look at that, the timing's great. >> Another one? (people laughing) >> Hey thanks everybody, enjoy the night, thank Peter Burris, head of research for SiliconANGLE, Wikibon, and the great guests and influencers and friends. And you guys for coming in the community. Thanks for watching and thanks for coming. Enjoy the party and some drinks and that's out, that's it for the influencer panel and analyst discussion. Thank you. (logo music)
Bill Mannel & Dr. Nicholas Nystrom | HPE Discover 2017
>> Announcer: Live, from Las Vegas, it's the Cube, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back everyone. We are here live in Las Vegas for day two of three days of exclusive coverage from the Cube here at HPE Discover 2017. Our next two guests are Bill Mannel, VP and General Manager of HPC and AI for HPE. Bill, great to see you. And Dr. Nick Nystrom, Senior Director of Research at the Pittsburgh Supercomputing Center. Welcome to The Cube, thanks for coming on, appreciate it. >> My pleasure. >> Thanks for having us. >> As we wrap up day two, first of all before we get started, love the AI, love the high performance computing. We're seeing great applications for compute. Everyone now sees that a lot of compute actually is good. That's awesome. What is the Pittsburgh Supercomputing Center? Give a quick update and describe what that is. >> Sure. The quick update is we're operating a system called Bridges. Bridges is operating for the National Science Foundation. It democratizes HPC. It brings people who have never used high performance computing before to be able to use HPC seamlessly, almost as a cloud. It unifies HPC, big data and artificial intelligence. >> So who are some of the users that are getting access that they didn't have before? Could you just kind of talk about some of the use cases of the organizations or people that you guys are opening this up to? >> Sure. I think one of the newest communities that's very significant is deep learning. So we have collaborations between the University of Pittsburgh life sciences and the medical center with Carnegie Mellon, the machine learning researchers. We're looking to apply AI and machine learning to problems in breast and lung cancer. >> Yeah, we're seeing the data. Talk about some of the innovations that HPE's bringing with you guys in the partnership, because we're seeing, people are seeing the results of using big data and deep learning and breakthroughs that weren't possible before. So not only do you have the democratization cool element happening, you have a tsunami of awesome open source code coming in from big places. You see Google donating a bunch of machine learning libraries. Everyone's donating code. It's like open bar and open source, as I say, and the young kids that are new are the innovators as well, so not just us systems guys, but a lot of young developers are coming in. What's the innovation? Why is this happening? What's the ah-ha moment? Is it just cloud, is it a combination of things, talk about it. >> It's a combination of all the big data coming in, and then new techniques that allow us to analyze and get value from it and from that standpoint. So the traditional HPC world, typically we built equations which then generated data. Now we're actually kind of doing the reverse, which is we take the data and then build equations to understand the data. So it's a different paradigm. And so there's more and more energy understanding those two different techniques of kind of getting to the same answers, but in a different way.
And I think the thing that's driving the growth is all this data and the fact that customers want to get value from it. So we're seeing a lot of growth in industries like financial services, like in manufacturing, where folks are moving to digitization, which means that in the past they might have done a lot of their work through experimentation. Now they're moving it to a digital format, and they're simulating everything. So that's driven a lot more HPC over time. As far as the SGI integration is concerned, we've integrated about halfway, so we're at about the halfway point. And now we've got the engineering teams together and we're driving a road map and a new set of products that are coming out. Our Gen 10-based products are on target, and they're going to be releasing here over the next few months. >> So Nick, from your standpoint, when you look at, there's been an ebb and flow in the supercomputer landscape for decades. All the way back to the 70s and the 80s. So from a customer perspective, what do you see now? Obviously China's much more prominent in the game. There's sort of an arms race, if you will, in computing power. From a customer's perspective, what are you seeing, what are you looking for in a supplier? >> Well, so I agree with you, there is this arms race for exaflops. Where we are really focused right now is enabling data-intensive applications, looking at big data as a service, HPC as a service, really making things available to users to be able to draw on the large data sets you mentioned, to be able to put the capability class computing, which will go to exascale, together with AI, and data and Linux under one platform, under one integrated fabric. That's what we did with HPE for Bridges. And looking to build on that in the future, to be able to do the exascale applications that you're referring to, but also to couple on data, and to be able to use AI with classic simulation to make those simulations better. >> So it's always good to have a true practitioner on The Cube. But when you talk about AI and machine learning and deep learning, John and I sometimes joke, is it same wine, new bottle, or is there really some fundamental shift going on that just sort of happened to emerge in the last six to nine months? >> I think there is a fundamental shift. And the shift is due to what Bill mentioned. It's the availability of data. So we have that. We have more and more communities who are building on that. You mentioned the open source frameworks. So yes, they're building on the TensorFlows, on the Caffes, and we have people who have not been programmers. They're using these frameworks though, and using that to drive insights from data they did not have access to. >> These are flipped upside down, I mean this is your point, I mean, Bill pointed it out, it's like the models are upside down. This is the new world. I mean, it's crazy, I don't believe it. >> So if that's the case, and I believe it, it feels like we're entering this new wave of innovation which for decades we talked about how we march to the cadence of Moore's Law. That's been the innovation. You think back, you know, your five megabyte disk drive, then it went to 10, then 20, 30, now it's four terabytes. Okay, wow. Compared to what we're about to see, I mean it pales in comparison. So help us envision what the world is going to look like in 10 or 20 years. And I know it's hard to do that, but can you help us get our minds around the potential that this industry is going to tap?
>> So I think, first of all, I think the potential of AI is very hard to predict. We see that. What we demonstrated in Pittsburgh with the victory of Libratus, the poker-playing bot, over the world's best humans, is the ability of an AI to beat humans in a situation where they have incomplete information, where you have an antagonist, an adversary who is bluffing, who is reacting to you, and who you have to deal with. And I think that's a real breakthrough. We're going to see that move into other aspects of life. It will be buried in apps. It will be transparent to a lot of us, but those sorts of AIs are going to influence a lot. That's going to take a lot of IT on the back end for the infrastructure, because these will continue to be compute-hungry. >> So I always use the example of Kasparov, and he got beaten by the machine, and then he started a competition to team up with a supercomputer and beat the machine. Yeah, humans and machines beat machines. Do you expect that's going to continue? Maybe both your opinions. I mean, we're just sort of spitballing here. But will that augmentation continue for an indefinite period of time, or are we going to see the day that it doesn't happen? >> I think over time you'll continue to see progress, and you'll continue to see more and more regular, systematic types of workloads being done by machines, and that allows us to do the really complicated things that the human brain is able to better process than perhaps a machine brain, if you will. So I think it's exciting from the standpoint of being able to take some of those other roles and so forth, and be able to get those done in perhaps a more efficient manner than we're able to do. >> Bill, talk about, I want to get your reaction to the concept of data. As data evolves, you brought up the model, I like the way you're going with that, because things are being flipped around. In the old days, I want to monetize my data. I have data sets, people are looking at their data. I'm going to make money from my data. So people would talk about how we're monetizing the data. >> Dave: Old days, like two years ago. >> Well, and people actually try to solve and monetize their data, and this could be a use case for one piece of it. Other people are saying no, I'm going to open it up, make people own their own data, make it shareable, make it more of an enabling opportunity, or creating opportunities to monetize differently. In a different shift. That really comes down to the insights question. What trends do you guys see emerging where data is much more of a fabric, it's less of a discrete, monetizable asset, but more of an enabling asset? What's your vision on the role of data, as developers start weaving in some of these insights? You mentioned the AI, I think that's right on. What's your reaction to the role of data, the value of the data? >> Well, I think one thing that we're seeing in some of our, especially our big industrial customers is the fact that they really want to be able to share that data together and collect it in one place, and then have that regularly updated. So if you look at a big aircraft manufacturer, for example, they actually are putting sensors all over their aircraft, and in real time, bringing data down and putting it into a place where now as they're doing new designs, they can access that data, and use that data as a way of making design trade-offs and design decisions.
So a lot of customers that I talk to in the industrial area are really trying to capitalize on all the data possible to allow them to bring new insights in, to predict things like future failures, to figure out how they need to maintain whatever they have in the field, and those sorts of things. And that's just keeping it within the enterprise itself. I mean, that's a challenge, a really big challenge, just to get data collected in one place and be able to efficiently use it just within an enterprise. We're not even talking about sort of pan-enterprise, but just within the enterprise. That is a significant change that we're seeing. There's actually an effort to do that, and they see the value in it. >> And the high performance computing really highlights some of these nuggets that are coming out. If you just throw compute at something, if you set it up and wrangle it, you're going to get these insights. I mean, new opportunities. >> Bill: Yeah, absolutely. >> What's your vision, Nick? How do you see the data, how do you talk to your peers and people who are generally curious on how to approach it? How to architect data modeling and how to think about it? >> I think one of the clearest examples of managing that sort of data comes from the life sciences. So we're working with researchers at the University of Pittsburgh Medical Center, and the Institute for Precision Medicine at Pitt Cancer Center. And there it's bringing together the large data as Bill alluded to. But there it's very disparate data. It is genomic data. It is individual tumor data from individual patients across their lifetime. It is imaging data. It's the electronic health records. And we're trying to be able to do this sort of AI on that to be able to deliver true precision medicine, to be able to say that for a given tumor type, we can look into that and give you the right therapy, or even more interestingly, how can we prevent some of these issues proactively? >> Dr. Nystrom, it's expensive doing what you do. Is there a commercial opportunity at the end of the rainbow here for you, or is that taboo, I mean, is that a good thing? >> No, thank you, it's both. So as a national supercomputing center, our resources are absolutely free for open research. That's a good use of our taxpayer dollars. They've funded these, we've worked with HP, we've designed a system that's great for everybody. We also can make this available to industry at an extremely low rate because it is a federal resource. We do not make a profit on that. But looking forward, we are working with local industry to let them test things, to try out ideas, especially in AI. A lot of people want to do AI, they don't know what to do. And so we can help them. We can help them architect solutions, put things on hardware, and when they determine what works, then they can scale that up, either locally on prem, or with us. >> This is a great digital resource. You talk about federally funded. I mean, you can look at Yosemite, it's a national park, you know, Yellowstone, these are natural resources, but now when you start thinking about the goodness that's being funded. You want to talk about democratization, medicine is just the tip of the iceberg. This is an interesting model as we move forward. We see what's going on in government, and see how things are instrumented, some things not, delivery of drugs and medical care, all these things are coalescing. How do you see this digital age extending? Because if this continues, we should be doing more of these, right? >> We should be.
We need to be. >> It makes sense. So is there, I mean I'm just not up to speed on what's going on with federally funded-- >> Yeah, I think one thing that Pittsburgh has done with the Bridges machine is really try to bring in data and compute and all the different types of disciplines in there, and provide a place where a lot of people can learn, they can build applications and things like that. That's really unusual in HPC. A lot of times HPC is around big iron. People want to have the biggest iron, basically, on the top 500 list. Here the focus hasn't been on that. The focus has been on really creating value through the data, getting people to utilize it, and then building more applications. >> You know, I'll make an observation. When we first started doing The Cube, we observed that, we talked about big data, and we said that the practitioners of big data are where the guys are going to make all the money. And so far that's proven true. You look at the public big data companies, none of them are making any money. And maybe this was sort of true with ERP, but not like it is with big data. It feels like AI is going to be similar, that the consumers of AI, those people that can find insights from that data, are really where the big money is going to be made here. I don't know, it just feels like-- >> You mean a long tail of value creation? >> Yeah, in other words, you used to see in the computing industry, it was Microsoft and Intel that became, you know, trillion dollar value companies, and maybe there's a couple of others. But it really seems to be the folks that are absorbing those technologies, applying them, solving problems, whether it's health care, or logistics, transportation, etc., that's where the huge economic opportunities may be. I don't know if you guys have thought about that. >> Well, I think that's happened a little bit in big data. So if you look at what the financial services market has done, they've probably benefited far more than the companies that make the solutions, because now they understand what their consumers want, they can better predict their life insurance, how they should-- >> Dave: You could make that argument for Facebook, for sure. >> Absolutely, from that perspective. So I expect it to get to your point around AI as well, so the folks that really use it, use it well, will probably be the ones that benefit from it. >> Because the tooling is very important. You've got to make the application. That's the end state in all this. That's where the rubber meets the road. >> Bill: Exactly. >> Nick: Absolutely. >> All right, so final question. What're you guys showing here at Discover? What's the big HPC? What's the story for you guys? >> So we're actually showing our Gen 10 product. So this is with the latest microprocessors in all of our Apollo lines. So these are specifically optimized platforms for HPC and now also artificial intelligence. We have a platform called the Apollo 6500, which is used by a lot of companies to do AI work, so it's a very dense GPU platform, and does a lot of processing in terms of video, audio, these types of things that are used a lot in some of the workflows around AI. >> Nick, anything spectacular for you here that you're interested in? >> So we did show here. We had video in Meg's opening session. And that was showing the poker result, and I think that was really significant, because it was actually a great amount of computing. It was 19 million core hours.
So it was an HPC AI application, and I think that was a really interesting success. >> The imperfect information piece, really, we picked this up earlier in our last segment with your colleagues. It really amplifies the unstructured data world, right? People trying to solve the streaming problem. With all this velocity, you can't get everything, so you need to use machines, too. Otherwise you have a haystack of needles, instead of trying to find the needles in the haystack, as they were saying. Okay, final question, just curious on this natural, not natural, federal resource. Natural resource, feels like it. Is there like a line to get in? Like I go to the park, like this camp waiting list, I've got to get in there early. How do you guys handle the flow for access to the supercomputer center? Is it, my uncle works there, I know a friend of a friend? Is it a reservation system? I mean, who gets access to this awesomeness? >> So there's a peer-reviewed system, it's fair. People apply for large allocations four times a year. This goes to a national committee. They met this past Sunday and Monday for the most recent round. They evaluate the proposals based on merit, and they make awards accordingly. We make 90% of the system available through that means. We have 10% discretionary that we can make available to the corporate sector and to others who are doing proprietary research in data-intensive computing. >> Is there a duration? When you go through the application process, are there minimums and kind of like commitments that they get involved in, for the folks who might be interested in hitting you up? >> For academic research, the normal award is one year. These are renewable; people can extend these and they do. What we see now, of course, is that for large data resources, people keep those going. The AI knowledge base is 2.6 petabytes. That's a lot. For industrial engagements, those could be any length. >> John: Any startup action coming in, or more bigger, more-- >> Absolutely. A coworker of mine has been very active in life sciences startups in Pittsburgh, and engaging many of these. We have meetings every week with them now, it seems. And with other sectors, because that is such a great opportunity. >> Well, congratulations. It's fantastic work, and we're happy to promote it and get the word out. Good to see HP involved as well. Thanks for sharing, and congratulations. >> Absolutely. >> Good to see your work, guys. Okay, great way to end the day here. Democratizing supercomputing, bringing high performance computing. That's what the cloud's all about. That's what great software's out there with AI. I'm John Furrier, Dave Vellante, bringing you all the data here from HPE Discover 2017. Stay tuned for more live action after this short break.
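A quick aside on scale: 19 million core hours is an enormous amount of computing, and a back-of-the-envelope conversion makes it concrete. The core counts in this sketch are assumptions for illustration, not the actual configuration Bridges used for the poker run.

```python
# Rough scale of a 19-million-core-hour computation.
# Core counts below are assumed for illustration only.
CORE_HOURS = 19_000_000

for cores in (1, 1_000, 10_000):
    hours = CORE_HOURS / cores
    days = hours / 24
    print(f"{cores:>6} cores: {hours:>12,.0f} hours (~{days:,.0f} days)")

# On one core this is roughly 2,169 years of compute;
# spread across 10,000 assumed cores, it is about 79 days.
```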
Eng Lim Goh, HPE & Tuomas Sandholm, Strategic Machine Inc. - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's theCUBE covering HPE Discover 2017, brought to you by Hewlett Packard Enterprise. >> Okay, welcome back everyone. We're live here in Las Vegas for SiliconANGLE's CUBE coverage of HPE Discover 2017. This is our seventh year of covering HP Discover, now HPE Discover in its second year. I'm John Furrier, my co-host Dave Vellante. We've got two great guests, two doctors, PhDs in the house here. So Eng Lim Goh, VP and SGI CTO, PhD, and Tuomas Sandholm, professor of computer science at Carnegie Mellon University, who also runs the marketplace lab over there. Welcome to theCube guys, doctors. >> Thank you. >> Thank you. >> So the patient is on the table, it's called machine learning, AI, cloud computing. We're living in a really amazing place. I call it open bar and open source. There's so many new things being contributed to open source, so much new hardware coming on with HPE, that there's a lot of innovation happening. So I want to get your thoughts first on how you guys are looking at this big trend where all this new software is coming in and these new capabilities, what's the vibe, how do you look at this. You must be, Carnegie Mellon, oh this is an amazing time, thoughts. >> Yeah, it is an amazing time, and I'm seeing it both on the academic side and the startup side, that you know, you don't have to invest in your own custom hardware. We are using HPE with the Pittsburgh Supercomputing Center in academia, using cloud in the startups. So it really makes entry easier, both for academic research and startups. And also at the high end of academic research, you don't have to worry about maintaining and staying up to speed with all of the latest hardware and networking and all that. >> Focus on your research. >> Focus on the research, focus on the algorithms, focus on the AI, and the rest is taken care of. >> John: Eng, talk about the supercomputer world that's now here. If you look at the abundant compute and the intelligent edge, we're seeing genome sequencing done in minutes, the prices are dropping. I mean, high performance computing used to be this magical, special thing that you needed a lot of money to pay for or get access to. Democratization is pretty amazing. Can I just hear your thoughts on what you see happening? >> Yes, yes. Democratization. In the traditional HPC approach, the goal is prediction and forecasting. Whether the engine will stay productive, or financial forecasts, whether you should buy or sell or hold. Let's use the weather as an example. In traditional HPC for the last 30 years, what we do to predict tomorrow's weather is first to write all the equations that model the weather, measure today's weather, feed that in, and then apply supercomputing power in the hopes that it will predict tomorrow's weather faster than tomorrow is coming. So that has been the traditional approach, but things have changed. Two big things changed in the last few years. We got these scientists that think perhaps there is a new way of doing it. Instead of calculating your prediction, can you not use a data-intensive method to make an educated guess at your prediction? And this is what they do. Instead of feeding today's weather information into the machine learning system, they feed in 30 years of daily weather, 10 thousand days. Every day they feed the data in, and the machine learning system guesses whether it will rain tomorrow. If it gets it wrong, it's okay; it just goes back to the weights that control the inputs and adjusts them.
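As a minimal sketch of the guess-and-adjust loop Dr. Goh describes, assuming invented feature names, synthetic data, and an arbitrary learning rate: each day's measurements go in, the system guesses rain or no rain, and the weights move only when the guess is wrong. This is a toy perceptron-style learner, not any production forecasting system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "30 years" of daily observations: three made-up features,
# e.g. humidity, pressure drop, rained-yesterday. All invented.
DAYS = 10_000
X = rng.normal(size=(DAYS, 3))
true_w = np.array([1.5, 2.0, 0.5])  # hidden "real" weather signal
y = (X @ true_w + rng.normal(scale=0.5, size=DAYS)) > 0  # rain tomorrow?

w = np.zeros(3)   # the weights that control the inputs
b = 0.0
lr = 0.01         # learning rate (assumed)
updates = 0

for day in range(DAYS):
    guess = (X[day] @ w + b) > 0       # guess: will it rain tomorrow?
    if guess != y[day]:                # wrong? adjust the weights
        direction = 1.0 if y[day] else -1.0
        w += lr * direction * X[day]
        b += lr * direction
        updates += 1

# What began as a wild guess becomes an educated one: accuracy on the
# historical record climbs well above chance.
accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"weight updates: {updates}, final accuracy: {accuracy:.2%}")
```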
Then you take the next day and feed it in again. After 10 thousand tries, what started out as a wild guess becomes an educated guess, and this is how the new way of doing data-intensive computing is starting to emerge, using machine learning. >> Democratization is a theme. I threw that out because I think it truly is happening. But let's get specific now. I mean, a lot of science has been, well, is climate change real? I mean, this is something that is in the news. We see that in today's news cycle around climate change, things like that, as you mentioned, weather. So there's other things, there's other financial models, there's others in healthcare, and there's new ways to get at things that were kind of hocus pocus, maybe some science, some modeling, forecasting. What are you seeing that's ripe, low-hanging fruit right now that's going to impact lives? What key things will HPC impact besides weather? Is healthcare there, where is everyone getting excited? >> I think health and safety immediately, right. Health and safety. You mentioned gene sequencing, drug designs, and you also mentioned, in gene sequencing and drug design, there is also safety in the design of automobiles and aircraft. These methods have traditionally used simulation, but more and more now they are thinking, while these engines, for example, are flying, can you collect more data so you can predict when this engine will fail? And also predict, say, when the aircraft lands, what sort of maintenance you should apply to the engine, without having to spend unproductive time on the ground diagnosing the problems. You start to see the application of data-intensive methods increase in order to improve safety and health. >> I think that's good, and I agree with that. You could also kind of look at it from the technology perspective as to what kind of AI is going to be next, and if you look back over the last five to seven years, deep learning has become a very hot part of machine learning, and machine learning is part of AI. So that's really lifted that up. But what's next there is not just classification or prediction, but decision making on top of that. So we'll see AI move up the chain to actual decision making on top of just the basic machine learning. So optimization, things like that. Another category is what we call strategic reasoning. Traditionally in games like chess, or checkers, and now Go, people have fallen to AI, and now we did this in January in poker as well, after 14 years of research. So now we can actually take real strategic reasoning under imperfect information settings and apply it to various settings like business strategy optimization, automated negotiation, certain areas of finance, cyber security, and so forth. >> I'd like to interject. So we are very impressed, right. If we look back, years ago IBM beat the world's top chess player, right. And that was an expert system, and more recently Google's AlphaGo won at an even more complex game, Go, and beat humans in that. But what the Professor has done recently is take on an even more complex game, in the sense that it is incomplete information: it is poker. You don't know the other party's cards, unlike in a board game, where you would know, right. This is very much real life, in business negotiation, in auctions: you don't quite know what the other party's thinking.
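One flavor of the algorithms that play well under imperfect information is regret matching, a building block behind modern game-solving methods. The sketch below is a toy self-play example on rock-paper-scissors, purely illustrative and not the method Libratus actually uses; over many rounds, each player's average strategy drifts toward the game's equilibrium.

```python
import numpy as np

# Rock-paper-scissors payoff for action "a" against action "b":
# +1 win, 0 tie, -1 loss. Actions: 0=rock, 1=paper, 2=scissors.
def payoff(a, b):
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]

def strategy_from(regrets):
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(3, 1 / 3)

rng = np.random.default_rng(1)
regrets = [np.zeros(3), np.zeros(3)]
strategy_sums = [np.zeros(3), np.zeros(3)]

for _ in range(20_000):
    strategies = [strategy_from(r) for r in regrets]
    actions = [rng.choice(3, p=s) for s in strategies]
    for p in range(2):
        me, opp = actions[p], actions[1 - p]
        # Regret: how much better each alternative would have done.
        for a in range(3):
            regrets[p][a] += payoff(a, opp) - payoff(me, opp)
        strategy_sums[p] += strategies[p]

# The average strategy approaches the equilibrium (1/3, 1/3, 1/3):
# the best you can guarantee against an adaptive, bluffing opponent.
print(strategy_sums[0] / strategy_sums[0].sum())
```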
So I believe now you are looking at ways, I hope, to take that poker-playing AI software that can handle incomplete information, not knowing the other parties but still able to play expertly, and apply that in business. >> I want to double down on that, I know Dave's got a question but I want to just follow this thread through. So the AI, in this case augmented intelligence, not so much artificial, because you're augmenting without the perfect information. It's interesting because one of the debates in the big data world has been, well, the streaming of all this data is so high-velocity and so high-volume that we don't know what we're missing. Everyone's been trying to get at the perfect information in the streaming of the data. And this is where the machine learning, if I get your point here, can do this meta reasoning, or this reasoning on top of it, to try to use that and say, hey, let's not try to solve the world's problems and boil the ocean to understand it all; let's use that as a variable for AI. Did I get that right? >> Kind of, kind of, I would say, in that it's not just a technical barrier to getting the big data, it's also kind of a strategic barrier. Companies, even if I could tell you all of my strategic information, I wouldn't want to. So you have to worry not just about not having all the information, but are there other guys explicitly hiding information, misrepresenting, and vice versa, you doing strategic action as well. Unlike in games like Go or chess, where it's perfect information, you need totally different kinds of algorithms to deal with these imperfect information games, like negotiation or strategic pricing, where you have to think about the opponents' responses. >> It's your Johari window. >> In advance. >> John: Knowing what you don't know. >> To your point about huge amounts of data, we are talking about looking for a needle in a haystack. But when the data gets so big and the needles get so many, you end up with a haystack of needles. So you need some augmentation to help you deal with it, because the humans would be inundated with the needles themselves. >> So is HPE sort of enabling AI, or is AI driving HPC? >> I think it's both. >> Both, yeah. >> Eng: Yeah, that's right, both together. In fact AI is driving HPC because it is a new way of using that supercomputing power. Not just doing compute-intensive calculation, but also data-intensive AI, machine learning. Then we are also driving AI, because our customers are now asking the same questions: how do I transition from a compute-intensive approach to a data-intensive one also? This is where we come in. >> What are your thoughts on how this affects society, individuals, particularly students coming in? You mentioned Gary Kasparov losing to the IBM supercomputer. But he didn't stop there, he said I'm going to beat the supercomputer, and he got supercomputers and humans together, and now holds a contest every year. So everybody talks about the impact of machines replacing humans, and that's always happened. But what do you guys see, where's the future of work, of creativity for young people, and the future of the economy? What does this all mean? >> You want to go first or second? >> You go ahead first. (Eng and Tuomas laughing) >> They love the fighting. >> This is a fun topic, yeah. There's a lot of worry about AI of course. But I think of AI as a tool, much like a hammer or a saw. So it's going to make human lives better, and it's already making human lives better.
A lot of people don't even understand all the things that already have AI that are helping them out. There's this worry that there's going to be a super species of AI that's going to take over humans. I don't think so; I don't think there's any demand for a super species of AI. Like a hammer and a saw: a hammer and a saw are better than a hammersaw, so I actually think of AI as better being separate tools for separate applications, and that is very important for mankind and also nations and the world in the future. One example is our work on kidney exchange. We run the nationwide kidney exchange for the United Network for Organ Sharing, which saves hundreds of lives. This is an example that not only saves lives but makes better decisions than humans can. >> In terms of kidney candidates, timing, all of that? >> That's a long story, but basically, when you have willing but incompatible live donors, incompatible with the patient, they can swap their donors. Pair A gives to pair B gives to pair C gives to pair A, for example. And we also co-invented this idea of chains, where an altruist donor creates a whole chain through our network, and then the question is which combination of cycles and chains is the best solution. >> John: And no manual involvement, your machines take over the heavy lifting? >> It's hard, because the number of possible solutions is bigger than the number of atoms in the universe. So you have to have optimization AI actually make the decisions. So now our AI makes these decisions for the country, or 66% of the transplant centers in the country, twice a week. >> Dr. Goh, would you add anything on the societal impact of AI? >> Yes, absolutely, on that point about the saw and hammer. That's why these AI systems today are very specific. That's why some call them artificial specific intelligence, not general intelligence. Now whether, a hundred years from now, you take a hundred of these specific intelligences and combine them, whether you get an emergent property of general intelligence, that's something else. But for now, what they do is help the analyst, the human, the decision maker. And more and more you will see that as you train these models, they make a lot of correct decisions. But ultimately there's a difference between a correct decision and, I believe, a right decision. Therefore, there always needs to be a human supervisor there to ultimately make the right decision. Of course, he will listen to the machine learning algorithm suggesting the correct answer, but ultimately the human values have to be applied to decide whether society accepts this decision. >> All models are wrong, some are useful. >> So on this, there are two benefits of AI. One is that it saves time and effort, which is labor savings, automation. The other is better decision making. We're seeing the better decision making now become more of an important part, instead of just labor savings or what have you. We're seeing that in the kidney exchange, and now with strategic reasoning; now for the first time we can do better strategic reasoning than the best humans in imperfect information settings. Now it becomes almost a competitive need. You have to have, what I call, strategic augmentation as a business to be competitive. >> I want to get your final thoughts before we end the segment, this is more of a sharing component.
A lot of young folks are coming into computer science and related sciences, and they don't need to be a computer science major per se, but they have all the benefits of this goodness we're talking about here. Your advice, if both of you could share your opinions and thoughts in reaction to the trend: the question we get all the time is, what should young people be thinking about if they're going to be modeling and simulating? A lot of new data scientists are coming in; some are more practitioner-oriented, some are more hardcore. As this evolution of simulation and modeling that we're talking about changes at scale, what should they know, what should the best practice be for learning, applying, thoughts? >> For me, you know, the key thing is to be comfortable using tools. And for that, I think, as the young chaps of the world come out of school, they are very comfortable with that. So I think I'm actually less worried. It will be a new set of tools, these intelligent tools; leverage them. If you look at the entire world as a single system, what we need to do is move our leveraging of tools up to a level where we become an even more productive society, rather than worrying, of course we must be worried and then adapt to it, about jobs going to AI. Rather, we should move ourselves up to leverage AI to be an even more productive world, and then hopefully distribute that wealth so the entire human race becomes more comfortable given the AI. >> Tuomas, your thoughts? >> I think that people should be ready for the unknown, so you've got to be flexible in your education. Get the basics right, because those basics don't change. You know, math, science, get that stuff solid, and then be ready to, instead of thinking about I'm going to be this in my career, you should think about I'm going to be this first, and then maybe something else, I don't even know yet. >> John: Don't memorize the test you don't know you're going to take yet, be more adaptive. >> Yes, creativity is very important, and adaptability, and people should start thinking about that at a young age. >> Doctors, thank you so much for sharing your input. What a great world we live in right now. A lot of opportunities, a lot of challenges that are opportunities to solve with high performance computing, AI and whatnot. Thanks so much for sharing. This is theCUBE bringing you all the best coverage from HPE Discover. I'm John Furrier with Dave Vellante, we'll be back with more live coverage after this short break. Three days of wall to wall live coverage. We'll be right back. >> Thanks for having us.
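A footnote on the kidney exchange Professor Sandholm described: clearing the exchange means choosing the best disjoint combination of swap cycles (and altruist-initiated chains) from a compatibility graph, and the number of combinations explodes. The sketch below brute-forces a toy instance with an invented compatibility graph and cycles only; the real clearing algorithms are far more sophisticated optimization methods.

```python
from itertools import combinations

# Toy compatibility graph: an edge i -> j means the donor of pair i
# can give to the patient of pair j. Edges are invented.
compatible = {
    0: {1}, 1: {2}, 2: {0, 3}, 3: {4}, 4: {3, 5}, 5: {0},
}

def cycles(max_len=3):
    """Enumerate donation cycles up to max_len (2- and 3-way swaps)."""
    found = set()
    def walk(start, node, path):
        for nxt in compatible.get(node, ()):
            if nxt == start and len(path) >= 2:
                found.add(tuple(sorted(path)))  # dedupe rotations
            elif nxt not in path and len(path) < max_len:
                walk(start, nxt, path + [nxt])
    for s in compatible:
        walk(s, s, [s])
    return [set(c) for c in found]

all_cycles = cycles()

# Brute force: pick vertex-disjoint cycles covering the most patients.
# Feasible only because this instance is tiny.
best, best_size = [], 0
for k in range(len(all_cycles) + 1):
    for combo in combinations(all_cycles, k):
        used = [v for c in combo for v in c]
        if len(used) == len(set(used)) and len(used) > best_size:
            best, best_size = combo, len(used)

print(f"{best_size} transplants via cycles: {[sorted(c) for c in best]}")
```

On this invented graph the best clearing is the 3-way swap among pairs 0, 1, 2 plus the 2-way swap between pairs 3 and 4; at national scale, with chains included, the search space is why optimization AI has to make the call.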
Chris Knittel, MIT | MIT Expert Series: UBER and Racial Discrimination
>> Welcome to the latest edition of the MIT Sloan Expert Series. I'm your host, Rebecca Knight. Our topic today is racial bias in the sharing economy: how Uber and Lyft are failing black passengers, and what to do about it. Here to talk about that is Chris Knittel. He is a professor of Applied Economics here at MIT Sloan, and he's also the co-author of a study that shows how Uber and Lyft drivers discriminate based on a passenger's skin color. Thanks so much for joining us. >> Oh, it's great to be here. >> Before we begin, I want to remind our viewers that we will be taking your questions live on social media. Please use the hashtag MITSloanExpert to pose your questions on Twitter. Chris, let's get started. >> Chris: Sure. >> So there is a lot of research that shows how difficult it is to hail a cab, particularly for black people. Uber and Lyft were supposed to represent a more egalitarian travel option, but you didn't find that. >> That's right. What we found in two experiments that we ran, one in Seattle and one in Boston, is that Uber and Lyft drivers were discriminating based on race. >> Rebecca: We've already seen, actually, some evidence of racial discrimination in the sharing economy, not just with ride sharing apps. >> Sure, so there's evidence for Airbnb. And what's interesting about Airbnb, actually, is that the discrimination is two-sided. So not only do white hosts not want to rent to black guests, but white guests do not stay at the home of a black host. >> Did your findings, and the findings of that other research you just talked about, make you discouraged? >> Partly. I was an optimist. We went into this, at least I went into this, hoping that we wouldn't find discrimination, but one thing that has helped, or at least shined a more positive light, is that there are ways that we can do better in this sector. >> You've talked about how this study, which you undertook with colleagues from the University of Washington and Stanford, shows the power of the experiment. Can you talk a little bit about what you mean by that? >> Sure. What we did was actually run two randomized control trials. Just like you would test whether a blood pressure medication works, you would have a control group that wouldn't get the medication, and a treatment group that would. We did the same thing, where we sent out both black and white RAs in Seattle to hail Uber and Lyft rides, and we randomized whether it was a black RA calling the ride or a white RA that particular time, and they all traveled the same exact route at the same exact times of the day. >> So what did you find? Let's talk first about what you found in Seattle. >> Sure. In Seattle, we measured how long it took for a ride to be accepted, and also how long it took, once it was accepted, for the driver to show up and pick up the passenger. And what we found is that if you were a black research assistant hailing an Uber ride, it took 30 percent longer for the ride to be accepted, and also 30 percent longer for the driver to show up and pick you up. >> 30 percent seems substantial.
>> So, the thing about the minute difference, that can be material, particularly if you're trying to catch a cab, an Uber or a Lyft for a job interview or to get to the airport. >> Yeah, this is introspection, but I always seem to be late, so even a minute can be very costly. >> I hear you, I hear you. So why do you think there was the difference between Lyft and Uber? >> What's interesting, and we learned this while we were doing the experiment, a Lyft driver sees the name of the passenger before they accept the ride, whereas an Uber driver only sees the name after they've accepted. So in order for an Uber driver to discriminate, they have to first accept the ride, and then see the name and then cancel, whereas a Lyft driver can just pass it up right away. So it turns out because of that, the Lyft platform is more easily capable of handling discrimination because it pushed it to another driver faster than the Uber platform. >> I want to come back to that, but I want to say also, that difference caused you to change the way you did the experiment in Boston. >> In Boston, a couple differences. One is that we sent out RAs with two cell phones actually. So each RA had an Uber and Lyft account under a stereotypically white sounding name, and then also an Uber and Lyft account under a stereotypically black sounding name. That was one difference, and then also, what we measured in Boston that we didn't measure in Seattle, is cancellations. So an Uber driver accepts the ride, and then cancels on the RA. >> Let's go back to the stereotypically black sounding name verses white sounding name. You're an economist, how did you determine what those names are? >> We relied on another published paper that actually looked at birth records from the 1970s in Boston, and the birth records tell you not only the name, but also the race of the baby. So they found names that actually 100 percent of the time were African American or 100 percent of the time were not African American. So we relied on those names. >> And the names were... >> So you could imagine Jamal for example, compared to Jerry. >> Alright, Ayisha and Alison. >> Chris: Sure. >> So what was your headline finding in Boston? >> In Boston, what we found is, if you were a black male calling an Uber ride, that you were canceled upon more than twice as often as if you were a white male. >> And what about Lyft? >> For Lyft, there is no cancellation effect, and that's not because there's no discrimination, it's just that they don't have to accept and then cancel the ride, they can just pass up the ride completely. It's actually a nice little experiment within the experiment, we shouldn't find an effect of names on cancellations for Lyft and in fact, we don't. >> And also, the driver network is much thicker in Boston than in Seattle. >> So in Boston, although we found this cancellation effect, we didn't find that it has a measurable impact on how long you wait. And this is somewhat speculation, but we speculate that that's because the driver network is so much more dense in Boston that, although you were canceled upon, there's so many only drivers nearby, that it doesn't lead to a longer wait time. >> How do you think what you found compares to hailing traditional cabs? We started our conversation talking about the vast body of research that shows how difficult it is for black people to hail cabs. 
Yeah, we are quick to point out that we are not at all saying that Uber and Lyft are worse than the traditional, status quo system, and we want to definitely make that clear. In fact, in Seattle, we had our same research assistants stand at the busiest corners and hail cabs. What we found there is that if you were a black research assistant, the first cab passed you 80 percent of the time. But if you were a white research assistant, it only passed you 20 percent of the time. So just like the previous literature has found, we found discrimination with the status quo system as well. >> You've talked to the companies about your findings; what has the response been? >> That's been actually heartening. Both companies reached out to us very quickly, and we've had continued conversations with them, and we're actually trying to design follow-up studies to minimize the amount of discrimination that's occurring for both Uber and Lyft. >> But those are off the record and... >> Right, we're not talking specifics, but what I can say is that the companies understand this research and they definitely want to do better. >> In fact, the companies both have issued statements about this; the first one is from Lyft: "we are extremely proud of the positive impact..." Uber has also responded. So let's talk about solutions to this. What do you and your colleagues who undertook this research suggest? >> We've been brainstorming. We don't know for sure if we have the silver bullet, but a few things could change. For example, you could imagine Uber and Lyft getting rid of names completely. We realize that has a trade-off, in the sense that it's nice to know the name of the driver... >> Rebecca: Sure, you can strike up a conversation... >> It makes it more social, it makes it more personal, more peer-to-peer if you will. But it would eliminate the type of discrimination that we uncovered. Another potential change is to delay when you give the name to the driver, so that the driver has to commit more to the ride than he or she previously had to. And that may increase the costs of discrimination. >> So that would be changing the software? >> Right. So you could imagine now, like I said, with Lyft, that you see the name right away. Maybe you wait until they're 30 seconds away from the passenger before you give them the name. >> What about the dawn of the age of autonomous vehicles? Might that have an impact? We already know that Uber is experimenting with driverless cars in Pittsburgh and Arizona. >> That would obviously solve it, so that would take the human element out of things, and it's important to point out that it's the drivers who are deciding to discriminate. So provided you didn't write the autonomous vehicle software to discriminate, you would know for sure that that car is not going to discriminate. >> What about a driver education campaign? Do you think that would make a difference? I'm reminded of an essay written by Doug Glanville, who is an ESPN commentator and former pro ball player. He writes, talking about his experience being denied service by an Uber driver: "The driver had concluded I was a threat, either because I was dangerous myself, or because I would direct him to a bad neighborhood, or give him a lower tip. Either way, given the circumstances, it was hard to attribute his refusal to anything other than my race. Shortly after we walked away, I saw the driver assisting another passenger who was white." >> We all hope that information helps and eliminates discrimination.
It's certainly possible that Uber and Lyft could run a full information campaign, where they show the tip rates for different ethnicities, and the bad-ride probabilities for different ethnicities, and my hope is that once the drivers learn that there aren't differences across ethnicities, the drivers would internalize that and stop discriminating. >> Policy. Senator Al Franken has weighed in on this, urging Uber and Lyft to address your research. Do you think that there could be policies too? Does government have a role to play? >> Potentially, but what I'll say again is that Uber and Lyft, I think, have all the incentive in the world to fix this, and they seem to be taking active steps to fixing this. So what could policy makers do? Obviously it's already outlawed; they could come down and potentially fine the companies if there's more evidence of discrimination. But I would at least allow the companies some time to internalize this research, and respond to it, and see how effective they can be. >> Many, many think tanks and government advocacy groups have weighed in too. The MIT Sloan Expert Series recently sat down with Eva Millona of the Massachusetts Immigrant and Refugee Advocacy Coalition. She will talk about this research in the context of immigration; let's roll that clip. >> We're an advocacy organization, and we represent the interests of the foreign born, and our mission is to promote and enhance immigrant and refugee integration. Anecdotally, yes, I would say that the research, given its impressive sample, really leads to a sad belief that discrimination is still out there, and there is a lot that needs to be done across sectors to really address these issues. We are really privileged to live in such a fantastic commonwealth, with the right leadership and all sectors together really making our commonwealth a welcoming place. And I do want to highlight the fantastic role of our Attorney General in standing up for our values, but Massachusetts is one state, and it could be an example, but the concern is nationwide. Given a very divisive campaign, and not just a campaign, but also what is currently happening at the national level, the current administration is really rejecting this welcoming effort, and the values of our country as a country that welcomes immigrants. All sectors need to be involved in an effort to really make our society a better one for everyone. And it's going to take political leadership to really set the right tone, send the right message, and really look into the integration and the welcoming of the newcomers as an investment in the future of our nation. Uber and Lyft have an opportunity here to provide leadership and come up with and promote policies that integrate the newcomers, or that are welcoming to the newcomers, provide education and training, and train their people. And as troubling as the results of this research are, we like to believe that this is the attitude of the drivers, but not really what the companies represent, so we see an opportunity for the companies to really step in and work and promote policies of integration, policies of improvement and betterment for the whole of society, and provide an example. Let me say thank you to Professor Knittel for his leadership, and to MIT for always being a leader and looking into these issues. But if we can go deeper into A, the size, and B, the geography, also looking into a wider range of all the communities that are represented.
Looking into the Latino community, looking into the Arab communities in other parts of the nation, more rigorous, deeper, and larger research will be very helpful in terms of promoting better policies and integration for everybody who chooses America to be their home. >> That was Eva Millona of the Massachusetts Immigrant and Refugee Advocacy Coalition. Chris, are you confident this problem can in fact be remedied? >> I think we can do better, for sure. And I would say we need more studies like the one we just performed, to see how widespread it is. We only studied two cities, and we also haven't looked at all at how the driver's race impacts the discrimination. >> Now we're going to turn to you: questions from our viewers. Questions have already been coming in this morning and overnight, lots of great ones. Please use the hashtag MITSloanExpert to pose your question. The first one comes from Justin Wang, who is an MIT Sloan MBA student. He asks, "What policies can sharing economy startups implement to reduce racial bias?" >> Well, I would say the first thing is to be aware of this. I think Uber and Lyft and Airbnb were potentially caught off guard by the amount of discrimination that was taking place. So the research that we performed, and the research on Airbnb, gives new startups a head start on designing their platforms. >> Just knowing that this is an issue. >> Knowing it's an issue, and potentially designing their platforms to think of ways to limit the amount of discrimination. >> Another question: did you look at gender bias? Do you have any indication that drivers discriminate based on gender? >> We did look at gender bias. The experiments weren't set up to necessarily nail that, but one thing that we found, for example in Boston, is that there is some evidence that women riders were taken on longer trips. Again, both the male and the female RAs are going from the same point A to the same point B. >> Rebecca: That was a controlled part of the setting. >> That was the controlled part of the experiment. And we found evidence that women passengers were taken on longer trips; in fact, one of our RAs commented that she remembers going through the same intersection three times before she finally said something to the driver. >> And you think... So you didn't necessarily study this as part of it, but do you have any speculation, conjecture, about why this was happening? >> Well, there are two potential motives. One is a financial motive: by taking the passenger on a longer drive, they potentially get a higher fare. But I've heard anecdotal evidence that a more social motive might also be at play. For example, I have a colleague here at Sloan who's told me that she's been asked out on dates three times while taking Uber and Lyft rides. >> So drivers taking the opportunity to flirt a little bit. >> Chris: Sure. >> Another question: can you comment on the hashtag DeleteUber campaign? This, of course, is about the backlash against Uber over perceptions that it intended to profit from President Trump's executive order banning immigrants and refugees from certain countries from entering the United States. Uber maintains that its intentions were misunderstood, but that didn't stop the hashtag DeleteUber campaign. >> Yeah, I haven't followed that super closely, but to me it seems like Uber's getting a bit of a bad rap. One potential reason why they allowed Uber drivers to continue working is that maybe they wanted to bring protesters to the airports to protest.
So from that perspective, actually having Uber and Lyft still in business would be beneficial. >> Another question: did your study take into account the race of the drivers themselves? >> We actually were not allowed to. Any time you do a randomized control trial in the field like this, you have to go through a campus committee that approves or disapproves the research, and they were worried that if we collected information on the driver, then potentially Uber and Lyft could go back into their records, find the drivers that discriminate, and have penalties assigned to those drivers. >> So it just wouldn't be allowed to... >> At least in this first phase, yeah. They didn't want us to collect those data. >> Last question, we have time for one more. Why aren't there more experiments in the field of applied economics like this one? That's a good question. >> That's a great question, and in fact, I think many of us are trying to push experiments as much as possible. My other line of research is actually in energy and climate change research, and we've been- >> Rebecca: You like the hot topics. (laughing) >> We've been designing a bunch of experiments to look at how information impacts consumers' choices in terms of what cars to buy, and how it impacts their use of electricity at home. And experiments, randomized control trials, actually started in development economics, where MIT has actually pioneered their use. And again, it's the best way to actually test, the most rigorous way to test, whether an intervention actually has an effect, because you have both the control group and the treatment group. >> So why aren't they done more often? >> Well, it's tough. Often you need to find a third party; for example, we didn't need a third party in the sense that we could just send RAs out with Uber and Lyft. But if we wanted to do anything with the drivers, for example an information campaign, or if we wanted to change the platform at all, we would've needed Uber and Lyft to partner with us, and that can sometimes be difficult to do. And also, experiments, let's be honest, are pretty expensive. >> Expensive because, well, you obviously weren't partnered with Uber and Lyft for this one, but... >> Right, but we had research assistants take 1500 Uber and Lyft rides, so we had to pay for each of those rides, and we also had to give them an hourly rate for their time. >> Well, Chris Knittel, thank you so much. This has been great talking to you, and you've given us a lot to think about. >> It's been fun, thanks for having me. >> And thank you for joining us on this edition of the MIT Sloan Expert Series. We hope to see you again soon.
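For readers curious how headline numbers like "30 percent longer" or "canceled more than twice as often" fall out of a randomized experiment: because assignment is randomized, you can compare the two groups directly and bootstrap a confidence interval for the gap. The data below are simulated stand-ins with invented parameters, not the study's actual measurements.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-in data (parameters invented, not the study's):
# ride-acceptance times in seconds for control vs. treatment requests.
control = rng.exponential(scale=20.0, size=700)
treatment = rng.exponential(scale=26.0, size=700)   # ~30% longer on average

pct_longer = (treatment.mean() / control.mean() - 1) * 100
print(f"point estimate: {pct_longer:.0f}% longer acceptance time")

# Bootstrap a 95% confidence interval for the ratio of means.
ratios = []
for _ in range(5_000):
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treatment, size=treatment.size, replace=True)
    ratios.append(t.mean() / c.mean())
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"95% CI for the ratio: {lo:.2f} to {hi:.2f}")

# Cancellation results compare the same way, as a ratio of proportions.
cancels_control, rides_control = 40, 700      # invented counts
cancels_treated, rides_treated = 90, 700
rate_ratio = (cancels_treated / rides_treated) / (cancels_control / rides_control)
print(f"cancellation rate ratio: {rate_ratio:.2f}x")
```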