Lillian Carrasquillo, Spotify | Stanford Women in Data Science (WiDS) Conference 2020
>>live from Stanford University. It's the queue covering Stanford women in data science 2020. Brought to you by Silicon Angle Media. >>Yeah, yeah. Hi. And welcome to the Cube. I'm your host, Sonia Atari. And we're live at Stanford University, covering the fifth annual Woods Women in Data Science Conference. Joining us today is Lillian Kearse. Keo, who's the Insights manager at Spotify. Slowly and welcome to the Cube. Thank you so much for having me. So tell us a little bit about your role at a Spotify. >>Yeah, So I'm actually one of the few insights managers in the personalization team. Um, and within my little group, we think about data and algorithms that help power the larger personalization experiences throughout Spotify. So, from your limits to discover weekly to your year and wrap stories to your experience on home and the search results, that's >>awesome. Can you tell us a little bit more about the personalization? Um, team? >>Yes. We actually have a variety of different product areas that come together to form the personalization mission, which is the mission is like the term that we use for a big department at Spotify, and we collaborate across different product areas to understand what are the foundational data sets and the foundational machine learning tools that are needed to be able to create features that a user can actually experience in the app? >>Great. Um, and so you're going to be on the career panel today? How do you feel about that? I'm >>really excited. Yeah, Yeah, the would seem is in a great job of bringing together Diverse is very, uh, it's overused term. Sometimes they're a very diverse group of people with lots of different types of experiences, which I think is core. So how I think about data science, it's a wide definition. And so I think it's great to show younger and mid career women all of the different career paths that we can all take. >>And what advice would you would you give to? Women were coming out of college right now about data science. >>Yeah, so my my big advice is to follow your interests. So there's so many different types of data science problems. You don't have to just go into a title that says data scientists or a team that says Data scientist, You can follow your interest into your data science. Use your data science skills in ways that might require a lot of collaboration or mixed methods, or work within a team where there are different types of different different types of expertise coming together to work on problems. >>And speaking of mixed methods, insights is a team that's a mixed methods research groups. So tell us more about that. Yes, I >>personally manage a data scientist, Um, user researcher and the three of us collaborate highly together across their disciplines. We also collaborate across research science, the research science team right into the product and engineering teams that are actually delivering the different products that users get to see. So it's highly collaborative, and the idea is to understand the problem. Space deeply together, be able to understand. What is it that we're trying to even just form in our head is like the need that a user work and human and user human has, um, in bringing in research from research scientists and the product side to be able to understand those needs and then actually have insights that another human, you know, a product owner you can really think through and understand the current space and like the product opportunities >>and to understand that user insight do use a B testing. >>We use a lot of >>a B testing, so that's core to how we think about our users at Spotify. So we use a lot of a B testing. We do a lot of offline experiments to understand the potential consequences or impact that certain interventions can have. But I think a B testing, you know, there's so much to learn about best practices there and where you're talking about a team that does foundational data and foundational features. You also have to think about unintended or second order effects of algorithmic a B test. So it's been just like a huge area of learning in a huge area of just very interesting outcomes. And like every test that we run, we learn a lot about not just the individual thing. We're testing with just the process overall. >>And, um, what are some features of Spotify that customers really love anything? Anything >>that's like we know use a daily mix people absolutely love every time that I make a new friend and I saw them what they work on there like I was just listening to my daily makes this morning discover weekly for people who really want >>to stay, >>you know, open to new music is also very popular. But I think the one that really takes it is any of the end of year wrapped campaigns that we have just the nostalgia that people have, even just for the last year. But in 2019 we were actually able to do 10 years, and that amount of nostalgia just went through the roof like people were just like, Oh my goodness, you captured the time that I broke up with that, you >>know, the 1st 5 years ago, or just like when I discovered that I love Taylor Swift, even though I didn't think I like their or something like that, you know? >>Are there any surprises or interesting stories that you have about, um, interesting user experiences? Yeah. >>I mean, I could give I >>can give you an example from my experience. So recently, A few a few months ago, I was scrolling through my home feed, and I noticed that one of the highly rated things for me was women in >>country, and I was like, Oh, that's kind of weird. I don't consider >>myself a country fan, right? And I was like having this moment where I went through this path of Wait, That's weird. Why would Why would this recommend? Why would the home screen recommend women in country, country music to me? And then when I click through it, um, it would show you a little bit of information about it because it had, you know, Dolly Parton. It had Margo Price and it had the high women and those were all artistes. And I've been listening to a lot, but I just had not formed an identity as a country music. And then I click through It was like, Oh, this is a great play list and I listen to it and it got me to the point where I was realizing I really actually do like country music when the stories were centered around women, that it was really fun to discover other artists that I wouldn't have otherwise jumped into as well. Based on the fact that I love the story writing and the song, writing these other country acts that >>so quickly discovered that so you have a degree in industrial mathematics, went to a liberal arts college on purpose because you want to try out different classes. So how is that diversity of education really helped >>you in your Yes, in my undergrad is from Smith College, which is a liberal arts school, very strong liberal arts foundation. And when I went to visit, one of the math professors that I met told me that he, you know, he considers studying math, not just to make you better at math, but that it makes you a better thinker. And you can take in much more information and sort of question assumptions and try to build a foundation for what? The problem that you're trying to think through is. And I just found that extremely interesting. And I also, you know, I haven't undeclared major in Latin American studies, and I studied like neuroscience and quantum physics for non experts and film class and all of these other things that I don't know if I would have had the same opportunity at a more technical school, and I just found it really challenging and satisfying to be able to push myself to think in different ways. I even took a poetry writing class I did not write good poetry, but the experience really stuck with me because it was about pushing myself outside of my own boundaries. >>And would you recommend having this kind of like diverse education to young women now who are looking >>and I absolutely love it? I mean, I think, you know, there's some people believe that instead of thinking about steam, we should be talking instead of thinking about stem. Rather, we should be talking about steam, which adds the arts education in there, and liberal arts is one of them. And I think that now, in these conversations that we have about biases in data and ML and AI and understanding, fairness and accountability, accountability bitterly, it's a hardware. Apparently, I think that a strong, uh, cross disciplinary collaborative and even on an individual level, cross disciplinary education is really the only way that we're gonna be able to make those connections to understand what kind of second order effects for having based on the decisions of parameters for a model. In a local sense, we're optimizing and doing a great job. But what are the global consequences of those decisions? And I think that that kind of interdisciplinary approach to education as an individual and collaboration as a team is really the only way. >>And speaking about bias. Earlier, we heard that diversity is great because it brings out new perspectives, and it also helps to reduce that unfair bias. So how it Spotify have you managed? Or has Spotify managed to create a more diverse team? >>Yeah, so I mean, it starts with recruiting. It starts with what kind of messaging we put out there, and there's a great team that thinks about that exclusively. And they're really pushing all of us as managers. As I seizes leaders to really think about the decisions in the way that we talk about things and all of these micro decisions that we make and how that creates an inclusive environments, it's not just about diversity. It's also about making people feel like this is where they should be. On a personal level, you know, I talk a lot with younger folks and people who are trying to just figure out what their place is in technology, whether it be because they come from a different culture, >>there are, >>you know, they might be gender, non binary. They might be women who feel like there is in a place for them. It's really about, You know, the things that I think about is because you're different. Your voice is needed even more. You know, like your voice matters and we need to figure out. And I always ask, How can I highlight your voice more? You know, how can I help? I have a tiny, tiny bit of power and influence. You know, more than some other folks. How can I help other people acquire that as well? >>Lilian, thank you so much for your insight. Thank you for being on the Cube. Thank you. I'm your host, Sonia today. Ari. Thank you for watching and stay tuned for more. Yeah, yeah.
SUMMARY :
Brought to you by Silicon Angle Media. Thank you so much for having me. that help power the larger personalization experiences throughout Spotify. Can you tell us a little bit more about the personalization? and we collaborate across different product areas to understand what are the foundational data sets and How do you feel about that? And so I think it's great to show younger And what advice would you would you give to? Yeah, so my my big advice is to follow your interests. And speaking of mixed methods, insights is a team that's a mixed methods research groups. in bringing in research from research scientists and the product side to be able to understand those needs And like every test that we run, we learn a lot about not just the individual thing. you know, open to new music is also very popular. Are there any surprises or interesting stories that you have about, um, interesting user experiences? can give you an example from my experience. I don't consider And I was like having this moment where I went through this path of Wait, so quickly discovered that so you have a degree in industrial mathematics, And I also, you know, I haven't undeclared major in Latin American studies, I mean, I think, you know, there's some people believe that So how it Spotify have you managed? As I seizes leaders to really think about the decisions in the way that we talk And I always ask, How can I highlight your voice more? Lilian, thank you so much for your insight.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lillian Carrasquillo | PERSON | 0.99+ |
Lillian Kearse | PERSON | 0.99+ |
Lilian | PERSON | 0.99+ |
Sonia | PERSON | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Ari | PERSON | 0.99+ |
Sonia Atari | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Silicon Angle Media | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Stanford University | ORGANIZATION | 0.99+ |
Smith College | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
Keo | PERSON | 0.98+ |
last year | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
Dolly Parton | PERSON | 0.98+ |
Margo Price | PERSON | 0.97+ |
Stanford Women in Data Science | EVENT | 0.97+ |
1st 5 years ago | DATE | 0.95+ |
Woods Women in Data Science Conference | EVENT | 0.94+ |
Latin American | OTHER | 0.9+ |
Taylor Swift | PERSON | 0.88+ |
second order | QUANTITY | 0.82+ |
Stanford | ORGANIZATION | 0.82+ |
2020 | DATE | 0.81+ |
WiDS) Conference 2020 | EVENT | 0.8+ |
a few months ago | DATE | 0.77+ |
end | DATE | 0.61+ |
this morning | DATE | 0.6+ |
fifth | EVENT | 0.5+ |
data | TITLE | 0.5+ |
Cube | COMMERCIAL_ITEM | 0.5+ |
annual | QUANTITY | 0.4+ |
Day One Morning Keynote | Red Hat Summit 2018
[Music] [Music] [Music] [Laughter] [Laughter] [Laughter] [Laughter] [Music] [Music] [Music] [Music] you you [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Applause] [Music] wake up feeling blessed peace you warned that Russia ain't afraid to show it I'll expose it if I dressed up riding in that Chester roasted nigga catch you slippin on myself rocks on I messed up like yes sir [Music] [Music] [Music] [Music] our program [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] you are not welcome to Red Hat summit 2018 2018 [Music] [Music] [Music] [Laughter] [Music] Wow that is truly the coolest introduction I've ever had thank you Wow I don't think I feel cool enough to follow an interaction like that Wow well welcome to the Red Hat summit this is our 14th annual event and I have to say looking out over this audience Wow it's great to see so many people here joining us this is by far our largest summit to date not only did we blow through the numbers we've had in the past we blew through our own expectations this year so I know we have a pretty packed house and I know people are still coming in so it's great to see so many people here it's great to see so many familiar faces when I had a chance to walk around earlier it's great to see so many new people here joining us for the first time I think the record attendance is an indication that more and more enterprises around the world are seeing the power of open source to help them with their challenges that they're facing due to the digital transformation that all of enterprises around the world are going through the theme for the summit this year is ideas worth exploring and we intentionally chose that because as much as we are all going through this digital disruption and the challenges associated with it one thing I think is becoming clear no one person and certainly no one company has the answers to these challenges right this isn't a problem where you can go buy a solution this is a set of capabilities that we all need to build it's a set of cultural changes that we all need to go through and that's going to require the best ideas coming from so many different places so we're not here saying we have the answers we're trying to convene the conversation right we want to serve as a catalyst bringing great minds together to share ideas so we all walk out of here at the end of the week a little wiser than when we first came here we do have an amazing agenda for you we have over 7,000 attendees we may be pushing 8,000 by the time we got through this morning we have 36 keynote speakers and we have a hundred and twenty-five breakout sessions and have to throw in one plug scheduling 325 breakout sessions is actually pretty difficult and so we used the Red Hat business optimizer which is an AI constraint solver that's new in the Red Hat decision manager to help us plan the summit because we have individuals who have a clustered set of interests and we want to make sure that when we schedule two breakout sessions we do it in a way that we don't have overlapping sessions that are really important to the same individual so we tried to use this tool and what we understand about people's interest in history of what they wanted to do to try to make sure that we spaced out different times for things of similar interests for similar people as well as for people who stood in the back of breakouts before and I know I've done that too we've also used it to try to optimize room size so hopefully we will do our best to make sure that we've appropriately sized the spaces for those as well so it's really a phenomenal tool and I know it's helped us a lot this year in addition to the 325 breakouts we have a lot of our customers on stage during the main sessions and so you'll see demos you'll hear from partners you'll hear stories from so many of our customers not on our point of view of how to use these technologies but their point of views of how they actually are using these technologies to solve their problems and you'll hear over and over again from those keynotes that it's not just about the technology it's about how people are changing how people are working to innovate to solve those problems and while we're on the subject of people I'd like to take a moment to recognize the Red Hat certified professional of the year this is known award we do every year I love this award because it truly recognizes an individual for outstanding innovation for outstanding ideas for truly standing out in how they're able to help their organization with Red Hat technologies Red Hat certifications help system administrators application developers IT architects to further their careers and help their organizations by being able to advance their skills and knowledge of Red Hat products and this year's winner really truly is a great example about how their curiosity is helped push the limits of what's possible with technology let's hear a little more about this year's winner when I was studying at the University I had computer science as one of my subjects and that's what created the passion from the very beginning they were quite a few institutions around my University who were offering Red Hat Enterprise Linux as a course and a certification paths through to become an administrator Red Hat Learning subscription has offered me a lot more than any other trainings that have done so far that gave me exposure to so many products under red hair technologies that I wasn't even aware of I started to think about the better ways of how these learnings can be put into the real life use cases and we started off with a discussion with my manager saying I have to try this product and I really want to see how it really fits in our environment and that product was Red Hat virtualization we went from deploying rave and then OpenStack and then the open shift environment we wanted to overcome some of the things that we saw as challenges to the speed and rapidity of release and code etc so it made perfect sense and we were able to do it in a really short space of time so you know we truly did use it as an Innovation Lab I think idea is everything ideas can change the way you see things an Innovation Lab was such an idea that popped into my mind one fine day and it has transformed the way we think as a team and it's given that playpen to pretty much everyone to go and test their things investigate evaluate do whatever they like in a non-critical non production environment I recruited Neha almost 10 years ago now I could see there was a spark a potential with it and you know she had a real Drive a real passion and you know here we are nearly ten years later I'm Neha Sandow I am a Red Hat certified engineer all right well everyone please walk into the states to the stage Neha [Music] [Applause] congratulations thank you [Applause] I think that - well welcome to the red has some of this is your first summit yes it is thanks so much well fantastic sure well it's great to have you here I hope you have a chance to engage and share some of your ideas and enjoy the week thank you thank you congratulations [Applause] neha mentioned that she first got interest in open source at university and it made me think red hats recently started our Red Hat Academy program that looks to programmatically infuse Red Hat technologies in universities around the world it's exploded in a way we had no idea it's grown just incredibly rapidly which i think shows the interest that there really is an open source and working in an open way at university so it's really a phenomenal program I'm also excited to announce that we're launching our newest open source story this year at Summit it's called the science of collective discovery and it looks at what happens when communities use open hardware to monitor the environment around them and really how they can make impactful change based on that technologies the rural premier that will be at 5:15 on Wednesday at McMaster Oni West and so please join us for a drink and we'll also have a number of the experts featured in that and you can have a conversation with them as well so with that let's officially start the show please welcome red hat president of products and technology Paul Cormier [Music] Wow morning you know I say it every year I'm gonna say it again I know I repeat myself it's just amazing we are so proud here to be here today too while you all week on how far we've come with opens with open source and with the products that we that we provide at Red Hat so so welcome and I hope the pride shows through so you know I told you Seven Summits ago on this stage that the future would be open and here we are just seven years later this is the 14th summit but just seven years later after that and much has happened and I think you'll see today and this week that that prediction that the world would be open was a pretty safe predict prediction but I want to take you just back a little bit to see how we started here and it's not just how Red Hat started here this is an open source in Linux based computing is now in an industry norm and I think that's what you'll you'll see in here this week you know we talked back then seven years ago when we put on our prediction about the UNIX error and how Hardware innovation with x86 was it was really the first step in a new era of open innovation you know companies like Sun Deck IBM and HP they really changed the world the computing industry with their UNIX models it was that was really the rise of computing but I think what we we really saw then was that single company innovation could only scale so far could really get so far with that these companies were very very innovative but they coupled hardware innovation with software innovation and as one company they could only solve so many problems and even which comp which even complicated things more they could only hire so many people in each of their companies Intel came on the scene back then as the new independent hardware player and you know that was really the beginning of the drive for horizontal computing power and computing this opened up a brand new vehicle for hardware innovation a new hardware ecosystem was built around this around this common hardware base shortly after that Stallman and leanness they had a vision of his of an open model that was created and they created Linux but it was built around Intel this was really the beginning of having a software based platform that could also drive innovation this kind of was the beginning of the changing of the world here that system-level innovation now having a hardware platform that was ubiquitous and a software platform that was open and ubiquitous it really changed this system level innovation and that continues to thrive today it was only possible because it was open this could not have happened in a closed environment it allowed the best ideas from anywhere from all over to come in in win only because it was the best idea that's what drove the rate of innovation at the pace you're seeing today and it which has never been seen before we at Red Hat we saw the need to bring this innovation to solve real-world problems in the enterprise and I think that's going to be the theme of the show today you're going to see us with our customers and partners talking about and showing you some of those real-world problems that we are sought solving with this open innovation we created rel back then for this for the enterprise it started it's it it wasn't successful because it's scaled it was secure and it was enterprise ready it once again changed the industry but this time through open innovation this gave the hardware ecosystem a software platform this open software platform gave the hardware ecosystem a software platform to build around it Unleashed them the hardware side to compete and thrive it enabled innovation from the OEMs new players building cheaper faster servers even new architectures from armed to power sprung up with this change we have seen an incredible amount of hardware innovation over the last 15 years that same innovation happened on the software side we saw powerful implementations of bare metal Linux distributions out in the market in fact at one point there were 300 there are over 300 distributions out in the market on the foundation of Linux powerful open-source equivalents were even developed in every area of Technology databases middleware messaging containers anything you could imagine innovation just exploded around the Linux platform in innovation it's at the core also drove virtualization both Linux and virtualization led to another area of innovation which you're hearing a lot about now public cloud innovation this innovation started to proceed at a rate that we had never seen before we had never experienced this in the past in this unprecedented speed of innovation and software was now possible because you didn't need a chip foundry in order to innovate you just needed great ideas in the open platform that was out there customers seeing this innovation in the public cloud sparked it sparked their desire to build their own linux based cloud platforms and customers are now are now bringing that cloud efficiency on-premise in their own data centers public clouds demonstrated so much efficiency the data centers and architects wanted to take advantage of it off premise on premise I'm sorry within their own we don't within their own controlled environments this really allowed companies to make the most of existing investments from data centers to hardware they also gained many new advantages from data sovereignty to new flexible agile approaches I want to bring Burr and his team up here to take a look at what building out an on-premise cloud can look like today Bure take it away I am super excited to be with all of you here at Red Hat summit I know we have some amazing things to show you throughout the week but before we dive into this demonstration I want you to take just a few seconds just a quick moment to think about that really important event your life that moment you turned on your first computer maybe it was a trs-80 listen Claire and Atari I even had an 83 b2 at one point but in my specific case I was sitting in a classroom in Hawaii and I could see all the way from Diamond Head to Pearl Harbor so just keep that in mind and I turn on an IBM PC with dual floppies I don't remember issuing my first commands writing my first level of code and I was totally hooked it was like a magical moment and I've been hooked on computers for the last 30 years so I want you to hold that image in your mind for just a moment just a second while we show you the computers we have here on stage let me turn this over to Jay fair and Dini here's our worldwide DevOps manager and he was going to show us his hardware what do you got Jay thank you BER good morning everyone and welcome to Red Hat summit we have so many cool things to show you this week I am so happy to be here and you know my favorite thing about red hat summit is our allowed to kind of share all of our stories much like bird just did we also love to you know talk about the hardware and the technology that we brought with us in fact it's become a bit of a competition so this year we said you know let's win this thing and we actually I think we might have won we brought a cloud with us so right now this is a private cloud for throughout the course of the week we're going to turn this into a very very interesting open hybrid cloud right before your eyes so everything you see here will be real and happening right on this thing right behind me here so thanks for our four incredible partners IBM Dell HP and super micro we've built a very vendor heterogeneous cloud here extra special thanks to IBM because they loaned us a power nine machine so now we actually have multiple architectures in this cloud so as you know one of the greatest benefits to running Red Hat technology is that we run on just about everything and you know I can't stress enough how powerful that is how cost-effective that is and it just makes my life easier to be honest so if you're interested the people that built this actual rack right here gonna be hanging out in the customer success zone this whole week it's on the second floor the lobby there and they'd be glad to show you exactly how they built this thing so let me show you what we actually have in this rack so contained in this rack we have 1056 physical chorus right here we have five and a half terabytes of RAM and just in case we threw 50 terabytes of storage in this thing so burr that's about two million times more powerful than that first machine you boot it up thanks to a PC we're actually capable of putting all the power needs and cooling right in this rack so there's your data center right there you know it occurred to me last night that I can actually pull the power cord on this thing and kick it up a notch we could have the world's first mobile portable hybrid cloud so I'm gonna go ahead and unplug no no no no no seriously it's not unplug the thing we got it working now well Berg gets a little nervous but next year we're rolling this thing around okay okay so to recap multiple vendors check multiple architectures check multiple public clouds plug right into this thing check and everything everywhere is running the same software from Red Hat so that is a giant check so burn Angus why don't we get the demos rolling awesome so we have totally we have some amazing hardware amazing computers on this stage but now we need to light it up and we have Angus Thomas who represents our OpenStack engineering team and he's going to show us what we can do with this awesome hardware Angus thank you Beth so this was an impressive rack of hardware to Joe has bought a pocket stage what I want to talk about today is putting it to work with OpenStack platform director we're going to turn it from a lot of potential into a flexible scalable private cloud we've been using director for a while now to take care of managing hardware and orchestrating the deployment of OpenStack what's new is that we're bringing the same capabilities for on-premise manager the deployment of OpenShift director deploying OpenShift in this way is the best of both worlds it's bare-metal performance but with an underlying infrastructure as a service that can take care of deploying in new instances and scaling out and a lot of the things that we expect from a cloud provider director is running on a virtual machine on Red Hat virtualization at the top of the rack and it's going to bring everything else under control what you can see on the screen right now is the director UI and as you see some of the hardware in the rack is already being managed at the top level we have information about the number of cores in the amount of RAM and the disks that each machine have if we dig in a bit there's information about MAC addresses and IPs and the management interface the BIOS kernel version dig a little deeper and there is information about the hard disks all of this is important because we want to be able to make sure that we put in workloads exactly where we want them Jay could you please power on the two new machines at the top of the rack sure all right thank you so when those two machines come up on the network director is going to see them see that they're new and not already under management and is it immediately going to go into the hardware inspection that populates this database and gets them ready for use so we also have profiles as you can see here profiles are the way that we match the hardware in a machine to the kind of workload that it's suited to this is how we make sure that machines that have all the discs run Seth and machines that have all the RAM when our application workouts for example there's two ways these can be set when you're dealing with a rack like this you could go in an individually tag each machine but director scales up to data centers so we have a rules matching engine which will automatically take the hardware profile of a new machine and make sure it gets tagged in exactly the right way so we can automatically discover new machines on the network and we can automatically match them to a profile that's how we streamline and scale up operations now I want to talk about deploying the software we have a set of validations we've learned over time about the Miss configurations in the underlying infrastructure which can cause the deployment of a multi node distributed application like OpenStack or OpenShift to fail if you have the wrong VLAN tags on a switch port or DHCP isn't running where it should be for example you can get into a situation which is really hard to debug a lot of our validations actually run before the deployment they look at what you're intending to deploy and they check in the environment is the way that it should be and they'll preempts problems and obviously preemption is a lot better than debugging something new that you probably have not seen before is director managing multiple deployments of different things side by side before we came out on stage we also deployed OpenStack on this rack just to keep me honest let me jump over to OpenStack very quickly a lot of our opens that customers will be familiar with this UI and the bare metal deployment of OpenStack on our rack is actually running a set of virtual machines which is running Gluster you're going to see that put to work later on during the summit Jay's gone to an awful lot effort to get this Hardware up on the stage so we're going to use it as many different ways as we can okay let's deploy OpenShift if I switch over to the deployed a deployment plan view there's a few steps first thing you need to do is make sure we have the hardware I already talked about how director manages hardware it's smart enough to make sure that it's not going to attempt to deploy into machines they're already in use it's only going to deploy on machines that have the right profile but I think with the rack that we have here we've got enough next thing is the deployment configuration this is where you get to customize exactly what's going to be deployed to make sure that it really matches your environment if they're external IPs for additional services you can set them here whatever it takes to make sure that the deployment is going to work for you as you can see on the screen we have a set of options around enable TLS for encryption network traffic if I dig a little deeper there are options around enabling ipv6 and network isolation so that different classes of traffic there are over different physical NICs okay then then we have roles now roles this is essentially about the software that's going to be put on each machine director comes with a set of roles for a lot of the software that RedHat supports and you can just use those or you can modify them a little bit if you need to add a monitoring agent or whatever it might be or you can create your own custom roles director has quite a rich syntax for custom role definition and custom Network topologies whatever it is you need in order to make it work in your environment so the rawls that we have right now are going to give us a working instance of openshift if I go ahead and click through the validations are all looking green so right now I can click the button start to the deploy and you will see things lighting up on the rack directors going to use IPMI to reboot the machines provisioned and with a trail image was the containers on them and start up the application stack okay so one last thing once the deployment is done you're going to want to keep director around director has a lot of capabilities around what we call de to operational management bringing in new Hardware scaling out deployments dealing with updates and critically doing upgrades as well so having said all of that it is time for me to switch over to an instance of openshift deployed by a director running on bare metal on our rack and I need to hand this over to our developer team so they can show what they can do it thank you that is so awesome Angus so what you've seen now is going from bare metal to the ultimate private cloud with OpenStack director make an open shift ready for our developers to build their next generation applications thank you so much guys that was totally awesome I love what you guys showed there now I have the honor now I have the honor of introducing a very special guest one of our earliest OpenShift customers who understands the necessity of the private cloud inside their organization and more importantly they're fundamentally redefining their industry please extend a warm welcome to deep mar Foster from Amadeus well good morning everyone a big thank you for having armadillos here and myself so as it was just set I'm at Mario's well first of all we are a large IT provider in the travel industry so serving essentially Airlines hotel chains this distributors like Expedia and others we indeed we started very early what was OpenShift like a bit more than three years ago and we jumped on it when when Retta teamed with Google to bring in kubernetes into this so let me quickly share a few figures about our Mario's to give you like a sense of what we are doing and the scale of our operations so some of our key KPIs one of our key metrics is what what we call passenger borders so that's the number of customers that physically board a plane over the year so through our systems it's roughly 1.6 billion people checking in taking the aircrafts on under the Amarillo systems close to 600 million travel agency bookings virtually all airlines are on the system and one figure I want to stress out a little bit is this one trillion availability requests per day that's when I read this figure my mind boggles a little bit so this means in continuous throughput more than 10 million hits per second so of course these are not traditional database transactions it's it's it's highly cached in memory and these applications are running over like more than 100,000 course so it's it's it's really big stuff so today I want to give some concrete feedback what we are doing so I have chosen two applications products of our Mario's that are currently running on production in different in different hosting environments as the theme here is of this talk hybrid cloud and so I want to give some some concrete feedback of how we architect the applications and of course it stays relatively high level so here I have taken one of our applications that is used in the hospitality environment so it's we have built this for a very large US hotel chain and it's currently in in full swing brought into production so like 30 percent of the globe or 5,000 plus hotels are on this platform not so here you can see that we use as the path of course on openshift on that's that's the most central piece of our hybrid cloud strategy on the database side we use Oracle and Couchbase Couchbase is used for the heavy duty fast access more key value store but also to replicate data across two data centers in this case it's running over to US based data centers east and west coast topology that are fit so run by Mario's that are fit with VMware on for the virtualization OpenStack on top of it and then open shift to host and welcome the applications on the right hand side you you see the kind of tools if you want to call them tools that we use these are the principal ones of course the real picture is much more complex but in essence we use terraform to map to the api's of the underlying infrastructure so they are obviously there are differences when you run on OpenStack or the Google compute engine or AWS Azure so some some tweaking is needed we use right at ansible a lot we also use puppet so you can see these are really the big the big pieces of of this sense installation and if we look to the to the topology again very high high level so these two locations basically map the data centers of our customers so they are in close proximity because the response time and the SLA is of this application is are very tight so that's an example of an application that is architectures mostly was high ability and high availability in minds not necessarily full global worldwide scaling but of course it could be scaled but here the idea is that we can swing from one data center to the unit to the other in matters of of minutes both take traffic data is fully synchronized across those data centers and while the switch back and forth is very fast the second example I have taken is what we call the shopping box this is when people go to kayak or Expedia and they're getting inspired where they want to travel to this is really the piece that shoots most of transit of the transactions into our Mario's so we architect here more for high scalability of course availability is also a key but here scaling and geographical spread is very important so in short it runs partially on-premise in our Amarillo Stata Center again on OpenStack and we we deploy it mostly in the first step on the Google compute engine and currently as we speak on Amazon on AWS and we work also together with Retta to qualify the whole show on Microsoft Azure here in this application it's it's the same building blocks there is a large swimming aspect to it so we bring Kafka into this working with records and another partner to bring Kafka on their open shift because at the end we want to use open shift to administrate the whole show so over time also databases and the topology here when you look to the physical deployment topology while it's very classical we use the the regions and the availability zone concept so this application is spread over three principal continental regions and so it's again it's a high-level view with different availability zones and in each of those availability zones we take a hit of several 10,000 transactions so that was it really in very short just to give you a glimpse on how we implement hybrid clouds I think that's the way forward it gives us a lot of freedom and it allows us to to discuss in a much more educated way with our customers that sometimes have already deals in place with one cloud provider or another so for us it's a lot of value to set two to leave them the choice basically what up that was a very quick overview of what we are doing we were together with records are based on open shift essentially here and more and more OpenStack coming into the picture hope you found this interesting thanks a lot and have a nice summer [Applause] thank you so much deeper great great solution we've worked with deep Marv and his team for a long for a long time great solution so I want to take us back a little bit I want to circle back I sort of ended talking a little bit about the public cloud so let's circle back there you know even so even though some applications need to run in various footprints on premise there's still great gains to be had that for running certain applications in the public cloud a public cloud will be as impactful to to the industry as as UNIX era was of computing was but by itself it'll have some of the same limitations and challenges that that model had today there's tremendous cloud innovation happening in the public cloud it's being driven by a handful of massive companies and much like the innovation that sundeck HP and others drove in a you in the UNIX era of community of computing many customers want to take advantage of the best innovation no matter where it comes from buddy but as they even eventually saw in the UNIX era they can't afford the best innovation at the cost of a siloed operating environment with the open community we are building a hybrid application platform that can give you access to the best innovation no matter which vendor or which cloud that it comes from letting public cloud providers innovate and services beyond what customers or anyone can one provider can do on their own such as large scale learning machine learning or artificial intelligence built on the data that's unique probably to that to that one cloud but consumed in a common way for the end customer across all applications in any environment on any footprint in in their overall IT infrastructure this is exactly what rel brought brought to our customers in the UNIX era of computing that consistency across any of those footprints obviously enterprises will have applications for all different uses some will live on premise some in the cloud hybrid cloud is the only practical way forward I think you've been hearing that from us for a long time it is the only practical way forward and it'll be as impactful as anything we've ever seen before I want to bring Byrne his team back to see a hybrid cloud deployment in action burr [Music] all right earlier you saw what we did with taking bare metal and lighting it up with OpenStack director and making it openshift ready for developers to build their next generation applications now we want to show you when those next turn and generation applications and what we've done is we take an open shift and spread it out and installed it across Asia and Amazon a true hybrid cloud so with me on stage today as Ted who's gonna walk us through an application and Brent Midwood who's our DevOps engineer who's gonna be making sure he's monitoring on the backside that we do make sure we do a good job so at this point Ted what have you got for us Thank You BER and good morning everybody this morning we are running on the stage in our private cloud an application that's providing its providing fraud detection detect serves for financial transactions and our customer base is rather large and we occasionally take extended bursts of traffic of heavy traffic load so in order to keep our latency down and keep our customers happy we've deployed extra service capacity in the public cloud so we have capacity with Microsoft Azure in Texas and with Amazon Web Services in Ohio so we use open chip container platform on all three locations because openshift makes it easy for us to deploy our containerized services wherever we want to put them but the question still remains how do we establish seamless communication across our entire enterprise and more importantly how do we balance the workload across these three locations in such a way that we efficiently use our resources and that we give our customers the best possible experience so this is where Red Hat amq interconnect comes in as you can see we've deployed a MQ interconnect alongside our fraud detection applications in all three locations and if I switch to the MQ console we'll see the topology of the app of the network that we've created here so the router inside the on stage here has made connections outbound to the public routers and AWS and Azure these connections are secured using mutual TLS authentication and encrypt and once these connections are established amq figures out the best way auda matically to route traffic to where it needs to get to so what we have right now is a distributed reliable broker list message bus that expands our entire enterprise now if you want to learn more about this make sure that you catch the a MQ breakout tomorrow at 11:45 with Jack Britton and David Ingham let's have a look at the message flow and we'll dive in and isolate the fraud detection API that we're interested in and what we see is that all the traffic is being handled in the private cloud that's what we expect because our latencies are low and they're acceptable but now if we take a little bit of a burst of increased traffic we're gonna see that an EQ is going to push a little a bi traffic out onto the out to the public cloud so as you're picking up some of the load now to keep the Layton sees down now when that subsides as your finishes up what it's doing and goes back offline now if we take a much bigger load increase you'll see two things first of all asher is going to take a bigger proportion than it did before and Amazon Web Services is going to get thrown into the fray as well now AWS is actually doing less work than I expected it to do I expected a little bit of bigger a slice there but this is a interesting illustration of what's going on for load balancing mq load balancing is sending requests to the services that have the lowest backlog and in order to keep the Layton sees as steady as possible so AWS is probably running slowly for some reason and that's causing a and Q to push less traffic its way now the other thing you're going to notice if you look carefully this graph fluctuate slightly and those fluctuations are caused by all the variances in the network we have the cloud on stage and we have clouds in in the various places across the country there's a lot of equipment locked layers of virtualization and networking in between and we're reacting in real-time to the reality on the digital street so BER what's the story with a to be less I noticed there's a problem right here right now we seem to have a little bit performance issue so guys I noticed that as well and a little bit ago I actually got an alert from red ahead of insights letting us know that there might be some potential optimizations we could make to our environment so let's take a look at insights so here's the Red Hat insights interface you can see our three OpenShift deployments so we have the set up here on stage in San Francisco we have our Azure deployment in Texas and we also have our AWS deployment in Ohio and insights is highlighting that that deployment in Ohio may have some issues that need some attention so Red Hat insights collects anonymized data from manage systems across our customer environment and that gives us visibility into things like vulnerabilities compliance configuration assessment and of course Red Hat subscription consumption all of this is presented in a SAS offering so it's really really easy to use it requires minimal infrastructure upfront and it provides an immediate return on investment what insights is showing us here is that we have some potential issues on the configuration side that may need some attention from this view I actually get a look at all the systems in our inventory including instances and containers and you can see here on the left that insights is highlighting one of those instances as needing some potential attention it might be a candidate for optimization this might be related to the issues that you were seeing just a minute ago insights uses machine learning and AI techniques to analyze all collected data so we combine collected data from not only the system's configuration but also with other systems from across the Red Hat customer base this allows us to compare ourselves to how we're doing across the entire set of industries including our own vertical in this case the financial services industry and we can compare ourselves to other customers we also get access to tailored recommendations that let us know what we can do to optimize our systems so in this particular case we're actually detecting an issue here where we are an outlier so our configuration has been compared to other configurations across the customer base and in this particular instance in this security group were misconfigured and so insights actually gives us the steps that we need to use to remediate the situation and the really neat thing here is that we actually get access to a custom ansible playbook so if we want to automate that type of a remediation we can use this inside of Red Hat ansible tower Red Hat satellite Red Hat cloud forms it's really really powerful the other thing here is that we can actually apply these recommendations right from within the Red Hat insights interface so with just a few clicks I can select all the recommendations that insights is making and using that built-in ansible automation I can apply those recommendations really really quickly across a variety of systems this type of intelligent automation is really cool it's really fast and powerful so really quickly here we're going to see the impact of those changes and so we can tell that we're doing a little better than we were a few minutes ago when compared across the customer base as well as within the financial industry and if we go back and look at the map we should see that our AWS employment in Ohio is in a much better state than it was just a few minutes ago so I'm wondering Ted if this had any effect and might be helping with some of the issues that you were seeing let's take a look looks like went green now let's see what it looks like over here yeah doesn't look like the configuration is taking effect quite yet maybe there's some delay awesome fantastic the man yeah so now we're load balancing across the three clouds very much fantastic well I have two minute Ted I truly love how we can route requests and dynamically load transactions across these three clouds a truly hybrid cloud native application you guys saw here on on stage for the first time and it's a fully portable application if you build your applications with openshift you can mover from cloud to cloud to cloud on stage private all the way out to the public said it's totally awesome we also have the application being fully managed by Red Hat insights I love having that intelligence watching over us and ensuring that we're doing everything correctly that is fundamentally awesome thank you so much for that well we actually have more to show you but you're going to wait a few minutes longer right now we'd like to welcome Paul back to the stage and we have a very special early Red Hat customer an Innovation Award winner from 2010 who's been going boldly forward with their open hybrid cloud strategy please give a warm welcome to Monty Finkelstein from Citigroup [Music] [Music] hi Marty hey Paul nice to see you thank you very much for coming so thank you for having me Oh our pleasure if you if you wanted to we sort of wanted to pick your brain a little bit about your experiences and sort of leading leading the charge in computing here so we're all talking about hybrid cloud how has the hybrid cloud strategy influenced where you are today in your computing environment so you know when we see the variable the various types of workload that we had an hour on from cloud we see the peaks we see the valleys we see the demand on the environment that we have we really determined that we have to have a much more elastic more scalable capability so we can burst and stretch our environments to multiple cloud providers these capabilities have now been proven at City and of course we consider what the data risk is as well as any regulatory requirement so how do you how do you tackle the complexity of multiple cloud environments so every cloud provider has its own unique set of capabilities they have they're own api's distributions value-added services we wanted to make sure that we could arbitrate between the different cloud providers maintain all source code and orchestration capabilities on Prem to drive those capabilities from within our platforms this requires controlling the entitlements in a cohesive fashion across our on Prem and Wolfram both for security services automation telemetry as one seamless unit can you talk a bit about how you decide when you to use your own on-premise infrastructure versus cloud resources sure so there are multiple dimensions that we take into account right so the first dimension we talk about the risk so low risk - high risk and and really that's about the data classification of the environment we're talking about so whether it's public or internal which would be considered low - ooh confidential PII restricted sensitive and so on and above which is really what would be considered a high-risk the second dimension would be would focus on demand volatility and responsiveness sensitivity so this would range from low response sensitivity and low variability of the type of workload that we have to the high response sensitivity and high variability of the workload the first combination that we focused on is the low risk and high variability and high sensitivity for response type workload of course any of the workloads we ensure that we're regulatory compliant as well as we achieve customer benefits with within this environment so how can we give developers greater control of their their infrastructure environments and still help operations maintain that consistency in compliance so the main driver is really to use the public cloud is scale speed and increased developer efficiencies as well as reducing cost as well as risk this would mean providing develop workspaces and multiple environments for our developers to quickly create products for our customers all this is done of course in a DevOps model while maintaining the source and artifacts registry on-prem this would allow our developers to test and select various middleware products another product but also ensure all the compliance activities in a centrally controlled repository so we really really appreciate you coming by and sharing that with us today Monte thank you so much for coming to the red echo thanks a lot thanks again tamati I mean you know there's these real world insight into how our products and technologies are really running the businesses today that's that's just the most exciting part so thank thanks thanks again mati no even it with as much progress as you've seen demonstrated here and you're going to continue to see all week long we're far from done so I want to just take us a little bit into the path forward and where we we go today we've talked about this a lot innovation today is driven by open source development I don't think there's any question about that certainly not in this room and even across the industry as a whole that's a long way that we've come from when we started our first summit 14 years ago with over a million open source projects out there this unit this innovation aggregates into various community platforms and it finally culminates in commercial open source based open source developed products these products run many of the mission-critical applications in business today you've heard just a couple of those today here on stage but it's everywhere it's running the world today but to make customers successful with that interact innovation to run their real-world business applications these open source products have to be able to leverage increase increasingly complex infrastructure footprints we must also ensure a common base for the developer and ultimately the application no matter which footprint they choose as you heard mati say the developers want choice here no matter which no matter which footprint they are ultimately going to run their those applications on they want that flexibility from the data center to possibly any public cloud out there in regardless of whether that application was built yesterday or has been running the business for the last 10 years and was built on 10-year old technology this is the flexibility that developers require today but what does different infrastructure we may require different pieces of the technical stack in that deployment one example of this that Effects of many things as KVM which provides the foundation for many of those use cases that require virtualization KVM offers a level of consistency from a technical perspective but rel extends that consistency to add a level of commercial and ecosystem consistency for the application across all those footprints this is very important in the enterprise but while rel and KVM formed the foundation other technologies are needed to really satisfy the functions on these different footprints traditional virtualization has requirements that are satisfied by projects like overt and products like Rev traditional traditional private cloud implementations has requirements that are satisfied on projects like OpenStack and products like Red Hat OpenStack platform and as applications begin to become more container based we are seeing many requirements driven driven natively into containers the same Linux in different forms provides this common base across these four footprints this level of compatible compatibility is critical to operators who must best utilize the infinite must better utilize secure and deploy the infrastructure that they have and they're responsible for developers on the other hand they care most about having a platform that can creates that consistency for their applications they care about their services and the services that they need to consume within those applications and they don't want limitations on where they run they want service but they want it anywhere not necessarily just from Amazon they want integration between applications no matter where they run they still want to run their Java EE now named Jakarta EE apps and bring those applications forward into containers and micro services they need able to orchestrate these frameworks and many more across all these different footprints in a consistent secure fashion this creates natural tension between development and operations frankly customers amplify this tension with organizational boundaries that are holdover from the UNIX era of computing it's really the job of our platforms to seamlessly remove these boundaries and it's the it's the goal of RedHat to seamlessly get you from the old world to the new world we're gonna show you a really cool demo demonstration now we're gonna show you how you can automate this transition first we're gonna take a Windows virtual machine from a traditional VMware deployment we're gonna convert it into a KVM based virtual machine running in a container all under the kubernetes umbrella this makes virtual machines more access more accessible to the developer this will accelerate the transformation of those virtual machines into cloud native container based form well we will work this prot we will worked as capability over the product line in the coming releases so we can strike the balance of enabling our developers to move in this direction we want to be able to do this while enabling mission-critical operations to still do their job so let's bring Byrne his team back up to show you this in action for one more thanks all right what Red Hat we recognized that large organizations large enterprises have a substantial investment and legacy virtualization technology and this is holding you back you have thousands of virtual machines that need to be modernized so what you're about to see next okay it's something very special with me here on stage we have James Lebowski he's gonna be walking us through he's represents our operations folks and he's gonna be walking us through a mass migration but also is Itamar Hine who's our lead developer of a very special application and he's gonna be modernizing container izing and optimizing our application all right so let's get started James thanks burr yeah so as you can see I have a typical VMware environment here I'm in the vSphere client I've got a number of virtual machines a handful of them that make up my one of my applications for my development environment in this case and what I want to do is migrate those over to a KVM based right at virtualization environment so what I'm gonna do is I'm gonna go to cloud forms our cloud management platform that's our first step and you know cloud forms actually already has discovered both my rev environment and my vSphere environment and understands the compute network and storage there so you'll notice one of the capabilities we built is this new capability called migrations and underneath here I could begin to there's two steps and the first thing I need to do is start to create my infrastructure mappings what this will allow me to do is map my compute networking storage between vSphere and Rev so cloud forms understands how those relate let's go ahead and create an infrastructure mapping I'll call that summit infrastructure mapping and then I'm gonna begin to map my two environments first the compute so the clusters here next the data stores so those virtual machines happen to live on datastore - in vSphere and I'll target them a datastore data to inside of my revenue Arman and finally my networks those live on network 100 so I'll map those from vSphere to rover so once my infrastructure is map the next step I need to do is actually begin to create a plan to migrate those virtual machines so I'll continue to the plan wizard here I'll select the infrastructure mapping I just created and I'll select migrate my development environment from those virtual machines to Rev and then I need to import a CSV file the CSV file is going to contain a list of all the virtual machines that I want to migrate that were there and that's it once I hit create what's going to happen cloud forms is going to begin in an automated fashion shutting down those virtual machines begin converting them taking care of all the minutia that you'd have to do manually it's gonna do that all automatically for me so I don't have to worry about all those manual interactions and no longer do I have to go manually shut them down but it's going to take care of that all for me you can see the migrations kicked off here this is the I've got the my VMs are migrating here and if I go back to the screen here you can see that we're gonna start seeing those shutdown okay awesome but as people want to know more information about this how would they dive deeper into this technology later this week yeah it's a great question so we have a workload portability session in the hybrid cloud on Wednesday if you want to see a presentation that deep dives into this topic and how some of the methodologies to migrate and then on Thursday we actually have a hands-on lab it's the IT optimization VM migration lab that you can check out and as you can see those are shutting down here yeah we see a powering off right now that's fantastic absolutely so if I go back now that's gonna take a while you got to convert all the disks and move them over but we'll notice is previously I had already run one migration of a single application that was a Windows virtual machine running and if I browse over to Red Hat virtualization I can see on the dashboard here I could browse to virtual machines I have migrated that Windows virtual machine and if I open up a tab I can now browse to my Windows virtual machine which is running our wingtip toy store application our sample application here and now my VM has been moved over from Rev to Vita from VMware to Rev and is available for Itamar all right great available to our developers all right Itamar what are you gonna do for us here well James it's great that you can save cost by moving from VMware to reddit virtualization but I want to containerize our application and with container native virtualization I can run my virtual machine on OpenShift like any other container using Huebert a kubernetes operator to run and manage virtual machines let's look at the open ship service catalog you can see we have a new virtualization section here we can import KVM or VMware virtual machines or if there are already loaded we can create new instances of them for the developer to work with just need to give named CPU memory we can do other virtualization parameters and create our virtual machines now let's see how this looks like in the openshift console the cool thing about KVM is virtual machines are just Linux processes so they can act and behave like other open shipped applications we build in more than a decade of virtualization experience with KVM reddit virtualization and OpenStack and can now benefit from kubernetes and open shift to manage and orchestrate our virtual machines since we know this virtual machine this container is actually a virtual machine we can do virtual machine stuff with it like shutdown reboot or open a remote desktop session to it but we can also see this is just a container like any other container in openshift and even though the web application is running inside a Windows virtual machine the developer can still use open shift mechanisms like services and routes let's browse our web application using the OpenShift service it's the same wingtip toys application but this time the virtual machine is running on open shift but we're not done we want to containerize our application since it's a Windows virtual machine we can open a remote desktop session to it we see we have here Visual Studio and an asp.net application let's start container izing by moving the Microsoft sequel server database from running inside the Windows virtual machine to running on Red Hat Enterprise Linux as an open shipped container we'll go back to the open shipped Service Catalog this time we'll go to the database section and just as easily we'll create a sequel server container just need to accept the EULA provide password and choose the Edition we want and create a database and again we can see the sequel server is just another container running on OpenShift now let's take let's find the connection details for our database to keep this simple we'll take the IP address of our database service go back to the web application to visual studio update the IP address in the connection string publish our application and go back to browse it through OpenShift fortunately for us the user experience team heard we're modernizing our application so they pitched in and pushed new icons to use with our containerized database to also modernize the look and feel it's still the same wingtip toys application it's running in a virtual machine on openshift but it's now using a containerized database to recap we saw that we can run virtual machines natively on openshift like any other container based application modernize and mesh them together we containerize the database but we can use the same approach to containerize any part of our application so some items here to deserve repeating one thing you saw is Red Hat Enterprise Linux burning sequel server in a container on open shift and you also saw Windows VM where the dotnet native application also running inside of open ships so tell us what's special about that that seems pretty crazy what you did there exactly burr if we take a look under the hood we can use the kubernetes commands to see the list of our containers in this case the sequel server and the virtual machine containers but since Q Bert is a kubernetes operator we can actually use kubernetes commands like cube Cpl to list our virtual machines and manage our virtual machines like any other entity in kubernetes I love that so there's your crew meta gem oh we can see the kind says virtual machine that is totally awesome now people here are gonna be very excited about what they just saw we're gonna get more information and when will this be coming well you know what can they do to dive in this will be available as part of reddit Cloud suite in tech preview later this year but we are looking for early adopters now so give us a call also come check our deep dive session introducing container native virtualization Thursday 2:00 p.m. awesome that is so incredible so we went from the old to the new from the close to the open the Red Hat way you're gonna be seeing more from our demonstration team that's coming Thursday at 8 a.m. do not be late if you like what you saw this today you're gonna see a lot more of that going forward so we got some really special things in store for you so at this point thank you so much in tomorrow thank you so much you guys are awesome yeah now we have one more special guest a very early adopter of Red Hat Enterprise Linux we've had over a 12-year partnership and relationship with this organization they've been a steadfast Linux and middleware customer for many many years now please extend a warm welcome to Raj China from the Royal Bank of Canada thank you thank you it's great to be here RBC is a large global full-service is back we have the largest bank in Canada top 10 global operate in 30 countries and run five key business segments personal commercial banking investor in Treasury services capital markets wealth management and insurance but honestly unless you're in the banking segment those five business segments that I just mentioned may not mean a lot to you but what you might appreciate is the fact that we've been around in business for over 150 years we started our digital transformation journey about four years ago and we are focused on new and innovative technologies that will help deliver the capabilities and lifestyle our clients are looking for we have a very simple vision and we often refer to it as the digitally enabled bank of the future but as you can appreciate transforming a hundred fifty year old Bank is not easy it certainly does not happen overnight to that end we had a clear unwavering vision a very strong innovation agenda and most importantly a focus towards a flawless execution today in banking business strategy and IT strategy are one in the same they are not two separate things we believe that in order to be the number one bank we have to have the number one tactic there is no question that most of today's innovations happens in the open source community RBC relies on RedHat as a key partner to help us consume these open source innovations in a manner that it meets our enterprise needs RBC was an early adopter of Linux we operate one of the largest footprints of rel in Canada same with tables we had tremendous success in driving cost out of infrastructure by partnering with rahat while at the same time delivering a world-class hosting service to your business over our 12 year partnership Red Hat has proven that they have mastered the art of working closely with the upstream open source community understanding the needs of an enterprise like us in delivering these open source innovations in a manner that we can consume and build upon we are working with red hat to help increase our agility and better leverage public and private cloud offerings we adopted virtualization ansible and containers and are excited about continuing our partnership with Red Hat in this journey throughout this journey we simply cannot replace everything we've had from the past we have to bring forward these investments of the past and improve upon them with new and emerging technologies it is about utilizing emerging technologies but at the same time focusing on the business outcome the business outcome for us is serving our clients and delivering the information that they are looking for whenever they need it and in whatever form factor they're looking for but technology improvements alone are simply not sufficient to do a digital transformation creating the right culture of change and adopting new methodologies is key we introduced agile and DevOps which has boosted the number of adult projects at RBC and increase the frequency at which we do new releases to our mobile app as a matter of fact these methodologies have enabled us to deliver apps over 20x faster than before the other point about around culture that I wanted to mention was we wanted to build an engineering culture an engineering culture is one which rewards curiosity trying new things investing in new technologies and being a leader not necessarily a follower Red Hat has been a critical partner in our journey to date as we adopt elements of open source culture in engineering culture what you seen today about red hearts focus on new technology innovations while never losing sight of helping you bring forward the investments you've already made in the past is something that makes Red Hat unique we are excited to see red arts investment in leadership in open source technologies to help bring the potential of these amazing things together thank you that's great the thing you know seeing going from the old world to the new with automation so you know the things you've seen demonstrated today they're they're they're more sophisticated than any one company could ever have done on their own certainly not by using a proprietary development model because of this it's really easy to see why open source has become the center of gravity for enterprise computing today with all the progress open-source has made we're constantly looking for new ways of accelerating that into our products so we can take that into the enterprise with customers like these that you've met what you've met today now we recently made in addition to the Red Hat family we brought in core OS to the Red Hat family and you know adding core OS has really been our latest move to accelerate that innovation into our products this will help the adoption of open shift container platform even deeper into the enterprise and as we did with the Linux core platform in 2002 this is just exactly what we did with with Linux back then today we're announcing some exciting new technology directions first we'll integrate the benefits of automated operations so for example you'll see dramatic improvements in the automated intelligence about the state of your clusters in OpenShift with the core OS additions also as part of open shift will include a new variant of rel called Red Hat core OS maintaining the consistency of rel farhat for the operation side of the house while allowing for a consumption of over-the-air updates from the kernel to kubernetes later today you'll hear how we are extending automated operations beyond customers and even out to partners all of this starting with the next release of open shift in July now all of this of course will continue in an upstream open source innovation model that includes continuing container linux for the community users today while also evolving the commercial products to bring that innovation out to the enterprise this this combination is really defining the platform of the future everything we've done for the last 16 years since we first brought rel to the commercial market because get has been to get us just to this point hybrid cloud computing is now being deployed multiple times in enterprises every single day all powered by the open source model and powered by the open source model we will continue to redefine the software industry forever no in 2002 with all of you we made Linux the choice for enterprise computing this changed the innovation model forever and I started the session today talking about our prediction of seven years ago on the future being open we've all seen so much happen in those in those seven years we at Red Hat have celebrated our 25th anniversary including 16 years of rel and the enterprise it's now 2018 open hybrid cloud is not only a reality but it is the driving model in enterprise computing today and this hybrid cloud world would not even be possible without Linux as a platform in the open source development model a build around it and while we have think we may have accomplished a lot in that time and we may think we have changed the world a lot we have but I'm telling you the best is yet to come now that Linux and open source software is firmly driving that innovation in the enterprise what we've accomplished today and up till now has just set the stage for us together to change the world once again and just as we did with rel more than 15 years ago with our partners we will make hybrid cloud the default in the enterprise and I will take that bet every single day have a great show and have fun watching the future of computing unfold right in front of your eyes see you later [Applause] [Music] [Music] [Music] [Music] [Music] [Music] [Music] [Music] anytime [Music]
SUMMARY :
account right so the first dimension we
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James Lebowski | PERSON | 0.99+ |
Brent Midwood | PERSON | 0.99+ |
Ohio | LOCATION | 0.99+ |
Monty Finkelstein | PERSON | 0.99+ |
Ted | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
2002 | DATE | 0.99+ |
Canada | LOCATION | 0.99+ |
five and a half terabytes | QUANTITY | 0.99+ |
Marty | PERSON | 0.99+ |
Itamar Hine | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
David Ingham | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
RBC | ORGANIZATION | 0.99+ |
two machines | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
Jay | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Hawaii | LOCATION | 0.99+ |
50 terabytes | QUANTITY | 0.99+ |
Byrne | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
second floor | QUANTITY | 0.99+ |
Red Hat Enterprise Linux | TITLE | 0.99+ |
Asia | LOCATION | 0.99+ |
Raj China | PERSON | 0.99+ |
Dini | PERSON | 0.99+ |
Pearl Harbor | LOCATION | 0.99+ |
Thursday | DATE | 0.99+ |
Jack Britton | PERSON | 0.99+ |
8,000 | QUANTITY | 0.99+ |
Java EE | TITLE | 0.99+ |
Wednesday | DATE | 0.99+ |
Angus | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Joe | PERSON | 0.99+ |
today | DATE | 0.99+ |
two applications | QUANTITY | 0.99+ |
two new machines | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Burr | PERSON | 0.99+ |
Windows | TITLE | 0.99+ |
2018 | DATE | 0.99+ |
Citigroup | ORGANIZATION | 0.99+ |
2010 | DATE | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
each machine | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Visual Studio | TITLE | 0.99+ |
July | DATE | 0.99+ |
Red Hat | TITLE | 0.99+ |
aul Cormier | PERSON | 0.99+ |
Diamond Head | LOCATION | 0.99+ |
first step | QUANTITY | 0.99+ |
Neha Sandow | PERSON | 0.99+ |
two steps | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
UNIX | TITLE | 0.99+ |
second dimension | QUANTITY | 0.99+ |
seven years later | DATE | 0.99+ |
seven years ago | DATE | 0.99+ |
this week | DATE | 0.99+ |
36 keynote speakers | QUANTITY | 0.99+ |
first level | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
first step | QUANTITY | 0.99+ |
16 years | QUANTITY | 0.99+ |
30 countries | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
Sue Morrow, United Methodist Homes | VTUG Winter Warmer 2018
>> Narrator: From Gillette Stadium in Foxborough, Massachusets, it's theCUBE, covering VTUG Winter Warmer 2018. Presented by SiliconANGLE. (upbeat music) >> I'm Stu Miniman and this is theCUBE's fifth year at the VTUG Winter Warmer. 2018 is the 12th year of this event, always love when we get to talk to some of the users at the conference which's why I'm really happy to introduce to our audience Sue Morrow, who is a network manager at United Methodist Homes. Thanks for joining me Sue. >> No problem. >> First, tell me a little bit about yourself and what brings you all the way from Upstate New York to come to the VTUG. >> Well, I like to go to conferences whenever I can continue my education in IT. I grew up with computers in my house in the '80s. My dad was a physics teacher and a scientist so we always had a Commodore 64 or an Amiga in our house, growing up, when most people had Atari, we had computers. >> Totally, so Commodore 64, classic. I myself was a Tandy Radioshack, the TRS-80 Model III. So, in a similar era. >> Yep, I actually took a basic coding class on a TRS-80 when I was around 10, I think. Anyway, grew up with computers and somehow stumbled into IT later in life. So, that's why I'm here. >> United Methodist Homes, tell us just a little bit about what the mission of the company is. >> United Methodist Homes is a longterm care corporation. We have four facilities, two in the Binghamton area and two in Northeastern Pennsylvania. We have all levels of care from nursing homes, skilled care, up to independent living, and everything in between. >> Okay, and as network manager, what's under your purview? >> Well, it's kind of a silly title, actually. In longterm care or in healthcare or nonprofits, as we are, you often wear many hats and so that's, sort of, a weird title for me, but I supervise our help desk which we serve centrally from our corporate office. We serve about 600 actual computer users and, all in total, about 1200 employees who interface with the technology, in some way. So, I supervise the help desk, I make sure our network is running well. IT has changed over the years so that we're now providing more of a service and making sure that everything is up and running, network-wise, for everyone instead of keeping our servers running all the time. >> Yeah, reminds me of the old saying, it was like oh, the network is the computer, things like that, so you've got both ends of it. >> Sue: Yes. >> What kind of things are you looking at from a technology standpoint when you come to event like this? Did you catch some of the keynotes this morning, there was a broad spectrum? >> Yes. >> What are the kind of things that you're digging in to and find interesting? >> Yeah, the keynotes are really interesting. I think the first one that I went to with Luigi and Chris was great just to, kind of, expand your thinking about your own career personally, and where you want to go with your life was really interesting. I also watched Randall do his coding which is completely outside of what I do everyday, but was fascinating. And then the last major keynote was fantastic. I think that from my perspective in my company, we're kind of small and we don't do a whole lot of, we don't run apps and things like that, so the things that we have ritualized is mostly storage, so I'm looking at better ways that we can manage our storage and stuff. Most of the applications that we run now are SAS applications hosted by somebody else and their cloud, or a public cloud, or wherever, so I'm not so much looking at the cloud technologies like more businesses are that are providing an application for their company. >> It sounds like cloud and SAS's being a part of the overall strategy, have you been seeing that dynamic change in your company? How does it impact what you're doing or is it just a separate organization. >> It's definitely been a shift in the last few years, we used to run all of our applications in-house. Longterm care has caught up now, with the hospitals, so we have our electronic medical record which is a hosted application, whereas, up until five years ago, that was an on-premises application that we hosted and had to run and maintain, and update and upgrade, and make sure was available. That is definitely been a shift, that everything is now hosted. So we just make sure that our network is up and running and support our users and all of their issues when they break things, flip their screens, drop something, provide hardware for them all that sorts of stuff. >> The constant pace of innovation change. On the news this week they were saying, okay, medical records on your iPhone is up for debate. Does regulation impact your day to day activities and what are some of the challenges in that area? >> Absolutely. One of the other things we have to do is interface with the providers. We have medical providers that come in from the outside and they need to access our EMR also, so we need to provide access for them on, sometimes, whatever device they bring in, which is not always compatible, so we have a whole other set of challenges there. Where we can manage our computers for our employees by pushing out policies and things that are required for the application. When someone comes in from the outside, it isn't, necessarily, setup right, so we have that other set of challenges, and regulation-wise, yes. The government is always pushing out new and updated regulations for healthcare and we have to keep on top of that too. Of course, we have HIPAA concerns and things like that, which is also comes into play when you're talking about cloud host, and any hosted application. We have to be concerned about HIPAA, as well. >> Yeah, wondering when I look at the space that you're in, the ultimate goal is you want the patients, the people at your company, be able to spend more time, help them, not be caught up in the technology of things. Could you, maybe, talk a little bit about that dynamic? >> Yeah, one of the things that I always say is, we need to give our employees the tools that they need to do their job most efficiently. A nurse needs to be ready to go at the beginning of her shift on her laptop, ready to pass meds, and when they can't remember their password or that computer isn't working, my team needs to work as quickly as we can to get them back to work. We serve our users, really. We're not there being all techy. They want us to fix them and get them back to work, and that's what we do. We put tools in their hands, any device that they need to make them more efficient. I try hard to provide a variety of devices, people have different preferences on how they do their work. Some people prefer a laptop, some people prefer to stand at a wall-mounted touchscreen and document, some people want to carry a tablet with them. I try to provide a range of devices so that they can have whatever suits them and makes them most comfortable to get their job done. >> Love that, it's not, necessarily, about the cool or trendier thing, it's about getting business done, helping, and in you're case, enabling your employees to really help the people that are there. Anything you want to highlight as to things you're excited to look at this show, or just technology in general? >> I'm just kind of here for the general nature of it. I enjoy the networking and getting to talk to people, and keeping current in what's happening in the industry and my career, so that's why I come. >> Alright, well Sue Morrow, really appreciate you coming, sharing with our audience. >> Absolutely. >> User groups like this, all about the users. Happy to have lots of them on the program, so big thanks to the VTUG group for bringing us some great guests. We'll be back with more coverage here. I'm Stu Miniman, you're watching theCUBE. (upbeat music)
SUMMARY :
in Foxborough, Massachusets, 2018 is the 12th year of this event, and what brings you all the way so we always had a Commodore 64 the TRS-80 Model III. and somehow stumbled into IT later in life. about what the mission of the company is. and everything in between. and making sure that everything is up and running, Yeah, reminds me of the old saying, so the things that we have ritualized is mostly storage, being a part of the overall strategy, and had to run and maintain, and update and upgrade, On the news this week they were saying, One of the other things we have to do the ultimate goal is you want the patients, any device that they need to make them more efficient. the people that are there. I enjoy the networking and getting to talk to people, really appreciate you coming, so big thanks to the VTUG group
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
United Methodist Homes | ORGANIZATION | 0.99+ |
Sue Morrow | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Binghamton | LOCATION | 0.99+ |
Sue | PERSON | 0.99+ |
TRS-80 Model III. | COMMERCIAL_ITEM | 0.99+ |
TRS-80 | COMMERCIAL_ITEM | 0.99+ |
Chris | PERSON | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Northeastern Pennsylvania | LOCATION | 0.99+ |
HIPAA | TITLE | 0.99+ |
Luigi | PERSON | 0.99+ |
Commodore 64 | COMMERCIAL_ITEM | 0.99+ |
Gillette Stadium | LOCATION | 0.99+ |
VTUG Winter Warmer | EVENT | 0.99+ |
First | QUANTITY | 0.99+ |
12th year | QUANTITY | 0.99+ |
fifth year | QUANTITY | 0.99+ |
Upstate New York | LOCATION | 0.99+ |
this week | DATE | 0.99+ |
five years ago | DATE | 0.98+ |
first one | QUANTITY | 0.98+ |
about 1200 employees | QUANTITY | 0.97+ |
four facilities | QUANTITY | 0.97+ |
VTUG | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
One | QUANTITY | 0.96+ |
Randall | PERSON | 0.96+ |
SiliconANGLE | ORGANIZATION | 0.95+ |
VTUG | EVENT | 0.95+ |
VTUG Winter Warmer 2018 | EVENT | 0.95+ |
ardware | ORGANIZATION | 0.95+ |
2018 | DATE | 0.94+ |
about 600 actual computer users | QUANTITY | 0.92+ |
SAS | ORGANIZATION | 0.87+ |
both ends | QUANTITY | 0.83+ |
this morning | DATE | 0.83+ |
Atari | ORGANIZATION | 0.82+ |
theCUBE | ORGANIZATION | 0.78+ |
'80s | DATE | 0.73+ |
Foxborough, Massachusets | LOCATION | 0.73+ |
Tandy Radioshack | ORGANIZATION | 0.72+ |
Amiga | COMMERCIAL_ITEM | 0.71+ |
last | DATE | 0.62+ |
years | DATE | 0.61+ |
Winter Warmer | EVENT | 0.58+ |
around 10 | QUANTITY | 0.57+ |
Leslie Berlin, Stanford University | CUBE Conversation Nov 2017
(hopeful futuristic music) >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are really excited to have this cube conversation here in the Palo Alto studio with a real close friend of theCUBE, and repeat alumni, Leslie Berlin. I want to get her official title; she's the historian for the Silicon Valley archive at Stanford. Last time we talked to Leslie, she had just come out with a book about Robert Noyce, and the man behind the microchip. If you haven't seen that, go check it out. But now she's got a new book, it's called "Troublemakers," which is a really appropriate title. And it's really about kind of the next phase of Silicon Valley growth, and it's hitting bookstores. I'm sure you can buy it wherever you can buy any other book, and we're excited to have you on Leslie, great to see you again. >> So good to see you Jeff. >> Absolutely, so the last book you wrote was really just about Noyce, and obviously, Intel, very specific in, you know, the silicon in Silicon Valley obviously. >> Right yeah. >> This is a much, kind of broader history with again just great characters. I mean, it's a tech history book, but it's really a character novel; I love it. >> Well thanks, yeah; I mean, I really wanted to find people. They had to meet a few criteria. They had to be interesting, they had to be important, they had to be, in my book, a little unknown; and most important, they had to be super-duper interesting. >> Jeff Frick: Yeah. >> And what I love about this generation is I look at Noyce's generation of innovators, who sort of working in the... Are getting their start in the 60s. And they really kind of set the tone for the valley in a lot of ways, but the valley at that point was still just all about chips. And then you have this new generation show up in the 70s, and they come up with the personal computer, they come up with video games. They sort of launch the venture capital industry in the way we know it now. Biotech, the internet gets started via the ARPANET, and they kind of set the tone for where we are today around the world in this modern, sort of tech infused, life that we live. >> Right, right, and it's interesting to me, because there's so many things that kind of define what Silicon Valley is. And of course, people are trying to replicate it all over the place, all over the world. But really, a lot of those kind of attributes were started by this class of entrepreneurs. Like just venture capital, the whole concept of having kind of a high risk, high return, small carve out from an institution, to put in a tech venture with basically a PowerPoint and some faith was a brand new concept back in the day. >> Leslie Berlin: Yeah, and no PowerPoint even. >> Well that's right, no PowerPoint, which is probably a good thing. >> You're right, because we're talking about the 1970s. I mean, what's so, really was very surprising to me about this book, and really important for understanding early venture capital, is that now a lot of venture capitalists are professional investors. But these venture capitalists pretty much to a man, and they were all men at that point, they were all operating guys, all of them. They worked at Fairchild, they worked at Intel, they worked at HP; and that was really part of the value that they brought to these propositions was they had money, yes, but they also had done this before. >> Jeff Frick: Right. >> And that was really, really important. >> Right, another concept that kind of comes out, and I think we've seen it time and time again is kind of this partnership of kind of the crazy super enthusiastic visionary that maybe is hard to work with and drives everybody nuts, and then always kind of has the other person, again, generally a guy in this time still a lot, who's kind of the doer. And it was really the Bushnell-Alcorn story around Atari that really brought that home where you had this guy way out front of the curve but you have to have the person behind who's actually building the vision in real material. >> Yeah, I mean I think something that's really important to understand, and this is something that I was really trying to bring out in the book, is that we usually only have room in our stories for one person in the spotlight when innovation is a team sport. And so, the kind of relationship that you're talking about with Nolan Bushnell, who started Atari, and Al Alcorn who was the first engineer there, it's a great example of that. And Nolan is exactly this very out there person, big curly hair, talkative, outgoing guy. After Atari he starts Chuck E. Cheese, which kind of tells you everything you need to know about someone who's dreaming up Chuck E. Cheese, super creative, super out there, super fun oriented. And you have working with him, Al Alcorn, who's a very straight laced for the time, by which I mean, he tried LSD but only once. (cumulative laughing) Engineer, and I think that what's important to understand is how much they needed each other, because the stories are so often only about the exuberant out front guy. To understand that those are just dreams, they are not reality without these other people. And how important, I mean, Al Alcorn told me look, "I couldn't have done this without Nolan, "kind of constantly pushing me." >> Right, right. >> And then in the Apple example, you actually see a third really important person, which to me was possibly the most exciting part of everything I discovered, which was the importance of the guy named Mike Markkula. Because in Jobs you had the visionary, and in Woz you had the engineer, but the two of them together, they had an idea, they had a great product, the Apple II, but they didn't have a company. And when Mike Markkula shows up at the garage, you know, Steve Jobs is 21 years old. >> Jeff Frick: Right. >> He has had 17 months of business experience in his life, and it's all his attack for Atari, actually. And so how that company became a business is due to Mike Markkula, this very quiet guy, very, very ambitious guy. He talked them up from a thousand stock options at Intel to 20,000 stock options at Intel when he got there, just before the IPO, which is how he could then turn around and help finance >> Jeff Frick: Right. >> The birth of Apple. And he pulled into Apple all of the chip people that he had worked with, and that is really what turned Apple into a company. So you had the visionary, you had the tech guy, you also needed a business person. >> But it's funny though because in that story of his visit to the garage he's specifically taken by the engineering elegance of the board >> Leslie Berlin: Right. >> That Woz put together, which I thought was really neat. So yeah, he's a successful business man. Yes he was bringing a lot of kind of business acumen value to the opportunity, but what struck him, and he specifically talks about what chips he used, how he planned for the power supply. Just very elegant engineering stuff that touched him, and he could recognize that they were so far ahead of the curve. And I think that's such another interesting point is that things that we so take for granted like mice, and UI, and UX. I mean the Atari example, for them to even think of actually building it that would operate with a television was just, I mean you might as well go to Venus, forget Mars, I mean that was such a crazy idea. >> Yeah, I mean I think Al ran to Walgreens or something like that and just sort of picked out the closest t.v. to figure out how he could build what turned out to be Pong, the first super successful video game. And I mean, if you look also at another story I tell is about Xerox Park; and specifically about a guy named Bob Taylor, who, I know I keep saying, "Oh this might be my favorite part." But Bob Taylor is another incredible story. This is the guy who convinced DARPA to start, it was then called ARPA, to start the ARPANET, which became the internet in a lot of ways. And then he goes on and he starts the computer sciences lab at Xerox Park. And that is the lab that Steve Jobs comes to in 1979, and for the first time sees a GUI, sees a mouse, sees Windows. And this is... The history behind that, and these people all working together, these very sophisticated Ph.D. engineers were all working together under the guidance of Bob Taylor, a Texan with a drawl and a Master's Degree in Psychology. So what it takes to lead, I think, is a really interesting question that gets raised in this book. >> So another great personality, Sandra Kurtzig. >> Yeah. >> I had to look to see if she's still alive. She's still alive. >> Leslie Berlin: Yeah. >> I'd love to get her in some time, we'll have to arrange for that next time, but her story is pretty fascinating, because she's a woman, and we still have big women issues in the tech industry, and this is years ago, but she was aggressive, she was a fantastic sales person, and she could code. And what was really interesting is she started her own software company. The whole concept of software kind of separated from hardware was completely alien. She couldn't even convince the HP guys to let her have access to a machine to write basically an NRP system that would add a ton of value to these big, expensive machines that they were selling. >> Yeah, you know what's interesting, she was able to get access to the machine. And HP, this is not a well known part of HP's history, is how important it was in helping launch little bitty companies in the valley. It was a wonderful sort of... Benefited all these small companies. But she had to go and read to them the definition of what an OEM was to make an argument that I am adding value to your machines by putting software on it. And software was such an unknown concept. A, people who heard she was selling software thought she was selling lingerie. And B, Larry Ellison tells a hilarious story of going to talk to venture capitalists about... When he's trying to start Oracle, he had co-founders, which I'm not sure everybody knows. And he and his co-founders were going to try to start Oracle, and these venture capitalists would, he said, not only throw him out of the office for such a crazy idea, but their secretaries would double check that he hadn't stolen the copy of Business Week off the table because what kind of nut job are we talking to here? >> Software. >> Yeah, where as now, I mean when you think about it, this is software valley. >> Right, right, it's software, even, world. There's so many great stories, again, "Troublemakers" just go out and get it wherever you buy a book. The whole recombinant DNA story and the birth of Genentech, A, is interesting, but I think the more kind of unique twist was the guy at Stanford, who really took it upon himself to take the commercialization of academic, generated, basic research to a whole 'nother level that had never been done. I guess it was like a sleepy little something in Manhattan they would send some paper to, but this guy took it to a whole 'nother level. >> Oh yeah, I mean before Niels showed up, Niels Reimers, he I believe that Stanford had made something like $3,000 off of the IP from its professors and students in the previous decades, and Niels said "There had to be a better way to do this." And he's the person who decided, we ought to be able to patent recombinant DNA. And one of the stories that's very, very interesting is what a cultural shift that required, whereas engineers had always thought in terms of, "How can this be practical?" For biologists this was seen as really an unpleasant thing to be doing, don't think about that we're about basic research. So in addition to having to convince all sorts of government agencies and the University of California system, which co-patented this, to make it possible, just almost on a paperwork level... >> Right. >> He had to convince the scientists themselves. And it was not a foregone conclusion, and a lot of people think that what kept the two named co-inventors of recombinant DNA, Stan Cohen and Herb Boyer, from winning the Nobel Prize is that they were seen as having benefited from the work of others, but having claimed all the credit, which is not, A, isn't fair, and B, both of those men had worried about that from the very beginning and kept saying, "We need to make sure that this includes everyone." >> Right. >> But that's not just the origins of the biotech industry in the valley, the entire landscape of how universities get their ideas to the public was transformed, and that whole story, there are these ideas that used to be in university labs, used to be locked up in the DOD, like you know, the ARPANET. And this is the time when those ideas start making their way out in a significant way. >> But it's this elegant dance, because it's basic research, and they want it to benefit all, but then you commercialize it, right? And then it's benefiting the few. But if you don't commercialize it and it doesn't get out, you really don't benefit very many. So they really had to walk this fine line to kind of serve both masters. >> Absolutely, and I mean it was even more complicated than that, because researchers didn't have to pay for it, it was... The thing that's amazing to me is that we look back at these people and say, "Oh these are trailblazers." And when I talked to them, because something that was really exciting about this book was that I got to talk to every one of the primary characters, you talk to them, and they say, "I was just putting one foot in front of the other." It's only when you sort of look behind them years later that you see, "Oh my God, they forged a completely new trail." But here it was just, "No I need to get to here, "and now I need to get to here." And that's what helped them get through. That's why I start the book with the quote from Raiders of the Lost Ark where Sallah asks Indy, you know basically, how are you going to stop, "Stop that car." And he says, "How are you going to do it Indy?" And Indy says, "I don't know "I'm making it up as I go along." And that really could almost be a theme in a lot of cases here that they knew where they needed to get to, and they just had to make it up to get there. >> Yeah, and there's a whole 'nother tranche on the Genentech story; they couldn't get all of the financing, so they actually used outsourcing, you know, so that whole kind of approach to business, which was really new and innovative. But we're running out of time, and I wanted to follow up on the last comment that you made. As a historian, you know, you are so fortunate or smart to pick your field that you can talk to the individual. So, I think you said, you've been doing interviews for five or six years for this book, it's 100 pages of notes in the back, don't miss the notes. >> But also don't think the book's too long. >> No, it's a good book, it's an easy read. But as you reflect on these individuals and these personalities, so there's obviously the stories you spent a lot of time writing about, but I'm wondering if there's some things that you see over and over again that just impress you. Is there a pattern, or is it just, as you said, just people working hard, putting one step in front of the other, and taking those risks that in hindsight are so big? >> I would say, I would point to a few things. I'd point to audacity; there really is a certain kind of adventurousness, at an almost unimaginable level, and persistence. I would also point to a third feature at that time that I think was really important, which was for a purpose that was creative. You know, I mean there was the notion, I think the metaphor of pioneering is much more what they were doing then what we would necessarily... Today we would call it disruption, and I think there's a difference there. And their vision was creative, I think of them as rebels with a cause. >> Right, right; is disruption the right... Is disruption, is that the right way that we should be thinking about it today or are just kind of backfilling the disruption after the fact that it happens do you think? >> I don't know, I mean I've given this a lot of thought, because I actually think, well, you know, the valley at this point, two-thirds of the people who are working in the tech industry in the valley were born outside of this country right now, actually 76 percent of the women. >> Jeff Frick: 76 percent? Wow. >> 76 percent of the women, I think it's age 25 to 44 working in tech were born outside of the United States. Okay, so the pioneering metaphor, that's just not the right metaphor anymore. The disruptive metaphor has a lot of the same concepts, but it has, it sounds to me more like blowing things up, and doesn't really thing so far as to, "Okay, what comes next?" >> Jeff Frick: Right, right. >> And I think we have to be sure that we continue to do that. >> Right, well because clearly, I mean, the Facebooks are the classic example where, you know, when he built that thing at Harvard, it was not to build a new platform that was going to have the power to disrupt global elections. You're trying to get dates, right? I mean, it was pretty simple. >> Right. >> Simple concept and yet, as you said, by putting one foot in front of the other as things roll out, he gets smart people, they see opportunities and take advantage of it, it becomes a much different thing, as has Google, as has Amazon. >> That's the way it goes, that's exactly... I mean, and you look back at the chip industry. These guys just didn't want to work for a boss they didn't like, and they wanted to build a transistor. And 20 years later a huge portion of the U.S. economy rests on the decisions they're making and the choices. And so I think this has been a continuous story in Silicon Valley. People start with a cool, small idea and it just grows so fast among them and around them with other people contributing, some people they wish didn't contribute, okay then what comes next? >> Jeff Frick: Right, right. >> That's what we figure out now. >> All right, audacity, creativity and persistence. Did I get it? >> And a goal. >> And a goal, and a goal. Pong, I mean was a great goal. (cumulative laughing) All right, so Leslie, thanks for taking a few minutes. Congratulations on the book; go out, get the book, you will not be disappointed. And of course, the Bob Noyce book is awesome as well, so... >> Thanks. >> Thanks for taking a few minutes and congratulations. >> Thank you so much Jeff. >> All right this is Leslie Berlin, I'm Jeff Frick, you're watching theCUBE. See you next time, thanks for watching. (electronic music)
SUMMARY :
And it's really about kind of the next phase Absolutely, so the last book you wrote was This is a much, kind of broader history and most important, they had to be super-duper interesting. but the valley at that point was still just all about chips. it all over the place, all over the world. which is probably a good thing. of the value that they brought to these propositions was And it was really the Bushnell-Alcorn story And so, the kind of relationship that you're talking about of the guy named Mike Markkula. And so how that company became a business is And he pulled into Apple all of the chip people I mean the Atari example, for them to even think And that is the lab that Steve Jobs comes I had to look to see if she's still alive. She couldn't even convince the HP guys to let double check that he hadn't stolen the copy when you think about it, this is software valley. the commercialization of academic, generated, basic research And he's the person who decided, we ought that from the very beginning and kept saying, in the DOD, like you know, the ARPANET. So they really had to walk this from Raiders of the Lost Ark where Sallah asks all of the financing, so they actually used outsourcing, obviously the stories you spent a lot of time that I think was really important, the disruption after the fact that it happens do you think? the valley at this point, two-thirds of the people Jeff Frick: 76 percent? The disruptive metaphor has a lot of the same concepts, And I think we have to be sure the Facebooks are the classic example where, by putting one foot in front of the other And so I think this has been Did I get it? And of course, the Bob Noyce book is awesome as well, so... See you next time, thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Sandra Kurtzig | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Leslie Berlin | PERSON | 0.99+ |
Mike Markkula | PERSON | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
Niels | PERSON | 0.99+ |
Indy | PERSON | 0.99+ |
Leslie | PERSON | 0.99+ |
Nolan | PERSON | 0.99+ |
University of California | ORGANIZATION | 0.99+ |
Nolan Bushnell | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Stanford | ORGANIZATION | 0.99+ |
1979 | DATE | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Bob Taylor | PERSON | 0.99+ |
$3,000 | QUANTITY | 0.99+ |
Robert Noyce | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
17 months | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Manhattan | LOCATION | 0.99+ |
Jeff | PERSON | 0.99+ |
United States | LOCATION | 0.99+ |
100 pages | QUANTITY | 0.99+ |
Niels Reimers | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Nov 2017 | DATE | 0.99+ |
Sallah | PERSON | 0.99+ |
Stan Cohen | PERSON | 0.99+ |
Noyce | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
PowerPoint | TITLE | 0.99+ |
Al Alcorn | PERSON | 0.99+ |
Herb Boyer | PERSON | 0.99+ |
76 percent | QUANTITY | 0.99+ |
Walgreens | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Venus | LOCATION | 0.99+ |
six years | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
20,000 stock options | QUANTITY | 0.99+ |
Fairchild | ORGANIZATION | 0.99+ |
Nobel Prize | TITLE | 0.99+ |
Atari | ORGANIZATION | 0.99+ |
Biotech | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Mars | LOCATION | 0.99+ |
70s | DATE | 0.99+ |
third | QUANTITY | 0.99+ |
two-thirds | QUANTITY | 0.99+ |
Bob Noyce | PERSON | 0.98+ |
one foot | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Seth Myers, Demandbase | George Gilbert at HQ
>> This is George Gilbert, we're on the ground at Demandbase, the B2B CRM company, based on AI, one of uh, a very special company that's got some really unique technology. We have the privilege to be with Seth Myers today, Senior Data Scientist and resident wizard, and who's going to take us on a journey through some of the technology Demandbase is built on, and some of the technology coming down the road. So Seth, welcome. >> Thank you very much for having me. >> So, we talked earlier with Aman Naimat, Senior VP of Technology, and we talked about some of the functionality in Demandbase, and how it's very flexible, and reactive, and adaptive in helping guide, or react to a customer's journey, through the buying process. Tell us about what that journey might look like, how it's different, and the touchpoints, and the participants, and then how your technology rationalizes that, because we know, old CRM packages were really just lists of contact points. So this is something very different. How's it work? >> Yeah, absolutely, so at the highest level, each customer's going to be different, each customer's going to make decisions and look at different marketing collateral, and respond to different marketing collateral in different ways, you know, as the companies get bigger, and their products they're offering become more sophisticated, that's certainly the case, and also, sales cycles take a long time. You're engaged with an opportunity over many months, and so there's a lot of touchpoints, there's a lot of planning that has to be done, so that actually offers a huge opportunity to be solved with AI, especially in light of recent developments in this thing called reinforcement learning. So reinforcement learning is basically machine learning that can think strategically, they can actually plan ahead in a series of decisions, and it's actually technology behind AlphaGo which is the Google technology that beat the best Go players in the world. And what we basically do is we say, "Okay, if we understand "you're a customer, we understand the company you work at, "we understand the things they've been researching elsewhere "on third party sites, then we can actually start to predict "about content they will be likely to engage with." But more importantly, we can start to predict content they're more likely to engage with next, and after that, and after that, and after that, and so what our technology does is it looks at all possible paths that your potential customer can take, all the different content you could ever suggest to them, all the different routes they will take, and it looks at ones that they're likely to follow, but also ones they're likely to turn them into an opportunity. And so we basically, in the same way Google Maps considers all possible routes to get you from your office to home, we do the same, and we choose the one that's most likely to convert the opportunity, the same way Google chooses the quickest road home. >> Okay, this is really, that's a great example, because people can picture that, but how do you, how do you know what's the best path, is it based on learning from previous journeys from customers? >> Yes. >> And then, if you make a wrong guess, you sort of penalize the engine and say, "Pick the next best, "what you thought was the next best path." >> Absolutely, so the way, the nuts and bolts of how it works is we start working with our clients, and they have all this data of different customers, and how they've engaged with different pieces of content throughout their journey, and so the machine learning model, what it's really doing at any moment in time, given any customer in any stage of the opportunity that they find themselves in, it says, what piece of content are they likely to engage with next, and that's based on historical training data, if you will. And then once we make that decision on a step-by-step basis, then we kind of extrapolate, and we basically say, "Okay, if we showed them this page, or if they engage with "this material, what would that do, what situation would "we find them in at the next step, and then what would "we recommend from there, and then from there, "and then from there," and so it's really kind of learning the right move to make at each time, and then extrapolating that all the way to the opportunity being closed. >> The picture that's in my mind is like, the Deep Blue, I think it was chess, where it would map out all the potential moves. >> Very similar, yeah. >> To the end game. >> Very similar idea. >> So, what about if you're trying to engage with a customer across different channels, and it's not just web content? How is that done? >> Well, that's something that we're very excited about, and that's something that we're currently really starting to devote resources to. Right now, we already have a product live that's focused on web content specifically, but yeah, we're working on kind of a multi-channel type solution, and we're all pretty excited about it. >> Okay so, obviously you can't talk too much about it. Can you tell us what channels that might touch? >> I might have to play my cards a little close to my chest on this one, but I'll just say we're excited. >> Alright. Well I guess that means I'll have to come back. >> Please, please. >> So, um, tell us about the personalized conversations. Is the conversation just another way of saying, this is how we're personalizing the journey? Or is there more to it than that? >> Yeah, it really is about personalizing the journey, right? Like you know, a lot of our clients now have a lot of sophisticated marketing collateral, and a lot of time and energy has gone into developing content that different people find engaging, that kind of positions products towards pain points, and all that stuff, and so really there's so much low-hanging fruit by just organizing and leveraging all of this material, and actually forming the conversation through a series of journeys through that material. >> Okay, so, Aman was telling us earlier that we have so many sort of algorithms, they're all open source, or they're all published, and they're only as good as the data you can apply them to. So, tell us, where do companies, startups, you know, not the Googles, Microsofts, Amazons, where do they get their proprietary information? Is it that you have algorithms that now are so advanced that you can refine raw information into proprietary information that others don't have? >> Really I think it comes down to, our competitive advantage I think is largely in the source of our data, and so, yes, you can build more and more sophisticated algorithms, but again, you're starting with a public data set, you'll be able to derive some insights, but there will always be a path to those datasets for, say, a competitor. For example, we're currently tracking about 700 billion web interactions a year, and then we're also able to attribute those web interactions to companies, meaning the employees at those companies involved in those web interactions, and so that's able to give us an insight that no amount of public data or processing would ever really be able to achieve. >> How do you, Aman started to talk to us about how, like there were DNS, reverse DNS registries. >> Reverse IP lookups, yes. >> Yeah, so how are those, if they're individuals within companies, and then the companies themselves, how do you identify them reliably? >> Right, so reverse IP lookup is, we've been doing this for years now, and so we've kind of developed a multi-source solution, so reverse IP lookups is a big one. Also machine learning, you can look at traffic coming from an IP address, and you can start to make some very informed decisions about what the IP address is actually doing, who they are, and so if you're looking at, at the account level, which is what we're tracking at, there's a lot of information to be gleaned from that kind of information. >> Sort of the way, and this may be a weird-sounding analogy, but the way a virus or some piece of malware has a signature in terms of its behavior, you find signatures in terms of users associated with an IP address. >> And we certainly don't de-anonymize individual users, but if we're looking at things at the account level, then you know, the bigger the data, the more signal you can infer, and so if we're looking at a company-wide usage of an IP address, then you can start to make some very educated guesses as to who that company is, the things that they're researching, what they're in market for, that type of thing. >> And how do you find out, if they're not coming to your site, and they're not coming to one of your customer's sites, how do you find out what they're touching? >> Right, I mean, I can't really go into too much detail, but a lot of it comes from working with publishers, and a lot of this data is just raw, and it's only because we can identify the companies behind these IP addresses, that we're able to actually turn these web interactions into insights about specific companies. >> George: Sort of like how advertisers or publishers would track visitors across many, many sites, by having agreements. >> Yes. Along those lines, yeah. >> Okay. So, tell us a little more about natural language processing, I think where most people have assumed or have become familiar with it is with the B2C capabilities, with the big internet giants, where they're trying to understand all language. You have a more well-scoped problem, tell us how that changes your approach. >> So a lot of really exciting things are happening in natural language processing in general, and the research, and right now in general, it's being measured against this yardstick of, can it understand languages as good as a human can, obviously we're not there yet, but that doesn't necessarily mean you can't derive a lot of meaningful insights from it, and the way we're able to do that is, instead of trying to understand all of human language, let's understand very specific language associated with the things that we're trying to learn. So obviously we're a B2B marketing company, so it's very important to us to understand what companies are investing in other companies, what companies are buying from other companies, what companies are suing other companies, and so if we said, okay, we only want to be able to infer a competitive relationship between two businesses in an actual document, that becomes a much more solvable and manageable problem, as opposed to, let's understand all of human language. And so we actually started off with these kind of open source solutions, with some of these proprietary solutions that we paid for, and they didn't work because their scope was this broad, and so we said, okay, we can do better by just focusing in on the types of insights we're trying to learn, and then work backwards from them. >> So tell us, how much of the algorithms that we would call building blocks for what you're doing, and others, how much of those are all published or open source, and then how much is your secret sauce? Because we talk about data being a key part of the secret sauce, what about the algorithms? >> I mean yeah, you can treat the algorithms as tools, but you know, a bag of tools a product does not make, right? So our secret sauce becomes how we use these tools, how we deploy them, and the datasets we put them again. So as mentioned before, we're not trying to understand all of human language, actually the exact opposite. So we actually have a single machine learning algorithm that all it does is it learns to recognize when Amazon, the company, is being mentioned in a document. So if you see the word Amazon, is it talking about the river, is it talking about the company? So we have a classifier that all it does is it fires whenever Amazon is being mentioned in a document. And that's a much easier problem to solve than understanding, than Siri basically. >> Okay. I still get rather irritated with Siri. So let's talk about, um, broadly this topic that sort of everyone lays claim to as their great higher calling, which is democratizing machine learning and AI, and opening it up to a much greater audience. Help set some context, just the way you did by saying, "Hey, if we narrow the scope of a problem, "it's easier to solve." What are some of the different approaches people are taking to that problem, and what are their sweet spots? >> Right, so the the talk of the data science community, talking machinery right now, is some of the work that's coming out of DeepMind, which is a subsidiary of Google, they just built AlphaGo, which solved the strategy game that we thought we were decades away from actually solving, and their approach of restricting the problem to a game, with well-defined rules, with a limited scope, I think that's how they're able to propel the field forward so significantly. They started off by playing Atari games, then they moved to long term strategy games, and now they're doing video games, like video strategy games, and I think the idea of, again, narrowing the scope to well-defined rules and well-defined limited settings is how they're actually able to advance the field. >> Let me ask just about playing the video games. I can't remember Star... >> Starcraft. >> Starcraft. Would you call that, like, where the video game is a model, and you're training a model against that other model, so it's almost like they're interacting with each other. >> Right, so it really comes down, you can think of it as pulling levers, so you have a very complex machine, and there's certain levers you can pull, and the machine will respond in different ways. If you're trying to, for example, build a robot that can walk amongst a factory and pick out boxes, like how you move each joint, what you look around, all the different things you can see and sense, those are all levers to pull, and that gets very complicated very quickly, but if you narrow it down to, okay, there's certain places on the screen I can click, there's certain things I can do, there's certain inputs I can provide in the video game, you basically limit the number of levers, and then optimizing and learning how to work those levers is a much more scoped and reasonable problem, as opposed to learn everything all at once. >> Okay, that's interesting, now, let me switch gears a little bit. We've done a lot of work at WikiBound about IOT and increasingly edge-based intelligence, because you can't go back to the cloud for your analytics for everything, but one of the things that's becoming apparent is, it's not just the training that might go on in a cloud, but there might be simulations, and then the sort of low-latency response is based on a model that's at the edge. Help elaborate where that applies and how that works. >> Well in general, when you're working with machine learning, in almost every situation, training the model is, that's really the data-intensive process that requires a lot of extensive computation, and that's something that makes sense to have localized in a single location which you can leverage resources and you can optimize it. Then you can say, alright, now that I have this model that understands the problem that's trained, it becomes a much simpler endeavor to basically put that as close to the device as possible. And so that really is how they're able to say, okay, let's take this really complicated billion-parameter neural network that took days and weeks to train, and let's actually derive insights at the level, right at the device level. Recent technology though, like I mentioned deep learning, that in itself, just the actual deploying the technology creates new challenges as well, to the point that actually Google invented a new type of chip to just run... >> The tensor processing. >> Yeah, the TPU. The tensor processing unit, just to handle what is now a machine learning algorithm so sophisticated that even deploying it after it's been trained is still a challenge. >> Is there a difference in the hardware that you need for training vs. inferencing? >> So they initially deployed the TPU just for the sake of inference. In general, the way it actually works is that, when you're building a neural network, there is a type of mathematical operation to do a whole bunch, and it's based on the idea of working with matrices and it's like that, that's still absolutely the case with training as well as inference, where actually, querying the model, but so if you can solve that one mathematical operation, then you can deploy it everywhere. >> Okay. So, one of our CTOs was talking about how, in his view, what's going to happen in the cloud is richer and richer simulations, and as you say, the querying the model, getting an answer in realtime or near realtime, is out on the edge. What exactly is the role of the simulation? Is that just a model that understands time, and not just time, but many multiple parameters that it's playing with? >> Right, so simulations are particularly important in taking us back to reinforcement learning, where you basically have many decisions to make before you actually see some sort of desirable or undesirable outcome, and so, for example, the way AlphaGo trained itself is basically by running simulations of the game being played against itself, and really what that simulations are doing is allowing the artificial intelligence to explore the entire possibilities of all games. >> Sort of like WarGames, if you remember that movie. >> Yes, with uh... >> Matthew Broderick, and it actually showed all the war game scenarios on the screen, and then figured out, you couldn't really win. >> Right, yes, it's a similar idea where they, for example in Go, there's more board configurations than there are atoms in the observable universe, and so the way Deep Blue won chess is basically, more or less explore the vast majority of chess moves, that's really not the same option, you can't really play that same strategy with AlphaGo, and so, this constant simulation is how they explore the meaningful game configurations that it needed to win. >> So in other words, they were scoped down, so the problem space was smaller. >> Right, and in fact, basically one of the reasons, like AlphaGo was really kind of two different artificial intelligences working together, one that decided which solutions to explore, like which possibilities it should pursue more, and which ones not to, to ignore, and then the second piece was, okay, given the certain board configuration, what's the likely outcome? And so those two working in concert, one that narrows and focuses, and one that comes up with the answer, given that focus, is how it was actually able to work so well. >> Okay. Seth, on that note, that was a very, very enlightening 20 minutes. >> Okay. I'm glad to hear that. >> We'll have to come back and get an update from you soon. >> Alright, absolutely. >> This is George Gilbert, I'm with Seth Myers, Senior Data Scientist at Demandbase, a company I expect we'll be hearing a lot more about, and we're on the ground, and we'll be back shortly.
SUMMARY :
We have the privilege to and the participants, and the company you work at, say, "Pick the next best, the right move to make the Deep Blue, I think it was chess, that we're very excited about, Okay so, obviously you I might have to play I'll have to come back. Is the conversation just and actually forming the as good as the data you can apply them to. and so that's able to give us Aman started to talk to us about how, and you can start to make Sort of the way, and this the things that they're and a lot of this data is just George: Sort of like how Along those lines, yeah. the B2C capabilities, focusing in on the types of about the company? the way you did by saying, the problem to a game, playing the video games. Would you call that, and that gets very complicated a model that's at the edge. that in itself, just the Yeah, the TPU. the hardware that you need and it's based on the idea is out on the edge. and so, for example, the if you remember that movie. it actually showed all the and so the way Deep Blue so the problem space was smaller. and focuses, and one that Seth, on that note, that was a very, very I'm glad to hear that. We'll have to come back and and we're on the ground,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
George Gilbert | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
George | PERSON | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
Microsofts | ORGANIZATION | 0.99+ |
Siri | TITLE | 0.99+ |
Googles | ORGANIZATION | 0.99+ |
Demandbase | ORGANIZATION | 0.99+ |
20 minutes | QUANTITY | 0.99+ |
Starcraft | TITLE | 0.99+ |
second piece | QUANTITY | 0.99+ |
WikiBound | ORGANIZATION | 0.99+ |
two businesses | QUANTITY | 0.99+ |
Seth Myers | PERSON | 0.99+ |
Aman Naimat | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Atari | ORGANIZATION | 0.99+ |
Seth | PERSON | 0.98+ |
each customer | QUANTITY | 0.98+ |
each joint | QUANTITY | 0.98+ |
Go | TITLE | 0.98+ |
single | QUANTITY | 0.98+ |
Matthew Broderick | PERSON | 0.98+ |
one | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Aman | PERSON | 0.96+ |
Deep Blue | TITLE | 0.96+ |
billion-parameter | QUANTITY | 0.94+ |
each time | QUANTITY | 0.91+ |
two different artificial intelligences | QUANTITY | 0.88+ |
decades | QUANTITY | 0.88+ |
Google Maps | TITLE | 0.86+ |
AlphaGo | ORGANIZATION | 0.82+ |
about 700 billion web interactions a year | QUANTITY | 0.81+ |
Star | TITLE | 0.81+ |
AlphaGo | TITLE | 0.79+ |
one mathematical | QUANTITY | 0.78+ |
lot | QUANTITY | 0.76+ |
years | QUANTITY | 0.74+ |
DeepMind | ORGANIZATION | 0.74+ |
lot of information | QUANTITY | 0.73+ |
bag of tools | QUANTITY | 0.63+ |
IOT | TITLE | 0.62+ |
WarGames | TITLE | 0.6+ |
sites | QUANTITY | 0.6+ |