Liz Rice, Isovalent | CloudNativeSecurityCon 23
(upbeat music) >> Hello, everyone, from Palo Alto, Lisa Martin here. This is The Cube's coverage of CloudNativeSecurityCon, the inaugural event. I'm here with John Furrier in studio. In Boston, Dave Vellante joins us, and our guest, Liz Rice, one of our alumni, is joining us from Seattle. Great to have everyone here. Liz is the Chief Open Source officer at Isovalent. She's also the Emeritus Chair Technical Oversight Committee at CNCF, and a co-chair of this new event. Everyone, welcome Liz. Great to have you back on theCUBE. Thanks so much for joining us today. >> Thanks so much for having me, pleasure. >> So CloudNativeSecurityCon. This is the inaugural event, Liz, this used to be part of KubeCon, it's now its own event in its first year. Talk to us about the importance of having it as its own event from a security perspective, what's going on? Give us your opinions there. >> Yeah, I think security was becoming so- at such an important part of the conversation at KubeCon, CloudNativeCon, and the TAG security, who were organizing the co-located Cloud Native Security Day which then turned into a two day event. They were doing this amazing job, and there was so much content and so much activity and so much interest that it made sense to say "Actually this could stand alone as a dedicated event and really dedicate, you know, all the time and resources of running a full conference, just thinking about cloud native security." And I think that's proven to be true. There's plenty of really interesting talks that we're going to see. Things like a capture the flag. There's all sorts of really good things going on this week. >> Liz, great to see you, and Dave, great to see you in Boston Lisa, great intro. Liz, you've been a CUBE alumni. You've been a great contributor to our program, and being part of our team, kind of extracting that signal from the CNCF cloud native world KubeCon. This event really kind of to me is a watershed moment, because it highlights not only security as a standalone discussion event, but it's also synergistic with KubeCon. And, as co-chair, take us through the thought process on the sessions, the experts, it's got a practitioner vibe there. So we heard from Priyanka early on, bottoms up, developer first. You know KubeCon's shift left was big momentum. This seems to be a breakout of very focused security. Can you share the rationale and the thoughts behind how this is emerging, and how you see this developing? I know it's kind of a small event, kind of testing the waters it seems, but this is really a directional shift. Can you share your thoughts? >> Yeah I'm just, there's just so many different angles that you can consider security. You know, we are seeing a lot of conversations about supply chain security, but there's also runtime security. I'm really excited about eBPF tooling. There's also this opportunity to talk about how do we educate people about security, and how do security practitioners get involved in cloud native, and how do cloud native folks learn about the security concepts that they need to keep their deployments secure. So there's lots of different groups of people who I think maybe at a KubeCon, KubeCon is so wide, it's such a diverse range of topics. If you really just want to focus in, drill down on what do I need to do to run Kubernetes and cloud native applications securely, let's have a really focused event, and just drill down into all the different aspects of that. And I think that's great. 
It brings the right people together, the practitioners, the experts, the vendors. Everyone can be here, and we can find each other at a smaller event. We are not spread out amongst the thousands of people that would attend a KubeCon.

>> It's interesting, Dave, you know, when we were talking, we're going to bring you in real quick, because AWS, which I think is the bellwether for cloud computing, has now two main shows, AWS re:Invent and re:Inforce. Security, again, broken out there. You see the classic security events, RSA, Black Hat, those are kind of the industry's mainstream security, very wide. But you're starting to see the cloud native, developer-first approach, with both security and cloud native really growing so fast. This is a major trend for a lot of the ecosystem.

>> You know, when you mention those other conferences, John, you hear a lot about shift left. There's a little bit of lip service there, and we heard today way more than lip service. I mean deep practitioner-level conversations, and of course the runtime as well. Liz, you spent a lot of time obviously in your keynote on eBPF, and I wonder if you could share with the audience why you're so excited about that. What makes it a more effective tool compared to other traditional methods? I mean, it sounds like it simplifies things. You talked about instrumenting nodes versus workloads. Can you explain that in a little bit more detail?

>> Yeah, so with eBPF we can load programs dynamically into the kernel, and we can attach them to all kinds of different events that could be happening anywhere on that virtual machine. And if you have the right knowledge about where to hook into, you can observe network events, you can observe file access events, you can observe pretty much anything that's interesting from a security perspective. And because eBPF programs are living in the kernel, there's only one kernel shared amongst all of the applications that are running on that particular machine. So you no longer have to instrument each individual application, or each individual pod. There's no more need to inject sidecars. We can apply eBPF-based tooling on a per-node basis, which just makes things operationally more straightforward, but it's also extremely performant. We can hook these programs into events as typically very lightweight, small programs, kind of, emitting an event, making a decision about whether to drop a packet, making a decision about whether to allow file access, things of that nature. They're super fast; there's no need to transition between kernel space and user space, which is usually quite a costly operation from a performance perspective. So with eBPF we can take security tooling, and other forms of tooling, networking and observability, into the kernel, and it's really efficient there.

>> So Liz-

>> So, if I may, just one quick follow up. You gave kind of a space age example (laughs) in your keynote. Do you think a year from now we'll be able to see, sort of, real world examples in action? How far away are we?

>> Well, some of that is already pretty widely deployed. I mean, in my keynote I was talking about Cilium. Cilium is adopted by hundreds of really big scale deployments. You know, the USERS file is full of household names who've been using Cilium. And as part of that they will be using network policies.
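As an illustration of the per-node, in-kernel hooks Liz describes above, here is a minimal sketch using the BCC Python bindings. This is an assumption made purely for illustration; the tooling discussed in this interview is Cilium and Tetragon, not BCC, but the principle is the same: one small program attached to a kernel event can observe every workload on the node, with no per-pod sidecars.

```python
# A minimal sketch of node-wide kernel observation with eBPF, via the BCC
# Python bindings. Assumes a Linux host with bcc installed and root privileges.
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

// Runs in the kernel on every openat() syscall, for every process on the node.
int trace_openat(struct pt_regs *ctx) {
    char comm[16];
    bpf_get_current_comm(&comm, sizeof(comm));
    bpf_trace_printk("openat by %s\n", comm);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_openat")

print("Tracing file opens across the whole node... Ctrl-C to stop")
b.trace_print()
```

And since Liz mentions that Cilium deployments at scale rely on network policies, a second hedged sketch: applying a CiliumNetworkPolicy through the official Kubernetes Python client. The namespace, labels and policy name are made up; the CRD group, version and plural follow Cilium's published CRDs.

```python
# A sketch of applying a CiliumNetworkPolicy programmatically. The policy
# content (namespace, labels, port) is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster with Cilium installed

policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "allow-frontend-to-api", "namespace": "demo"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "api"}},
        "ingress": [{
            "fromEndpoints": [{"matchLabels": {"app": "frontend"}}],
            "toPorts": [{"ports": [{"port": "8080", "protocol": "TCP"}]}],
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cilium.io", version="v2", namespace="demo",
    plural="ciliumnetworkpolicies", body=policy,
)
```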
And I showed some visualizations this morning of network policy, but again, network policy has been around pretty much since the early days of Kubernetes. It can be quite fiddly to get it right, but there are plenty of people who are using it at scale today. And then we were also looking at some runtime security detections, seeing things like, in my example, exfiltrating the plans to the Death Star, you know, looking for suspicious executables. And again, that's a bit newer, but we do have people running that in production today, proving that it really does work, and that eBPF is a scalable technology. I've been fascinated by eBPF for years, and it's really amazing to see it being used in the real world now.

>> So Liz, you're a maintainer on the Cilium project. Talk about the use of eBPF in the Cilium project. How is it contributing to cloud native security, and really helping to change the dials on that from an efficiency, from a performance perspective, as well as a what's-in-it-for-me-as-a-business perspective?

>> So Cilium is probably best known as a networking plugin for Kubernetes. When you are running Kubernetes, you have to make a decision about which networking plugin you're going to use. And Cilium is an incubating project in the CNCF. It's the most mature of the different CNIs that are in the CNCF at the moment. As I say, very widely deployed. And right from day one, it was based on eBPF. And in fact some of the people who contribute to the eBPF platform within the kernel are also working on the Cilium project. They've been kind of developed hand in hand for the last six, seven years. So really being able to bring some of that networking capability required changes in the kernel that were put in place several years ago, so that now we can build these amazing tools for Kubernetes operators. So we are using eBPF to make the networking stack for Kubernetes and cloud native really efficient. We can bypass some of the parts of the network stack that aren't necessarily required in a cloud native deployment. We can use it to make these incredibly fast decisions about network policy. And we also have a sub-project called Tetragon, which is a newer part of the Cilium family, which uses eBPF to observe these runtime events. Things like people opening a file, or changing the permissions on a file, or making a socket connection. All of these things that, as a security engineer, you are interested in. Who is running executables, who is making network connections, who's accessing files, all of these operations are things that we can observe with Cilium Tetragon.

>> I mean, it's exciting. We've chatted in the past about eBPF, extended Berkeley Packet Filter, which is about the Linux kernel. And I bring that up, Liz, because I think this is the trend I'm trying to understand with this event. I hear bottoms up, developer first. It feels like it's an under the hood, infrastructure, security geek fest for practitioners, because Brian, in his keynote, mentioned BIND in reference to the late Dan Kaminsky, who, obviously, found that error in BIND, in DNS. He mentioned DNS. There's a lot of things that's evolving at the silicon, kernel, kind of root levels of our infrastructure. This seems to be a major shift in focus and rightfully so.
Is that something that you guys talk about, or is that coincidence, or am I just overthinking this point in terms of how nerdy it's getting in terms of the importance of, you know, getting down to the low level aspects of protecting everything. And as we heard also the quote was no software secure. (Liz chuckles) So that's up and down the stack of the, kind of the old model. What's your thoughts and reaction to that? >> Yeah, I mean I think a lot of folks who get into security really are interested in these kind of details. You know, you see write-ups of exploits and they, you know, they're quite often really involved, and really require understanding these very deep detailed technical levels. So a lot of us can really geek out about the details of that. The flip side of that is that as an application developer, you know, as- if you are working for a bank, working for a media company, you're writing applications, you shouldn't have to be worried about what's happening at the kernel level. This might be kind of geeky interesting stuff, but really, operationally, it should be taken care of for you. You've got your work cut out building business value in applications. So I think there's this interesting, kind of dual track going on almost, if you like, of the people who really want to get involved in those nitty gritty details, and understand how the underlying, you know, kernel level exploits maybe working. But then how do we make that really easy for people who are running clusters to, I mean like you said, nothing is ever secure, but trying to make things as secure as they can be easily, and make things visual, make things accessible, make things, make it easy to check whether or not you are compliant with whatever regulations you need to be compliant with. That kind of focus on making things usable for the platform team, for the application developers who deliver apps on the platform, that's the important (indistinct)- >> I noticed that the word expert was mentioned, I mentioned earlier with Priyanka. Was there a rationale on the 72 sessions, was there thinking around it or was it kind of like, these are urgent areas, they're obvious low hanging fruit. Was there, take us through the selection process of, or was it just, let's get 72 sessions going to get this (Liz laughs) thing moving? >> No, we did think quite carefully about how we wanted to, what the different focus areas we wanted to include. So we wanted to make sure that we were including things like governance and compliance, and that we talk about not just supply chain, which is clearly a very hot topic at the moment, but also to talk about, you know, threat detection, runtime security. And also really importantly, we wanted to have space to talk about education, to talk about how people can get involved. Because maybe when we talk about all these details, and we get really technical, maybe that's, you know, a bit scary for people who are new into the cloud native security space. We want to make sure that there are tracks and content that are accessible for newcomers to get involved. 'Cause, you know, given time they'll be just as excited about diving into those kind of kernel level details. But everybody needs a place to start, and we wanted to make sure there were conversations about how to get started in security, how to educate other members of your team in your organization about security. So hopefully there's something for everyone. >> That education piece- >> Liz, what's the- >> Oh sorry, Dave. >> What the buzz on on AI? 
We heard Dan talk about, you know, chatGPT, using it to automate spear phishing. There's always been this tension between security and speed to market, but CISOs are saying, "Hey we're going to a zero trust architecture and that's helping us move faster." Will, in your, is the talk on the floor, AI is going to slow us down a little bit until we figure it out? Or is it actually going to be used as an offensive defensive tool if I can use that angle? >> Yeah, I think all of the above. I actually had an interesting chat this morning. I was talking with Andy Martin from Control Plane, and we were talking about the risk of AI generated code that attempts to replicate what open source libraries already do. So rather than using an existing open source package, an organization might think, "Well, I'll just have my own version, and I'll have an AI write it for me." And I don't, you know, I'm not a lawyer so I dunno what the intellectual property implications of this will be, but imagine companies are just going, "Well you know, write me an SSL library." And that seems terrifying from a security perspective, 'cause there could be all sorts of very slightly different AI generated libraries that pick up the same vulnerabilities that exist in open source code. So, I think we're going to go through a pretty interesting period of vulnerabilities being found in AI generated code that look familiar, and we'll be thinking "Haven't we seen these vulnerabilities before? Yeah, we did, but they were previously in handcrafted code and now we'll see the same things being generated by AI." I mean, in the same way that if you look at an AI generated picture and it's got I don't know, extra fingers, or, you know, extra ears or something that, (Dave laughs) AI does make mistakes. >> So Liz, you talked about the education, the enablement, the 72 sessions, the importance of CloudNativeSecurityCon being its own event this year. What are your hopes and dreams for the practitioners to be able to learn from this event? How do you see the event as really supporting the growth, the development of the cloud native security community as a whole? >> Yeah, I think it's really important that we think of it as a Cloud Native Security community. You know, there are lots of interesting sort of hacker community security related community. Cloud native has been very community focused for a long time, and we really saw, particularly through the tag, the security tag, that there was this growing group of people who were, really wanted to work at that intersection between security and cloud native. And yeah, I think things are going really well this week so far, So I hope this is, you know, the first of many additions of this conference. I think it will also be interesting to see how the balance between a smaller, more focused event, compared to the giant KubeCon and cloud native cons. I, you know, I think there's space for both things, but whether or not there will be other smaller focus areas that want to stand alone and justify being able to stand alone as their own separate conferences, it speaks to the growth of cloud native in general that this is worthwhile doing. >> Yeah. >> It is, and what also speaks to, it reminds me of our tagline here at theCUBE, being able to extract the signal from the noise. 
Having this event as a standalone, being able to extract the value in it from a security perspective, that those practitioners and the community at large is going to be able to glean from these conversations is something that will be important, that we'll be keeping our eyes on. >> Absolutely. Makes sense for me, yes. >> Yeah, and I think, you know, one of the things, Lisa, that I want to get in, and if you don't mind asking Dave his thoughts, because he just did a breaking analysis on the security landscape. And Dave, you know, as Liz talking about some of these root level things, we talk about silicon advances, powering machine learning, we've been covering a lot of that. You've been covering the general security industry. We got RSA coming up reinforced with AWS, and as you see the cloud native developer first, really driving the standards of the super cloud, the multicloud, you're starting to see a lot more application focus around latency and kind of controlling that, These abstraction layer's starting to see a lot more growth. What's your take, Dave, on what Liz and- is talking about because, you know, you're analyzing the horses on the track, and there's sometimes the old guard security folks, and you got open source continuing to kick butt. And even on the ML side, we've been covering some of these foundation models, you're seeing a real technical growth in open source at all levels and, you know, you still got some proprietary machine learning stuff going on, but security's integrating all that. What's your take and your- what's your breaking analysis on the security piece here? >> I mean, to me the two biggest problems in cyber are just the lack of talent. I mean, it's just really hard to find super, you know, deep expertise and get it quickly. And I think the second is it's just, it's so many tools to deal with. And so the architecture of security is just this mosaic and a mess. That's why I'm excited about initiatives like eBPF because it does simplify things, and developers are being asked to do a lot. And I think one of the other things that's emerging is when you- when we talk about Industry 4.0, and IIoT, you- I'm seeing a lot of tools that are dedicated just to that, you know, slice of the world. And I don't think that's the right approach. I think that there needs to be a more comprehensive view. We're seeing, you know, zero trust architectures come together, and it's going to take some time, but I think that you're going to definitely see, you know, some rethinking of how to architect security. It's a game of whack-a-mole, but I think the industry is just- the technology industry is doing a really really good job of, you know, working hard to solve these problems. And I think the answer is not just another bespoke tool, it's a broader thinking around architectures and consolidating some of those tools, you know, with an end game of really addressing the problem in a more comprehensive fashion. >> Liz, in the last minute or so we have your thoughts on how automation and scale are driving some of these forcing functions around, you know, taking away the toil and the muck around developers, who just want stuff to be code, right? So infrastructure as code. Is that the dynamic here? Is this kind of like new, or is it kind of the same game, different kind of thing? (chuckles) 'Cause you're seeing a lot more machine learning, a lot more automation going on. What's, is that having an impact? What's your thoughts? 
>> Automation is one of the kind of fundamental underpinnings of cloud native. You know, we're expecting infrastructure to be written as code, We're expecting the platform to be defined in yaml essentially. You know, we are expecting the Kubernetes and surrounding tools to self-heal and to automatically scale and to do things like automated security. If we think about supply chain, you know, automated dependency scanning, think about runtime. Network policy is automated firewalling, if you like, for a cloud native era. So, I think it's all about making that platform predictable. Automation gives us some level of predictability, even if the underlying hardware changes or the scale changes, so that the application developers have something consistent and standardized that they can write to. And you know, at the end of the day, it's all about the business applications that run on top of this infrastructure >> Business applications and the business outcomes. Liz, we so appreciate your time talking to us about this inaugural event, CloudNativeSecurityCon 23. The value in it for those practitioners, all of the content that's going to be discussed and learned, and the growth of the community. Thank you so much, Liz, for sharing your insights with us today. >> Thanks for having me. >> For Liz Rice, John Furrier and Dave Vellante, I'm Lisa Martin. You're watching the Cube's coverage of CloudNativeSecurityCon 23. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dan Kaminsky | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Liz Rice | PERSON | 0.99+ |
Andy Martin | PERSON | 0.99+ |
Liz Rice | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Liz | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Dan | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
two day | QUANTITY | 0.99+ |
72 sessions | QUANTITY | 0.99+ |
Priyanka | PERSON | 0.99+ |
eBPF | TITLE | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
CloudNativeSecurityCon | EVENT | 0.99+ |
Control Plane | ORGANIZATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
today | DATE | 0.99+ |
CloudNativeCon | EVENT | 0.99+ |
Cloud Native Security Day | EVENT | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
Cilium | TITLE | 0.99+ |
second | QUANTITY | 0.99+ |
Boston Lisa | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
each individual application | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
CloudNativeSecurityCon 23 | EVENT | 0.98+ |
hundreds | QUANTITY | 0.97+ |
each individual pod | QUANTITY | 0.97+ |
both things | QUANTITY | 0.97+ |
first year | QUANTITY | 0.97+ |
Tetragon | TITLE | 0.97+ |
BIND | ORGANIZATION | 0.96+ |
this week | DATE | 0.96+ |
Ahmad Khan, Snowflake & Kurt Muehmel, Dataiku | Snowflake Summit 2022
>> Hey everyone. Welcome back to theCUBE's live coverage of Snowflake Summit 22, live from Las Vegas, Caesars Forum. Lisa Martin here with Dave Vellante. We've got a couple of guests here. We're going to be talking about everyday AI. You want to know what that means? You're in the right spot. Kurt Muehmel joins us, the Chief Customer Officer at Dataiku, and Ahmad Khan, the Head of AI and ML Strategy at Snowflake. Guys, great to have you on the program.

>> It's wonderful to be here. Thank you so much.

>> So we want to understand, Kurt, what everyday AI means, but before we do that, for the audience who might not be familiar with Dataiku, could you give them a little bit of an overview? What about what you guys do, your mission, and maybe a little bit about the partnership?

>> Yeah, great. Very happy to do so, and thanks so much for this opportunity. Well, Dataiku, we are a collaborative platform for enterprise AI. And what that means is it's software, you know, that sits on top of incredible infrastructure, notably Snowflake, that allows people from different backgrounds, data analysts, data scientists, data engineers, all to come together, to work together, to build out machine learning models and ultimately the AI that's going to be the future of their business. And so we're very excited to be here, and very proud to be a very close partner of Snowflake.

>> So Ahmad, what is Snowflake's AI strategy? Is it to partner? Where do you pick up? And Frank said today, we're not doing it all. The ecosystem by design.

>> Yeah, absolutely. So we believe in best of breed. Look, we think that we're the best data platform, and for data science and machine learning we want our customers to really use the best tool for their use cases. And, you know, Dataiku is our leading partner in that space. And so when you talk about machine learning and data science, people talk about training a model, but really the difficult part and the challenges come before you train the model: how do you get access to the right data? And then after you train the model, how do you then run the model? And then how do you manage the model? That's very, very important. And that's where our partnership with Dataiku comes into place. Snowflake provides the platform that can process data at scale for the pre-processing bit, and Dataiku comes in and really simplifies the process for deploying the models and managing the models.

>> Got it. Thank you.

>> You talk about it, Kurt; Dataiku talks about everyday AI. I want to break that down. What do you mean by that? And how is this partnership with Snowflake empowering you to deliver that to companies?

>> Yeah, absolutely. So everyday AI for us is, you know, kind of a future state that we are building towards, where we believe that AI will become so pervasive in all of the business processes, all the decision making that organizations have to go through, that it's no longer this special thing that we talk about. It's just the day to day life of our businesses.
And we can't do that without partners like Snowflake, because they're bringing together all of that data and ensuring that there is the computational horsepower behind that to drive it. We heard that this morning in some of the keynote, talking about that broad democratization and, let's call it, the pressure that that's going to put on the underlying infrastructure. And so ultimately everyday AI for us is where companies own that AI capability. They're building it themselves, with very broad participation in the development of that. And all that work then is being pushed down into best of breed infrastructure, notably, of course, Snowflake.

>> Well, you said push down. There's a term in the industry, push down optimization. What does that mean? How is it evolving? Why is it so important?

>> So Ahmad, do you want to take a first stab at that?

>> Yeah, absolutely. So, I mean, when you're processing data before you train a model, you have to do it at scale. That data is coming from all different sources. It's human generated, machine generated data, we're talking millions and billions of rows of data. And you have to make sense of it. You have to transform that data into the right kind of features, into the right kind of signals that inform the machine learning model that you're trying to train. And so that's where any kind of large scale data processing is automatically pushed down by Dataiku into Snowflake's scalable infrastructure. So you don't get into, like, memory issues. You don't get into situations where your pipeline is running overnight and it doesn't finish in time. Right? And so you can really take advantage of the scalable nature of cloud computing using Snowflake's infrastructure. So a lot of that processing is actually getting pushed down from Dataiku into the scalable Snowflake compute engine.

>> How does this affect the life of a data scientist? You always hear a data scientist spends 80% of the time wrangling data. I presume there's an infrastructure component around that. We heard this morning you're making infrastructure, my words, self serve. Does this directly address that problem? Talk about that. And what else are you doing to address that 80% problem?

>> It certainly does, right? The way you solve for data scientists needing to have on demand access to computing resources, or of course to the underlying data, is by ensuring that that work doesn't have to run on their laptop, doesn't have to run on some constrained physical machines in a data center somewhere. Instead it gets pushed down into Snowflake and can be executed at scale with incredible parallelization. Now what's really important is the ongoing development between the two products and within that technology. And so today Snowflake announced the introduction of Python within Snowpark, which is really, really exciting, because that really opens up this capability to a much wider audience. Now Dataiku provides that both through a visual interface and, historically, since last year, through Java UDFs, but that's kind of the two extremes, right? You have people who don't code on one side, a very no code or low code population, and then a very high code population.
And this Python integration really allows us to touch the fat center of the data science population, for whom Python really is the lingua franca that they've been learning for decades now.

>> Sure. So talking about the data scientist, I want to elevate that a little bit, because you both have enterprise customers, Dataiku and Snowflake. Kurt, as the Chief Customer Officer, obviously you're with customers all the time. If we look at the macro environment and all the challenges, companies have to be a data company these days; if you're not, you're not going to be successful. It's, how do we do that? Extract insights, value, action, take it. But I'm just curious if your customer conversations are elevating up to the C-suite or the board in terms of being able to democratize access to data, to be competitive, new products, new services. We've seen tremendous momentum on the part of customer growth on the Snowflake side. But what are you hearing from customers as they're dealing with some of these current macro pains?

>> Yeah, I think the conversation today at that C level is not only how do we leverage new infrastructure, right? Most of them now are starting to have Snowflake. I think Frank said 50% of the Fortune 500, so we can say most have that in place. But now the question is, how do we ensure that we're getting access to that data, to that computational horsepower, to a broader group of people, so that it becomes truly a transformational initiative, and not just an IT initiative, not just a technology initiative, but really a core business initiative. And that really has been a pivot. You know, I've been with my company now for almost eight years, and we've really seen a change in that discussion, going from much more niche discussions at the team or departmental level, now to a truly corporate strategic level. How do we build AI into our corporate strategy? How do we really do that in practice?

>> And we hear a lot about, hey, I want to inject data into apps, AI and machine intelligence into applications. And we've talked about how those are separate stacks. You've got the data stack and analytics stack over here. You've got the application development stack; the databases are off in the corner. And so we see you guys bringing those worlds together. And my question is, what does that stack look like? I took a snapshot, I think it was Frank's presentation today. He had infrastructure at the lowest level, live data, so infrastructure's cloud, live data, that's multiple data sources coming in, workload execution, you made some announcements there, mm-hmm <affirmative>, to expand that application development, that's the tooling that is needed, and then marketplace, that's how you bring together this ecosystem. Yes. Monetization is how you turn data into data products and make money. Is that the stack, is that the new stack that's emerging here? Are you guys defining that?

>> Absolutely. Absolutely. You talked about, like, the 80% of the time being spent by data scientists, and part of that is actually discovering the right data, right? Being able to give the right access to the right people and being able to go and discover that data. And so you go from that angle all the way to processing, training a model.
And then all those predictions, the insights that are coming out of the model, are being consumed downstream by data applications. And so the two major announcements I'm super excited about today are, one, the ability to run Python, which is Snowpark, in Snowflake. You can now, as a Python developer, come and bring the processing to where the data lives, rather than move the data out to where the processing lives. Right? So both SQL developers and Python developers are fully enabled. And then the predictions that are coming out of models that are being trained by Dataiku are then being used downstream by these data applications for most of our customers. And so that's where the second announcement, with Streamlit, is super exciting. I can write a complete data application without writing a single line of JavaScript, CSS or HTML. I can write it completely in Python. It makes me super excited as a Python developer myself.

>> And you guys have joint customers that are headed in this direction, doing this today. Can you talk about that?

>> Yeah, we do. You know, there's a few that we're very proud of, well known companies like REI or Emeritus. But one that was mentioned this morning by Frank again, Novartis, the pharmaceutical company. They have been extremely successful in accelerating their AI and ML development by expanding access to their data. And that's a combination of both the Dataiku layer allowing for that work to be developed in that workspace, and of course the underlying platform of Snowflake; without it they would not have been able to realize those gains. And they were talking about very, very significant increases in efficiency, everything from data access to the actual model development to the deployment. It's just really, honestly, inspiring to see.

>> And it was great to see Novartis mentioned on the main stage, massive time to value there. We've actually got them on the program later this week, so that was great. Another joint customer, you mentioned REI. We'll let you go, because you're off to do a session with REI, is that right?

>> Yes, that's exactly right. So we're going to be doing a fireside chat, talking about, in fact, much of the same: all of the success that they've had in accelerating their analytics workflow development, the actual development of AI capabilities within, of course, that beloved brand.

>> Excellent. Guys, thank you so much for joining Dave and me, talking about everyday AI and what you're doing together, Dataiku and Snowflake, to empower organizations to actually achieve that and live it. We appreciate your insights.

>> Thank you both.

>> Thank you for having us.

>> For our guests and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's live coverage of Snowflake Summit 22 from Las Vegas. Stick around, our next guest joins us momentarily.
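The two announcements discussed in this segment lend themselves to short sketches. First, the push-down pattern Ahmad and Kurt describe: with the Snowpark for Python API, DataFrame operations compile to SQL that runs inside Snowflake, so the heavy pre-processing never leaves the warehouse. The connection parameters, table and column names below are placeholders, not anything referenced in the interview.

```python
# Sketch of push-down with Snowpark for Python: the transformations below are
# executed as SQL inside Snowflake; only the aggregated result is written back.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}).create()

features = (
    session.table("RAW_EVENTS")                      # hypothetical source table
    .filter(col("EVENT_TYPE") == "purchase")
    .group_by("CUSTOMER_ID")
    .agg(sum_("AMOUNT").alias("TOTAL_SPEND"))
)
features.write.save_as_table("CUSTOMER_FEATURES", mode="overwrite")
```

Second, the kind of all-Python data application Ahmad mentions alongside the Streamlit announcement. This sketch uses open-source Streamlit with a Snowpark session rather than the in-Snowflake Streamlit integration itself, and the prediction table is invented.

```python
# streamlit_app.py -- run with: streamlit run streamlit_app.py
import streamlit as st
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

st.title("Churn predictions")

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}).create()

min_score = st.slider("Minimum churn score", 0.0, 1.0, 0.5)

df = (
    session.table("CHURN_PREDICTIONS")       # hypothetical model output table
    .filter(col("CHURN_SCORE") >= min_score)
    .to_pandas()
)

st.bar_chart(df.set_index("CUSTOMER_ID")["CHURN_SCORE"])
st.dataframe(df)
```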
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Frank | PERSON | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Novartis | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Kurt | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Ahmad Khan | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Python | TITLE | 0.99+ |
millions | QUANTITY | 0.99+ |
two products | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two extremes | QUANTITY | 0.99+ |
Kurt Muehmel | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
Snowflake Summit 2022 | EVENT | 0.98+ |
Amma | PERSON | 0.98+ |
Kurt UL | PERSON | 0.98+ |
second announcement | QUANTITY | 0.98+ |
JavaScript | TITLE | 0.98+ |
Caesar | PERSON | 0.98+ |
billions | QUANTITY | 0.97+ |
first step | QUANTITY | 0.97+ |
REI | ORGANIZATION | 0.97+ |
HTML | TITLE | 0.97+ |
two major announcements | QUANTITY | 0.97+ |
later this week | DATE | 0.97+ |
Snowflake | ORGANIZATION | 0.96+ |
Amad | PERSON | 0.94+ |
this morning | DATE | 0.94+ |
single line | QUANTITY | 0.94+ |
Aico | ORGANIZATION | 0.93+ |
SQL | TITLE | 0.93+ |
Snowflake | TITLE | 0.93+ |
one side | QUANTITY | 0.91+ |
fortune 500 | QUANTITY | 0.91+ |
Java UDFs | TITLE | 0.9+ |
almost eight years | QUANTITY | 0.9+ |
emeritus | ORGANIZATION | 0.89+ |
snowflake summit 22 | EVENT | 0.85+ |
IKU | ORGANIZATION | 0.85+ |
Cube | ORGANIZATION | 0.85+ |
Cube | PERSON | 0.82+ |
decades | QUANTITY | 0.78+ |
IKU | TITLE | 0.74+ |
streamlet | TITLE | 0.72+ |
snowflake | ORGANIZATION | 0.7+ |
Dataiku | PERSON | 0.65+ |
couple of | QUANTITY | 0.64+ |
DataCo | ORGANIZATION | 0.63+ |
CSS | TITLE | 0.59+ |
one | QUANTITY | 0.55+ |
data ICU | ORGANIZATION | 0.51+ |
rows | QUANTITY | 0.49+ |
Conn | ORGANIZATION | 0.35+ |
The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL
Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "The Shortest Path to Vertica: Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Mauricio Felicia, Vertica engineers joining us from the EMEA region. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time, and any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Marco.

Hello everybody, this is Marco speaking, a sales engineer from EMEA. I'll just get going. This is the agenda: part one will be done by me, part two will be done by Mauricio. The agenda is, as you can see: big bang or piece by piece; the migration of the DDL; migration of the physical data model; migration of ETL and BI functionality; what to do with stored procedures; what to do with any possible existing user defined functions; and the migration of the data itself. Part two will be by Mauricio. Mauricio, do you want to introduce yourself?

Yeah, hello everybody, my name is Mauricio Felicia and I'm a Vertica pre-sales engineer. Like Marco, I'm going to talk about how to optimize data warehouses using some specific Vertica techniques like table flattening and live aggregate projections. So let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse as it is, with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but we should minimize, in this initial phase, the amount of changes, in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such; during this optimization phase we can create, for example, projections for some specific queries, or optimize encoding, or change some of the resource pools. This is something that we normally do if and when needed. And finally, again if and when needed, we go through an architectural redesign, using the full set of Vertica techniques in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we may go back and tune some specific features before moving back to the architecture design. We are going through this process in the next few slides.

OK. In order to encourage everyone to keep using their common sense when migrating to a new database management system, because people are often afraid of it, it's often useful to use the analogy of how smoothly a house move can go.
In your old home you might have developed solutions for your everyday life that make perfect sense there. For example, if your old Saint Bernard can't walk anymore, you might be using a forklift to heave him in through the window of the old home. Well, in the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. It's very much the same with Vertica: start by making the transition gentle.

Again, to remain in my analogy with the house move: picture your new house as your new holiday home. Begin to install everything you miss and everything you like from your old home. Once you have everything you need in your new house, you can shut down the old one. So move bit by bit, and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, where you're really on a sinking ship. Otherwise, again, identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, and use the gained reputation for further funding. And if you find that nobody's using the old platform anymore, you can shut it down. If you really have to, you can still go big bang, in one go, but only if you absolutely have to; otherwise migrate by subject area, and group similar areas together.

Having said that, you start off by migrating objects, objects in the database; that's one of the very first steps. It consists of migrating first the places where you can put the other objects into, that is, owners and locations, which are usually schemas. Then you extract the tables and views, convert the object definitions, and deploy them to Vertica. And remember that you shouldn't do it manually: never type what you can generate, automate whatever you can.

Users and roles: usually there are system tables in the old database that contain all the roles. You can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard.

Catalogs and schemas should be relatively straightforward, with maybe sometimes a difference: Vertica does not restrict you by defining a schema as a collection of all objects owned by a user, but it emulates that for old times' sake. Vertica does not need the catalog either; if the old tools that you use absolutely need a catalog, it is always set to the name of the database in the case of Vertica.

Having now the schemas, the catalogs, the users and roles in place, move on to the data definition language, the DDL. If you are allowed to, it's best to use a tool that translates the data types in the DDL it generates. You might see a mention of ODB several times in this presentation, by the way; we are very happy to have it. It can export the old database's table definitions because it works with ODBC: it takes what the old database's ODBC driver reports and then uses internal translation tables to map to several target DBMS flavors, the most important of which is obviously Vertica. If they force you to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, and so on; each DBMS should have a set of tools to extract the object definitions to be deployed in another instance of the same DBMS.
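Marco's "never type what you can generate" advice, applied to users and roles, can look like the following sketch. The input file layout and all names are assumptions; the idea is simply to turn an export of the old database's user/role system tables into CREATE ROLE, CREATE USER and GRANT statements for Vertica.

```python
# Sketch: generate a Vertica users-and-roles script from an exported CSV
# (assumed layout: one "user,role" pair per line).
import csv
from collections import defaultdict

roles_by_user = defaultdict(set)
with open("old_db_users_roles.csv", newline="") as f:
    for user, role in csv.reader(f):
        roles_by_user[user].add(role)

all_roles = sorted({r for roles in roles_by_user.values() for r in roles})

with open("vertica_users_roles.sql", "w") as out:
    for role in all_roles:
        out.write(f"CREATE ROLE {role};\n")
    for user, roles in sorted(roles_by_user.items()):
        out.write(f"CREATE USER {user};\n")
        for role in sorted(roles):
            out.write(f"GRANT {role} TO {user};\n")

print("Wrote vertica_users_roles.sql; review it, then run it with vsql.")
```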
If I talk about views: you'll usually find the view definitions in the old database catalog as well. One thing that might need a bit of special care is synonyms. A synonym is something that Vertica emulates in different ways depending on the specific needs: a view on top of the view or table to be referred to, or something that is really neat and that other databases don't have, the search path. The search path works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and it is searched first in the first entry of the search path, then in the second, then in the third, which makes synonyms completely unneeded.

When you generate DDL, to remain in the analogy of moving house, dust and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, this is usually the corpse of a bad migration in the past already: an ID is usually an integer and not an oversized floating-point data type; a first name hardly ever has 256 characters; and if a column is called HIRE_DT, it's not necessarily needed to store the second when somebody was hired. So take good care: while you are moving, dust off your stuff and use better data types.

The same applies especially to strings. How many bytes does a string of four euro signs contain? It's not 4, it's actually 12: in UTF-8, the way that Vertica encodes strings, an ASCII character takes one byte, but the euro sign takes three. That means that when you have a single-byte character set at the source, you have to pay attention: oversize the columns first, because otherwise data gets rejected or truncated, and then you will have to very carefully check what the best size is. The most promising approach is to initially dimension strings in multiples of their original length; again, ODB, with the option you see there, will multiply the lengths of what would otherwise be single-byte character columns to account for wide characters from traditional databases. Then load a representative sample of your source data and profile it, using the tools that we personally use, to find the actual longest values, and then make the columns shorter. Note that you will be paying for the issues of having too long and too big data types in projection design, and we live and die with our projections.

You might remember the rules on how default projections come to exist. The way that we do it initially would be, just like for the profiling, to load a representative sample of the data, collect a representative set of already known queries, and run the Vertica Database Designer. You don't have to decide immediately; you can always amend things later. Otherwise, follow the laws of physics: avoid moving data back and forth across nodes, and avoid heavy I/O, if you can, when you design your projections initially by hand. Encoding matters: you know that the Database Designer is a very tight-fisted thing, it will optimize to use as little space as possible, so you have to think of the fact that if you compress very well, you might end up using more time reading the data back. This is a test we ran once using several encoding types, and you see that RLE, run length encoding, if sorted, is not even visible, while the others are considerably slower. You can get the slides and look at them in detail; I won't go into the details here.
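One way to do the string profiling just described, finding the longest actual value per VARCHAR column after loading an oversized sample, is a small script along these lines. Connection details and the schema name are placeholders; it uses the vertica-python client and Vertica's v_catalog.columns system view.

```python
# Sketch: report the longest value (in bytes) of every VARCHAR column in a
# staging schema, so oversized columns can be shrunk to a sensible length.
import vertica_python

conn_info = {"host": "localhost", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "dwh"}   # placeholders

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute("""
        SELECT table_schema, table_name, column_name
        FROM v_catalog.columns
        WHERE data_type ILIKE 'varchar%' AND table_schema = 'staging'
    """)
    for schema, table, column in cur.fetchall():
        cur.execute(
            f'SELECT MAX(OCTET_LENGTH("{column}")) FROM "{schema}"."{table}"'
        )
        max_bytes = cur.fetchone()[0]
        print(f"{schema}.{table}.{column}: longest value is {max_bytes} bytes")
```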
Now, about BI migrations: usually you can expect 80% of everything to be able to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which are going to be talked about later; you might have to write those by hand. You will be able to switch off caching, because Vertica speeds up everything, and with live aggregate projections, if you have worked with MOLAP cubes before, you very probably won't need them at all.

ETL tools: what you will have to do is, if you load row by row into the old database, consider changing everything to very big transactions; and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of inserts. Yeah, the COPY command, that's what I have here.

As for custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database, we compare them regularly, by far compared to any other database. You might find that many of the functions you have written won't be needed on the new database, so look at the Vertica catalog instead of trying to migrate a function that you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have. Very rarely will you have to actually write a procedure that involves a loop; in our experience it's really very, very rare. Usually you can just switch to standard scripting.

And this is basically repeating what Mauricio said, so in the interest of time I will skip this. Look at this one here: most of the data warehouse migration tasks should be automatic. You can automate DDL migration using ODB, which is crucial. Data profiling is not crucial, but game-changing. The encoding is the same thing: you can automate it using our Database Designer. The physical data model optimization in general is game-changing; you have the Database Designer for that. For the provisioning, use the old platform's tools to generate the SQL. Having no objects without their owners is crucial. And as for functions and procedures, they are only crucial if they embody the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now.
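Marco's ETL point above, replacing row-by-row parameterized inserts with one bulk COPY, can be sketched like this with the vertica-python client. Table, column and file names are assumptions.

```python
# Sketch: stream a delimited extract through a single COPY statement instead
# of millions of "INSERT ... VALUES (?, ?, ?)" round trips.
import vertica_python

conn_info = {"host": "localhost", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "dwh"}   # placeholders

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    with open("sales_extract.csv", "rb") as src:
        cur.copy(
            "COPY staging.sales (sale_id, sold_at, amount) "
            "FROM STDIN DELIMITER ',' ABORT ON ERROR",
            src,
        )
    conn.commit()
```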
Thank you, Marco. So we will now continue our presentation, talking about some of the data warehouse optimization techniques that we can implement in Vertica in order to improve the general efficiency of the data warehouse. Let me start with a few simple messages. Well, the first one is that you are supposed to optimize only if and when this is needed. In most of the cases, just a lift and shift from the old data warehouse to Vertica will provide you the performance you were looking for, or even better, so in that case it's probably not really needed to optimize anything. In case you want to optimize, or you need to optimize, then keep in mind some of the Vertica peculiarities: for example, implement deletes and updates in the Vertica way; use live aggregate projections in order to avoid, or better, to limit, the GROUP BY executions at query time; use table flattening in order to avoid or limit joins; and then you can also implement some specific Vertica extensions, for example time series analysis or machine learning, on top of your data.

We will now start by reviewing the first of these bullets: optimize if and when needed. Well, if, when you migrate from the old data warehouse to Vertica without any optimization, the performance level is OK, then probably you don't need to optimize anything. But if this is not the case, one very easy technique that you can use is to ask Vertica to optimize the physical data model using the Vertica Database Designer. The DBD, which is the Vertica Database Designer, has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. With other databases you might need to hire experts to look at your data, your data warehouse, your table definitions, creating indexes or whatever; in Vertica, all you need is to run something as simple as six single SQL statements to get a very well optimized physical data model. You see that we start by creating a new design, then we add to the design the tables and the queries, the queries that we want to optimize. We set our target: in this case we are tuning the physical data model in order to maximize query performance, and this is why we are using the query objective in our statement; another possible objective would be to tune in order to reduce storage, or a mix between tuning storage and tuning queries. And finally we ask Vertica to produce and deploy this optimized design. In a matter of literally a few minutes, what you can get is a fully optimized physical data model. OK, this is something very, very easy to implement.
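The six-statement Database Designer flow Mauricio walks through above can be scripted as below. The meta-function names follow the DBD programmatic API; treat the exact arguments, file paths and the design name as assumptions to check against the documentation for your Vertica version.

```python
# Sketch: drive the Database Designer through its SQL (programmatic) API.
import vertica_python

conn_info = {"host": "localhost", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "dwh"}   # placeholders

dbd_steps = [
    "SELECT DESIGNER_CREATE_DESIGN('my_design')",
    "SELECT DESIGNER_ADD_DESIGN_TABLES('my_design', 'public.*')",
    "SELECT DESIGNER_ADD_DESIGN_QUERIES('my_design', '/tmp/queries.sql', 'true')",
    "SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('my_design', 'QUERY')",
    "SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('my_design', "
    "'/tmp/my_design_projections.sql', '/tmp/my_design_deploy.sql')",
    "SELECT DESIGNER_DROP_DESIGN('my_design')",
]

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    for step in dbd_steps:
        cur.execute(step)   # each meta-function returns when its step completes
```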
So, why flattened tables and live aggregate projections? Well, basically we use flattened tables and live aggregate projections to minimize or avoid joins, which is what flattened tables are used for, and GROUP BYs, which is what live aggregate projections are used for. Now, compared to traditional data warehouses, Vertica can store, process, aggregate and join orders of magnitude more data, because it is a true columnar database, so joins and GROUP BYs normally are not a problem at all; they run faster than in any traditional data warehouse. But there are still scenarios where the data sets are so big, and we are talking about petabytes of data growing quickly, that we need something to boost GROUP BY and join performance. This is why you can use live aggregate projections to perform aggregations at loading time and limit the need for GROUP BY at query time, and flattened tables to combine information from different entities at loading time and, again, avoid running joins at query time.

So, live aggregate projections. At this point in time we can use live aggregate projections with four built-in aggregate functions, which are SUM, MIN, MAX and COUNT. Let's see how this works. Suppose that you have a normal table, in this case a table unit_sold with three columns, PID, date_time and quantity, which has been segmented in a given way, and on top of this base table, which we call the anchor table, we create a projection. We create the projection using a SELECT that aggregates the data: we take the PID, the date portion of date_time, and the sum of quantity from the base table, grouping on the first two columns, so PID and the date portion of date_time.
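As a hedged sketch of that definition, with table and column names mirroring the spoken example rather than the actual slide:

  CREATE TABLE unit_sold (
      pid       INT,
      date_time TIMESTAMP,
      qty       INT
  );

  -- Live aggregate projection: pre-aggregates at load time,
  -- grouped by product and by the date portion of the timestamp
  CREATE PROJECTION unit_sold_lap AS
      SELECT pid,
             date_time::DATE sold_date,
             SUM(qty) total_qty
      FROM unit_sold
      GROUP BY pid, date_time::DATE
      KSAFE 1;

From then on, every load into unit_sold also maintains the pre-aggregated rows in unit_sold_lap, with no ETL step involved.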
Now, what happens when we load data into the base table? All we have to do is load data into the base table. When we do that, we will of course fill the projections; assuming we are running with k-safety equal to one, we will have two projections, and we will load into those two projections all the detailed data we are loading into the table, so PID, date_time and quantity. But at the very same time, without having to do anything, without any particular operation and without having to run any ETL procedure, we also get, automatically, the data in the live aggregate projection, pre-aggregated by PID and the date portion of date_time, with the sum of quantity in the column named total_quantity. This is something we get for free, without having to run any specific procedure, and it is very, very efficient. So the key concept is that during the loading operation, which from the DML point of view is executed against the base table, we do not explicitly aggregate the data and we don't have any procedure to do it: the aggregation is automatic, and Vertica brings the data into the live aggregate projection every time we load into the base table.

You see the two SELECTs on the left side of this slide, and those two SELECTs produce exactly the same result: running the SELECT of PID, date and SUM of quantity from the base table, or running the SELECT star from the live aggregate projection, returns exactly the same data. This is of course very useful, but what is much more useful, and we can observe this if we run an EXPLAIN, is that if we run the SELECT against the base table, asking for this grouped data, what happens behind the scenes is that Vertica sees there is a live aggregate projection holding the data that has already been aggregated during the loading phase, and rewrites the query to use the live aggregate projection. This happens automatically: this is a query that ran a GROUP BY against unit_sold, and Vertica decided to rewrite it as something to be executed against the live aggregate projection, because it knows this will save a huge amount of time and effort. And it is not limited to the exact aggregate you defined: for example, another query like a SELECT COUNT, and GROUP BYs in general, will also take advantage of the live aggregate projection, and again this happens automatically, you don't have to do anything to get it.

One thing that we have to keep very, very clear in mind: what Vertica stores in the live aggregate projection is partially aggregated data. In this example we have two inserts: the first insert is inserting four rows and the second insert is inserting five rows. For each of these inserts we will have a partial aggregation, because Vertica can never know that after the first insert there will be a second one, so it calculates the aggregation of the data every time it runs an insert. This is a key concept, and it also means that you can maximize the effectiveness of this technique by inserting large chunks of data. If you insert data row by row, this technique, the live aggregate projection, is not very useful, because for every row that you insert you will have an aggregation, so the live aggregate projection will end up containing the same number of rows that you have in the base table. But if you insert a large chunk of data every time, the number of aggregations you end up with in the live aggregate structure is much smaller than the base data. This is a key concept.
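That is also why, on the loading side, one big COPY of a file or named pipe plays much better with live aggregate projections than a stream of single-row inserts: each load statement produces one partially aggregated batch. A small illustration, with a made-up file path:

  -- One large batch: only a handful of partially aggregated rows land in the projection
  COPY unit_sold FROM '/data/unit_sold_2020_01.csv' DELIMITER ',';

  -- Thousands of tiny committed inserts: roughly one partially aggregated row each
  -- INSERT INTO unit_sold VALUES (1, '2020-01-01 10:00:00', 5); COMMIT;

The same reasoning backs the migration advice earlier in the talk about replacing parameter-marker inserts with COPY from a named pipe.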
You can see how this works by counting the number of rows in the live aggregate projection. If you run the SELECT COUNT star against the unit_sold live aggregate projection, the query on the left side, you get four rows; but if you EXPLAIN this query, you see that it was actually reading six rows. That is because each of those two inserts actually inserted three rows into the live aggregate projection. So, again, this is the key concept: live aggregate projections keep partially aggregated data, and the final aggregation will always happen at runtime.

Another structure which is very similar to the live aggregate projection is what we call a top-K projection. We do not actually aggregate anything in a top-K projection: we just keep the last rows, or limit the number of rows that we keep, using the LIMIT ... OVER (PARTITION BY ... ORDER BY ...) clause. In this case we create, on top of the base table, two top-K projections: one to keep the last quantity that has been sold, and the other one to keep the maximum quantity. In both cases it is just a matter of how we order the data, in the first case using the date_time column, in the second case using quantity, and in both cases we fill the projection with just one row per group. Again, this is something that happens when we insert data into the base table, and it happens automatically. If, after the insert, we run our SELECT against either the max quantity or the last quantity, we get exactly those rows, and you can see that we have far fewer rows in the top-K projections than in the base table.
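A hedged sketch of those two top-K projections, again using the illustrative unit_sold names rather than the slide's exact DDL:

  -- Keep only the most recent sale per product
  CREATE PROJECTION unit_sold_last AS
      SELECT pid, date_time, qty
      FROM unit_sold
      LIMIT 1 OVER (PARTITION BY pid ORDER BY date_time DESC);

  -- Keep only the largest sale per product
  CREATE PROJECTION unit_sold_max AS
      SELECT pid, date_time, qty
      FROM unit_sold
      LIMIT 1 OVER (PARTITION BY pid ORDER BY qty DESC);

Both projections are maintained automatically as data is loaded into unit_sold, just like the live aggregate projection.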
We said at the beginning that we can use four built-in functions; you might remember, MIN, MAX, SUM and COUNT. But what if I want to create my own specific aggregation on top of the live aggregate projection? Our customers have very specific needs in terms of live aggregate projections, and in that case you can code your own live aggregate projection user-defined functions: you can create a user-defined transform function to implement any sort of complex aggregation while loading data. After you have implemented this UDTF, you can deploy it using the pre-pass approach, which basically means the data is aggregated at loading time, during the data ingestion, or the batch approach, which means the aggregation runs later, as a batch, on top of the loaded data.

Things to remember about live aggregate projections: out of the box they are limited to the built-in functions, again SUM, MAX, MIN and COUNT, but you can code your own UDTFs, so you can do whatever you want. They can reference only one table. For Vertica versions before 9.3 it was impossible to update or delete on the anchor table; this limit has been removed in 9.3, so you can now update and delete data from the anchor table. Live aggregate projections follow the segmentation of the GROUP BY expression, and in some cases the optimizer can decide to pick the live aggregate projection or not, depending on whether using the aggregation is convenient. And remember that if we insert and commit every single row to the anchor table, then we end up with a live aggregate projection that contains exactly the same number of rows as the base table; in that case, using the live aggregate projection or using the base table would be the same.

So this is one of the two fantastic techniques that we can implement in Vertica, the live aggregate projection, which is basically there to avoid or limit GROUP BYs. The other one, which we are going to talk about now, is the flattened table, and it is used in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to fix the problem regardless of the amount of data we are dealing with. So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is, so I will not spend time on this slide. The main scope of a normalized schema is to reduce data redundancy, and reducing data redundancy is a good thing because we obtain fast writes: we only have to write small chunks of data into the right tables. The problem with normalized schemas is that when you run your queries, you have to put together information that arrives from different tables, and you are required to run joins. Again, Vertica normally is very good at running joins, but sometimes the amount of data makes joins not easy to deal with, and joins are sometimes not easy to tune.

What happens in a normal, let's say traditional, data warehouse is that we denormalize the schemas, either manually or using an ETL. So on one side of this slide, on the left, we have the normalized schemas, where we get very fast writes; on the other side, on the right, we have the wide tables, where all the joins and pre-aggregations have already been run in order to prepare the data for the queries. So we have fast writes on the left and fast reads on the right; the problem is in the middle, because we push all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. And the way we normally implement this, either manually using procedures or using an ETL tool, is that we have to code an ETL layer that runs the INSERT...SELECT reading from the normalized schema and writing into the wide table at the end, the one used by the data access tools we are going to use to run our queries. This approach is costly, because of course someone has to code the ETL; it is slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything was okay with the batch; it is resource intensive, and it is also people intensive, because of the people who have to code and check the results; it is error prone, because it can fail; and it introduces latency, because there is a gap on the time axis between the time t0, when you load the data into the normalized schema, and the time t1, when the data is finally ready to be queried.
So what Vertica does to facilitate this process is to provide flattened tables. With flattened tables, first, you avoid data redundancy, because you don't need both the wide table and the normalized schema on the left side; second, it is fully automatic, you don't have to do anything, you just insert the data into the wide table, and the ETL that you would otherwise have coded is turned into an INSERT...SELECT performed by Vertica automatically; it is robust; and the latency is zero, because as soon as you load the data into the wide table you get all the joins executed for you.

So let's have a look at how it works. In this case we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that there is one column here, a dimension value, which can be defined either with DEFAULT followed by a SELECT, or with SET USING. The difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load the data into the base table; if we use SET USING, we will have to run a refresh. But everything is there: you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it comes for free, and with DEFAULT the latency is zero, so as soon as you load the other columns, you have the dimension value populated as well.

Let's see an example. Suppose we have a dimension table, the customer dimension, on the left side, and a fact table on the right. You see that the fact table uses columns like o_name or o_city, which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table, directly into the fact table and without of course loading the data that comes from the dimension, all the data from the dimension is populated automatically. So suppose that we run this insert: as you can see, we are inserting directly into the fact table, and we are loading o_id, customer_id and total; we are not loading name, nor city. Name and city will be automatically populated by Vertica for you, because of the definition of the flattened table. That is all you need in order to have your wide table, your flattened table, built for you, and it means that at runtime you won't need any join between the base fact table and the customer dimension that we used in order to calculate name and city, because the data is already there.

This was using DEFAULT; the other option is using SET USING. The concept is absolutely the same: in this case, on the right side, we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The concept, as I said, is the same, but with SET USING we have to refresh: we have to run this SELECT REFRESH_COLUMNS with the name of the table, in which case all columns will be refreshed, or you can specify only certain columns, and this brings in the values for name and city, reading from the customer dimension. This technique is extremely useful. Just to summarize the most important differences between DEFAULT and SET USING: you simply have to remember that DEFAULT populates your target when you load, SET USING when you refresh, and in some cases you might want to use them both. In this example here we define o_name using both DEFAULT and SET USING, and this means we get the data populated either when we load the data into the base table or when we run the refresh.
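A hedged sketch of the flattened-table pattern; the customer_dim and orders_fact names and columns are invented for illustration and are not the DDL shown on the slide:

  CREATE TABLE customer_dim (
      cid  INT,
      name VARCHAR(64),
      city VARCHAR(64)
  );

  CREATE TABLE orders_fact (
      o_id        INT,
      customer_id INT,
      total       NUMERIC(12,2),
      -- populated automatically as soon as the row is loaded
      o_name VARCHAR(64) DEFAULT (SELECT name FROM customer_dim
                                  WHERE customer_dim.cid = orders_fact.customer_id),
      -- populated when REFRESH_COLUMNS is run
      o_city VARCHAR(64) SET USING (SELECT city FROM customer_dim
                                    WHERE customer_dim.cid = orders_fact.customer_id)
  );

  -- Only the fact columns are loaded; o_name is filled in by the DEFAULT query
  INSERT INTO orders_fact (o_id, customer_id, total) VALUES (1, 42, 99.90);

  -- Bring the SET USING column up to date from the dimension
  SELECT REFRESH_COLUMNS('orders_fact', 'o_city');

The DEFAULT column behaves like the zero-latency case described above, while the SET USING column is refreshed on demand.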
This is a summary of the techniques that we can implement in Vertica in order to make our data warehouses even more efficient. And, well, this is basically the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.
John Hennessy, Knight-Hennessy Scholars | ACG SV Grow! Awards 2019
(upbeat techno music) >> From Mountain View, California, it's the Cube covering the 15th Annual Grow Awards. Brought to you by ACG SV. >> Hi, Lisa Martin with the Cube on the ground at the Computer History Museum for the 15th annual ACG SV Awards. And in Mountain View, California, excited to welcome to the Cube for the first time, John Hennessy, the chairman of Alphabet and the co-founder of the Knight-Hennessy Scholars Program at Stanford. John, it's truly a pleasure to have you on the Cube today. >> Well delighted to be here, Lisa. >> So I was doing some research on you. And I see Marc Andreessen has called you the godfather of Silicon Valley. >> Marc very generous (laughs) >> So I thought I was pretty cool, I'm going to sit down with the godfather tonight. (laughs) I have not done that yet. So you are keynoting the 15th Annual ACG SV Awards tonight. Talk to us a little bit about the takeaways that the audience is going to hear from you tonight. >> Well, they're going to hear some things about leadership, the importance of leadership, obviously the importance of innovation. We're in the middle of Silicon Valley; innovation is a big thing. And the role that technology plays in our lives and how we should be thinking about that, and how do we ensure the technology is something that serves the public good. >> Definitely. So there's, I think, over 230 attendees expected tonight, over 100 C-levels. ACG SV is much more than a networking organization. There's a lot of opportunities for collaboration, for community. Tell me a little bit about your experience with that from a collaboration standpoint? >> Well, I think collaboration is a critical ingredient. I mean, for so many years, you look at how the collaboration has gone. Just take the collaboration between the universities, my own Stanford, and Silicon Valley, and how that collaboration has developed over time and led to the founding of great companies, but also collaboration within the valley. This is the place to be a technology person; in the whole world it's the best place, partly because of this collaboration, and this innovative spirit that really is a core part of what we are as a place. >> I agree. The innovative spirit is one of the things that I enjoy about not only being in technology, but also living in Silicon Valley. You can't go to a Starbucks without hearing a conversation or many conversations about new startups or cloud technology. So the innovative spirit is pervasive here. And it's also one that I find in an environment like ACG SV. You just hear a lot of inspiring stories, and I was doing some research on them: in the last 18 months, five CEO positions have been seated and materialized through ACG SV, a number of venture deals initiated, several board positions. So a lot of opportunity in this group here tonight. >> Right, well I think that's important because so much of the leadership has got to come by recruiting new young people. And with the increased concern about diversity in our leadership core and our boards, I think building that network out and trying to stretch it a little bit from perhaps the old boys' network of an earlier time in the Valley is absolutely crucial. >> Couldn't agree more. So let's now talk a little bit about the Knight-Hennessy Scholars Program at Stanford. Tell us a little bit about it. When was it founded? >> So we are in our very first year, actually, this year, our first year of scholars; we founded it in 2016.
The motivation was, I think, an increasing gap we perceived in terms of the need for great leadership and what was available. And it was in government. It was in the nonprofit world, it was in the for-profit world. So I, being a lifelong educator, said, what can we do about this? Let's try to recruit and develop a core of younger people who show that they're committed to the greater good and who are excellent, who are innovative, who are creative, and prepare them for leadership roles in the future. >> So you're looking for, are these undergraduate students? >> They are graduate students, so they've completed their undergraduate degrees. It's a little hard to tell, when somebody's coming out of high school, what their civic commitment is, what their ability to lead is. But coming out of the undergraduate experience, and often a few years of work experience, we can tell a lot more about whether somebody has the potential to be a future leader. >> So you said it was founded just in 2016. And one of the things I saw that was very interesting is projecting that in the next 50 years there's going to be 5,000 Knight-Hennessy scholars at various stages of their careers in government, organizations, NGOs, as you mentioned. So looking out 50 years, you have a strong vision there, but you really expect this organization to be able to make a lasting impact. >> That's what our goal is: lasting impact over decades, because people who go into leadership positions often take a decade or two to rise to that position. But that's what our investment is; our investment is in the future. And when I went to Phil Knight, who's my co-founder and donor, my lead donor to the program, he was enthusiastic. His view was that we had a major gap in leadership. And we needed to begin training, we need to do multiple things. We need to do things like we're doing tonight. But we also need to think about that next younger generation that is up and coming. >> In terms of inspiring the next generation of innovative, diverse thinkers, talk to me about some of the things that this program is aimed at, in addition to just, you know, some of the knowledge about leadership, but really helping them understand this diverse nature in which we now all find ourselves living. >> So one of the things we do is we try to bring in leaders from all different walks of life to meet and have a conversation with our scholars. This morning, we had the UN High Commissioner for Human Rights in town, Michelle Bachelet, and she sat down and talked about how she thought about her role in addressing human rights, how to move things forward in very complex situations we face around the world, with the collapse of many governments and many human rights violations. And how do you make that forward progress with a difficult problem? So that kind of exposure to leaders who are grappling with really difficult problems is a critical part of our program. >> And they're really seeing and experiencing real world situations? >> Absolutely. They're seeing them up close as they're really occurring. They see the challenges. We had Governor Brown in, just before he went out of office here in California, to talk about criminal justice reform, a major issue in California and around the country. And how do we make progress on that particular challenge? >> So you mentioned a couple of other leaders who the students have had the opportunity to learn from and engage with, but you yourself are quite the established leader.
You went to Stanford as a professor in 1977. You are President Emeritus; you were president of Stanford from 2000 to 2016. So these students also get the opportunity to learn from all that you have experienced as a professor of Computer Science, as well as in one of your current roles as chairman of Alphabet. Talk to us a little bit about just the massive changes that you have seen, not just in Silicon Valley, but in technology and innovation over the last 40 plus years. >> Well, it is simply amazing. When I arrived at Stanford, there was no internet. The ARPANET was in its young days; email was something that a bunch of engineers and scientists used to communicate, nobody else did. I still remember going and seeing the first demonstration of what would become Yahoo, while David Filo and Jerry Yang had it set up in their office. And the thing that immediately convinced me, Lisa, was they showed me that their favorite pizza parlor would now allow orders to go online. And when I saw that I said, the World Wide Web is not just about a bunch of scientists and engineers exchanging information. It's going to change our lives, and it did. And we've seen wave after wave of that, with Google and Facebook and the rise of social media. And now the rise of AI. I mean, this is a transformative technology as big as anything I think we've ever seen in terms of its potential impact. >> It is. AI is so transformative. I was in Hawaii recently on vacation, and Barracuda Networks was actually advertising about AI in Hawaii, and I thought, that's interesting: the people that are coming to Hawaii on vacation, presumably people of, you know, many generations who now have AI as a common household word, may not understand the massive implications and opportunities that it provides. But it is becoming pervasive at every event we're at with the Cube, and there's a lot of opportunity there. It's a very exciting subject. Last question for you. You mentioned that the Knight-Hennessy Scholars Program is really aimed towards graduate students. What is your advice to those STEM kids in high school right now who are watching this saying, oh, John, how do you advise me to be able to eventually get into a program like this? >> Well, I think it begins by really finding your passion, finding something you're really dedicated to, pushing yourself, challenging yourself, showing that you can do great things with it. And then thinking about the bigger role you want to have with technology. After all, technology is not an end in itself. It's a tool to make human lives better, and that's the sort of person we're looking for in the Knight-Hennessy Scholars Program. >> Best advice you've ever gotten? >> Best advice I've ever gotten is: remember that leadership is about service to the people in the institution you lead. >> That's fantastic, not about yourself but really about service to those. >> About service to others. >> John, it's been a pleasure having you on the Cube tonight. We wish you the best of luck in your keynote at the 15th annual ACG SV Awards, and we thank you for your time. >> Thank you, Lisa. I've enjoyed it. >> Lisa Martin, you're watching the Cube on the ground. Thanks for watching. (upbeat tech music)
John Hennessy, Knight Hennessy Scholars with Introduction by Navin Chaddha, Mayfield
(upbeat techno music) >> From Sand Hill Road, in the heart of Silicon Valley, it's theCUBE. Presenting the People First Network, insights from entrepreneurs and tech leaders. >> Hello, everyone, I'm John Furrier the co-host on theCUBE, founder of SiliconANGLE Media. We are here at Sand Hill Road, at Mayfield for the 50th anniversary celebration and content series called The People First Network. This is a co-developed program. We're going to bring thought leaders, inspirational entrepreneurs and tech executives to talk about their experience and their journey around a people first society. This is the focus of entrepreneurship these days. I'm here with Navin Chaddha who's the managing director of Mayfield. Navin, you're kicking off the program. Tell us, why the program? Why People First Network? Is this a cultural thing? Is this part of a program? What's the rationale? What's the message? >> Yeah, first of all I want to thank, John, you and your team and theCUBE for co-hosting the People First Network with us. It's been a real delight working with you. Shifting to people first, Mayfield has had a long standing philosophy that people build companies and it's not the other way around. We believe in betting on great people because even if their initial idea doesn't pan out, they'll quickly pivot to find the right market opportunity. Similarly we believe when the times get tough it's our responsibility to stand behind people and the purpose of this People First Network is people like me were extremely lucky to have mentors along the way, when I was an entrepreneur and now as a venture capitalist, who are helping me achieve my dreams. Mayfield and me want to give back to other entrepreneurs, by bringing in people who are luminaries in their own fields to share their learnings with other entrepreneurs. >> This is a really great opportunity and I want to thank you guys for helping us put this together with you guys. It's a great co-creation. The observation that we're seeing in Silicon Valley and certainly in talking to some of the guests we've already interviewed and that will be coming up on the program, is the spirit of community and the culture of innovation is around the ecosystem of Silicon Valley. This has been the bedrock. >> Mm-hmm. >> Of Silicon Valley, Mayfield, one of the earliest if not the first handful of venture firms. >> Mm-hmm. >> Hanging around Stanford, doing entrepreneurship, this is a people culture in Silicon Valley and this is now going global. >> Mm-hmm. >> So great opportunity. What can we expect to see from some of the interviews? What are you looking for and what's the hope? >> Yeah, so I think what you're going to see from the interviews is, we are trying to bring around 20 plus people, and they'll be many John on the interview besides you. So there will be John Chambers, ex-chairman and CEO of Cisco. There'll be John Zimmer, president and co founder of Lyft. And there also will be John Hennessy who will be our first interview, with him, from Stanford University. And jokes apart, there'll be like 20 plus other people who will be part of this network. So I think what you're going to see is, goings always don't go great. There's a lot of learnings that happen when things don't work out. And our hope is, when these luminaries from their professions, share their learnings the entrepreneurs will benefit from it. As we all know, being an entrepreneur is hard. 
But sometimes, and many times, actually it's also a lonely road and our belief is, and I strongly personally also believe in it, that great entrepreneurs believe in continuous learning and are continuously adapting themselves to succeed. So our hope is, this People First Network serves as a learning opportunity from entrepreneurs to learn from great leaders. >> You said a few things I really admire about Mayfield and I want to get your reaction because I think is a fundamental for society. Building durable companies is about the long game because people fail and people succeed but they always move on. >> Mm-hmm. >> They move on to another opportunity. They move on to another pursuit. >> Mm-hmm. >> And this pay it forward culture has been a key thing for Silicon Valley. >> It absolutely has been. >> What's the inspiration behind it, from your perspective? You mentioned your experiences. Tell us a story and experience you've had? >> Yeah, so I would say, first of all, right, since we strongly believe people make products and products don't make people, we believe venture capital and entrepreneurship is about like running a marathon, it's not a sprint. So if you take a longterm view, have a strong vision and mission which is supported with great beliefs and values? You can do wonders. And our whole aim, not only as Mayfield but other venture capitalists, is to build iconic companies which are built to last which beyond creating jobs and economic wealth, can give back to the society and make the world a better place to work, live and play. >> You know one of the things that we are passionate about at theCUBE, and on SiliconANGLE Media is standing by our community. >> Mm-hmm. >> Because people do move around and I think one of the things that is key in venture capital now, than ever before is not looking for the quick hit. >> Mm-hmm. >> It's standing by your companies in good times and in bad. >> Mm-hmm. >> Because this is about people and you don't know how things might turn out, how a company might end up in a different place. We've heard some of your entrepreneurs talk about that, that the outcome was not how they envisioned it when they started. >> Mm-hmm. >> This is a key mindset for a business. >> It absolutely is, right? Let's look at a few examples. One of our most successful companies is Lyft. When we backed it at Series A, it was called Zimride. They weren't doing what they were doing, but the company had a strong vision and mission of changing the way people transport and given that, they were A plus people, as I mentioned earlier. The initial idea wasn't going to be a massive opportunity. They quickly pivoted to go after the right market opportunity. And hence, again and again, right? Like to me, it's all about the people. >> Navigating those boards is sometimes challenging and we hope that this content will help people, inspire people, help them discover their passion, discover people that they might want to work with. We really appreciate your support and thank you for contributing your network and your brand and your team in supporting our mission. >> Yeah, it's been an absolute pleasure and we hope the viewers and especially entrepreneurs can learn from the journeys of many iconic people who have built great things in their careers. >> Were here at Sand Hill Road, at Mayfield's venture capital headquarters in sunny Silicon Valley, California, Stanford, California, Palo Alto California, all one big melting pot of innovation. 
I'm here with John Hennessy, who's the Stanford President Emeritus, also the director of the Knight-Hennessy Scholars program. Thanks for joining me today for this conversation. >> Delighted to be here, John. >> So I wanted to get your thoughts on the history of the valley. Obviously, Mayfield is celebrating their 50th anniversary, and Mayfield was one of those early venture capital firms that kind of hung around the barbershop, looking for a haircut. Stanford University was that place. Early on this was the innovation spark that created the valley. A lot of other early VCs as well, but not that many in the early days, and now 50 years later, so much has changed. What are your thoughts on the arc of entrepreneurship around Stanford, around Silicon Valley? >> Well, you're right, it's been an explosive force. I mean, I think there were a few companies out here on Sand Hill Road at that time. Nowhere near the number of venture firms there are today. But I think the biggest change has been the kinds of technologies we build. You know, in those days, we built technologies that were primarily for other engineers, or perhaps they were Tandem computers being built for business interests. Now we build technologies that change people's lives, every single day, and the impact on the world is so much larger than it was, and these companies have grown incredibly fast. I mean, you look at the growth rates: the stars of the earlier era, compared to the Googles and Facebooks of today, had small growth rates, so those are big changes. >> I'm excited to talk with you, because you're one of the only people that I can think of that has seen so many different waves of innovation. You've been involved in many of them yourself, one of the co-founders of MIPS, chairman of the board of Alphabet, which is Google's holding company, with the large holdings they have, and just Stanford in general has been, you know, now with Cal, kind of the catalyst for a lot of the change. What's interesting is, you know, the Hewlett-Packards, the birthplace of Silicon Valley, that durable company view. >> Mm-hmm. >> Of how to build a company, and the people that are involved, is really still an essential part of it. Certainly happening faster, differently. When you look at the waves of innovation, is there anything that you could look at and say, hey, this is the consistent pattern that we see emerging of these waves? Is it a classic formula of engineers getting together trying to solve problems? Is it the Stanford Ph.D. dropout program? Is there a playbook? Is there a pattern that you see in the entrepreneurship over the years? >> You know, I think there are these waves that are often induced by big technology changes, right? The beginning of the personal computer. The beginning of the internet. The World Wide Web, social media. The other observation is that it's very hard to predict what the next one will be. (laughing) If it was easier to predict, there would be one big company, rather than lots of companies riding each one of these waves. The other thing I think that's fascinating about them is these waves don't create just one company. They create a whole new microcosm of companies around that technology which exploit it and bring it to the people and change people's lives with it. >> And another thing that's interesting about that point is that even the failures have DNA. You see people, big venture-backed companies, I think Go is a great example, you think about those kinds of companies.
The early work on mobile computing, the early work on processors that you were involved in MIPS. >> Mm-hmm. >> They become successful and/or may/may not have the outcomes but the people move on to other companies to either start companies. This is a nice flywheel, this is one of the things that Silicon Valley has enjoyed over the years. >> Yeah, and just look at the history of RISC technology that I was involved in. We initially thought it would take over the general purpose computing industry and I think Intel responded in an incredible way and eventually reduced the advantage. Now here we are 30 years later and 95%/98% of the processors in the world are RISC because of the rise of mobile, internet of things, dramatically changing where the processors were. >> Yeah. >> They're not on the desktop anymore, they're scattered around in very different ways. >> It's interesting, I was having a conversation with Andy Kessler, who used to be an analyst back at the time for Morgan Stanley. He then became an investor. And he was talking about, with me, the DRAM days when the Japanese were dumping DRAMs and then that was low margin business, and then Intel said, "Hey, no problem. "We'll let go of the DRAM business." but they created Pentium and then the micro processor. >> Right. >> That spawned a whole nother wave, so you see the global economy today, you see China, you see people manufacturing things at very low cost, Apple does work out there. What's your view and reaction to the global landscape? Because certainly things are changed a bit but it seems to be some of the same? What's your thoughts on the global landscape and the impact of entrepreneurs? >> It certainly is global. I mean, I think in two ways. First of all, supply chains have become completely global. Look at how many companies in the valley rely on TSMC as their primary source of silicon? It's a giant engine for the valley. But we also see, increasingly, even in young companies a kind of global, distributed engineering scheme where they'll have a group in Taiwan, or in China or in India that'll be doing part of the engineering work and they're basically outsourcing some of that and balancing their costs and bringing in other talent that might be very hard to hire right now in the valley or very expensive in the valley. And I think that's exciting to see. >> The future of Silicon Valley is interesting because you have a lot of the fast pace, it seems like ventures have shrink down in terms of the acceleration of the classic building blocks of how to get a company started. You get some funding, engineers build a product, they get a prototype, they get it out. Now it seems to be condensed. You'll see valuations of a billion dollars. Can Silicon Valley survive the current pace given the real estate prices and some of the transportation challenges? What's your view on the future of Silicon Valley? >> Well my view is there is no place like the valley. The interaction between great universities, Stanford and Cal, UCSF if you're interested in biomedical innovation and the companies makes it just a microcosm of innovation and excellence. It's challenges, if it doesn't solve it's problems on housing and transportation, it will eventually cause a second Silicon Valley to rise and challenge it and I think that's really up to us to solve and I think we're going to have to, the great leaders, the great companies in the valley are going to have to take a leadership role working with the local governments to solve that problem. 
>> On the Silicon Valley vision of replicating it, I've seen many people try, other regions try over the years and over the 20 years, my observation is, they kind of get it right on paper but kind of fail in the execution. It's complicated but it's nuanced in a lot of ways but now we're seeing with remote working and the future of work changing a little bit differently and all kinds of new tech from block chain to, you name it, remote working. >> Right. >> That it might be a perfect storm now to actually have a formula to replicate Silicon Valley. If you were advising folks to say, hey, if you want to replicate Silicon Valley, what would be your advice to people? >> Well you got to start with the weather. (laughing) Always a challenge to replicate that. But then the other pieces, right? Some great universities, an ecosystem that supports risk taking and smart failure. One of the great things about the valley is, you're a young engineer/computer scientist graduating, you come here. You go to a start up company, so what it fails? There's 10 other companies you can get a job with. So there's a sense of this is a really exciting place to be, that kind of innovation. Creating that, replicating that ecosystem, I think and getting all the pieces together is going to be the challenge and I think the area that does that will have a chance at building something that could eventually be a real contestant for the second Silicon Valley. >> And I think the ecosystem and community is the key word. >> And community, absolutely. >> So I'll get your thoughts on your journey. Take us through your journey. MIPS co-founder, life at Stanford, now with the Knights Scholarship Program that you're involved in, the Knight Hennessy Scholarship. What lessons have you learned from each kind of big sequence of your life? Obviously in the start up days. Take us through some of the learnings. >> Yeah. >> Whether it's the scar tissue or the success, you know? >> Well, no, the time I spent starting MIPS and I took a leave for about 18 months full-time from the university, but I stayed involved after that on a part time basis but that 18 months was an intensive learning experience because I was an engineer. I knew a lot about the technology we're building, I didn't know anything about starting a company. And I had to go through all kinds of things, you know? Determining who to hire for CEO. Whether or not the CEO would be able to scale with the company. We had to do a layoff when we almost ran out of cash and that was a grueling experience but I learned how to get through that and that was a lesson when I came back to return to the university, to really use those lessons from the valley, they were invaluable. I also became a much better teacher, because here I had actually built something in industry and after all, most of our students are going to build things, they're not going to become future academics. So I went back and reengaged with the university and started taking on a variety of leadership roles there. Which was a wonderful experience. I never thought I'd be university president, not in a million years would I have told you that was, and it wasn't my goal. It was sort of the proverbial frog in the pot of water and the temperature keeps going up and then you're cooking before you know it. 
>> Well one of the things you did I thought was interesting during your time in the 90's as the head of the computer science department is a lot of that Stanford innovation started to come out with the internet and you had Yahoo, you had Google, you had PH.ds and you guys were okay with people dropping out, coming back in. >> Yeah. >> So you had this culture of building? >> Yup. >> Tell us some of the stories there, I mean Yahoo was a server under the desk and the web exploded. >> Yeah, it was a server under the desk. In fact, Dave and Jerry's office was in a trailer and you go into their room and they'd have pizza boxes and Coke cans stacked around because Yahoo use was exploding and they were trying to build this portal out to serve this growing community of users. Their machine was called Akebono because they were both big sumo wrestling fans. Then eventually, the university had to say, "You guys need to move this off campus "because it's generating 3/4 of the internet traffic "at the university and we can't afford it." (laughing) So they moved off campus and of course figured out how to use advertising as a monetization model. And that changed a lot of things on the internet because that made it possible for Google to come along years later. Redo search in a way that lots of us thought, there's nothing left to do in search, there's just not a lot there. But Larry and Sergey came up with a much better search algorithm. >> Talk about the culture that you guys fostered there because this, I think, is notable, in my mind, as well as some of the things I want to get into about the interdisciplinary. But at that time, you guys fostered a culture of creating and taking things out and there was an investment group of folks around Stanford. Was it a policy? Was it more laid back? >> No, I think-- >> Take us through some of the cultural issues. >> It was a notion of what really matters in the world. How do you get impact? Because in the end that's what the university really wants to do. Some people will do impact by publishing a paper or a book but some technologies, the real impact will occur when you take it out into the real world. And that was a vision that a lot of us had, dating back to Hewlett-Packard, of course but Jim Clark at Silicon Graphics, the Cisco work, MIPS and then, of course, Yahoo and Google years later. That was something that was supported by both the leadership of the university and that made it much easier for people to go out and take their work and take it out to the world. >> Well thank you for doing that, because I think the impact has been amazing and had transcended a lot of society today. You're seeing some challenges now with society. Now we have our own problems. (laughing) The impact has been massive but now lives are being changed. You're seeing technology better lives so it's changing the educational system. It's also changing how people are doing work. Talk about your current role right now with the Knight Hennessy Scholarship. What is that structured like and how are you shaping that? What's the vision? >> Well our vision, I became concerned as I was getting ready to leave the president's office that we, as a human society, were failing to develop the kinds of leaders that we needed. It seemed to me it was true in government. It was true in the corporate world. It was even true in some parts of the nonprofit world. 
And we needed to step back and say, how do we generate a new community of young leaders who are going to go out, determined to do the right thing, who see their role as service to society? And their success aligned with the success of others? We put together a small program. We put together a vision of this. I got support from the trustees. I went to ask my good friend Phil Knight, talked to him about it, and I said, "Phil I have this great idea," and I explained it to him and he said, "That's terrific." So I said, "Phil I need 400 million dollars." (laughing) A month later he said, "Yes," and we were off and running. Now we've got 50 truly extraordinary scholars from around the world, 21 different birth countries. Really, some of them have already started nonprofits that are making a big difference in their home communities. Others will do it in the future. >> What are some of the things they're working on? And how did you guys roll this out? Because, obviously, getting the funding's key but now you got to execute. What are some of the things that you went through? How did you recruit? How did you deploy? How did you get it up and running? >> We recruited by going out to universities around the world, and meeting with them and, of course, using social media as well. If you want get 21 year and 22 year olds to apply? Go to social media. So that gave us a feed on some students and then we thought a lot, our goal is to educate people who will be leaders in all walks of life. So we have MBAs, we have MDs, we have PH.ds, we have JDs. >> Yeah. >> A broad cohort of people, build a community. Build a community that will last far beyond their time at Stanford so they have a connection to a community of like minded individuals long after they graduate and then try to build their leadership skills. Bringing in people who they can meet with and hear from. George Schultz is coming in on Thursday night to talk about his journey through government service in four different cabinet positions and how did he address some of the challenges that he encountered. Build up their speaking skills and their ability to collaborate with others. And hopefully, these are great people. >> Yeah. >> We just hope to push their trajectory a little higher. >> One of the things I want you is that when Steve Jobs gave his commencement speech at Stanford, which is up on YouTube, it's got zillions and zillions of views, before he passed away, that has become kind of a famous call to arms for a lot of young people. A lot of parents, I have four kids and the question always comes up, how do I get into Stanford? But the question I want to ask you is more of, as you have the program, and you look for these future leaders, what advice would you give? Because we're seeing a lot of people saying, hey you know people build their resume, they say what they think people want to hear to get into a school, you know Steve Job's point said, "Follow your passion, don't live other people's dogma" these are some of the themes that he shared during that famous commencement speech in Stanford. Your advice for the next generation of leaders? How should they develop their skills? What are some of the things that they can acquire? Steve Jobs was famous to say in interviews, "What have you built?" >> Yeah. >> "Tell me something that you've built." It's kind of a qualifying question. So this brings up the question of, how should young people develop? 
How should they think about not just applying and getting in, but being a candidate for some of these programs? >> Well, I think the first thing is you really want to challenge yourself. You really want to engage your intellectual passions. Find something you really like to do. Find something that you're also good at, because that's the thing that'll get you out of bed early on weekends, and you'll go do it. I mean, if you asked me about my career? And asked me about my number one hobby for most of my career? It was my career. I loved being a professor. I loved research, I loved teaching. That made it very easy to do it with energy and excitement and passion. You know, there's a great quote in Steve Jobs' commencement speech where he says, "I look in the mirror every morning, and if too many days in a row I find out I don't like what I'm going to do that day, it's time for a change." Well, I think it's that commitment to something. It's that belief in something that's bigger than yourself, that's about a journey that you're going to go on with others in that leadership role. >> I want to get your thoughts on the future for young people and society and business. It's very people-centric now. You're seeing a lot of the younger generation look for mission-driven ventures; they want to make a difference. But there are a lot of skills out there that haven't been born yet. There are jobs that haven't been invented yet. Who handles autonomous vehicles? What's the policy? These are societal and technology questions. What are some of the things that you see that are important to focus on for some of these new skills? There are a zillion new cybersecurity jobs open, for instance. >> Right. I mean, there are thousands and thousands of openings and not enough people who have those skills. >> Well, I think we're going to need two different types of people. The traditional technical experts that we've always had, but we're also going to need people who have a deep understanding of technology but are deeply committed to understanding its impact on people. One of the problems we're going to have with the rise of artificial intelligence is we're going to have job displacements. In the long term, I'm a believer that the number of opportunities created will exceed those that get destroyed, but there'll be a lot of jobs that are deskilled or actually eliminated. How are we going to help educate that cohort of people and minimize the disruption of this technology? Because that disruption is really people's lives that you're playing with. >> It's interesting, the old expression of ATMs will kill the bank branch, and yet now there are more bank branches than ever before. >> Than ever before, right? >> So I think you're right on that; I think there'll be new opportunities. Entrepreneurship certainly is changing, and I want to get your thoughts. The number one question I get from young entrepreneurs is, how should I raise money? How should I leverage my investors and my board? As you build your early foundational successes, whether you're an engineer or a team, putting that E team together, the entrepreneurial team, is critical, and that's not just the people around the table of the venture. >> Correct. >> It's the support service providers and advisors and board of directors. How should they leverage their investors and board? How should they leverage that resource and not make it contentious, make it positive? >> Make it positive, right?
So the best boards are collaborative with the management team; they work together to try to move the company forward. With so many angels now investing in these young companies, there's an opportunity to bring in experience from somebody who's already had a successful entrepreneurial venture, and it's really about deciding who you want your investor to be. And it's not just about who gives you the highest valuation. It's also about who'll be there when things get tough. When the cash squeeze occurs and you're about to run out of money and you're really in a difficult situation, who will help you build out the rest of your management team? Lots of young entrepreneurs, they're excited about their technology. >> Yeah. >> They don't have any management experience. (laughing) They need help. >> Yeah. >> They need help building that team and finding the right people for the company to be successful. >> I want to get your thoughts on Mayfield. The 50th anniversary, obviously, they've been around longer than me; I'm going to be 53 this year. I remember when I first pitched Yogen Dalal in 1990, my first venture; he passed, but Mayfield's been around for a while. I mean, Mayfield was the name of the town around here? >> Right. >> And has a lot of history. How do you see the relationship between the venture firms and Stanford evolving? Is it still solid? Is it doing well? Has it evolved? Is there a new program going on? I see much more integration. What's the future of venture? >> Well, I think the university is still a source of many ideas, though obviously the notion of entrepreneurship has spread much more broadly than the university. And lots of creative startups are spun out of existing companies, or a group of young entrepreneurs who were in Google or Facebook early now decide they want to go do their own thing. That certainly happens, but I think that ongoing innovation cycle is still alive. It's still dependent on the venture community and their experience having built companies. Particularly when you're talking about first-time entrepreneurs. >> Yeah. >> Who really don't have a lot of depth. >> My final question is obviously one that's near and dear to my heart: computer science. I got my degree in the '80s during the systems revolution. Fun time, and a lot's changed. Women in computer science, the surface area of what computer science is. >> Mm-hmm. >> It was interesting, there was a story in Bloomberg that was debunked, but people were debating whether Supermicro servers were being hacked by a chip in the system. >> Right. >> And most people don't even know what computer architecture is. I was like, hey now, the drivers might be able to inject malware. So you need to understand computer architecture, which you've written the book on. >> Mm-hmm. >> From academics to programming, the range of computer science has changed. The diversity has changed. What are your thoughts on the current computer science curriculums? The global programs? Where is it going, and what's your perspective on that? >> So I think computer science has changed dramatically. When I was a graduate student, you could arguably take a full set of breadth courses across the discipline. Maybe only one course in AI or one course in databases if you were a hardware or systems person, but you could do everything. I could go to basically any Ph.D. defense and understand what was going on. No more; the field has just exploded. And the impact?
I mean, you have people who do biocomputation, for example, and you have to understand a lot of biology in order to understand how computer science applies to that. So that's the excitement, the excitement of having computer science have this broad impact. The other thing that's exciting is to see more women and more people of color coming into the field, really injecting new energy and new perspective into the field, and I think that will serve the discipline well in the future. >> And open source has been growing. I mean, if you think about what it's like now to write software, all this goodness coming in with open source, it just adds over the top. >> Yeah. >> More goodness. >> I think today even a young undergraduate, writing in Python, using all these open libraries, could write more code in two weeks than I could have written in a year when I was a graduate student. >> If we were sitting here together today, you and I, both 21 years old, what would we do? What would you do? >> Well, I think the opportunity created by the rise of machine learning and artificial intelligence is just unrivaled. This is a technology which we invested in for 50 or 60 years, that disappointed us for 50 or 60 years in terms of not meeting its projections, and then, all of a sudden, a turning point. It was a radical breakthrough, and we're still at the very beginning of that radical breakthrough, so I think it's going to be a really exciting time. >> Diane Greene had a great quote at her last Google Cloud conference. She said, "It's like butter, everything's great with it." (laughing) AI is the-- >> Yeah, it's great with it. And of course, it can be overstated, but I think there really is a fundamental breakthrough in terms of how we use the technology. Driven, of course, by the amount of data available for training these neural networks and far more computational resources than we ever thought we'd have. >> John, it's been a great pleasure. Thanks for spending the time with us here for our People First interview, appreciate it. >> My pleasure, John. >> I'm John Furrier with theCUBE. We are here on Sand Hill Road for the People First program. Thanks for watching. (upbeat techno music)