Lena Smart & Tara Hernandez, MongoDB | International Women's Day
(upbeat music) >> Hello and welcome to theCube's coverage of International Women's Day. I'm John Furrier, your host of "theCUBE." We've got two great remote guests coming into our Palo Alto Studios, some tech athletes, as we say, people that've been in the trenches, years of experience, Lena Smart, CISO at MongoDB, Cube alumni, and Tara Hernandez, VP of Developer Productivity at MongoDB as well. Thanks for coming in to this program and supporting our efforts today. Thanks so much. >> Thanks for having us. >> Yeah, everyone talks about the journey in tech, where it all started. Before we get there, talk about what you guys are doing at MongoDB specifically. MongoDB has kind of gone to the next level as a platform. You have your own ecosystem, a lot of developers, very technical crowd, but it's changing the business transformation. What do you guys do at Mongo? We'll start with you, Lena. >> So I'm the CISO, so all security goes through me. I like to say, well, I don't like to say, I'm described as the one throat to choke. So anything to do with security basically starts and ends with me. We do have a fantastic Cloud engineering security team and a product security team, and they don't report directly to me, but obviously we have very close relationships. I like to keep that kind of church and state separate, and I know I've spoken about that before. And we just recently set up a physical security team with an amazing gentleman who left the FBI and came to join us after 26 years at the agency. So, really starting to look at the physical aspects of what we offer as well. >> I interviewed a CISO the other day and she said, "Every day is day zero for me." Kind of goofing on the Amazon Day One thing, but Tara, go ahead. What's your role there, developer productivity? What are you focusing on? >> Sure. 
Developer productivity is kind of the latest description for things that we've described over the years as, you know, DevOps-oriented engineering or platform engineering or build and release engineering or development infrastructure. It's all part and parcel, which is how do we actually get our code from developer to customer, you know, and all the mechanics that go into that. It's been something I discovered in my first job way back in the early '90s at Borland. And the art has just evolved enormously ever since, so. >> Yeah, this is a great conversation, both of you guys, right in the middle of all the action, and data infrastructure is changing, exploding, and evolving big time, AI and the data tsunami, and security never stops. Well, let's get into, we'll talk about that later, but let's get into what motivated you guys to pursue a career in tech and what were some of the challenges that you faced along the way? >> I'll go first. The fact of the matter was I intended to be a double major in history and literature when I went off to university, but I was informed that I had to do a math or a science degree or else the university would not be paid for. At the time, UC Santa Cruz had a policy called Open Access Computing. This is, you know, the late '80s, early '90s. And anybody at the university could get an email account, and that was unusual at the time. Those of us who remember, you used to have to pay for that, CompuServe or AOL or, there's another one, I forget what it was called, but a student at Santa Cruz could have an email account. And because of that email account, I met people who were computer science majors and I'm like, "Okay, I'll try that." That seems good. And it was a little bit of a struggle for me, I won't lie, but I can't complain with how it ended up. And certainly once I found my niche, which was development infrastructure, I found my true love and I've been doing it for almost 30 years now. >> Awesome. Great story. 
Can't wait to ask a few questions on that. We'll go back to that late '80s, early '90s. Lena, your journey, how you got into it. >> So slightly different start. I did not go to university. I had to leave school when I was 16, got a job, had to help support my family. Worked a bunch of various jobs till I was about 21, and then computers became more, I think, I wouldn't say they were ubiquitous, but they were certainly out there. And I'd also been saving up every penny I could earn to buy my own computer, and bought an Amstrad 1640, 20 meg hard drive. It rocked. And I kind of took that apart, put it back together again, and thought there could be money in this. And so basically I was just teaching myself about computers in any job that I got. 'Cause most of my jobs were like clerical work and secretary at that point. But any job that had a computer in front of it, I would make it my business to go find the guy who did computing, 'cause it was always a guy. And I would say, you know, I want to learn how these work. You know, show me. And, you know, I would take my lunch hour and after work and any time I could with these people, and they were very kind with their time, and I just kept learning, so yep. >> Yeah, those early days remind me of the inflection point we're going through now. This major sea change coming. Back then, if you had a computer, you had to kind of be your own internal engineer to fix things. Remember back on the systems revolution, late '80s, Tara, when, you know, your career started, those were major inflection points. Now we're seeing a similar wave right now, security, infrastructure. It feels like it's going to a whole other level. At Mongo, you guys certainly see this as well, with this AI surge coming in. A lot more action is coming in. And so there's a lot of parallels between these inflection points. How do you guys see this next wave of change? Obviously, the AI stuff's blowing everyone away. Oh, new user interface. 
It's been called the browser moment, the mobile iPhone moment, kind of, for this generation. There's a lot of people out there who are watching that are young in their careers, what's your take on this? How would you talk to those folks around how important this wave is? >> It, you know, it's funny, I've been having this conversation quite a bit recently, in part because, you know, to me AI in a lot of ways is very similar to, you know, back in the '90s when we were talking about bringing the worldwide web to the forefront of the world, right. And we tended to think in terms of all the optimistic benefits that would come of it. You know, free passing of information, availability to anyone, anywhere. You just needed an internet connection, which back then of course meant a modem. >> John: Not everyone had one though. >> Exactly. But what we found in the subsequent years is that human beings are what they are and we bring ourselves to whatever platforms are there, right. And so, you know, as much as it was amazing to have this freely available HTML-based internet experience, it also meant that the negatives came to the forefront quite quickly. And there were ramifications of that. And so to me, when I look at AI, we're already seeing the ramifications of that. Yes, are there these amazing, optimistic, wonderful things that can be done? Yes. >> Yeah. >> But we're also human, and the bad stuff's going to come out too. And how do we- >> Yeah. >> How do we as an industry, as a community, you know, understand and mitigate those ramifications so that we can benefit more from the positive than the negative. So it is interesting that it comes kind of full circle in really interesting ways. >> Yeah. The underbelly takes place first, gets in during the early adopter mode. Normally it's industries with, you know, money involved, arbitrage, no standards. But we've seen this movie before. Is there hope, Lena, that we can have a more secure environment? >> I would hope so. 
(Lena laughs) Although depressingly, we've been at this for 30 years now and we're, at the end of the day, still telling people not to click links in emails. So yeah, that kind of still keeps me awake at night a wee bit. The whole thing about AI, I mean, obviously I am not an expert by any stretch of the imagination in AI. I did read (indistinct) book recently about AI and that was kind of interesting. And I'm just trying to teach myself as much as I can about it, to the extent of even buying the "Dummies Guide to AI." Just because, it's actually not a dummies guide. It's actually fairly interesting, but I'm always thinking about it from a security standpoint. So it's kind of my worst nightmare and the best thing that could ever happen in the same dream. You know, you've got this technology where I can ask it a question and, you know, it spits out generally a reasonable answer. And my team is working with Mark Porter, our CTO, and his team on almost like an incubation of AI. Like, what would it look like from MongoDB? What's the legal ramifications? 'Cause there will be legal ramifications, even though it's the wild, wild west just now, I think. Regulation's going to catch up to us pretty quickly, I would think. >> John: Yeah, yeah. >> And so I think, you know, as long as companies have a seat at the table and governments perhaps don't become too dictatorial over this, then hopefully we'll be in a good place. But we'll see. I think it's really interesting, there's that curse, we're living in interesting times. I think that's where we are. >> It's interesting just to stay on this tech trend for a minute. The standards bodies are different now. Back in the old days there were, you know, IEEE standards, IETF standards. >> Tara: TPC. >> The developers are the new standard. 
I mean, now you're seeing open source completely different from where it was in the '90s at the beginning. That was gen one, some say gen two, but I say gen one. Now we're exploding with open source. You have kind of developers setting the standards. If developers like it in droves, it becomes de facto, which then kind of rolls into implementation. >> Yeah, I mean I think if you don't have developer input, and this is why I love working with Tara and her team so much, is 'cause they get it. If we don't have input from developers, it's not going to get used. There's going to be ways of working around it, especially when it comes to security. If they don't, you know, if you're a developer and you're sat at your screen and you don't want to do that particular thing, you're going to find a way around it. You're a smart person. >> Yeah. >> So. >> Developers are on the front lines now versus, even back in the '90s, they're like, "Okay, consider the devs, got a QA team." Everything was Waterfall, now it's Cloud, and developers are on the front lines of everything. Tara, I mean, this is where the standards are being met. What's your reaction to that? >> Well, I think it's outstanding. I mean, you know, I was at Netscape and part of the crowd that released the browser as open source and we founded mozilla.org, right. And that was, you know, in many ways kind of the birth of the modern open source movement, beyond what we used to have, where basically the Free Software Foundation was sort of the only game in town. And I think it is so incredibly valuable. I want to emphasize, you know, and pile onto what Lena was saying, it's not just that the developers are having input on a sort of company-by-company basis. Open source to me is like a checks and balance, where it allows us as a broader community to be able to agree on and enforce certain standards in order to try and keep the technology platforms as accessible as possible. I think Kubernetes is a great example of that, right. 
If we didn't have Kubernetes, that would've really changed the nature of how we think about container orchestration. But even before that, Linux, right. Linux allowed us as an industry to end the Unix Wars, and as someone who was on the front lines of that as well, having to support 42 different operating systems with our product, you know, that was a huge win. And it allowed us to stop arguing about operating systems and start arguing about software, or not arguing, but developing it in positive ways. So, you know, with Kubernetes, with container orchestration, we all agree, okay, that's just how we're going to orchestrate. Now we can build up this huge ecosystem, everybody gets taken along, right. And now it changes the game for what we're defining as business differentiators, right. And so when we talk about crypto, that's a little bit harder, but certainly with AI, right, you know, what are the checks and balances that, as an industry and as the developers around this, we can, you know, enforce to make sure that no one company or no one body is able to overly control how these things are managed, how they're defined. And I think that is only to the benefit of the industry as a whole, particularly when we think about the only other option, which is that it gets regulated in ways that do not involve the people who actually know the details of what they're talking about. >> Regulated and or thrown away or bankrupt or- >> Driven underground. >> Yeah. >> Which would be even worse actually. >> Yeah, that's really interesting, the checks and balances. I love that call out. And I was just talking with another interview, part of the series, around women being represented in the 51% ratio. Software is for everybody. So we believe that the open source movement, around the collective intelligence of the participants in the industry and independent of gender, is going to be the next wave. 
You're starting to see these voices really have impact because there are a lot more leaders now at the table in companies developing software systems, and with AI, the aperture increases for applications. And this is the new dynamic. What's your guys' view on this dynamic? How does this go forward in a positive way? Is there a certain trajectory you see? For women in the industry? >> I mean, I think some of the states are trying to, again, from the government angle, some of the states are trying to force women into the boardroom, for example, California, which can be no bad thing, but I don't know, sometimes I feel a bit iffy about all this kind of forced- >> John: Yeah. >> You know, making, I don't even know how to say it properly, so you can cut this part of the interview. (John laughs) >> Tara: Well, and I think that they're >> I'll say it's not organic. >> No, and I think they're already pulling it out, right. It's already been challenged, so they're in the process- >> Well, this is the open source angle, Tara, you are getting at it. The change agent is open, right? So to me, the history of the proven model is openness drives transparency drives progress. >> No, it's- >> If you believe that to be true, this could have another impact. >> Yeah, it's so interesting, right. Because if you look at McKinsey Consulting or Boston Consulting or some of the others, I'm blocking on all of the names, there has been a decade or more of research that shows that a non-homogeneous employee base, be it gender or ethnicity or whatever, generates more revenue, right? There's dollar signs that can be attached to this, but it's not enough for all companies to want to invest in that way. And it's not enough for all, you know, venture firms or investment firms to grant that seed money or do those seed rounds. I think it's getting better very slowly, but socialization is a much harder thing to overcome over time. 
Particularly when you're not just talking about one country like the United States, in our case, but around the world. You know, tech centers now exist all over the world, including places that even 10 years ago we might not have expected, like Nairobi, right. Which I think is amazing, but you have to factor in the cultural implications of that as well, right. So yes, the openness is important, and it's important that we have those voices, but I don't think it's a panacea solution, right. It's just one more piece. I think honestly that one of the most important opportunities has been with Cloud computing, and Cloud's been around for a while. So why would I say that? It's because, if you think about it, everybody holds up the Steve Jobs, Steve Wozniak, back in the '70s, or Sergey and Larry for Google. You know, you had to have access to enough credit card limit to go to Fry's and buy your servers, and then access to somebody like Susan Wojcicki to borrow the garage or whatever. But there was still a certain amount of upfront cost that you had to be able to commit to, whereas now, and we've, I think, seen really good evidence of this, you're able to lease server resources by the second and have development platforms that you can use on your phone. I mean, for a while in Africa, I think, the majority of development happened on mobile devices because there wasn't a sufficient supply chain of laptops yet. And that's no longer true now, as far as I know. But the power that that enables for people who would otherwise be underrepresented in our industry instantly opens it up, right? And so to me, that's, I think, probably the biggest opportunity that we've seen from an industry on how to make more opportunity available for underrepresented groups in entrepreneurship. >> Yeah. >> Something like AI, I think, is actually going to take us backwards if we're not careful. >> Yeah. >> Because we're reinforcing that socialization. >> Well, also the bias. 
A lot of people are commenting on how the biases inherently built into the large language models are also a problem. Lena, I want you to weigh in on this too, because I think the skills question comes up here, and I've been advocating that you don't need the pedigree, college pedigree, to get into certain jobs. You mentioned Cloud computing. I mean, it's been around for, you'd think, a long time, but not really when you think about it. The ability to level up, okay, if you're going to join something new, and half the jobs in cybersecurity were created in the past year, right? So, what used to be a barrier, your degree, your pedigree, your certification would take years, would be a blocker. Now that's gone. >> Lena: Yeah, it's the opposite. >> That's, in fact, psychology. >> I think so, but the people who, by and large, I interview for jobs, they have, I think security people, and also I work with our compliance folks and I can't forget them, but let's talk about security just now. I've always found a particular kind of mindset with security folks. We're very curious, not very good at following rules a lot of the time, and we love to teach others. I mean, that's one of the big things stemming from the start of my career. People were always interested in teaching and I was interested in learning. So it was perfect. And I think also having, you know, strong women leaders at MongoDB allows other underrepresented groups to actually apply to the company, 'cause they see that we're kind of talking the talk. And that's been important. I think it's really important. You know, you've got Tara and I on here today. There's obviously other senior women at MongoDB that you can talk to as well. There's a bunch of us. There's not a whole ton of us, but there's a bunch of us. And it's good. It's definitely growing. I've been there for four years now and I've seen a growth in women in senior leadership positions. 
And I think having that kind of track record of getting really good quality underrepresented candidates to not just interview, but come and join us, it's seen. And it's seen in the industry, and people take notice and they're like, "Oh, okay, well if that person's working there, you know, if Tara Hernandez is working there, I'm going to apply for that." And that in itself, I think, can really, you know, reap the rewards. But it's getting started. It's like, how do you get your first strong female into that position, or your first strong underrepresented person into that position? It's hard. I get it. If it was easy, we would've solved it already. >> It's like anything. I want to see people like me, my friends, in there. Am I going to be alone? Am I going to be part of a group? It's group psychology. Why wouldn't it be? So getting it out there is key. Are there skills that you think people should pay attention to? Ones that have come up are curiosity, learning. What are some of the best practices for folks trying to get into the tech field, or who are in the tech field and advancing through? What advice are you guys- >> I mean, yeah, definitely, what I say to my team is, within my budget, we try and give everyone at least one training course a year. And there's so much free stuff out there as well. But, you know, keep learning. And even if it's not right in your wheelhouse, don't be picky about it. You know, take a look at what else could be out there that could interest you and then go for it. You know, what does it take you, a few minutes each night, to read a book on something that might change your entire career? You know, be enthusiastic about the opportunities out there. And there's so many opportunities in security. Just so many. >> Tara, what's your advice for folks out there? Tons of stuff to taste, taste test, try things. >> Absolutely. I mean, I always say, you know, my primary qualifications for people, I'm looking for them to be smart and motivated, right. 
Because the industry changes so quickly. What we're doing now versus what we did even last year, versus five years ago, you know, is completely different, though the themes are certainly the same. You know, we still have to code and we still have to compile that code or package the code and ship the code, so, you know, how well can we adapt to these new things, instead of creating floppy disks, which was my first job. Five and a quarters, even. The big ones. >> That's old school, OG. There it is. Well done. >> And now it's, you know, containers, you know, (indistinct) image containers. And so, you know, I've gotten a lot of really great success hiring boot campers, you know, career transitioners. Because they bring a lot of experience in addition to the technical skills. I think the most important thing is to experiment and figure out what you like, because, you know, maybe you are really into security, or maybe you're really into like deep-level coding and you want to go back, you know, try to go to school to get a degree where you would actually want that level of learning. Or maybe you're a front end engineer and you want to be full stack. Like, there's so many different things, data science, right. Maybe you want to go learn R, right. You know, I think it's like, figure out what you like, because once you find that, that in turn is going to energize you, 'cause you're going to feel motivated. I think the worst thing you could do is try to force yourself to learn something that you really could not care less about. That's just the worst. You're going in handicapped. >> Yeah, and there's choices now versus when we were breaking into the business. It was like, okay, you're a software engineer. They called it software engineering, that's all it was. You were that or you were in sales. Like, you know, some sort of systems engineer or sales, and now it's- >> I had never heard of my job when I was in school, right. I didn't even know it was a possibility. 
But there's so many different types of technical roles, you know, absolutely. >> It's so exciting. I wish I was young again. >> One of the- >> Me too. (Lena laughs) >> I don't. I like the age I am. So one of the things that I did to kind of harness that curiosity is we've set up a security champions program. About 120, I guess, volunteers globally. And these are people from all different backgrounds and all genders; diversity groups and underrepresented groups, we feel, are now represented within this champions program. And people basically give up about an hour or two of their time each week, with their supervisor's permission, and we basically teach them different things about security. And we've now had seven full-time people move from different areas within MongoDB into my team as a result of that program. So, you know, it's saved us both money and time. But also we're showing people that there is a path, you know. If you start off in Tara's team, for example, doing X, you join the champions program, you're like, "You know, I'd really like to get into red teaming. That would be so cool." If it fits, then we make that happen. And that has been really important for me, especially to give, you know, the women and the underrepresented groups within MongoDB just that window into something they might never have seen otherwise. >> That's a great comment. Fit matters. Also, getting access to where you fit is also access to either mentoring or sponsorship or some sort of, at least some, navigation. Like, what's out there, and not being afraid to, like, you know, just ask. >> Yeah, we just actually kicked off our big mentor program last week, so I'm the executive sponsor of that. I know Tara is part of it, which is fantastic. >> We'll put a plug in for it. Go ahead. >> Yeah, no, it's amazing. 
There's, gosh, I don't even know the numbers anymore, but there's a lot of people involved in this, so much so that we've had to set up mentoring groups rather than one-on-one. And I think it was 45% of the mentors are actually male, which is quite incredible for a program called Mentor Her. And then what we want to do in the future is actually create a program called Mentor Them, so that it's not, you know, just focused on the female side, and so that we can have other groups represented and, you know, kind of break down those groups a wee bit more and have some more granularity in the offering. >> Tara, talk about mentoring and sponsorship. Open source has been there for a long time. People help each other. It's community-oriented. What's your view of how to work with mentors and sponsors if someone's moving through the ranks? >> You know, one of the things that was really interesting, unfortunately, in some of the earliest open source communities is there was a lot of pervasive misogyny, to be perfectly honest. >> Yeah. >> And one of the important adaptations that we made as an open source community was the introduction of codes of conduct. And so when I'm talking to women who are thinking about expanding their skills, I encourage them to join open source communities to have opportunity, even if they're not getting paid for it, you know, to develop their skills, to work with people, to get those code reviews, right. I'm like, "Whatever you join, make sure they have a code of conduct and a good leadership team. It's very important." And there are plenty, right. And then that idea has come into, you know, conferences now. So now conferences have codes of conduct, if they're any good, and maybe not all of them, but most of them, right. And they're expanding that idea of intentional, healthy culture. >> John: Yeah. >> As a business goal and business differentiator. 
I mean, I won't lie, when I was recruited to come to MongoDB, the culture that I was able to discern through talking to people, in addition to seeing that there were actually women in senior leadership roles like Lena, like Kayla Nelson, that was a huge win. And so it just builds on momentum. And so now, you know, those of us who are in that are now representing. And so that kind of reinforces, but it all ties together, right. As the open source world goes, particularly for a company like MongoDB, which has an open source product, you know, and our community builds. You know, it's a good thing for us to be mindful of how we interact with the community, because that could also become an opportunity for recruiting. >> John: Yeah. >> Right. In addition to people who might become advocates on Mongo's behalf in their own company, as a solution for themselves, so. >> You guys have a great, successful company and great leadership there. I mean, I can't tell you how many times someone's told me, "MongoDB doesn't scale. It's going to be dead next year." I mean, that goes back 10 years. It just keeps getting better and better. You guys do a great job. So it's so fun to see the success of developers. Really appreciate you guys coming on the program. Final question, what are you guys excited about, to end the segment? We'll give you guys the last word. Lena will start with you, and Tara, you can wrap us up. What are you excited about? >> I'm excited to see what this year brings. I think with ChatGPT and its copycats, I think it'll be a very interesting year when it comes to AI, and I'm always on the lookout for the authentic-looking deep fakes that we see coming out. So just trying to make people aware that this is a real thing. It's not just pretend. And then of course, our old friend ransomware, let's see where that's going to go. >> John: Yeah. >> And let's see where we get to, and just genuine hygiene and housekeeping when it comes to security. >> Excellent. 
Tara. >> Ah, well for us, you know, we're always constantly trying to up our game from a security perspective in the software development life cycle. But also, you know, what can we do? You know, one interesting application of AI, that maybe Google doesn't like to talk about, is that it is really cool as an addendum to search, and, you know, how we might incorporate that as far as our learning environment and developer productivity, and how can we enable our developers to be more efficient and productive in their day-to-day work. So, I don't know, there's all kinds of opportunities that we're looking at for how we might improve that process here at MongoDB and then maybe be able to share it with the world. One of the things I love about working at MongoDB is we get to use our own products, right. And so being able to have this interesting document database in order to put information in, and then maybe apply some sort of AI to get it out again, is something that we may well be looking at, if not this year, then certainly in the coming year. >> Awesome. Lena Smart, the chief information security officer. Tara Hernandez, vice president of developer productivity, from MongoDB. Thank you so much for sharing here on International Women's Day. We're going to do this every year, and then we're going to do quarterly updates. Thank you so much for being part of this program. >> Thank you. >> Thanks for having us. >> Okay, this is theCube's coverage of International Women's Day. I'm John Furrier, your host. Thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Susan Wojcicki | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Jason | PERSON | 0.99+ |
Tara Hernandez | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Lena Smart | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
Mark Porter | PERSON | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
Kevin Deierling | PERSON | 0.99+ |
Marty Lans | PERSON | 0.99+ |
Tara | PERSON | 0.99+ |
John | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Jim Jackson | PERSON | 0.99+ |
Jason Newton | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Daniel Hernandez | PERSON | 0.99+ |
Dave Winokur | PERSON | 0.99+ |
Daniel | PERSON | 0.99+ |
Lena | PERSON | 0.99+ |
Meg Whitman | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Julie Sweet | PERSON | 0.99+ |
Marty | PERSON | 0.99+ |
Yaron Haviv | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Western Digital | ORGANIZATION | 0.99+ |
Kayla Nelson | PERSON | 0.99+ |
Mike Piech | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
Ireland | LOCATION | 0.99+ |
Antonio | PERSON | 0.99+ |
Daniel Laury | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
Todd Kerry | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
$20 | QUANTITY | 0.99+ |
Mike | PERSON | 0.99+ |
January 30th | DATE | 0.99+ |
Meg | PERSON | 0.99+ |
Mark Little | PERSON | 0.99+ |
Luke Cerney | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Jeff Basil | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dan | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
Allan | PERSON | 0.99+ |
40 gig | QUANTITY | 0.99+ |
Krishna Gade & Amit Paka, Fiddler.ai | AWS Startup Showcase 2021
(upbeat music) >> Hello and welcome to theCUBE as we present AWS Startup Showcase, The Next Big Thing in AI, Security & Life Sciences, the hottest startups. And today's session is really about the next big thing in AI, Security & Life Sciences, and the AI track is really a big one, the most important. And we have a featured company, fiddler.ai. I'm your host, John Furrier with theCUBE. And we're joined by the founders, Krishna Gade, founder and CEO, and Amit Paka, founder and Chief Product Officer. Great to have the founders on. Gentlemen, thank you for coming on this Cube segment for the AWS Startup Showcase. >> Thanks, John... >> Good to be here. >> So the topic of this session is staying compliant and accelerating AI adoption and model performance monitoring. Basically, the bottom line is how to be innovative with AI and stay (John laughs) within the rules of the road, if you will. So, super important topic. Everyone knows the benefits of what AI can do. Everyone sees machine learning being embedded in every single application, but the business drivers of compliance and all kinds of new kinds of regulations are popping up. So the question is how do you stay compliant? Which is essentially how do you not foreclose the future opportunities? That's really the question on everyone's mind these days. So let's get into it. But before we start let's take a minute to explain what you guys do. Krishna, we'll start with you first. What does fiddler.ai do? >> Absolutely, yeah. Fiddler is a model performance management platform company. We help, you know, enterprises and mid-market companies to build responsible AI by helping them continuously monitor their AI, analyze it, explain it, so that they know what's going on with their AI solutions at any given point in time, and they can ensure that, you know, their businesses are intact and they're compliant with all the regulations that they have in their industry. >> Everyone thinks AI is a secret sauce.
It's magic beans that will just automatically change the company. (John laughs) So it's kind of like, it's almost like a hope. But the reality is there is some value there, but there's something that has to be done first. So let's get into what this model performance management is, because it's a concept that needs to be understood well, but also you've got to implement it properly. There's some foundational things, you've got to, you know, crawl before you walk and walk before you run kind of thing. So let's get into it. What is model performance management? >> Yeah, that's a great question. So the core software artifact of an AI system is called an AI model. It essentially represents the patterns inside data so that it can actually predict the future. Now, for example, let's say I'm trying to build an AI based credit underwriting system. What I would do is I would look at the historical, you know, loans data. You know, good loans and bad loans. And then, I will build a model that can capture those patterns so that when a new customer comes in I can actually predict, you know, how likely they are going to default on the loan much more accurately. And this helps me as a bank or lending company to produce more good loans for my company and ensure that my customer is, you know, getting the right customer service. Now, the problem though is this AI model is a black box. Unlike regular software code you cannot really open it up and read its code and read its patterns and how it is doing. And so that's where the risks around the AI models come along. And so you need ways to actually explain it. You need to understand it and you need to monitor it. And this is where a model performance management system like Fiddler can help you look into that black box. Understand how it's doing, monitor its predictions continuously so that you know what these models are doing at any given point in time.
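[Editor's note: to make the black-box point above concrete, here is a hypothetical, pure-Python stand-in for the kind of credit model Krishna describes. The data, the single-feature cutoff, and the scoring rule are all invented for illustration; real underwriting models have far more inputs and learned structure, which is exactly why they become opaque.]

```python
# Toy "credit model": learn a single debt-to-income cutoff from labeled
# historical loans, then score new applicants with it.

def train_cutoff(loans):
    """Pick the debt-to-income cutoff that best separates good from bad loans."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted({dti for dti, _ in loans}):
        # Count how many historical loans this cutoff classifies correctly.
        correct = sum((dti > cutoff) == defaulted for dti, defaulted in loans)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Invented historical data: (debt-to-income ratio, defaulted?)
history = [(0.1, False), (0.2, False), (0.3, False), (0.5, True), (0.6, True)]
cutoff = train_cutoff(history)  # -> 0.3 on this toy data

def predict_default(dti):
    """Predict whether a new applicant will default."""
    return dti > cutoff
```

Even at this scale, the learned cutoff lives in the model, not in readable business logic; scale that up to thousands of parameters and you get the opacity the interview is about.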
>> I mean, I'd love to get your thoughts on this because on the product side, first of all, totally awesome concept. No one debates that. But now you've got more and more companies integrating with each other, more data's being shared. And so, you know, everyone knows what an app sec review is, right? But now they're thinking about this concept of how do you do a review of models, right? So understanding what's inside the black box is a huge thing. How do you do this? What does it mean? >> Yeah, so typically what you would do is, it's just like software where you would validate software code going through QA and like analysis. In the case of models you would try to probe the model at like different granularities to really understand how the model is behaving. This could be at a model prediction level, in the case of the loans example Krishna just gave: why is my model saying high-risk to a particular loan? Or it might be in the case of explaining groups of loans. For example, why is my model making high-risk predictions for loans made in California, or loans made to all men versus loans made to all women? And it could also be at the global level. What are the key data factors important to my model? So the ability to probe the model deeper, really opening up the black box, and then using that knowledge to explain how the model is working to non-technical folks in compliance, or to folks who are regulators, who just want to ensure that they know how the model works, to make sure that it's keeping up with kind of lending regulations, to ensure that it's not biased and so on. So that's typically the way you would do it with a machine learning model. >> Krishna, talk about the potential embarrassments that could happen. You just mentioned some of the use cases you heard from Amit, saying, you know, female, male. I mean, machines aren't that smart. (John laughs) >> Yeah. >> If they don't have the data. >> Yeah.
>> And data is fragmented, you've got silos with all kinds of challenges just on the data problem, right? >> Yeah. >> So never mind the machine learning problems. So, this is huge. I mean, the embarrassment opportunities. >> Yeah. >> And the risk management on whether it's a hack or something else. So you've got public embarrassment by doing something that really went wrong. And then, you've got the real business impact that could be damaging. >> Absolutely. You know, AI has come forward a lot, right? I mean, you know, you have lots of data these days. You have a lot of computing power and amazing algorithms, so you can actually build really sophisticated models. Some of these models were known to beat humans in image recognition and whatnot. However, the problem is there are risks in using AI, you know, without properly testing it, without properly monitoring it. For example, a couple of years ago, Apple and Goldman Sachs launched a credit card, right? For their users, where they were using algorithms, presumably AI or machine learning algorithms, to set credit limits. What happened was within the same household, husband and wife got 10 times difference in the credit limits being set for them. And some of these people had similar FICO scores, similar salary ranges. And some of them went online and complained about it, and that included the likes of Steve Wozniak as well. >> Yeah. >> So these kinds of stories are hugely embarrassing, where you could lose customer trust overnight, right? And, you know, you have to do a lot of PR damage control. Eventually, there was a regulatory probe into Goldman Sachs. So there are these problems if you're not properly monitoring AI systems, properly validating and testing them before you launch to the users. And that is why tools like Fiddler are coming forward, so that, you know, enterprises can do this. So that they can ensure responsible AI for both their organization as well as their customers.
>> That's a great point, I want to get into this. What it kind of means, the industry side of it, and then how that impacts customers? If you guys don't mind, machine learning operations, a term, MLOps, has been coined in the industry as you know. Basically, operations around machine learning, which kind of gets into the workflows and development life cycles. But ultimately, as you mentioned, this black box and this model being made. There's a heavy reliance on data. So Amit, what does this mean? Because now it becomes operational with MLOps. There are now internal workflows and activities and roles and responsibilities. How is this changing organizations, you know, separate from the embarrassment piece, which is totally true. Now I've got an internal operational aspect and there's dev involved. What's the issue? >> Yeah, so typically, if you look at the whole life cycle of machine learning ops, in some ways it mirrors the traditional life cycle of kind of DevOps, but in some ways it introduces new complexities. Specifically, because the models can be a black box. That's one thing to kind of watch out for. And secondly, because these models are probabilistic artifacts, which means they are trained on data to capture relationships so they can potentially make high-accuracy predictions. But the data that they see live might actually differ, and that might hurt their performance, especially because machine learning is applied towards these high ROI use cases. So this process of MLOps needs to change to incorporate the fact that machine learning models can be black boxes and machine learning models can decay. And so the second part I think that's also relevant is, because machine learning models can decay, you don't just create one model, you create multiple versions of these models. And so you have to constantly stay on top of how your model is deviating from actual reality and kind of bring it back to that representation of reality.
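[Editor's note: the decay Amit describes, live data drifting away from the training data, is often quantified with a metric like the population stability index (PSI). This is one common technique, not necessarily what Fiddler itself uses, and the age data below is invented. A pure-Python sketch:]

```python
import math

def psi(expected, actual, bins):
    """Population Stability Index between a training-time and a live
    distribution of one feature. A common rule of thumb: > 0.25 means
    major drift worth investigating."""
    def fractions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = len(values)
        # Floor each fraction so the log term stays finite for empty bins.
        return [max(c / total, 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_ages = [42, 55, 61, 48, 70, 66, 52]  # older applicants at training time
live_ages = [22, 25, 24, 23, 21, 26, 45]   # younger applicants in production
drift = psi(train_ages, live_ages, bins=[18, 35, 50, 65, 100])
```

On this toy data `drift` lands far above the 0.25 alarm level, which is the signal that the loan-applicant population no longer looks like the one the model learned from.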
>> So this is interesting, I like this. So now there's a model for the model. So this is interesting. You guys have innovated on this model performance management idea. Can you explain the framework and how you guys solve that regulatory compliance piece? Because if you can be a model of the model, if you will. >> Then. >> Then you can have some stability around maintaining the code base or the integrity of the model. >> Okay. >> How does that work? What do you guys offer? Take us through the framework and how it works, and then how it ties to that regulatory piece? >> So the MPM system, or the model performance management system, really sits at the heart of the machine learning workflow. Keeping track of the data that is flowing through your ML life cycle, keeping track of the models that are, you know, getting created and getting deployed and how they're performing. Keeping track of all the parts of the models. So it gives you a centralized way of managing all of this information in one place, right? It gives you an oversight, from a compliance standpoint, from an operational standpoint, of what's going on with your models in production. Imagine you're a bank, you're probably creating hundreds of these models for a variety of use cases: credit risk, fraud, anti-money laundering. How are you going to know which models are actually working very well? Which models are stale? Which models are expired? How do you know which models are underperforming? You know, are you getting alerts? So this kind of governance, this performance management, is what the system offers. It's a visual interface, lots of dashboards, that the developers, operations folks, compliance folks can go and look into. And then they would get alerts when things go wrong with respect to their models. In terms of how it can be helpful to meet compliance regulations. For example, let's say I'm starting to create a new credit risk model in a bank.
Now I'm innovating on different AI algorithms here. Immediately, before I even deploy that model, I have to validate it. I have to explain it and create a report so that I can submit it to my internal risk management team, which can then review it, you know, understand all kinds of risks around it, and then potentially share it with the audit team, and then keep a log of these reports so that when a regulator comes and visits them, you know, they can share these reports. These are the model reports. This is how the model was created. Fiddler helps them create these reports, keep all of these reports in one place. And then once the model is deployed, you know, it basically can help them monitor these models continuously. So that they don't just have one ad hoc report from when the model was created upfront, they can have a continuous monitoring dashboard in terms of what it was doing for however many months it was running. >> You know, historically, if you look at how AI applications are regulated in the U.S., the legacy regulations are the ones that are applied today, like the Equal Credit Opportunity Act or the Fed guidelines like SR 11-7 that are applicable to all banks. So there is no purpose-built AI regulation, but the EU released a proposed regulation just about three weeks back that classifies risk within applications. Specifically for high-risk applications, they propose new oversight, and that adds mandates on explainability, helping teams understand how the models are working, and on monitoring, to ensure that when a model is trained for high accuracy, it maintains that. So those two mandatory needs of high-risk applications are the ones that are solved by Fiddler. >> Yeah, you mentioned explainable AI. Could you just quickly define that for the audience? Because this is a trend we're seeing a lot more of. Take a minute to explain what is explainable AI?
>> Yeah, as I said in the beginning, you know, an AI model is a new software artifact that is being created. It is the core of an AI system. It's what represents all the patterns in the data, encodes them, and then uses that knowledge to predict the future. Now how it encodes all of these patterns is black magic, right? >> Yeah. >> You really don't know how the model is working. And so explainable AI is a set of technologies that can help you unlock that black box. You know, quote-unquote debug that model, let the model be introspected, inspected, probed, whatever you want to call it, to understand how it works. For example, let's say I created an AI model that, again, predicts, you know, loan risk. Now let's say a person comes to my bank and applies for a $10,000 loan, and the bank rejects the loan, or the model rejects the loan. Now, why did it do it, right? That's a question that explainability can answer. It can answer, hey, you know, the person's, you know, salary range is contributing to 20% of the loan risk, or this person's previous debt is contributing to 30% of the loan risk. So you can get a detailed set of dashboards in terms of taking the composite loan risk and attributing it to all the inputs that the model is observing. And so therefore, you now know how the model is treating each of these inputs. And so now you have an idea of where the person is getting affected by this loan risk. So now as a human, as an underwriter or a lending officer, I have knowledge about how the model is working. I can then layer my human intuition on top of it. Sometimes I can approve the model's decision, sometimes I can disapprove it. I can use this feedback and deliver it to the data science team, the AI team, so they can actually make the model better over time. So unlocking the black box has several benefits throughout the life cycle. >> That's awesome. Great definition.
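[Editor's note: a minimal, model-agnostic sketch of the attribution Krishna walks through. The linear scorer, its weights, and the baseline values are all hypothetical, and Fiddler's actual explanations use more sophisticated methods such as Shapley values; but the core move, swapping a feature back to a baseline and watching the score change, is the same family of technique.]

```python
# Baseline: a hypothetical "reference applicant" to compare against.
BASELINE = {"salary": 80_000, "existing_debt": 0, "late_payments": 0}

def risk_score(applicant):
    # Hypothetical scorer: more debt and late payments raise risk,
    # a higher salary lowers it.
    return (0.5 * applicant["existing_debt"] / 10_000
            + 2.0 * applicant["late_payments"]
            - 0.3 * applicant["salary"] / 10_000)

def attribute(score_fn, applicant, baseline):
    """Per-feature attribution: how much does resetting one feature to its
    baseline value change the score? (A crude cousin of Shapley values;
    works on any black-box score function.)"""
    full = score_fn(applicant)
    attributions = {}
    for feature in applicant:
        probe = dict(applicant, **{feature: baseline[feature]})
        attributions[feature] = full - score_fn(probe)
    return attributions

applicant = {"salary": 40_000, "existing_debt": 20_000, "late_payments": 3}
contribs = attribute(risk_score, applicant, BASELINE)
```

For this invented applicant, `contribs` shows the three late payments dominating the risk, with low salary and existing debt contributing smaller shares: exactly the "salary range is contributing X% of the loan risk" style of readout described in the interview.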
Great call. I want to get that on the record for the audience. Also, we'll make a clip out of that too. One of the things that, Amit, you brought up that I love and want to get into is this MLOps impact. So as we were just talking earlier, debugging models in production, totally cool, relevant, unpacking the black box. But model decay, that's an interesting concept. Can you explain more? Because this to me, I think, is potentially a big blind spot for the industry, because, you know, I talked to Swami at Amazon, who runs their AI group, and, you know, they want to make AI easier and ML easier with SageMaker and other tools. But you can fall into a trap of thinking everything's one and done. It's iterative, you've got leverage here. You've got to keep track of the performance of the models, not just debug them. Are they actually working? Is there new data? This is a whole other practice. Could you explain this concept of model decay? >> Yeah, so let's look at the lending example Krishna was just talking about. Your model expects your customers to look like the ones it was trained on, right? So you will have examples in your training set, historical loans made to people, let's say, between the ages of 40 and 70. And so you will train your model, and your model will be trained to its highest accuracy in making loans to these types of applicants. But now let's say you introduce a new loan product that you're targeting at, let's say, younger, college-going folks. That model is not trained to work well in those kinds of scenarios. Or it could also happen that you could get a lot more older people coming in to apply for these loans. So the data that the model sees live might not represent the data that you trained the model with. And the model has recognized relationships in this data, and it might not recognize relationships in this new data.
So this is a constant, I would say an ongoing, challenge that you face when you have a live model: ensuring that reality meets the representation of reality you had when you trained the model. And so this is something that's unique to machine learning models, and it has not been a problem historically in the world of DevOps. But it is a very key problem in MLOps. >> This is a really great topic. And most people who are watching might know of some of these problems when they see the mainstream press talk about fairness in black versus white skin and bias in algorithms. I mean, that's kind of how the press talks about those kinds of big, high-level topics. But what it really means is that the data (John laughs) in practice has fairness and bias and skewing issues, and all kinds of new things come up that the machines just can't handle. This is a big deal. So this is happening to every part of data in an organization. So, great problem statement. I guess the next segue would be, why Fiddler, why now? What are you guys doing? How are you solving these problems? Take us through some use cases. How do people engage with you guys? How do you solve the problem, and how do you guys see this evolving? >> Great, so Fiddler is a purpose-built platform to solve for model explainability, model monitoring and model bias detection. This is the only thing that we do, right? So we are super focused on building this tool to be useful across a variety of, you know, AI problems, from financial services to retail, to advertising, to human resources, healthcare and so on and so forth. And so we have found a lot of commonalities around how data scientists are solving these problems across these industries. And we've created a system that can be plugged into their workflows. For example, I could be a bank, you know, creating anti-money laundering models on a modern AI platform like TensorFlow.
Or I could be like a retail company that is building recommendation models in, you know, a PyTorch-like library. You can bring all of those models under one sort of umbrella using Fiddler. We can support a variety of heterogeneous types of models. And that is a very, very hard technical problem to solve. To be able to ingest and digest all these different model types and then provide a single pane of glass in terms of how the model is performing, explaining the model, tracking the model life cycle throughout its existence, right? And so that is the value prop that Fiddler offers the MLOps team, so they can get this oversight. And so this plugs in nicely with their MLOps, so they don't have to change anything and they get the additional benefit... >> So, you're basically creating faster outcomes because the teams can work on real problems. >> Right. >> And not have to deal with the maintenance of model management. >> Right. >> Whether it's debugging or decay evaluations, right? >> Right, we take care of all of their model operations from a monitoring standpoint, analysis standpoint, debuggability, alerting. So that they can just build the right kind of models for their customers. And we give them all the insights and intelligence to know the problems behind those models, behind their datasets. So that they can actually build more accurate models, more responsible models, for their customers. >> Okay, Amit, give us the secret sauce. What's going on in the product? How does it all work? What's the secret sauce? >> So there are three key kind of pillars to the Fiddler product. One is, of course, we leverage the latest research, and we actually productize that in like amazing ways, where when you explain models you get the explanation within a second. So this activates new use cases like, let's say, counterfactual analysis. You can not only get explanations for your loan, you can also see hypothetically.
What if the loan applicant, you know, had a higher income? What would the model do? So, that's one part, productizing the latest research. The second part is infrastructure at scale. So we are not just building something that would work for SMBs. We are building something that works at enterprise scale. So billions and billions of predictions, right, flowing through the system. We want to make sure that we can handle as large a scale as seamlessly as possible. So we are trying to activate that and making sure we are the best enterprise-grade product on the market. And thirdly, user experience. What you'll see when you use Fiddler. Finally, when we do demos to customers, what they really see is the product. They don't see the scale right then and there. They don't see the deep research. What they see are these like beautiful experiences that are very intuitive to them, where we've merged explainability and monitoring and bias detection in a seamless way. So you get the most intuitive experiences that are not just designed for the technical user, but also for the non-technical users, who are also stakeholders within AI. >> So the scale thing is a huge point, by the way. I think that's something that you see in successful companies. That's a differentiator and frankly, it's the new sustainability. So new lock-in, if you will, not in a bad way but in a good way. You do a good job. You get scale, you get leverage. I want to just point out and get your guys' thoughts on your approach on the framework, where you guys are centralized. >> Right. >> So as decentralization continues to be a wave, you guys are taking much more of a centralized approach. Why is that? Take us through the decision on that. >> Yeah. So, I mean, in terms of, you know, decentralization, in terms of running models on different, you know, containers and, you know, scoring them on multiple numbers of nodes, that absolutely makes sense, right?
From a deployment standpoint, from an inference standpoint. But when it comes to actually, you know, understanding how the models are working, visualizing them, monitoring them, knowing what's going on with the models, you need a centralized dashboard that a lay user can actually use, or that a head of AI governance inside a bank can use: what are all the models that my team is shipping? You know, which models carry risk, you know? How were these models performing last week? For this, you need a centralized repository. Otherwise, it'll be very, very hard to track these models, right? Because the models are going to grow really, really fast. You know, there are so many open source libraries, open source model architectures being produced, and so many data scientists coming out of grad schools and whatnot. And the number of models in the enterprise is just going to grow many, many fold in the coming years. Now, how are you going to track all of these things without having a centralized platform? And that's what we envisaged a few years ago, that every team will need an oversight tool like Fiddler, which can keep track of all of their models in one place. And that's what we are finding from our customers. >> As long as you don't get in the way of them creating value, which is the goal, right? >> Right. >> And be frictionless, take away the friction. >> Yeah. >> And enable it. Love the concept. I think you guys are onto something big there, great product, great vision. The question I have for you to kind of wrap things up here: this is all new, right? And new, it's all goodness, right? If you've got scale in the Cloud, all these new benefits. Again, more techies coming out of grad school in Computer Science and Engineering, and just data analysis in general is changing. And there's more people to be democratized, to be contributing. >> Right. >> How do you operationalize it? How do companies get this going? Because you've got a new thing happening. It's a new wave. >> Okay.
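[Editor's note: Krishna's "centralized repository" argument, one place that answers "what models do we run, who owns them, and what state are they in?", can be illustrated with a toy registry. Every field name, status value, and example model below is invented for illustration and is not Fiddler's actual data model.]

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    name: str
    version: int
    owner_team: str
    status: str = "validated"  # e.g. validated | deployed | stale | retired
    deployed_on: Optional[date] = None

class ModelRegistry:
    """Toy centralized registry: one queryable list of every model a team runs."""
    def __init__(self):
        self._records = []

    def register(self, record):
        self._records.append(record)

    def by_status(self, status):
        return [r for r in self._records if r.status == status]

registry = ModelRegistry()
registry.register(ModelRecord("credit-risk", 3, "lending", "deployed"))
registry.register(ModelRecord("fraud", 7, "payments", "deployed"))
registry.register(ModelRecord("aml", 1, "compliance", "stale"))
stale = [r.name for r in registry.by_status("stale")]
```

Even this skeleton makes the governance questions from the interview answerable in one query: which models are stale, which are deployed, and which team to call about each.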
>> But it's still the same game, make business run better. >> Right. >> So you've got to deploy something new. What's the operational playbook for companies to get started? >> Absolutely. First step is, if a company is trying to install AI, incorporate AI into their workflow. You know, most companies, I would say, are still in early stages, right? A lot of enterprises are still, you know, developing these models. Some of them may have been in labs. ML operationalization is starting to happen, and it probably started a year or two ago, right? So now when it comes to, you know, putting AI into practice: so far, you know, you can have AI models in labs. They're not going to hurt anyone. They're not going to hurt your business. They're not going to hurt your users. But once you operationalize them, then you have to do it in a proper manner, in a responsible manner, in a trustworthy manner. And so we actually have a playbook in terms of how you would have to do this, right? How are you going to test these models? How are you going to analyze and validate them before they actually are deployed? How are you going to analyze, you know, look into data bias and training set bias, or test set bias? And once they are deployed to production, are you tracking, you know, model performance over time? Are you tracking drifting models? You know, the decay part that we talked about. Do you have alerts in place when model performance goes all over the place? Now, all of a sudden, you get a lot of false positives in your fraud models. Are you able to track them? Do you have the personnel in place, the data scientists, the ML engineers, the MLOps engineers, the governance teams, if it's in a regulated industry, to use these tools? And then tools like Fiddler will add value, will help them, you know, do their job, institutionalize this process of responsible AI. So that they're not only reaping the benefits of this great technology.
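[Editor's note: the alerting Krishna just described, catching a sudden spike in a fraud model's false positives, can be sketched as a rolling-window monitor. The window size and threshold are arbitrary illustration values, not anything Fiddler prescribes.]

```python
from collections import deque

class FalsePositiveAlert:
    """Rolling-window monitor: alert when the fraction of fraud flags later
    confirmed to be false positives exceeds a threshold."""
    def __init__(self, window=100, threshold=0.2):
        self.outcomes = deque(maxlen=window)  # True = was a false positive
        self.threshold = threshold

    def record(self, was_false_positive):
        self.outcomes.append(was_false_positive)

    def alerting(self):
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.threshold

monitor = FalsePositiveAlert(window=10, threshold=0.2)
for fp in [False] * 8 + [True] * 2:  # 20% false positives: at the limit
    monitor.record(fp)
calm = monitor.alerting()            # not yet above threshold
monitor.record(True)                 # window slides: rate jumps to 30%
noisy = monitor.alerting()
```

A production system would segment this by model version and feed alerts into paging, but the shape of the check, recent labeled outcomes against a threshold, is the same.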
There's no doubt about the AI, right? It's actually going to be game changing, but then they can also do it in a responsible and trustworthy manner. >> Yeah, really get some wins, get some momentum, see it. This is the Cloud way. It gets them some value immediately, and they grow from there. I was talking to a friend the other day, Amit, about IT, the legacy. "I don't worry about IT, it's all in the Cloud." I go, there's no longer IT, IT is dead. It's an AI department now. (Amit laughs) And this is kind of what you guys are getting at. Now it's data, now it's AI. It's kind of like what IT used to be, enabling organizations to be successful. You guys are looking at it the same way, as enabling success. You provision (John laughs) algorithms instead of servers, they're algorithms now. This is the new model. >> Yeah, we believe that all companies in the future, as it happened with this wave of data, are going to be AI companies, right? So it's really just a matter of time. And the companies that are first movers in this are going to have a significant advantage. We're seeing that in banking already, where the banks that have made the leap into AI are reaping the benefits of enabling a lot more models at the same risk profile using deep learning models, as long as they're able to, like, validate these to ensure that they're meeting, kind of, like, the regulations. But it's going to give significant advantages to a lot of companies as they move faster with respect to others in the same industry. >> Yeah, quickly too, a point on the compliance side. You mentioned trust and transparency with the whole EU thing. Some are saying that, you know, to be a public company, you're going to have to have AI disclosure soon. You're going to have to have the disclosure in your public statements around how you're explaining your AI. Again, fantasy today, but pretty plausible. >> Right, absolutely.
I mean, the reality today is, you know, less than 10% of CEOs care about ethical AI, right? And that has to change. And I think, you know, that has to change for the better, because at the end of the day, if you are using AI and you're not using it in a responsible and trustworthy manner, then there is, like, regulatory risk, there's compliance risk, there's operational business risk. You know, customer trust. Losing customer trust can be huge. So I think, you know, we want to provide that, you know, insurance, or, like, a preventative mechanism. So that, you know, if you have these tools in place then you're less likely to get into those situations. >> Awesome. Great, great conversation, Krishna, Amit. Thank you both for sharing, the founders of Fiddler.ai. Great company. On the right side of history in my opinion, the next big thing in AI. AI departments, AI compliance, AI reporting. (John laughs) Explainable AI, ethical AI, all part of this next revolution. Gentlemen, thank you for joining us on theCUBE Amazon Startup Showcase. >> Thanks for having us, John. >> Okay, it's theCUBE coverage. Thank you for watching. (upbeat music)
Reggie Jackson | SAP SapphireNow 2016
(mumbling) >> Voiceover: Covering Sapphire now. Headline sponsored by SAP HANA Cloud, the leader in platform as a service. With support from Console Inc., the cloud internet company. Now, here are your hosts, John Furrier and Peter Burris. >> We are here live at SAP Sapphire. This is SiliconANGLE Media's The Cube. It's our flagship program. We go out to the events and extract the signal from the noise and want to do a shoutout to our sponsors SAP HANA Cloud and Console Inc. at console cloud, connecting the clouds together. I'm John Furrier with my co-host Peter Burris. Our next guest is Reggie Jackson, winner, athlete, tech athlete now, entrepreneur, overall great guy, and a Cube alumni. Four years ago, we interviewed him here at SAP Sapphire. Welcome back, Reggie, to The Cube. Thanks for coming on. >> John, thank you very much. It's good to be here with old friends. We were havin' a little conversation about baseball there, but good to see you guys. >> Yeah, and obviously, the baseball, we were just talkin' about the whole fisticuffs and the glee of the grand slam walk-off.
Some of the great brands to me in history, Babe Ruth, Ty Cobb, the great Jim Brown, Joe Montana, Michael Jordan. And Michael Jordan would be a prominent example where technology and TV enhanced who he was. And he had someone behind him to enhance his brand, Nike, Phil Knight, who was a real pioneer. I'm not so in favor, I'm not in favor at all of someone manufacturing themselves as a brand. And I hear players talk about their brand and about trying to create something. If you're great, if you deserve it, I don't think Stephen Curry works on his brand. I think he works on bein' a great player. I think he works on bein' a great teammate. I think he does his best to maximize his skill set. And he's nothing but a gentleman along the way. He'll celebrate with joy once in awhile, with the Curry moves, which we've come to recognize. But for guys that talk about the manufacturing of their brand, there's something about it that's manufactured. It's not real, it's false. And I don't like it. I think it's okay, the Snapchats and the Google+ and all of the stuff, Twitter and Facebook and all that stuff, all of the things that go along with trying to create some hubbub, etc. I'm okay with that. >> So you're saying if it's not deserved. People are overplaying their hand before earning it. >> A lot of it, John, a lot of it. Joe Montana didn't work on his brand, he was great. Jim Brown didn't work on his brand, he was great. I don't want to use Jimmy Brown. I want to use Montana because even young people today will know Joe Montana. Tom Brady, Peyton Manning, they're not about their brand. They're about being classy, being great, being part of a team, being a leader, presenting themselves as something that's respected in the NFL, across the United States. Go ahead, Pete. >> So even though it's cheaper to get your name out there, you still believe in let your performance speak for itself. >> You got to be real about it. Ya got to be who you are. 
If you're not a great player, get out of the way. Get out of the space. So manufacturing your brand. I played with the Yankees. I was in the era of Cosell and Billy Martin and George Steinbrenner. We won championships with the team. I was part of something that helped me become recognized. And so in our era, the Sandy Koufax's became brands because they were associated with greatness around them. They stood out and so they earned that tremendous brand. >> We were just watching Graig Nettles gettin' taken out by George Brett in that big game and also the pine tar, we kind of gettin' some good laughs at it. You look at the balance of personalities. Certainly, Brett and Nettles and your team and you had a great personality, winning championships. Worked together as a team. And so I want to ask you that question about the balance, about the in baseball, certainly, the unwritten rules are a legacy and that has worked. And now in a era of personalities, in some cases, people self-promoting themselves, people are questioning that. Your thoughts on that because that applies to business too 'cause tech athletes or business athletes have a team, there are some unwritten rules. Thoughts on this baseball debate about unwritten rules. >> Pete and John, I'll try to correlate it between some tech giants that have a brand. I just left a guy with a brand, Bill McDermott, that runs SAP. Even Hasso, the boss. The face now of SAP is Bill McDermott. Dapper, slender, stylish, bright. It comes across well. So maintaining that brand, to me, relates to SAP, bills a great image for it. He's stylish, he's smooth, he's smart. He's about people. He presents himself with care. So that is a brand. I don't think it's manufactured. That's who he is in real life. If you take a look, and I'll go back to Steph Curry because that name resonates and everyone recognize it. That style of cool, that style of control, that style of team and care. 
And he presents to us all that he cares about us, the fan, his team, his family. And so those are things and I think you can go from the tech world. Bill Gates had a brand. Brilliant, somewhat reclusive, concerned about the world, concerned about the country, concerned about his company. And so that resonated at Microsoft because that's who he really was. Some of the people today don't really recognize that Jobs was thrown out of Apple. He was pushed out. All of his brilliance, which was marketing. And the gentleman there that really was the mind for the company, Steve Wozniak, happens to be here at SAP Sapphire. Today, I think he speaks. But those brands were real, not manufactured. And so, in today's world, I think you can manufacture a brand. And then all of a sudden, it'll crumble. It'll go away in the future. But the great brands, whether it's Jackie Robinson or whether it's Jack Welch or whether it's George Steinbrenner and the Yankee brand, those brands were real. They were not manufactured. Those guys were eccentric. They were brilliant. Go ahead. >> And also, they work hard. And I want to point out a comment you made yesterday here at the event. You were asked a question up on stage about that moment when you hit the home runs. I think we talked about it last time. I don't necessarily want to talk about the home runs. But you made a comment I'd like you to expand on and share with the audience. 'Cause you said, "I worked hard," but that day during warm-ups, you had batting practice. You made a comment that you were in the zone. So working hard and being great as it leads up to that. But also, in the moment, 'cause that's a theme these days, in the moment, being ready and prepared. Share your thoughts on what you meant by you had a great batting practice and you just felt it. >> I'm going to take it to what you say is in the moment. I remember when I was talkin' about it yesterday, which you reference to, when I had such a fantastic batting practice.
I walked by a coupla sports writers in that era. Really well-known guys, Dave Anderson, New York Times. I can't think of his name right now, but it'll come to me, of the Daily News. It was like hey man. >> John: You were rockin' it out there. >> I kind of hope I didn't leave it out here. (laughing) That was in the moment and at the same time, >> I mean, you were crushing it. >> Yes, when the game started, I got back in that moment. I got back in what was live, what was now, what was going on. Certainly, I think our world now with the instant gratification of sending out a message or tweeting to someone or whatever certainly in the moment is about what our youth is and who we are today as a country, as a universe. >> But you didn't make that up. You worked hard, but you pulled it together in the moment. >> A comment with that is I went and did something with ESPN earlier this year in San Francisco, in Oakland with Stephen Curry. They said, "Reggie, we want ya to come up "and watch his practice, his pre-game." And it was very similar to your batting practice, where people come out and watch, etc. And so I was looking forward to it and I like to go to the games about an hour and a half or two hours early so I can see warm-up and see some of the guys and say hello. And I got a chance to watch Steph Curry. I know his dad. And happened to be the first time I went this year, the dad, Carolina, the Panthers were in town. Not the Panthers. Come on, help me, help me, help me. >> Peter: The Wizards? >> No, no, no, the Carolina. >> Peter: Carolina Panthers. >> The Carolina Hornets. >> John: Hornets. >> Were there and I know his dad, Dell Curry. And we talked a little bit. But then, Steph came out and I watched him. And I watched the dribbling exhibition. I watched the going between the legs and behind the back and the fancy passing, etc. 
And I watched the shots, the high-arcing threes, the normal trajectory threes, the high shots off the backboard and things like that that he did. The left-handed shots, the right-handed shots. And the guy asked me what I thought of the show. And I said, "Well, it's a cool show, "but I'm going to see all that tonight." And me watching him, the behind the backs, the between the legs, the passes, the high-arching shots from three, the high-arching touches off the glass. He does all that. >> John: He brought it into the game. >> Yeah, I said so, (laughing) >> Peter: That is his game. >> It's not a show, but that's his game. >> So Reggie, you did an interesting promotion, Reggie's Garage, where you bought a virtual reality camera and you created a really nice show of your garage demonstrating your love >> Reggie: 360. >> Peter: of cars, 360. Talk a little bit about that. And then if ya get a second, imagine what baseball's going to be like as that technology becomes available and how some of the conversation that we're having about authenticity, the fan coming into the game. >> An experience. >> Is going to change baseball. Start with the garage and how that went and then how ya think that's going to translate into baseball, if you've had any thoughts on that. >> In the technology that was used, certainly I enjoyed it. While I was doing it, I noticed where the cameras were in different spots. There was one on the floor of my car. There was one in the backseat. And then there was someone following us as closely as they could. But you could see everything. You'd see the shift and you could see my feet. It was like you were with me. When we did the 360 inside the garage as well, you could listen to me and then you could use your finger and spin around. And they had these special headset and special glasses that you could look around, just with your headset on, and see all around the room. Behind you, in front of you. 
And so it's an experience that I think is going to become part of who we are as a nation, who we are as a people watching television, that you're going to really feel like you're in the room. I think it's going to be exciting. And I think it's going to be fun. And when you're talking about products, when you're talking about my website, if you will, with the focus on automotive parts, where a guy can go in and shop and get any part he wants for a vehicle, you really can build a complete car from my website. You can buy a frame. You can buy body parts. You can buy a horn, an engine, brakes, tires, grills, turn signals, the whole nine yards. And it gives you an experience through 360 video of really walking into the store, walking into the building, walking into the stadium and looking around to see the hot dog stand, see the dugout, see the pitcher and the hitter, to see the parts in the garage, to see the cars and take a look and view at everything that's there.
So I think that the technology and the viewing that you'll see from behind home plate, from under the player's feet while he's running down the bases and the slides and things of that nature, Pete, I think are going to be exciting for the fan and it'll attract more fans, attract a new type of television it's going to produce, etc. So it's exciting. >> Reggie, thanks for comin' on The Cube again. Appreciate your time. I ask ya final two questions that I want to get your thoughts on. One is obviously the cars. Reggie's Garage is goin' great. And you shared with us last time on The Cube, it's on YouTube, about you when you grew up and decide football and baseball. But when you were growin' up, what was your favorite car? What was that car that you wanted that was out of reach? That car that was your hot rod? And then the second question is, we'll get to the second question. Answer that one first. What was you dream car at the time? How did ya get >> Reggie: The dream car >> John: hooked on this? >> at the time. I had a '55 Chevrolet that I bought from a buddy by the name of Ronny Fog. I don't even know if he's still around anymore. Out of Pennsylvania. I had $300 and my dad gave me $200. I'd saved up mine from workin' for my dad. But my dream car was I went to school with a guy named Wayne Gethman and another guy named Irwin Croyes. I don't know Wayne Gethman anymore. But from the age of 16, I reengaged with Irwin Croyes, who happens to be a business investing type guy in the city of Philadelphia, right where we're still from. He's a car collector. And he drove a '62 Corvette and so did Wayne Gethman. And I always wanted one. And I now happen to have four. (laughing) >> He who get the most toys wins. Final question, 'cause you're such a legend and you're awesome and you're doin' so much work. And you're very active, engaged, appreciate that. Advice to young athletes coming up, whether they're also in business or a tech athlete or a business athlete. 
But the sports athletes today got travel ball, you got all this stuff goin' on. The idols like Stephen Curry are lookin' great. Great role models now emerging. What advice do you give them? >> John's got a freshman in high school. I got a junior in high school. What would ya say to 'em? >> You know, I'll tell ya. When you're young, the people you want to listen to are Mom and Dad. No one, and I'll say this to any child from the age of eight or nine years old, five, six years old to 17, 18, 19, 20, all the way up, now my daughter's 25. All the way up to the end of your parents' days. No one cares for you more than your mother or your father. Any parent, whether it's a job or whether their success in life, number one in that man or woman, mom or dad, number one in their life is their children. And so for kids, I say if there's any person you're going to listen to for advice in any path you want to walk down, it's the one that your parents talk to you about or how they show you. That is what I would leave as being most important. For kids, anything, idea that you have that you believe you can do, whether it's the athlete like Stephen Curry that has created shots and done things on the basketball court that he envisioned, that he thought about. Or whether it's the next Steve Jobs who happens to be Mark Zuckerman, who I don't know Mark is 30 years old yet. >> John: He just turned 30. >> It's an idea. He's born around the same time. He's born this week. His birthday is in this week. My birthday's tomorrow. >> John: Happy birthday. >> But thank you. Anything that you can think of in today's world of technology. With places like Silicon Valley where they take dreams and create foundations for them. I had a dream about a website that would sell automotive parts and you could go to my site and buy anything for your car. We've got about 75,000 items now. We'll get to 180,000 in a few months. We'll get to a half a million as soon as my technology is ready for it. 
But we have things to pay attention to and look into and issues to make sure that we iron out that aren't there for our consumer, for ease of navigation, ease of consumption and purchasing. Any idea that you have, take time to dream. It's much more so than taking time to dream when I was a young kid. Because my father would say, "Stop daydreamin' "and wastin' time." >> John: Get to work. >> Reggie: In today's world, for our children, I say take time to create a vision or to create something new. And go to someone that's in the tech world and they'll figure out a way of helping you manifest it into something that's a reality. >> Listen to your parents, kids. And folks out there, dream, build the foundation, go for it. Reggie Jackson, congratulations for being a Cube alumni again, multi-return. >> Peter: Thank you very much. >> John: Appreciate it. Congratulate on all your continued success. You're a legend. Great to have you on. And thanks so much for comin' on The Cube. >> Peter: And happy 70th birthday. >> John, Pete, always a pleasure. >> John: Happy birthday. >> Thank you very much. >> Have some cake for Reggie. It's The Cube, live here in Orlando. Bringin' all the action here on The Cube. I'm John Furrier with Peter Burris with Reggie Jackson. We'll be right back. (electronic music)