
Michael Greene, Intel - #SparkSummit - #theCUBE


 

>> Announcer: Live from San Francisco, it's theCUBE covering Spark Summit 2017. Brought to you by Databricks.
>> Welcome back to theCUBE. Continuing our coverage here at Spark Summit 2017. What a great lineup of guests. I can't wait to introduce this gentleman. We have Intel's VP of the Software and Services Group, Mr. Michael Greene. Michael, welcome.
>> Thank you for having me.
>> All right, we also have George with us over here, and George and I will both be peppering you with questions. Are you ready for that?
>> I am. I've got the salt to go with the pepper. (laughs)
>> Well, you just got off the stage. You did the keynote this morning. What do you think was the most important message you delivered in your keynote?
>> Well, it was interesting. One of the things that we're looking at with BigDL, so the BigDL framework, was that we're hearing a lot about the challenges of making sure that these AI-type workloads scale easily. And one of the things when we open-sourced BigDL, we really were designing it to leverage that Spark capability for massive scale from the beginning. So I thought that was one of the things that connected with several of the keynotes ahead of me: if this is your challenge, here is one of many solutions, but a very good one, that will let you take advantage of the scale that people have in their infrastructure, lots of Xeons out there, and also make sure they're fully utilized running the workloads of the future, AI.
>> Okay, so Intel's not just a hardware company. You do software, right? (laughs)
>> Well, you know, Intel's a solutions company, right? And hardware's awesome, but hardware without software is a brick. Maybe a warm one, but it doesn't do much-
>> Not a data brick.
>> That's right, not a data brick, just a brick.
>> And not melted down, either.
>> That's right, that's right. So sand without software doesn't go very far. And the way I see it, software is used to ignite the hardware so that you actually get useful productivity out of it. So on the solutions side, customers have problems to solve. It's rare that they come in and say, "Nope, I just need a nail," right? They're usually like, "I need a home." Well, you can't just provide the nail, you have to provide all the pieces, and one of the things that's exciting for me being part of Intel is that we provide silicon, of course, right? The Xeon processors, accelerators, and now software, tools, and frameworks, to make sure that a customer can actually really get the value of the entire solution.
>> Host: Okay, go ahead, George.
>> So Michael, help those of us who've been watching from afar but aren't up-to-date on the day-to-day tactics and strategy of what Intel's doing with (mumbles) in terms of where does BigDL fit? And then the acquisition of the floating point (mumbles) technology so that there's special purpose acceleration on the chip, so how do those two work together along with the rest of the ecosystem?
>> Sure, great question. So if you think of Intel, really, we're always looking at how we can leverage Moore's Law to get more and more integrated into the solution. And if you quickly step through a brief history, at one point we had a 386, which was a great integer processor, which was partnered with a 387 for the floating point acceleration. The 486 combined those, because we were able to leverage Moore's Law to bring the two together. We got a lot of reuse for the instruction set with the acceleration.
As we bring in Altera - recently integrated into Intel - they come with a suite of incredible FPGAs and accelerators, as well as another company, Nervana, that also builds accelerators, and we're looking at those special-case opportunities to accelerate the user experience. So we're going to continue to follow that trend and make sure that you have the general purpose capabilities while new workloads are coming in, and we really see a lot of growth in AI. As I think I said in the keynote, about 12x growth by 2020. We need to make sure that we have the silicon as well as the software, and that's where BigDL pulls those two together to make sure that we're getting the full benefit of the solution.
>> So a couple years ago, we were told that Intel actually thought there were going to be more Hadoop servers - and Hadoop is an umbrella term for the ecosystem - than database servers in three to five years' time. When you look at deep learning, because we know it's so much more compute-intensive than traditional statistical machine learning, if you look out three to five years, how much of the compute cycles, share of workloads, do you see deep learning comprising?
>> I think that maybe in the last year, deep learning, or AI, as a workload was about seven percent. But if it grows by 12x, it's definitely growing quickly. So what we're expecting is that AI will become inherent in pretty much every application. An example of this is, at one point, facial detection was the new thing. Now you can't buy a camera that doesn't do that. So if you pull up your camera and you see the little square show up, it's just commonplace. We're expecting that AI will just become an integral part of solutions, not a solution in and of itself. It's there to make software solutions smarter, it's there to make them go further. It's not there to be independent. It's like, "Wow, we've identified a cat." That's cool, but if we're identifying problems, or making sure that the autonomous delivery systems don't kill a cat, there's a little bit more that needs to go on, so it's going to be part of the solution.
>> What about the trade-off between processing at the edge and learning in the cloud? I mean, you can learn on the edge, you can learn in the cloud, you can do the analysis on either end of the run time. How do you guys see that being split up in the future?
>> Absolutely. I think that with deep learning training, there are always opportunities to go through vast amounts of data to figure out how to identify what's interesting, identify new insights. Once you have those models trained, then you want to use them everywhere. And where that makes sense, we're switching from training to inference. Inference at the edge allows you to be more real-time. In some cases, if you imagine a smart camera, even from a smart camera point of view, do I send the whole data stream to the data center? Well, maybe not. Let's assume that it's being used for highway patrol. If you identify the car speeding, then send the information - except leave me out. (laughs) Kidding on that. But it's that kind of piece where you allow both sides to be smart. More information for the continual training in the cloud, but also more ability to add compute to the edge so that we can do some really cool activities right at the edge, real-time, without having to send all the information.
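Greene's point about BigDL riding Spark's existing scale-out machinery is easier to picture with a concrete sketch. The example below follows the general shape of BigDL's early (0.x) Python API running inside an ordinary Spark application; the toy data, layer sizes, and hyperparameters are invented for illustration, and module paths or argument names may differ between BigDL releases, so treat it as a sketch under those assumptions rather than a definitive recipe.

```python
# Hedged sketch: distributed training with BigDL on Spark (BigDL 0.x-style API).
# The dataset, network shape, and hyperparameters are illustrative assumptions.
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import create_spark_conf, init_engine, Sample
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

# BigDL runs inside a normal Spark application: the same executors that run
# ETL jobs execute the training tasks on their Xeon cores.
sc = SparkContext(conf=create_spark_conf().setAppName("bigdl-sketch"))
init_engine()

def to_sample(_):
    # Toy two-class problem; ClassNLLCriterion expects 1-based labels.
    x = np.random.rand(10).astype("float32")
    y = np.array([1.0 if x.sum() > 5.0 else 2.0])
    return Sample.from_ndarray(x, y)

# Training data is just a Spark RDD of BigDL Samples, partitioned across the cluster.
train_rdd = sc.parallelize(range(100000)).map(to_sample)

# A small feed-forward classifier built from BigDL layers.
model = (Sequential()
         .add(Linear(10, 32))
         .add(ReLU())
         .add(Linear(32, 2))
         .add(LogSoftMax()))

# The Optimizer distributes synchronous mini-batch SGD over the Spark executors.
optimizer = Optimizer(model=model,
                      training_rdd=train_rdd,
                      criterion=ClassNLLCriterion(),
                      optim_method=SGD(learningrate=0.01),
                      end_trigger=MaxEpoch(2),
                      batch_size=256)  # typically a multiple of the total core count

trained_model = optimizer.optimize()
```

The design point this illustrates is the one Greene makes on stage: nothing here is exotic infrastructure. The training data is a plain Spark RDD and the work is scheduled as regular Spark tasks, so the same Xeon-based cluster that already runs ETL and analytics can run the deep learning job.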
>> If you had to describe to people working on architectures for the new distributed computing in IoT, what would an edge device look like in its hardware footprint in terms of compute, memory, connectivity?
>> So in terms of connectivity, we're expecting an explosion of 5G. A lot of high bandwidth, multiple things being connected with some type of communication, 5G capability. It won't just be about, let's say, cars feeding back where they are from their GPS; it's going to be cars talking to other cars. Maybe one needs to move over a lane. Can they adjust? We're talking about an autonomous world. There's going to be so much interconnection through 5G, so I expect to see 5G show up in most edge devices. And to your point, I think it's very important to add that we expect edge devices to all have some kind of compute capability. Not just sensors, but the ability to sense and make some decisions based on what they're sensing. We're going to continue to see more and more compute go to the edge devices. So again, as we leverage the power of Moore's Law, we're going to be able to move that compute outward. Today the cloud is just incredible with its collective compute power, but that will slowly move away from the center. We've seen that progression from mainframes to workstations to PCs, to phones, and now to edge devices. I think that trend will continue, and we'll also continue to see bigger data centers and other use cases that require deeper analysis. So from a developer's point of view, if you're working on an edge device, make sure it has great connectivity and compute.
>> So one last follow-up from me. Google is making a special effort to build their own framework, open-source TensorFlow, and then marry it to specialized hardware, tensor processing units. So, specialization versus generalization. For someone who's running TPUs in the cloud, do you have a sense for whether, if they're training TensorFlow models or TensorFlow-based models, there would be an advantage for that narrow set running on tensor processing units? Or would that be supported just as well on Intel hardware?
>> You know, specialization is anything that's purpose-built. As you said, it's not just general purpose, but as I mentioned, over time the specialized capabilities slide into general purpose opportunities. Recently we added AES-NI, instructions for an encryption algorithm, into our processors, very specialized for encryption and decryption. But because it's so generally used now, it's just part of our processor offering, it's just part of our instruction set. I expect to continue to see that trend. Many things may start off specialized, which is great, it's a great way to innovate, and then over time, if it becomes so generally useful that everyone's using it, it's not really special purpose anymore and it slides into the general purpose opportunity. I think that will be a continuation. We've seen that since the dawn of the computer: specialized memory, specialized compute, specialized floating point capabilities are now just generally available. And so when we deploy things like BigDL, a lot of the benefit is that we know the Xeon processor has so much capability because it has pulled in, over time, the best of the specialized use cases that are now generally used.
>> Great deep-dive questions, George. We have a couple of minutes left, so I know you brought a lot to this conference. They put you up on stage. So what were you hoping to gain from the conference?
Maybe you came here to learn, or have you had any interesting conversations so far?
>> You know, what I'm always excited about at these conferences is that the open-source community is one that is so incredibly adaptive and innovative, so we're always out there looking to see where the world is going. By doing that, we're learning - because again, where the software goes, we want to make sure that the hardware that supports it is there to meet their needs. So today, we're learning about new frameworks coming out, the next Spark on the roadmap, what they're looking at doing. I expect that we'll hear a little more about scripting languages as well. All of that is just fantastic, because I've come to expect a lot of innovation but am still impressed by the amount of it. So it's good to be in the right place, and as we approach things from an Intel point of view, we know we approach it as a portfolio solution set. It's not just silicon, it's not just an accelerator, it's from the hardware through the software solution. So we know that we can really help to accelerate and usher in the next compute paradigm. So this has been fun.
>> That would be a great ending, but I've got to ask you this. When you're sitting in this chair next year at Spark Summit 2018, what do you hope to be talking about?
>> Well, one of the things that we're looking at and talking about is this massive amount of data. I would love to be here next year talking more about the new memory technologies that are coming out that allow for tremendously more storage at incredible speeds, better SSDs, and how they will impact the performance of the overall solution. And of course, we're going to continue to accelerate our processing cores and accelerators for unique capabilities. I want to come back in and say, "Wow, what did we 10x this year?" That's always fun. It's a great challenge to the engineering team, who just heard that and said, "Ugh, he's starting off with 10x again?" (laughs)
>> Great, Michael. That's a great wrap-up, too. We appreciate you coming on and sharing with theCUBE audience the exciting things happening at Intel with Spark.
>> Well, thank you for the time. I really appreciate it.
>> All right, and thank you all for joining us for this segment. We'll be back with more guests in just a few. You're watching theCUBE. (electronic music)

Published Date : Jun 6 2017

SUMMARY :

Michael Greene, VP of Intel's Software and Services Group, joins theCUBE at Spark Summit 2017 after delivering a keynote on BigDL, Intel's open-source deep learning framework designed to scale AI workloads on existing Spark and Xeon infrastructure. He traces Intel's pattern of folding specialized acceleration (the 387, Altera FPGAs, Nervana) into general purpose silicon, cites roughly 12x growth in AI workloads by 2020, discusses the split between training in the cloud and inference at the edge, describes what 5G-connected edge devices will need, and says he hopes to return next year talking about new memory technologies and another 10x gain.

Ziya Ma, Intel Corporation | WiDS 2018


 

>> Announcer: Live from Stanford University in Palo Alto, California, it's theCUBE. Covering the Women in Data Science Conference 2018. Brought to you by Stanford.
>> Welcome back to theCUBE. We are live at Stanford University for the third annual Women in Data Science Conference, hashtag WiDS2018. Participate in the conversation; you're going to see people at WiDS events in over 177 regions in over 53 countries. This event is aiming to reach about 100,000 people in the next couple of days, which in its third year is remarkable. It's aimed at inspiring and educating data scientists worldwide and of course supporting women in the field. It's also got keynotes, technical vision tracks, and a career panel. And we're excited to welcome back to theCUBE a CUBE alumna, Ziya Ma, Vice President of the Software and Services Group and Director of Big Data Technologies at Intel. Ziya, welcome back to theCUBE.
>> Thanks for having me, Lisa.
>> This is your first time coming to a WiDS event in person and your first year here. You are on the career panel.
>> Yes.
>> That's pretty cool. You just came from that career panel, so tell us about it. What were some of the things that excited you? What are some of the things that surprised you in what you heard at that panel?
>> So I think one thing that was really exciting is to see the passion from the audience, so many women excited about data science and about the future of what data science can bring. That's the most exciting part. And also, it's very exciting to get connected with so many women professionals. And in terms of, you know, surprise? I think it's a good surprise to see so much advancement in women's development in data science. Comparing where we are with where we were two years ago, it's great to see so many women speakers and leaders talking about their work in the data science space, applying data science to solve real business problems, transportation problems, education and healthcare problems. I think that's the happy surprise, you know, the fast advancement of women's development in this field.
>> What were some of the things that you shared, maybe recommendations or advice? You've been in industry for a long time. You've been at Intel for quite a long time. What were some of the things that you felt were important to share with the audience, those in person here at Stanford, which is about 400 plus, and those watching the live stream?
>> Yeah, you know, Lisa, I actually provide career coaching for many women professionals at Intel and also from the industry. And a lot of them have expressed an interest in getting into the data science field. And they ask me, what is the skillset that I need to develop in order to get into this field? I think first, you need to ask yourself what kind of job you want to get into in this field. You know, there are marketing jobs, there are sales jobs. And even for technical jobs, there are data engineering types of jobs, data visualization, statistician, data scientist, or AI engineer, machine learning, deep learning engineer. So you have to ask yourself what kind of job you want to move to, and then assess your skillset gap and work to close that gap. Another piece of advice I give to many women professionals is that data science appears to have a high bar today, and it may be too significant a jump to move from where you are straight into a data science role. You may want to move to an adjacent field first.
And to get a sense of what it is like to work in the data science field, and also have more insight into what's going on, and then to better prepare you for eventually moving into this field.
>> Great advice, and I think one of the things that jumped out at me was that you talked about skillsets. And we often hear a lot about the technical skills, right, that are essential for a data scientist. But there are also softer skills, maybe it's more left brain, right brain: creativity, empathy, communication. Tell me, in your ascension to now the VP level at Intel, as data science as a field grows and infiltrates everything, what are some of those softer skills, besides the technical skills, that you think are really advantageous?
>> Great question. I think openness and collaboration are very important soft skills. Because as a data scientist, you extract business insights from the data, but you cannot work alone. You have to work with the data engineering team, who prepare the data infrastructure and store and manage the data very efficiently for you to consume. You also have to work with domain experts. Let's say you are applying data science solutions to solve a real business problem, say in the medical field. You need to work with a domain expert from the medical field so that you can tailor your solution towards, you know, addressing some medical problems. So you need to work with that domain expert who knows the business operations and processes in the medical field really, really well. So I think that's, you know, collaboration is key. And of course you also want to collaborate, maybe with academia and the open source community, where a lot of real innovation is happening, and you want to leverage the latest technology building blocks so that you can accelerate your data science application or solution advancement. So collaboration and openness are the key.
>> Openness is a great one. I'm glad that you brought that up. We had another guest on talking about that earlier. In terms of being open, one, to not expecting - you know, in the scientific method, you go into it with a hypothesis and you think you know what you're going to find, or you want to know, I want to find this. And you might not, and being open to going, okay, that's okay, I'm going to course correct. 'Cause failure in this sense is not a bad F word. But also being open to other opinions, other perspectives. That seems to be kind of a theme that we're hearing more about today: be willing to be open-minded.
>> You know, that's an excellent point, Lisa. I can share one example. Coming from an engineering background, when I first moved into this field, we always had the assumption that when we talk with our customers, they must be looking for something that's high performance. So our initial discussions with our customers centered around the Intel product lineup that would give you the highest performance for deep learning training or for an analytics solution. But as we went deeper into the discussion, we realized that's not what customers are looking for in many cases. The fact is that many of them have collected a massive amount of data over the years. They have built analytics applications and keep adding on top of that. And as the data representations get more complex, they want to extract more complex insights.
That's when they want to apply deep learning, but on top of the existing application infrastructure. So they're looking for something, let's say deep learning capability, that can be easily integrated into the existing analytics solution stack and its existing infrastructure, and that reuses that infrastructure for a lower cost of ownership. That's what they are looking for, and high performance is just nice to have. So once we were open-minded to that learning, it totally changed the conversation. Actually, in the last couple of years, we applied that learning and have collaborated with top cloud service providers like Amazon, Microsoft, Google, and, you know, Alibaba and Baidu and a few others to deploy Intel-based deep learning capabilities, libraries and frameworks, into the cloud so that, you know, more businesses and individuals can have access. But again, it's that openness. You truly need to understand what problem you are solving before simply selling a technology.
>> Absolutely, and that's one of the best examples of openness, in this case listening to customers. We think we know the problem that we need to solve, and they're telling you, actually, it's not that. It's a nice-to-have, and you go, whoa, that changes everything! And it also changes, it sounds like, the downstream collaboration that Intel knew it needed to have in order to drive its business forward and help its customers in every industry do the same thing.
>> Exactly, exactly.
>> So a couple of things that I'd love to get your perspective on, starting with the culture at Intel. You've been there a long time. What is that culture like in terms of maybe fueling, or being a nice opportunity for bringing in, this diversity that we so need in every industry?
>> Yeah, you know, one thing I want to share, actually, is something I shared just now during the panel discussion. I said Intel will be the first high tech company to achieve full representation of women and under-represented minorities by the end of this year.
>> Wow, by the end of 2018?
>> Yes, we pulled in our timeline by two years. Yes, we're well on track for this year.
>> Wow.
>> To achieve that. And personally, I like this quote from Brian Krzanich, our CEO: if we want tech to define the future, we must be representative of that future. So in the last few years, Intel has put great effort into hiring and retention for diversity. We also have put great effort into inclusion. We want to make sure our employees, every one of them, come to work and bring their full selves for the value add. We also invest in diverse entrepreneurs through Intel Capital initiatives. And most importantly, we also partner with academia and universities to build the pipeline for the tech sector. So we put a lot of effort in, and we committed about $300 million toward closing the gap at the company but also for the high tech sector. So definitely we are very committed to diversity and inclusion. But that doesn't mean that we only focus on this. Of course, we make sure that our people bring the right skillsets and that we hire the most qualified people, you know, to do the job.
>> On the pipeline front, one of the things I was reading recently covered some of the missteps organizations might be taking when they go to, say, college campuses to recruit, if they're trying to bring more women into their organization in STEM roles: don't staff a booth with men, right?
Or have the only women at a recruitment event be the ones handing out swag or taking names. Obviously there are important roles to be had everywhere, but that was one of the things that seemed to be, well, what a simple thing to change. Just flip the model so that the pipeline, to your point, is really fueling what corporations like Intel want to achieve, so that that future is as inclusive and diverse as it should be. The second thing that you mentioned before we went live, from an Intel perspective, is that you were challenged on the talent acquisition front. And so a few years ago, you started the Women in Big Data Forum to solve that problem. Tell us about that, and what have you achieved so far?
>> Great question. So you know, this was three or four years ago. At Intel, because I manage the big data engineering organization within Intel, we were working to hire some diverse talent. So we opened some reqs and we looked at our candidate pool. There were very few women, actually barely any women, in the candidate pool. Again, yes, we always want to hire the most qualified people, but it also does not feel right when you don't even have any diverse candidates in that pool, even though we exhausted all possible options and tried to bring the relevant diverse candidates into the pool. It's very challenging. So then we reached out to a few industry partners to see, was Intel the only company that had this problem, or did they have the same problem? It turned out everyone had the same problem. So yes, people value diversity, they all see the value, but it's very challenging to have a successful recruiting process for diversity. That's when a few of us gathered together and said, maybe there is something that we can do to support a stronger pipeline of women for future hiring. It may take a couple of years, it may take one year, but unless we start doing something today, we're going to be talking about the same problem two years from now.
>> Exactly.
>> So then with sponsorship from our executive team, Doug Fisher, the Intel Software and Services Group GM, and also Michael Greene and a few others, we brought the team together and started to look at networking opportunities and training opportunities. We worked with our industry partners to offer many free training classes, and we also started reaching out to universities to build the pipeline, and especially to motivate female students to get passionate about big data and analytics. So as of now, we have more than 2000 members globally for the forum, and we also have many chapters. We have chapters along the West Coast in the Bay Area, and also on the East Coast. We also have chapters in Europe and Asia, so we're definitely seeing more and more women getting excited about big data and analytics. And also, we have great collaboration with Women in Data Science at Stanford.
Also, kind of going up a funnel if you will, to high school students and starting to encourage them, excite them, and start that motivation track, if you will, even earlier. And I think that is, in terms to your point about we can't do anything if the pipeline isn't there to support it. One of the things that WiDS is aiming to do, and it sounds like what you're doing as well, similar to Women in Big Data Forum at Intel, is let's start creating a pipeline of women that are educated in the technical side and the software softer skill side that are interested and find their passion so that we can help motivate them, that you can do this. The sky's the limit where data science is concerned. >> Absolutely, absolutely. And it's great to see actually everybody recognize the value of building the pipeline and reaching out beyond the university students. Because have to get more and more girls getting into the science and tech sector. And we have to start from young. And I, yeah, totally agree, I think we really need to build our pipeline and a pipeline for our pipeline. >> Yes, exactly. And also that sort of sustaining momentum as women, you know, go in university and study STEM subjects, get into the field. Obviously retention is a big challenge that the tech industry and STEM fields alike have faced. But that retention, that motivation, and I think organizations like this, just with this, you can feel the passion when you walk into this alumni center at Stanford is really key. We thank you so much for carving out some time to share your insights and your career path and your recommendations on theCUBE and wish you continued success at Intel and with Women in Big Data Forum, which I'm sure we'll see you back at WiDS next year. >> Alright, thank you, thanks Lisa. >> Absolutely, my pleasure. We want to thank you, you have been watching theCUBE live from the Women in Data Science Conference 2018. Hashtag WiDS2018, join the conversation, get involved. I'm Lisa Martin from Stanford. Stick around, I'll be right back with John Furrier to do a wrap of the day. (outro electronic music)

Published Date : Mar 6 2018

SUMMARY :

Ziya Ma, Vice President of the Software and Services Group and Director of Big Data Technologies at Intel, joins theCUBE at the third annual Women in Data Science (WiDS) Conference at Stanford. Fresh from the career panel, she offers advice for moving into data science: pick the target role, assess and close the skillset gap, and consider an adjacent field first. She highlights openness and collaboration as essential soft skills, recounts how listening to cloud customers reshaped Intel's approach to deploying deep learning into existing analytics stacks, and describes Intel's diversity commitments and the Women in Big Data Forum she helped found, now with more than 2,000 members worldwide.
