
Search Results for DataWorks Summit 2018:

Vimal Endiran, Global Data Business Group Ecosystem Lead, Accenture @AccentureTech


 

>> Live from San Jose, in the heart of Silicon Valley, it's theCUBE. Covering DataWorks Summit 2018. Brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my cohost James Kobielus. We have with us Vimal Endiran. He is the Global Business Data Group Ecosystem Lead at Accenture. He's coming to us straight from the Motor City. So, welcome Vimal. >> Thank you, thank you Rebecca. Thank you Jim. Looking forward to talking to you for the next ten minutes. >> So, before the cameras were rolling we were talking about data veracity, and how managers can actually know that the data that they're getting, that they're seeing, is trustworthy. What's your take on that right now? >> So, in today's age, data is coming at you at a velocity that you never thought about, right? Today, organizations are gathering data probably in the magnitude of petabytes. This is the new normal. We used to talk about gigs, and now it's petabytes. And the data is coming in the form of images, video files, from the edge, you know, edge devices, sensors, social media and everything. This amount of data is becoming the fuel for the new economy, right? So the companies who can find a way to take advantage of this data, and figure out a way to use it, are going to have a competitive advantage over their competitors. But just because it's coming at that volume and velocity doesn't mean it's useful. The thing is, they have to find a way to make the data trustworthy for the organization, while at the same time it's governed and secured. That's what has to happen. It used to be called data quality: when the data is structured and everything is maintained in SAP or some system, it's good when it comes to you.
But now, you need to take advantage of tools like machine learning and artificial intelligence, combining these algorithms and tool sets with the abilities of people's minds, so that things can happen somewhat automatically while the data stays trustworthy. Accenture is developing offerings around that, and it differs from industry to industry. Some data coming in is only worth something for 15 seconds. After that it has no use, other than understanding how to prevent something from the sensor data. So, we have offerings in place to keep the data trustworthy, governed and secured for an organization to use, and to help the organization get there. That's what we are doing. >> The standard user of your tools, is it a data steward in the traditional sense, or is it a data scientist or data engineer who's trying to, for example, compile a body of training data for use in building and training machine learning models? Do you see those kinds of customers for your data veracity offerings, that customer segment, growing? >> Yes. We see both sides, pretty much all walks of customers in our life. So, you hit the nail on the head, yes. We do see those types, and beyond data scientists you're also getting another set of people: the citizen data scientists. The people-- >> What is that? That's a controversial term. I've used that term on a number of occasions, and a lot of my colleagues and peers, other analysts, bat me down and say, "No, that demeans the profession of data science by calling it..." But you tell me how Accenture's defining that. >> The thing is, it's not demeaning. The fact is, to become a citizen data scientist you need the help of data scientists. Basically, every time, you need to build a model, feed it some data to learn from, and then it has an outcome to put out. So you have a data scientist creating algorithms.
What a citizen data scientist means is, say if I'm not a data scientist, I should be able to take advantage of a model built for my business scenario, feed some data in, whatever I need to feed in, get an output, and that program, that tool, is going to tell me "go do this" or "don't do this," those kinds of things. So I become a data scientist by using a predefined model that's developed by an expert, or the minds of many experts together. Rather than me going and hiring a hundred experts, I go and buy a model and have one person maintain or tweak that model continuously. So, how can I enable that large volume of people by using more models? That's what-- >> If a predictive analytics tool that you license from whatever vendor includes prebuilt machine learning models for particular tasks, does that... Do you, as a user of that tool, become automatically a citizen data scientist, or do you need to do some actual active work with that model or data to live up to the notion of being a citizen data scientist? >> It's a good question. In my mind, I don't want to do it; my job is something else, to make something for the company. So, my job is not creating a model. My job is, I know my sets of data, I want to feed them in, and I want to get the outcome so I can go and increase my profit, increase my sales. That's what I want to do. So I may become a citizen data scientist without even knowing it. I won't even be told that I'm using a model. I will take this set of data, feed it in here, and it's going to tell me something. So, from our data veracity point of view, we have these models built into some of the platforms. That can be a tool from Hortonworks, taking advantage of their data storage tools, or any other... with our own algorithms put in that help you create and maintain the data veracity on a scale of, say, one to five, one being low, five being best, and to maintain it at the five level. So that's the objective of that.
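The workflow Endiran describes, a business user feeding their own data into an expert-built model and acting on its output, can be sketched in a deliberately simplified form. The model logic, field names and threshold below are invented for illustration, not taken from any Accenture offering:

```python
# Hypothetical sketch of the "citizen data scientist" workflow: the analyst
# never builds the model; an expert-built model is loaded and used as a
# black box that returns an action.

def load_prebuilt_model():
    """Stand-in for loading an expert-built model (e.g. from a model store)."""
    # A trivial rule the "experts" encoded: flag accounts whose monthly
    # spend dropped more than 40% as churn risks.
    def model(features):
        drop = 1 - features["spend_this_month"] / features["spend_last_month"]
        return "contact customer" if drop > 0.40 else "no action"
    return model

model = load_prebuilt_model()

# The business user only supplies their own data and reads the action.
account = {"spend_last_month": 1000.0, "spend_this_month": 450.0}
action = model(account)
print(action)  # -> contact customer
```

The point of the pattern is that the interface is the business question (which account, what action), not the algorithm.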
>> So you're democratizing the tools of data science for the rest of us, to solve real business problems. >> Right. >> So, the data veracity aside, you're saying the user of these tools is doing something to manage, to correct or enhance or augment the data that's fed into these prebuilt models to achieve these outcomes? >> Yes. With the augmented data, the feed data and the training data, it comes out with an outcome that says, go do something. It tells you to perform something or not perform something. It's still an action. It comes out with an action to achieve a target. That's what it's going to be. >> You mentioned Hortonworks, and since we are here at DataWorks, the Hortonworks show, tell us a little bit about your relationship with that company. >> Definitely. So Hortonworks is one of our premiere strategic partners. We've been their number one implementation partner for the last two years in a row, implementing their technology across many of our clients. From a partnership point of view, we have jointly developed offerings. What Accenture is best at, what we're very good at, is industry knowledge. So with our industry knowledge and their technology together, what we're doing is creating offerings that we can take to market. For example, we used to have data warehouses using Teradata and older data warehouse technology. They're still good, but at the same time, people also want to take structured and unstructured data, image files, and be able to incorporate them into the existing data warehouses, and get the value out of the whole thing together. That's where Hortonworks' type of tools come into play. So we have developed an offering called Modern Data Warehouse, taking advantage of the legacy systems you have plus this new data coming in together, and immediately you can create an analytics use case to do something. So, we have prebuilt programs and different scripts that take in different types of data.
Moving it into a data lake, a Hortonworks data lake, and then using your existing legacy data, all those together help you create analytics use cases. So we have that, called the data modernization offering; that's one of them. Then we have-- >> So that's a prebuilt model for specific vertical industry requirements, or for a specific business function, predictive analytics, anomaly detection and natural language processing, am I understanding correctly? >> Yes. We have industry-based solutions as well, but also, to begin with, the data supply chain itself, to bring the data into the lake to use it. That's one of the offerings we play-- >> ...Pipeline and prepackaged models and rules and so forth. >> Right, prepackaged data ingestion and transformation, prepackaged to take advantage of the new data sets along with your legacy data. That's one offering, called the data modernization offering. And that goes to cloud, too. We can take it to cloud, Hortonworks in a cloud, whether it's Azure, AWS, any cloud, plus moving the data. So that's one type of offering. Today, actually, we announced another offering jointly with Hortonworks, using the Atlas and Ranger tools to help with GDPR compliance. >> Will you explain what that tool does specifically to help customers with GDPR compliance? Does it work out of the box with Hortonworks Data Steward Studio? >> Well, you can get those answers from my colleagues, who are much more technical on that, but I can tell you functionally what the tool does. >> Okay, please. >> So today, the GDPR is basically a regulation that says you need to know about your personal data, and you control your own destiny for your personal data. You can call the company and say, "Forget about me," if you are an EU resident. Or say, "Modify my data." They have to do it within a certain time frame. If not, they get fined. The fine can be up to 4% of the company's... So it's going to be a very large fine. >> Total revenue, yeah. >> So what we do is, basically, take this tool.
Put it in, and working with Hortonworks' Atlas and Ranger tools, we can go in and scan your data lake at the metadata level and showcase it. Then you know where the personal information about a consumer lies, and now you know everything. Because what used to happen in a legacy situation is, the data originated someplace, somebody takes it and puts it in a system, then somebody else downloads it to an Excel file, somebody puts it in an Access database, these kinds of things. So now your data is spread all across, and you don't know where it lies. In this case, in the lake, we can scan it and capture this information, the metadata and the lineage information. Now, you immediately know where the data lies when somebody calls. Rebecca calls and says, "No longer use my information." I know exactly that it's stored in this place, in this table, in this column; let me go and take it out from there so that Rebecca doesn't exist anymore. Or whoever doesn't exist anymore. So that's the idea behind it. Also, we can catalog the entire data lake, so we know not just personal information but other information, everything about other dimensions as well. And we can use it for our business advantage. So that's what we announced today. >> We're almost out of time, but I want to finally ask you about talent, because this is a pressing issue in Silicon Valley and beyond, in really the whole tech industry: finding the right people, putting them in the right jobs and then keeping them happy there. So recruiting, retaining, what's Accenture's approach? >> This area, talent, is the hardest one. >> Yes! >> Thanks to Hortonworks, from a Hortonworks point of view-- >> Send them to Detroit, where the housing is far less expensive. >> Not a bad idea. >> Exactly! But the fact is-- >> We're both Detroiters. >> What we did was, Accenture has access to Hortonworks University, all their educational assets.
So we decided we're going to take advantage of that, and we're going to enhance our talent by retraining our people, taking them to the new world. People who know the legacy data aspects, we take them and show them the new world. So we have a plan to use Hortonworks University together, the materials and their people's help, and together we're going to train about 500 people in different geos, 500 apiece, and also at our development centers in India, the Philippines, these places. So we have a larger plan to retrain the legacy into the new. And let's also go and get people right out of college and start building them from there, from an analyst to a consultant to a technical level. That's the way we are doing it, and actually, in the group I work with, our group technology officer Sanjiv Vohra is basically in charge of training about 90,000 people on different technologies in and around that space. So the demand is high, but that's our approach: go and train people and take them to that level. >> Are you training them to be well-rounded professionals in all things data, or are you training them for specific specialties? >> Very, very good question. We do have this program called the master data architect program. Basically, at the different levels, after these trainings, people have to do so many projects, then come back, have an interview with a panel of people, and get certified, within the company, at a certain level. At the master architect level, you go and help a customer with their data transformation, their architecture vision, where they want to go, at that level. So we have the program with the university, and that's the way we've taken it, step by step, to get people to that level. >> Great. Vimal, thank you so much for coming on theCUBE. >> Thank you. >> It was really fun talking to you. >> Thank you so much, thank you for having me. Thank you.
>> I'm Rebecca Knight for James Kobielus. We will have more, well, we actually will not be having any more coming up from DataWorks. This has been the DataWorks show. Thank you for tuning in. >> Signing off for now. >> And we'll see you next time.
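The GDPR workflow Endiran describes, scanning the lake's metadata so a "forget me" request maps to specific tables and columns instead of a lake-wide hunt, can be sketched roughly as follows. The catalog entries and tags are invented for illustration; the real offering builds this metadata with Apache Atlas and enforces policy with Apache Ranger:

```python
# Hypothetical sketch: locate personal data by scanning catalog metadata
# (not the data itself), producing a concrete worklist for an erasure request.

PII_TAGS = {"name", "email", "phone"}  # assumed classification tags

catalog = [  # invented metadata entries describing the lake's columns
    {"table": "crm.customers", "column": "full_name",   "tags": {"name"}},
    {"table": "crm.customers", "column": "email_addr",  "tags": {"email"}},
    {"table": "sales.orders",  "column": "order_total", "tags": set()},
    {"table": "support.calls", "column": "caller_phone","tags": {"phone"}},
]

def locate_personal_data(catalog, pii_tags=PII_TAGS):
    """Return (table, column) pairs that hold personal data, per the metadata."""
    return [(e["table"], e["column"]) for e in catalog if e["tags"] & pii_tags]

# When "Rebecca" calls, the erasure request becomes a finite worklist.
worklist = locate_personal_data(catalog)
print(worklist)
# -> [('crm.customers', 'full_name'), ('crm.customers', 'email_addr'),
#     ('support.calls', 'caller_phone')]
```

Non-tagged columns, like the order total here, never enter the erasure path, which is why keeping the tags accurate is the hard part of the real system.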

Published Date : Jun 21 2018


Scott Gnau, Hortonworks | DataWorks Summit 2018


 

>> Live from San Jose, in the heart of Silicon Valley, it's theCUBE. Covering DataWorks Summit 2018. Brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks Summit here in San Jose, California. I'm your host, Rebecca Knight, along with my cohost James Kobielus. We're joined by Scott Gnau, he is the chief technology officer at Hortonworks. Welcome back to theCUBE, Scott. >> Great to be here. >> It's always fun to have you on the show. So, you have really spent your entire career in the data industry. I want to start off at 10,000 feet, and just have you talk about where we are now, in terms of customer attitudes, in terms of the industry, in terms of how customers feel, how they're dealing with their data and how they're thinking about their approach in their business strategy. >> Well, I have to say, 30-plus years ago, starting in the data field, it wasn't as exciting as it is today. Of course, I always found it very exciting. >> Exciting means nerve-wracking. Keep going. >> Or nerve-wracking. But you know, we've been predicting it. I remember even, you know, 10, 15 years ago, before big data was a thing, it's like, oh, all this data's going to come, and it's going to be, you know, 10x what it is. And we were wrong. It was like 5000x what it is. And I think the really exciting part is that data really used to be relegated, frankly, to big companies as a derivative work of ERP systems, and so on and so forth. And while that's very interesting, and certainly enabled a whole level of productivity for industry, when you compare that to all of the data flying around everywhere today, whether it be Twitter feeds or even doing live polls, like we did in the opening session today, data is just being created everywhere. And the same thing applies to that data that applied to the ERP data of old. And that is, being able to harness, manage and understand that data is a new business-creating opportunity.
And you know, we were with some analysts the other day, and I think one of the more quoted things that came out of it, when I was speaking with them, was really this: like railroads and shipping in the 1800s, and oil in the 1900s, data really is the wealth creator of this century. And so that creates a very nerve-wracking environment. It also creates a very agile environment, and very important technological breakthroughs that enable those things to be turned into wealth. >> So thinking about that, in terms of where we are at this point in time: on the main stage this morning, someone likened it to the interstate highway system, which really revolutionized transportation, but also commerce. >> I love that, actually. I may steal it in some of my future presentations. >> That's good, but we'll know where you pilfered it. >> Well, perhaps if data is oil, then the edge, in containerized applications piping data, you know, microbursts of data across the internet of things, is sort of like the new fracking. You know, you're being able to extract more of this precious resource from the territory. >> Hopefully not quite as damaging to the environment. >> Maybe not. I'm sorry to environmentalists, if I just offended you, I apologize. >> But I think, you know, all of those analogies are very true, and I particularly like the interstate one this morning. Because when I think about what we've done in our core HDP platform, and I know Arun was here talking about all the great advances that we built into this, the kind of core Hadoop platform. Very traditional: store data, analyze data, but also bring in new kinds of algorithms, rapid innovation and so on. That's really great, but that's kind of half of the story. In a device-connected world, in a consumer-centric world, capturing data at the edge, moving and processing data at the edge, is the new normal, right?
And so just like the interstate highway system actually created new ways of commerce, because we could move people and things more efficiently, moving data and processing data more efficiently is kind of the second part of the opportunity that we have in this new deluge of data. And that's really where we've been with our Hortonworks DataFlow. And really saying that the complete package of managing data, from origination at the edge all the way through analytics to a decision that's triggered back at the edge, is like the holy grail, right? And building a technology for that footprint is why I'm certainly excited today. It's not the caffeine, it's just the opportunity of making all of that work. >> You know, one of the, I think, key announcements for me at this show, that you guys made on HDP 3.0, was containerization of more of the capabilities of your distributed environment, so that these capabilities, in terms of processing, first of all, capturing, analyzing and moving that data, can be pushed closer to the end points. Can you speak a bit, Scott, about this new capability, or this containerization support? Within HDP 3.0, but really in your broader portfolio, and where you're going with that in terms of addressing edge applications, perhaps autonomous vehicles, or, you know, whatever you might put into a new smartphone or whatever you put at the edge. Describe the potential of containerization to sort of break this ecosystem wide open. >> Yeah, I think there are a couple of aspects to containerization, and by the way, we're like so excited about kind of the cloud-first, containerized HDP 3.0 that we launched here today. There's a lot of great tech that our customers have been clamoring for that they can take advantage of. And it's really just the beginning, which again is part of the excitement of being in the technology space, and certainly being part of Hortonworks. So containerization affords a couple of things. Certainly, agility. Agility in deploying applications.
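The origination-to-decision loop Gnau describes (data captured at the edge, processed, and a decision triggered back at the edge) can be sketched as a chain of stages. The stage names, fields and threshold here are invented for illustration; a production version would run on something like Hortonworks DataFlow (Apache NiFi) rather than plain Python functions:

```python
# Deliberately simplified sketch of an edge-to-decision pipeline:
# originate (edge) -> process (analytics) -> decide (action routed back).

def originate(sensor_readings):
    """Edge stage: tag raw readings with their source."""
    return [{"source": "sensor-7", "value": v} for v in sensor_readings]

def process(events):
    """Processing stage: derive the signal the decision needs."""
    return [{**e, "overheated": e["value"] > 80.0} for e in events]

def decide(events):
    """Decision stage: the action triggered back at the edge."""
    return "throttle device" if any(e["overheated"] for e in events) else "no action"

decision = decide(process(originate([76.0, 83.5, 79.2])))
print(decision)  # -> throttle device
```

The value of a flow engine over hand-written chains like this is exactly what the interview stresses: managing movement, back-pressure and provenance between the stages.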
So, you know, for 30 years we've built these enterprise software stacks that were very integrated, hugely complicated systems that could bring together multiple different applications, different workloads, and manage all that in a multi-tenancy kind of environment. And that was because we had to do that, right? Servers were getting bigger, they were more powerful, but not particularly well distributed. Obviously in a containerized world, you now turn that whole paradigm on its head and you say, you know what? I'm just going to collect these three microservices that I need to do this job. I can isolate them. I can have them run in a serverless technology. I can actually allocate cloud servers to go run, and when they're done they go away. And I don't pay for them anymore. So thinking about that from a software development, deployment and implementation perspective, there are huge implications, but the real value for customers is agility, right? I don't have to wait until next year to upgrade my enterprise software stack to take advantage of this new algorithm. I can simply isolate it inside of a container, have it run, and have it go away. And get the answer, right? And so when I think about, and a number of our keynotes this morning were talking about, just kind of the exponential rate of change, this is really the net new norm. Because the only way we can do things faster is, in fact, to be able to provide this. >> And it's not just microservices. Also orchestrating them through Kubernetes and so forth, so they can be. >> Sure. That's the how versus, yeah. >> Quickly deployed as an ensemble and then quickly de-provisioned when you don't need them anymore. >> Yeah, so then there's obviously the cost aspect, right? >> Yeah. >> So if you're going to run a whole bunch of stuff, or even if you have something as mundane as a really big merge join inside of Hive.
Let me spin up a thousand extra containers to go do that big thing, and then have them go away when it's done. >> And oh, by the way, you'll be deployed on. >> And only pay for it while I'm using it. >> And then you can possibly distribute those containers across different public clouds, depending on what's most cost-effective at any point in time, Azure or AWS or whatever it might be. >> And I tease with Arun that, you know, the only thing we haven't solved for is the speed of light, but we're working on it. >> In talking about how this warp-speed change is the new norm, can you talk about some of the most exciting use cases you've seen, in terms of the customers and clients that are using Hortonworks in the coolest ways? >> Well, I mean, obviously autonomous vehicles is one that has captured all of our imagination, 'cause we understand how that works. But it's a perfect use case for this kind of technology. The technology also applies in fraud detection and prevention. It applies in healthcare management, in proactive personalized medicine delivery, and in generating better outcomes for treatment. So, you know, all across. >> It will be in every aspect of our lives, including the consumer realm, increasingly, yeah. >> Yeah, all across the board. And you know, one of the things that really changed, right, is, well, a couple things. A lot of bandwidth, so you can start to connect these things. The devices themselves are particularly smart, so you no longer have to transfer all the data to a mainframe and then wait three weeks, sorry, wait three weeks for your answer and then come back. You can have analytic models running on an edge device. And think about it, you know, that is really real time. And that actually kind of solves for the speed of light, 'cause you're not waiting for those things to go back and forth.
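The burst-and-release economics behind the thousand-container example can be illustrated with a back-of-the-envelope calculation. The hourly rate below is invented, not a real cloud price:

```python
# Rough sketch of "only pay for it while I'm using it": burst capacity for
# one big job versus keeping the same capacity provisioned all month.

RATE_PER_CONTAINER_HOUR = 0.05  # assumed price, for illustration only

def burst_cost(containers, hours):
    """Cost when containers exist only for the job's duration."""
    return containers * hours * RATE_PER_CONTAINER_HOUR

def always_on_cost(containers, hours_in_month=730):
    """Cost of keeping the same capacity provisioned all month."""
    return containers * hours_in_month * RATE_PER_CONTAINER_HOUR

# A thousand-container merge join that runs for two hours:
print(round(burst_cost(1000, 2), 2))   # -> 100.0
print(round(always_on_cost(1000), 2))  # -> 36500.0
```

Whatever the real rates, the ratio is the point: ephemeral containers turn a capacity-planning problem into a per-job expense.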
So there are a lot of new opportunities, and those architectures really depend on some of the core tenets of, ultimately, containerization: stateless application deployment and delivery. And they also depend on the ability to create feedback loops, to do point-to-point and peer kinds of communication between devices. This is a whole new world of how data gets moved and how the decisions around data movement get made. And certainly that's what we're excited about building with the core components. The other implication of all of this, and we've known each other for a long time: data has gravity. Data movement's expensive. It takes time, and frankly, you have to pay for the bandwidth and all that kind of stuff. So being able to play the data where it lies becomes a lot more interesting from an application portability perspective, and with all of these new sensors, devices and applications out there, a lot more data is living its entire lifecycle in the cloud. And so being able to create that connective tissue. >> Or as being as terralexical on the edge. >> And even on the edge. >> And with machine learning, let me just butt in a second. One of the areas that we're focusing on increasingly at Wikibon, in terms of our focus on machine learning at the edge, is that more and more machine learning frameworks are coming into the browser world. JavaScript, for the most part, like TensorFlow.js. You know, more of this inferencing and training is going to happen inside your browser. That blows a lot of people's minds. It may not be heavy-hitting machine learning, but it'll be good enough for a lot of things that people do in their normal life, where you don't want to round-trip back to the cloud. It's all happening right there in, you know, Chrome or whatever you happen to be using. >> Yeah, and so the point being now, you know, when I think about the early days, talking about scalability, I remember shipping my first one-terabyte database. And then the first 10-terabyte database.
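The edge pattern described above, scoring locally so the device is not waiting on a round trip to the cloud, can be sketched as follows. The threshold and readings are invented for illustration:

```python
# Illustrative sketch: run a small model on the device, act on alerts
# locally, and forward only a compact summary upstream, instead of
# round-tripping every reading to the cloud.

EDGE_THRESHOLD = 90.0  # assumed alert level, e.g. degrees C

def process_at_edge(readings):
    """Local decision loop: alert immediately, summarize the rest."""
    alerts = [r for r in readings if r >= EDGE_THRESHOLD]
    summary = {"count": len(readings), "mean": sum(readings) / len(readings)}
    return alerts, summary  # alerts trigger local action; only summary goes up

alerts, summary = process_at_edge([70.0, 95.0, 72.0, 91.0])
print(alerts)            # -> [95.0, 91.0]
print(summary["count"])  # -> 4
```

This is the feedback-loop shape the interview gestures at: the latency-critical decision never leaves the device, and the cloud sees aggregates rather than raw streams.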
Yeah, it doesn't sound very exciting. When I think about scalability of the future, scalability is not going to be defined as petabytes or exabytes under management. It's really going to be defined as petabytes or exabytes affected across a grid of storage and processing devices. And that's a whole new technology paradigm, and really that's kind of the driving force behind what we've been building and what we've been talking about at this conference. >> Excellent. >> So when you're talking about these things, I mean, how much are the companies themselves prepared, and do they have the right kind of talent to use the kinds of insights that you're able to extract, and then act on them in real time? 'Cause you're talking about how this is saving a lot of the waiting-around time. So is this really changing the way business gets done, and do companies have the talent to execute? >> Sure. I mean, it's changing the way business gets done. We showed a quote on stage this morning from the CEO of Marriott, right? So, I think there are a couple of pieces. One is, businesses are increasingly data-driven, and business strategy is increasingly the data strategy. And so it starts from the top, kind of setting that strategy and understanding the value of that asset and how it needs to be leveraged to drive new business. So that's kind of one piece. And you know, obviously there are more and more folks kind of coming to the realization that that is important. The other thing that's been helpful is, you know, as with any new technology there's always kind of a startup shortage of resources while people spool up and learn. The really good news, and for the past 10 years I've been working with a number of different university groups: parents are actually going to universities and demanding that the curriculum include data, and processing, and big data, and all of these technologies.
Because they know that their children, educated in that kind of a world, number one, are going to have a fun job to go to every day, 'cause it's going to be something different every day. But number two, they're going to be employed for life. (laughing) >> Yeah. >> They will be solvent. >> Frankly, the demand has actually created a catch-up in supply that we're seeing. And of course, you know, as tools start to get more mature and more integrated, they also become a little bit easier to use. You know, there's a little bit easier deployment and so on. So it's a combination: I'm seeing a really good supply, and obviously we invest in education through the community. And then frankly, the education system itself, and folks saying this is really the hot job of the next century. You know, I can be the new oil baron. Or I can be the new railroad captain. It's actually creating more supply, which is also very helpful.
I think it's an awesome symbol for a company that's all about data driven intelligence. >> The elephant never forgets. >> Yeah. >> That's what we know. >> That's right we never forget. >> Him forget 'cause he's got a brain. Or she, I'm sorry. He or she has a brain. >> And it's data driven. >> Yeah. >> Thanks very much. >> Great. Well thanks for coming on theCUBE. I'm Rebecca Knight for James Kobielus. We will have more coming up from Dataworks just after this. (upbeat music)

Published Date : Jun 20 2018
