Armon Dadgar, HashiCorp | PagerDuty Summit 2018

(upbeat techno music)
>> From Union Square in downtown San Francisco, it's theCUBE, covering PagerDuty Summit '18. Now, here's Jeff Frick.
>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at PagerDuty Summit in the Westin St. Francis, Union Square, San Francisco. We're excited to have our next guest. This guy likes to get into the weeds. We'll get some into the weeds, not too far into the weeds. Armon Dadgar, he's a co-founder and CTO of HashiCorp. Armon, great to see you.
>> Thanks so much for having me, Jeff.
>> Absolutely. So you're just coming off your session. How did the session go? What did you guys cover?
>> It was super good. I mean, I think what we wanted to do was sort of take a broader look and not just talk too much about monitoring, and so the talk was really about zero trust networking. Sort of the what, the how, the why.
>> Right, right. So that's a very important topic. Did Bitcoin come up, or blockchain? Or are you able to do zero trust with no blockchain?
>> We were able to get through with no blockchain, thankfully, I suppose.
>> Right.
>> But I think, kind of the gist of it, the challenge is it's still sort of at that nascent point where people are like, okay, zero trust networking, I've heard of it, I don't really know what it is or what mental category to put it in. So I think what we tried to do was sort of not get too far in the weeds, as you know I tend to do, but sort of start high level.
>> Right, right.
>> And say, what's the problem, right? And I think the problem is we live in this world today of traditional flat networks where I have a castle and moat, right? I wrap my data center in four walls, all my traffic comes over a drawbridge, and you're either on the outside and you're bad and untrusted, or you're on the inside and you're good and trusted. And so what happens when a bad guy gets in, right?
>> Right.
>> It's sort of this all or nothing model, right?
>> But now we know the bad guys are going to get in, right? It's only a function of time, right?
>> Right, and I think you see it with the Target breach, the Neiman Marcus breach, the Google breach, right? The list sort of goes on, right? It's like, Equifax, right? It's a bad idea to assume they never get in. (laughing)
>> If you assume they get in... so then, if you know the bad guys are going to get in, you've got to bake that security in at all different levels of your applications, your data, all over the place.
>> Exactly.
>> So what are some of the things you guys covered in the session?
>> So I think the core of it is really saying, how do we get to a point where we don't trust our network, where we assume the attacker will get on the network, and then what? How do you design around that assumption, right? And what you really have to do is push identity everywhere, right? So every application has to say, I'm a web server and I'm connecting to a database, and is this allowed, right? Is a web server allowed to talk to the database? And that's really the crux of what Google calls BeyondCorp, what other people call sort of zero trust networking. It's this idea of identity-based access, where I'm saying it's not IP one talking to IP two, it's web server talking to database.
>> Right, right, because then you've got all the roles and rules and everything associated at that identity level?
>> Bingo, exactly.
>> Yeah.
>> Exactly, and I think what's made that very hard historically is, when we say, what do you have at the network? You have IPs and ports. So how do we get to a point where we know one thing is a web server and one thing's a database, right?
>> Right.
>> And I think the crux of the challenge there is kind of three pieces, right? You need application identity. You have to say, this is a web server, this is a database.
You need to distribute certificates to them and say, you get a certificate that says you're a web server, you get a certificate that says you're a database. And you have to enforce that access, right? So everyone can't just randomly talk to each other.
>> Right, well then what about context too, right? Because context is another piece. Maybe somebody takes advantage of and has access to the identity, but is using it in a way, or there's an interaction, that's kind of atypical to the expected behavior; it just doesn't make sense. So context really matters quite a bit as well.
>> Yeah, you're super, super right, and I think this is where it gets into, not only do we need to assign identity to the applications, but how do we tie that back into sort of rich access controls of who's allowed to do what? Audit trails of, okay, it seems odd, this web server that never connects to this database is suddenly, out of the blue, doing so. Why?
>> Right, right.
>> And do we need to react to it? Do we need to change the rule? Do we need to investigate what's going on?
>> Right.
>> But you're right. It's like, that context is important, of what's expected versus what's unexpected.
>> Right, then you have this other X factor called shared infrastructure and hybrid cloud. I've got apps running on AWS, I've got apps running at Google, I've got apps running at Microsoft, I've got apps running in the database, I've got some dev here, I've got some prod here. You know, that adds another little X factor to the zero trust. (laughing)
>> Yeah, I think I aptly heard it called once, we have a service mess on our hands, right? (laughing)
>> Right, right.
>> We have this stuff sort of sprawled everywhere now. How do we wrangle it? How do we get our hands around it? And so, as much as service mess is a play on the language, I think this is where that emerging category of service mesh does make sense.
>> Right.
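The three pieces Armon lays out above (application identity, certificate distribution, and enforcement) reduce, at their core, to a default-deny policy keyed on identities rather than IPs. The toy sketch below illustrates only that idea; it is not Consul's or BeyondCorp's actual API, and the service names are hypothetical:

```python
# Toy sketch of identity-based (zero trust) service authorization.
# Rules are expressed as (source identity, destination identity) pairs,
# never as IP addresses. Illustrative only, not a real product API.

ALLOWED = {
    ("web-server", "database"),
    ("web-server", "cache"),
}

def is_allowed(source_identity: str, dest_identity: str) -> bool:
    """Default-deny: a connection is permitted only if an explicit
    identity-to-identity rule exists."""
    return (source_identity, dest_identity) in ALLOWED

# Any number of machines can carry the "web-server" identity;
# the policy does not change when IPs or instance counts do.
print(is_allowed("web-server", "database"))  # True
print(is_allowed("batch-job", "database"))   # False: no rule, so denied
```

In a real deployment, the identity would be proven by a certificate issued to the workload, and enforcement would happen in a proxy or network layer rather than in application code.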
>> It's really looking at that and saying, okay, I'm going to have stuff in private cloud, public cloud, maybe multiple public cloud providers. How do I treat all of that in a uniform way? I want to know what's running where. I want to have rules around who can talk to who.
>> Right.
>> And that's a big focus for us with Consul, in terms of, how do we have a consistent way of knowing what's running where, a consistent set of rules around who can talk to who.
>> Right.
>> And do it across all these hybrid environments, right?
>> Right, right, but wait, don't buy it yet, there's more. (laughing) Because then I've got all the APIs, right? So now you've got all this application integration, much of which is with cloud-based applications. So now you've got that complexity, and you're pulling all these bits and connections from different infrastructures, different applications, some in house, some outside. So how do you bring some organization to that madness?
>> No, that's a super good question. If you ever want a role change, take a look at our marketing department, you've got this down. (laughing) You know, I would say what it comes down to is that heterogeneity is going to be fundamental, right? You're going to have folks that are going to operate different tools, different technologies, for whatever reasons, right? Might be a historical choice, might be just that they have better relations with a particular vendor. So our view has been, how do you interop with all these things? Part of it is a focus on open source. Part of it is a focus on being API driven. Part of it is a focus on doing API integrations with all these systems, because you're never going to get the end user to standardize everything on a single platform.
>> Right, right. It's funny, we were at a show talking about RPA, robotic process automation, and they treat those processes as employees, in the fact that they give them identities.
>> Right.
>> So they can manage them.
You hire them, you turn 'em on, they work for you for a while, and then you might want to turn them off after they're done doing whatever you've put them in place for. But literally, they were treating them as an employee.
>> Right.
>> Treating them with an employee-like identity that could have all the assigned rules and restrictions, to then let the RPA do what it was supposed to do. It's an interesting concept.
>> Yeah, and I think it mirrors what we see in a lot of different spaces, which is that what we were maybe managing before was a very physical thing. Maybe we called it Robot 1234, right? Or, in the same way, we might say, this is server at IP 1234.
>> Right.
>> On our network. And so we're managing this really physical unit, whether it's an IP, a machine, a serial number. How do we take up the level of abstraction and instead say, you know, actually all of these machines, whether IP one, IP two, IP three, they're a web server, and whether it's robots one, two or three, they're a door attach, right?
>> Right, right.
>> And so now we start talking about identity, and it gives us this more powerful abstraction to sort of talk about these underlying bits.
>> Right.
>> And I think it sort of follows the history of everything, right? Which is, how do we add new layers of abstraction that let us manage the complexity that we have?
>> Right, right. So it's interesting, right? In Ray Kurzweil's keynote earlier today, hopefully you saw that, he talked about, basically, exponential curves, and that's really what we're facing. So the amount of data, the amount of complexity, is only going to increase dramatically. We're trying to virtualize so much of this and abstract it away, but then that adds a different layer of management. At the same time, you're going to have a lot more horsepower to work with on the compute side. So is it kind of like the old Wintel thing? I got a faster PC, and it's getting eaten up by more Windows?
I mean, do you see the automation being able to keep up with kind of the increasing layers of abstraction?
>> Yeah, I mean, I think there's a grain of that. Just because we're getting access to more resources, are we using them more efficiently? I think there's some fairness in saying that with each layer of abstraction, we're introducing additional performance cost. But I think overall, what we might be doing is increasing the amount of compute tenfold but adding a 5% additional management fee. So I think net-net, we're still able to do much more productive work, go to much bigger scale, but only if you have the right abstractions, right? And I think that's where this kind of stuff comes in. Okay, great, I'm going to have 10 times as many machines. How do I deal with the fact that my current security model barely works at my current scale? How do I go to 10x the scale? Or if I'm pointing and clicking to provision a machine, how does that work when I'm going to manage a thousand machines, right?
>> Yeah.
>> You have to bring in additional tooling and automation, and sort of think about it at the next higher level.
>> Yeah.
>> And I think that's all part of this process of adopting cloud and sort of getting that leverage.
>> It's so interesting, just the whole scale discussion, because at the end of the day, right, scale wins. And there's a great interview with James Hamilton from AWS, and it's old, but he's talking about scale, and he talks about how many servers were sold in whatever calendar year it was versus how many mobile phones were sold, and it's many orders of magnitude different. And the fact that he's thinking in terms of these types of scales, as opposed to what was a big number on the server sales side... The scale challenge introduced by these giant clouds and Facebook and the like really changed the game fundamentally in how you manage these things.
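Armon's back-of-the-envelope arithmetic above can be written out. The figures (tenfold compute, a 5% management overhead) are his illustrative numbers, not measurements:

```python
# Back-of-the-envelope math: a tenfold compute increase minus a 5%
# management/abstraction overhead is still a large net win.
# The figures are illustrative, not benchmarks.

baseline = 1.0
scaled = baseline * 10          # ten times as many machines
overhead = 0.05                 # 5% additional management cost
effective = scaled * (1 - overhead)

print(effective)                # 9.5, i.e. still ~9.5x the useful capacity
```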
>> Totally, totally, and I think that's been our view at HashiCorp. When you talk about kind of the tidal shift of infrastructure, from on premise, relatively static, VMware centric, to AWS plus Azure plus Google plus VMware, it's not just a change of, okay, one server here to one server there. It's like going from one server here to 50 servers that I'm changing every other day rather than every other year, right?
>> Right, right.
>> And so it's this sort of order of magnitude of scale, but also an order of magnitude in terms of sort of the rate of change as well.
>> Right, right.
>> And I think that puts downward pressure on, how do I provision? How do I secure? How do I deploy applications? How do I secure all of this stuff, right?
>> Right.
>> I think every layer of the infrastructure gets hit by this change.
>> Right, right. Alright, so you're a smart guy. You're always looking forward. What are some of the things you're working on down the road? Big challenges that you're looking forward to tackling?
>> Oh, okay, that's fun. I mean, I think the biggest challenge is, how do we get this stuff to be simpler for people to use? Because I think what we're going through is you get this sort of see-saw effect, right? Which is, okay, we're getting access to all this new hardware, all this new compute, all these new APIs, but it's not getting simpler, right?
>> Right, right.
>> It's getting exponentially more complicated.
>> Right, right.
>> And so I think part of it is, how do we go back to looking at what the core drivers are here? We want to make it easier for people to deliver and deploy their applications. Let's go back to, in some sense, the drawing board and say, how do we abstract all of these new goodies that we've been given, but make it consumable and easy to learn? Because otherwise, you know, what's the point? It's like, here's a catalog of 50,000 things and no one knows how to use any of it.
>> Right, right, right. (laughing) Yeah, it's funny, I'm waiting for that next abstraction for AWS, instead of the big giant slide that Andy shows every year. (laughing) It's just that I want to plug in and have you figure out...
>> Right.
>> What connects on the backend. I can't even hardly read that stuff--
>> Maybe AI will save us.
>> Let's hope so. Alright, Armon, well, thanks for taking a few minutes out of your day and sitting down with us.
>> My pleasure, thanks so much, Jeff.
>> Alright, he's Armon, I'm Jeff, you're watching theCUBE. We're at PagerDuty Summit in downtown San Francisco, thanks for watching. (upbeat techno music)

Published Date : Sep 11 2018

Machine Learning Panel | Machine Learning Everywhere 2018

>> Announcer: Live from New York, it's theCUBE. Covering machine learning everywhere. Build your ladder to AI. Brought to you by IBM.
>> Welcome back to New York City. Along with Dave Vellante, I'm John Walls. We continue our coverage here on theCUBE of machine learning everywhere. Build your ladder to AI, IBM our host here today. We put together, occasionally at these events, a panel of esteemed experts with deep perspectives on a particular subject. Today our influencer panel is comprised of three well-known and respected authorities in this space. Glad to have Colin Sumpter here with us. He's the man with the mic, by the way. He's going to talk first. But, Colin is an IT architect with CrowdMole. Thank you for being with us, Colin. Jennifer Shin, those of you on theCUBE, you're very familiar with Jennifer, a long-time Cuber. Founded 8 Path Solutions, on the faculty at NYU and Cal Berkeley. And also with us is Craig Brown, a big data consultant. And a home game for all of you guys, right? More or less, here we are in the city. So, thanks for having us, we appreciate the time. First off, let's just talk about the title of the event, Build Your Path... Or Your Ladder, excuse me, to AI. What are those steps on that ladder, Colin? The fundamental steps that you've got to jump on, or step on, in order to get to that true AI environment?
>> In order to get to that true AI environment, John, it's a matter of mastering or organizing your information well enough to perform analytics. That'll give you two choices: to do either linear regression or supervised classification. And then you actually have enough organized data to talk to your team, and organize your team around that data, to begin that ladder and successively benefit from your data science program.
>> Want to take a stab at it, Jennifer?
>> So, I would say, compute, right?
You need to have the right processing, or at least the ability to scale out, to be able to process the algorithm fast enough to find value in your data. I think the other thing is, of course, the data source itself. Do you have the right data to answer the questions you want to answer? So, I think, without those two things, you'll either have a lot of great data that you can't process in time, or you'll have a great process or a great algorithm that has no real information, so your output is useless. I think those are the fundamental things you really do need to have any sort of AI solution built.
>> I'll take a stab at it from the business side. They have to adopt it first. They have to believe that this is going to benefit them, and that the effort that's necessary in order to build the various aspects of algorithms and data subjects is there. So I think adopting the concept of machine learning, and the development aspects that it takes to do that, is a key component to building the ladder.
>> So this just isn't toe in the water, right? You've got to dive into the deep end, right?
>> Craig: Right.
>> It gets to culture. If you look at most organizations, not the big five market-cap companies, but most organizations, data is not at their core. Humans are at their core, human expertise, and data is sort of bolted on. But that has to change, or they're going to get disrupted. Data has to be at the core; maybe the human expertise leverages that data. What are you guys seeing with end customers in terms of their readiness for this transformation?
>> What I'm seeing customers spending time on right now is getting out of the silos. So, when you speak of culture, that's primarily what the culture surrounds. They develop applications with functionality as a silo, and data specific to that functionality is the lens through which they look at data.
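The two modeling choices Colin mentions, linear regression or supervised classification, can be illustrated with the first one in a few lines. This is a minimal sketch on made-up numbers; a real pipeline would use a library such as scikit-learn:

```python
# Minimal ordinary-least-squares fit (y = slope*x + intercept) in pure
# Python, illustrating the "linear regression" option. Toy data only.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.96 0.15, close to y = 2x
```

The supervised classification option follows the same organize-then-fit shape; only the target (a label instead of a number) and the loss change.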
They have to get out of that mindset and look at the data holistically, and ultimately, in these events, look at it as an asset.
>> The data is a shared resource.
>> Craig: Right, correct.
>> Okay, and again, with the exception of the... Whether it's Google, Facebook, obviously, but the Ubers, the Airbnbs, etc... With the exception of those guys, most customers aren't there. Still, the data is in silos, and they've got myriad infrastructure. Your thoughts, Jennifer?
>> I'm also seeing sort of a disconnect between the operationalizing team, the team that runs this code or has a real business need for it, and the research side. Sometimes you'll see corporations with research teams, and there's sort of a disconnect between what the researchers do and what these operations, or marketing, whatever domain it is, are doing in terms of day-to-day operation. So, for instance, a researcher will look really deep into these algorithms, and may know a lot about deep learning in theory, in a theoretical world, and might publish a paper that's really interesting. But on the application side, where they're actually being used every day, there's this difference there, where you really shouldn't have that difference. There should be more alignment. I think actually aligning those resources... I think companies are struggling with that.
>> So, Colin, we were talking off camera about RPA, Robotic Process Automation. Where's the play for machine intelligence and RPA? Maybe, first of all, you could explain RPA.
>> David, RPA stands for Robotic Process Automation. That's going to enable you to grow and scale a digital workforce. Typically, it's done in the cloud.
The way RPA and Robotic Process Automation plays into machine learning and data science is that it allows you to outsource business processes to compensate for the lack of human expertise that's available in the marketplace, because you need competency to enable the technology to take advantage of these new benefits coming into the market. And when you start automating some of these processes, you can keep pace with the innovation in the marketplace and allow the human expertise to gradually grow into these new data science technologies.
>> So, I was mentioning some of the big guys before. Top five market-cap companies: Google, Amazon, Apple, Facebook, Microsoft, all digital. Microsoft you can argue, but still, pretty digital, pretty data oriented. My question is about closing that gap. In your view, can companies close that gap? How can they close that gap? Are you guys helping companies close that gap? It's a wide chasm, it seems. Thoughts?
>> The thought on closing the chasm is... presenting the technology to the decision-makers. What we've learned is that... you don't know what you don't know, so it's impossible to find the new technologies if you don't have the vocabulary to even begin a simple search for these new technologies. And to close that gap, it really comes down to awareness: events like theCUBE, webinars, different educational opportunities that are available to line-of-business owners, directors, VPs of systems and services, to begin that awareness process, finding consultants... begin that pipeline enablement to begin allowing the business to take advantage of and harness data science, machine learning, and what's coming.
>> One of the things I've noticed is that there's a lot of information out there; everyone has a webinar, everyone has tutorials, but there's a lot of overlap. There aren't that many very sophisticated documents you can find about how to implement it in real world conditions.
A lot of these machine learning tutorials you'll find all tend to use the same core data set, which is hilarious because the data set's actually very small. And I know where it comes from, just from having the expertise, but it's not something I'd ever use in the real world, given the level of skill you need to be able to do any of these methodologies. But that's what's out there. So, there's a lot of information, but it's kind of at a rudimentary level. It's not really at that sophisticated level where you're going to learn enough to deploy in real world conditions. One of the things I'm noticing is, with the technical teams, with the data science teams, machine learning teams, they're kind of using the same methodologies I used maybe 10 years ago, because the managers who manage these teams are not technical enough. They're business people, so they don't understand how to guide them, how to explain, hey, maybe you shouldn't do that with your code, because that's actually going to cause a problem. You should use parallel code, you should make sure everything is running in parallel so compute's faster. But if these younger teams are actually learning for the first time, they make the same mistakes you made 10 years ago. So, I think what I'm noticing is that lack of leadership is partly one of the reasons, and also the assumption that a non-technical person can lead a technical team.
>> So, it's not just skill set on the worker level, if you will. It's also knowledge base on the decision-maker level. That's a bad place to be, right? So, how do you get in the door to a business like that? Obviously, and we've talked about this a little bit today, some companies say, "We're not data companies, we're not digital companies, we sell widgets." Well, yeah, but you sell widgets and you need this to sell more widgets. And so, how do you get in the door and talk about this problem that Jennifer just cited? You're signing the checks, man.
You're going to have to get up to speed on this, otherwise you're not going to have checks to sign in three to five years. You're done!
>> I think that speaks to use cases. I think, and what I'm actually seeing at customers, is that there's a disconnect and an understanding gap between the executive teams and the low-level technical teams on what the use case actually means to the business. Some of the use cases are operational in nature. Some of the use cases are data in nature. There's no real conformity on what the use case means across the organization, and that understanding isn't there. And so, the CIOs, the CEOs, the CTOs think, "Okay, we're going to achieve a certain level of capability if we do a variety of technological things," and the business is looking to effectively improve or bring some efficiency to business processes. At each level within the organization, the understanding is at the level at which the discussions are being made. And so, I'm in these meetings with senior executives, and we have lots of ideas on how we can bring efficiencies and some operational productivity with technology. And then we get in a meeting with the data stewards, and... "What are these guys talking about? They don't understand what's going on at the data level and what data we have." And then that's where the data quality challenges come into the conversation. So I think that, to close that chasm, we have to figure out who needs to be in the room to effectively help us build the right understanding around the use cases, and then bring the technology to those use cases, and then actually see within the organization how we're affecting that.
>> So, to change the questioning here... I want you guys to think about how capable we can make machines in the near term. Let's talk next decade near term. Let's say next decade. How capable can we make machines, and are there limits to what we should do?
>> That's a tough one.
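Jennifer's earlier point about making sure everything runs in parallel so compute is faster can be sketched with Python's standard executor interface. The workload below is hypothetical; a thread pool is used for portability, and CPU-bound Python code would typically use ProcessPoolExecutor instead:

```python
# Sketch of parallelizing an expensive, independent function over many
# inputs with an executor instead of a serial loop. Hypothetical
# workload; CPU-bound Python work would normally use ProcessPoolExecutor.

from concurrent.futures import ThreadPoolExecutor

def expensive(n: int) -> int:
    # Stand-in for a costly, independent computation.
    return sum(i * i for i in range(n))

inputs = [10_000, 20_000, 30_000, 40_000]

serial = [expensive(n) for n in inputs]            # one task at a time

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(expensive, inputs))   # tasks run concurrently

print(serial == parallel)  # True: same results either way
```

The point of the pattern is that each task is independent, so results are identical to the serial loop while wall-clock time can drop as workers are added.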
Although you want to go next decade, we're still faced with some of the challenges today, in terms of, again, that adoption, the use case scenarios, and then what my colleagues are saying here about the various data challenges and dev ops and things. So, there's a number of things that we have to overcome. But if we can get past those areas in the next decade, I don't think there's going to be much of a limit, in my opinion, as to what the technology can do and what we can ask the machines to produce for us. As Colin mentioned, with RPA, I think that the capability is there, right? But can we also, ultimately, as humans, leverage that capability effectively?
>> I get this question a lot. People are really worried about AI and robots taking over, and all of that. And I go... Well, let's think about an example. We've all been online, probably over the weekend, maybe it's 3 or 4 AM, checking your bank account, and you get an error message that your password is wrong. And we swear... And I've been there, where I'm like, "No, no, my password's right." And it keeps saying that the password is wrong. Of course, then I change it, and it's still wrong. Then, the next day when I log in, I can log in, same password, because they didn't put a great error message there. They just defaulted to "wrong password" when it's probably a server that's down. So, there are these basic processes that we could be improving which no one's improving. So you think, in that example, how many customer service reps are going to be contacted to try to address that? How many IT teams? So, for every one of these bad technologies that are out there, or technologies that are not being run efficiently or in a way that makes sense, you actually have maybe three people that are going to be contacted to try to resolve an issue that maybe could have been avoided to begin with.
I feel like it's optimistic to say that robots are going to take over, because you're probably going to need more people to put band-aids on bad technology and bad engineering, frankly. And I think that's the reality of it. If we had hoverboards, that would be great, you know? For a while, we thought we did, right? But we found out, oh, it's not quite hoverboards. I feel like that might be what happens with AI. We might think we have it, and then go, oh wait, it's not really what we thought it was.
>> So there are real limits, certainly in the near to mid to maybe even long term, that are imposed. But you're an optimist.
>> Yeah. Well, not so much with AI, but everything else, sure. (laughing) AI, I'm a little bit like, "Well, it would be great, but I'd like basic things to be taken care of every day." So, I think the usefulness of technology is not something anyone's talking about. They're talking about this advancement, that advancement, things people don't understand, don't even know how to use in their life. Great, great is an idea. But what about useful things we can actually use in our real life?
>> So block and tackle first, and then put some reverses in later, if you will, to switch over to football. We were talking about it earlier, just about basics. Fundamentals, get your fundamentals right, and then you can complement that with supplementary technologies. Craig, Colin?
>> Jen made some really good points and brought up some very good points, and so has...
>> John: Craig.
>> Craig, I'm sorry. (laughing)
>> Craig: It's alright.
>> 10 years out, Jen and Craig spoke to false positives. And false positives create a lot of inefficiency in businesses. So, when you start using machine learning and AI 10 years from now, maybe there are reduced false positives that have been scored in real time, allowing teams not to have their time consumed and their business resources consumed trying to resolve false positives.
These false positives have a business value that, today, some businesses might not be able to record. In financial services, banks count money not lent. But, in everyday business, a lot of businesses aren't counting the monetary consequences of false positives and the drag it has on their operational ability and capacity. >> I want to ask you guys about disruption. If you look at where the disruption, the digital disruptions, have taken place, obviously retail, certainly advertising, certainly content businesses... There are some industries that haven't been highly disrupted: financial services, insurance, we were talking earlier about aerospace, defense rather. Is any business, any industry, safe from digital disruption? >> There are. Certain industries are just highly regulated: healthcare, financial services, real estate, transactional law... These are very extremely regulated technologies, or businesses, that are... I don't want to say susceptible to technology, but they can be disrupted at a basic level, operational efficiency, to make these things happen, these business processes happen more rapidly, more accurately.
The reality is, I think because there's a demand for having things be digital, we aren't likely to see a decrease in that. We're not going to have one industry that goes, "Oh, your files aren't digital." Probably because they also want to be digital. The companies themselves, the employees themselves, want to see that change. So, I think there's going to be this continuous move toward it, but there's the question of, "Are we doing it better?" Is it better than, say, having it on paper sometimes? Because sometimes I just feel like it's easier on paper than to have to look through my phone, look through the app. There's so many apps now! >> (laughing) I got my index cards still, Jennifer! Dave's got his notebook! >> I'm not sure I want my ledger to be on paper... >> Right! So I think that's going to be an interesting thing when people take a step back and go like, "Is this really better? Is this actually an improvement?" Because I don't think all things are better digital. >> That's a great question. Will the world be a better, more prosperous place... Uncertain. Your thoughts? >> I think the competition is probably the driver as to who has to do this now, who's not safe. The organizations that are heavily regulated or compliance-driven can actually use that as the reasoning for not jumping into the barrel right now, and letting it happen in other areas first, watching the technology mature-- >> Dave: Let's wait. >> Yeah, let's wait, because that's traditionally how they-- >> Dave: Good strategy in your opinion? >> It depends on the entity but I think there's nothing wrong with being safe. There's nothing wrong with waiting for a variety of innovations to mature. What level of maturity, I think, is the perspective that probably is another discussion for another day, but I think that it's okay. I don't think that everyone should jump in. Get some lessons learned, watch how the other guys do it. I think that safety is in the eyes of the beholder, right?
But some organizations are just fiercely competitive and they need a competitive edge and this is where they get it. >> When you say safety, do you mean safety in making decisions, or do you mean safety in protecting data? How are you defining safety? >> Safety in terms of when they need to launch, and look into these new technologies as a basis for change within the organization. >> What about the other side of that point? There's so much more data about it, so much more behavior about it, so many more attitudes, so on and so forth. And there are privacy issues and security issues and all that... Those are real challenges for any company, and becoming exponentially more important as more is at stake. So, how do companies address that? That's got to be absolutely part of their equation, as they decide what these future deployments are, because they're going to have great, vast reams of data, but that's a lot of vulnerability too, isn't it? >> It's as vulnerable as they... So, from an organizational standpoint, they're accustomed to these... These challenges aren't new, right? We still see data breaches. >> They're bigger now, right? >> They're bigger, but we still occasionally see data breaches in organizations where we don't expect to see them. I think that, from that perspective, it's the experiences of the organizations that determine the risks they want to take on, to a certain degree. And then, based on those risks, and how they handle adversity within those risks, from an experience standpoint they know ultimately how to handle it, and get themselves to a place where they can figure out what happened and then fix the issues. And then the others watch while these risk-takers take on these types of scenarios. >> I want to underscore this whole disruption thing and ask... We don't have much time, I know we're going a little over. I want to ask you to pull out your Hubble telescopes.
Let's make a 20 to 30 year view, so we're safe, because we know we're going to be wrong. I want a sort of scale of 1 to 10, high likelihood being 10, low being 1. Maybe sort of rapid fire. Do you think large retail stores are going to mostly disappear? What do you guys think? >> I think the way that they are structured, the way that they interact with their customers might change, but you're still going to need them because there are going to be times where you need to buy something. >> So, six, seven, something like that? Is that kind of consensus, or do you feel differently Colin? >> I feel retail's going to be around, especially fashion because certain people, and myself included, I need to try my clothes on. So, you need a location to go to, a physical location to actually feel the material, experience the material. >> Alright, so we kind of have a consensus there. It's probably no. How about driving-- >> I was going to say, Amazon opened a book store. Just saying, it's kind of funny because they got... And they opened the book store, so you know, I think what happens is people forget over time, they go, "It's a new idea." It's not so much a new idea. >> I heard a rumor the other day that their next big acquisition was going to be, not Neiman Marcus. What's the other high end retailer? >> Nordstrom? >> Nordstrom, yeah. And my wife said, "Bad idea, they'll ruin it." Will driving and owning your own car become an exception? >> Driving and owning your own car... >> Dave: 30 years now, we're talking. >> 30 years... Sure, I think the concept is there. I think that we're looking at that. IOT is moving us in that direction. 5G is around the corner. So, I think the makings of it is there. So, since I can dare to be wrong, yeah I think-- >> We'll be on 10G by then anyway, so-- >> Automobiles really haven't been disrupted, the car industry. But you're forecasting, I would tend to agree. Do you guys agree or no, or do you think that culturally I want to drive my own car? 
>> Yeah, I think people, I think a couple of things. How well engineered is it? Because if it's badly engineered, people are not going to want to use it. For instance, there are people who could take public transportation. It's the same idea, right? Everything's autonomous, you'd have to follow in line. There's going to be some system, some order to it. And you might go-- >> Dave: Good example, yeah. >> You might go, "Oh, I want it to be faster. I don't want to be in line with that autonomous vehicle. I want to get there faster, get there sooner." And there are people who want to have that control over their lives, but they're not subject to things like schedules all the time and that's their constraint. So, I think if the engineering is bad, you're going to have more problems and people are probably going to go away from wanting to be autonomous. >> Alright, Colin, one for you. Will robots and maybe 3D printing, for example RPA, will it reverse the trend toward offshore manufacturing? >> 30 years from now, yes. I think robotic process engineering, eventually you're going to be at your cubicle or your desk, or whatever it is, and you're going to be able to print office supplies. >> Do you guys think machines will make better diagnoses than doctors? Ohhhhh. >> I'll take that one. >> Alright, alright. >> I think yes, to a certain degree, because if you look at the... problems with diagnosis, right now they miss it and I don't know how people, even 30 years from now, will be different from that perspective, where machines can look at quite a bit of data about a patient in split seconds and say, "Hey, the likelihood of this disease recurring is nil to none, because here's what I'm basing it on." I don't think doctors will be able to do that. Now, again, daring to be wrong! (laughing) >> Jennifer: Yeah so-- >> Don't tell your own doctor either. (laughing) >> That's true. If anything happens, we know, we all know. I think it depends.
So maybe 80%, some middle percentage might be the case. I think extreme outliers, maybe not so much. You think about anything that's programmed into an algorithm, someone probably identified that disease, a human being identified that as a disease, made that connection, and then it gets put into the algorithm. I think what will happen is that, for the 20% that isn't being done well by machine, you'll have people who are more specialized being able to identify the outlier cases from, say, the standard. Normally, if you have certain symptoms, you have a cold, those are kind of standard ones. If you have this weird sort of thing where there's new variables, environmental variables for instance, your environment can actually lead to you having cancer. So, there's other factors other than just your body and your health that's going to actually be important to think about when diagnosing someone. >> John: Colin, go ahead. >> I think machines aren't going to out-decision doctors. I think doctors are going to work well with the machine learning. For instance, there's a published document of Watson doing the research of a team of four in 10 minutes, when it normally takes a month. So, those doctors, to bring up Jen and Craig's point, are going to have more time to focus in on what the actual symptoms are, to resolve the outcome of patient care and patient services in a way that benefits humanity. >> I just wish that, Dave, that you would have picked a shorter horizon that... 30 years, 20 I feel good about our chances of seeing that. 30 I'm just not so sure, I mean... For the two old guys on the panel here. >> The consensus is 20 years, not so much. But beyond 10 years, a lot's going to change. >> Well, thank you all for joining this. I always enjoy the discussions. Craig, Jennifer and Colin, thanks for being here with us here on theCUBE, we appreciate the time. Back with more here from New York right after this. You're watching theCUBE. (upbeat digital music)

Published Date : Feb 27 2018



Omer Trajman, Rocana - #BigDataNYC 2016 - #theCUBE


 

>> Announcer: From New York, it's the Cube. Covering Big Data New York City 2016. Brought to you by Headline Sponsors, Cisco, IBM, NVIDIA, and our ecosystem sponsors. Now, here are your hosts, Dave Vellante and George Gilbert. >> Welcome back to New York City everybody, this is the Cube, the worldwide leader in live tech coverage, and we've been going wall to wall since Monday here at Strata plus Hadoop World, Big Data NYC is our show within the show. Omer Trajman is here, he's the CEO of Rocana, Cube alum, good to see you again. >> Yeah you too, it's good to be here again. >> What's the deal with the shirt, it says, 'your boss is useless', what are you talking about? >> So, if I wasn't mic'd up, I'd get up and show you, but you can see in the faint print that it's not talking about how your boss is useless, right, it's talking about how you make better use of data and what your boss' expectations are. The point we're trying to get across is that context matters. If you're looking at a small fraction of the information then you're not going to get the full picture, you're not going to understand what's actually going on. You have to look at everything, you have no choice today. >> So Rocana has some ambitious plans to enter this market, generally referred to as IT operations, if I can call it that, why does the world need another play on IT operations? >> In IT operations? If you look at the current state of IT operations in general, and specifically people think of this largely as monitoring, is I've got a bunch of systems, I can't keep track of everything, so I'm going to pick and choose what I pay attention to. I'm going to look at data selectively, I'm only going to keep it for as long as I can afford to keep it, and I'm not going to pay attention to the stuff that's outside that hasn't caused problems, yet. The problem is, the yet, right? You all have seen the Delta outages, the Southwest issues, the Neiman Marcus website, right?
There's plenty of examples of where someone just wasn't looking at information, no one was paying attention to it or collecting it and they got blindsided. And in today's pace of business where everything is digital, everyone's interacting with the machines directly, everything's got to be up all the time. Or at least you have to know that something's gone askew and fix it quickly. And so our take is, what we call total operational visibility. You got to pay attention to everything all the time and that's easier said than done. >> Well, because that requires you got to pay attention to all the data, although this reminds me of IP meta in 2010, said, "Sampling is dead", alright? Do you agree he's right? >> Trajman: I agree. And so it's much more than that, of course right, sampling is dead, you want to look at all the details all the time, you want to look at it from all sources. You want to keep enough histories so if you're the CIO of a retailer, if your CEO says, "Are we ready for Cyber Monday, can you take a look at last year's lead up and this years", and the CEO's going to look back at them and say, "I have seven days of data (chuckles), "what are you talking about, last year". You have to keep it for as long as you need to, to address business issues. But collecting the data, that's step one, right? I think that's where people struggle today, but they don't realize that you can't just collect it all and give someone a search box, or say, "go build your charts". Companies don't have data scientists to throw at these problems. You actually have to have the analytics built in. Things that are purpose built for data center and IT operations, the machine learning models, the built in cubes, the built in views, visualizations that just work out of the box, and show you billions of events a day, the way you need to look at that information. That's prebuilt, that comes out of the box, that's also a key differentiator. 
>> Would it be fair to say that Hadoop has historically been this repository for all sorts of data, but it was a tool set, and that Splunk was the anti-Hadoop, sort of out of the box. It was an application that had some... It collected certain types of data and it had views out of the box for that data. Sounds like you're trying to take the best of each world where you have the full extensibility and visibility that you can collect with all your data in Hadoop but you've pre built all the analytic infrastructure that you need to see your operations in context. >> I think when you look at Hadoop and Splunk, and your concept of Rocana as the best of both worlds, it's very apt. It's a prepackaged application, it just installs. You don't have to go in under the covers and stitch everything together. It has the power of scalability that Hadoop has, it has the openness, right, 'cause you can still get at the data and do what you need with it, but you get an application that's creating value, day one.
Let's stack on the graph everything from the edge caching, to the application, to my proxy servers to my host servers through to my network, gimme the broad view of everything, and just show me the trend lines and show me how those trend lines are deviating. Where are there unexpected patterns and behavior, and then I'm going to zoom in on those. And what's causing those, is there a new misconfiguration, did someone deploy a new network infrastructure, what has caused some change? Or is it just... It's all good, people are making more money, more people are coming to the website, it's actually a capacity issue, we just need to add more servers. So you get the step back, show me everything without a query, and then drag and drop, zoom in to isolate where are there particular issues that I need to pay attention to. >> Vellante: And this is infrastructure? >> Trajman: It's infrastructure all the way through application... >> Correct? It is? So you can do application performance management, as well? >> We don't natively do the instrumentation; there's a whole domain, which is bytecode instrumentation, we partner with companies that provide APM functionality, take that feed and incorporate it. Similarly, we partner with companies that do wire level deep packet inspection. >> Vellante: I was going to say... >> Yeah, take that feed and incorporate it. Some stuff we do out of the box. NetFlow, things like IPFIX, STATSD, Syslog, log4j, right? There's kind of a lot of stuff that everyone needs, standard interfaces that we do out of the box. And there's also pre-configured, content oriented parsers and visualizations for an OpenStack or for Cloud Foundry or for a Blue Coat System. There's certain things that we see everywhere that we can just handle out of the box, and then there's things that are very specific to each customer. >> A lot of talk about machine learning, deep learning, AI, at this event, how do you leverage that? >> How do we fit in?
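The workflow Trajman sketches here, graph the trend lines, flag where they deviate, then zoom in, can be approximated with a simple rolling baseline over a metric series. This is a generic illustration of the idea, not Rocana's actual model; the function name, window size, and threshold are all made up:

```python
from statistics import mean, stdev


def deviations(series, window=7, threshold=3.0):
    """Flag indices where a value departs from its trailing baseline.

    Each point is compared to the mean of the previous `window` points;
    it is flagged when it sits more than `threshold` standard deviations
    away from that baseline.
    """
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid dividing by zero on a perfectly flat baseline
        if abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

For example, a burst of 500 requests per minute against a roughly 100-per-minute baseline gets flagged, while the quiet points around it do not. A real system would, as described above, build models continuously per source rather than use one fixed window.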
It's interesting 'cause we talk about the power delivered in the product but part of it is that it's transparent. Our users, who are actually on the console day to day or trying to use Rocana to solve problems, they're not data scientists. They don't understand the difference between analytic queries and full text search. They don't understand machine learning models. >> They're IT people, is that correct? >> They're IT folks, whose job it is to keep the lights on, right? And so, they expect the software to just do all of that. We employ the data scientists, we deliver the machine learning models. The software dynamically builds models continuously for everything it's looking at and then shows it in a manner that someone can just look at it and make sense of it. >> So it might be fair to say, maybe replay this, and if it's coming out right, most people, and even the focus of IBM's big roll out this week is, people have got their data lakes populated and they're just now beginning to experiment with the advanced analytics. You've got an application where it's already got the advanced analytics baked in to such an extent that the operator doesn't really care or need to know about it. >> So here's the caveat, people have their data lakes populated with the data they know they need to look at. And that's largely line of business driven, which is a great area to apply big data machine learning, analytics, that's where the data scientists are employed. That's why what IBM is saying makes sense. When you get to the underlying infrastructure that runs it day to day, the data lakes are not populated. >> Interviewer: Oh, okay. >> They're data puddles.
They do not have the content of information, the wealth of information, and so, instead of saying, "hey, let's populate them, "and then let's try to think about "how to analyze them, and then let's try to think about "how get insights from them, and then let's try to think "about, and then and then", how about we just have a product that does it all for you? That just shows you what to do. >> I don't want to pollute my data lake with that information, do I? >> What you want is, you want to take the business feeds that have been analyzed and you want to overlay them, so you want to send those over to probably a much larger lake, which is all the machine data underneath it. Because what you end up with especially as people move towards more elastic environments, or the hybrid cloud environments, in those environments, if a disk fails or machine fails it may not matter. Unless you can see the topline revenue have an impact, maybe it's fine to just leave the dead machine there and isolate it. How IT operates in those environments requires knowledge of the business in order to become more efficient. >> You want to link the infrastructure to the value. >> Trajman: Exactly. >> You're taking feeds essentially, from the business data and that's informing prioritization. >> That's exactly right. So take as an example, Point of Sale systems. All the Point of Sale systems today, they're just PCs, they're computers, right? I have to monitor them and the infrastructure to make sure it's up and running. As a side effect, I also know the transactions. As an IT person, I not only know that a system is up, I know that it's generating the same amount of revenue, or a different amount of revenue than it did last week, or that another system is doing. 
So I can both isolate a problem as an IT person, right, as an operator, but I can also go to the business and say, "Hey nothing's wrong with the system, we're not making as much money as we were, why is that", and let's have a conversation about that. So it brings IT into a conversation with the business that they've never been able to have before, using the data they've always had. They've always had access to. >> Omer, we were talking a little before about how many more companies are starting to move big parts of their workloads into public cloud. But the notion of hybrid cloud, having a hybrid cloud strategy is still a bit of a squishy term. >> Trajman: Yeah. (laughs) >> Help us fill in, for perhaps, those customers who are trying to figure out how to do it, where you add value and make that possible. >> Well, what's happening is the world's actually getting more complex with cloud, it's another place that I can use to cost effectively balance my workloads. We do see more people moving towards public cloud or setting up private cloud. We don't see anyone whole scale, saying "I'm shutting down everything", and "I'm going to send everything to Amazon" or "I'm going to send everything to Microsoft". Even in the public cloud, it's a multi cloud strategy. And so what you've done is, you've expanded the number of data centers. Maybe I add a half dozen data centers, now I've got a half dozen more in each of these cloud providers. It actually exacerbates the need for being able to do multi-tier monitoring. Let me monitor at full fidelity, full scale, everything that's happening in each piece of my infrastructure, aggregate the key parts of that, forward them onto something central so I can see everything that's going on in one place, but also be able to dive into the details. And that hybrid model keeps you from clogging up the pipes, it keeps you from information overload, but now you need it more than ever.
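The Point of Sale example above, terminals that are "up" but taking less money than last week, amounts to comparing an operational feed against its own history. A hypothetical week-over-week check along those lines, where the data shape and the 20% threshold are illustrative assumptions rather than anything from the interview:

```python
def revenue_alerts(this_week, last_week, drop_threshold=0.2):
    """Compare per-terminal revenue week over week.

    Both arguments map terminal IDs to revenue totals. Returns the IDs
    whose revenue fell by more than `drop_threshold` relative to last
    week, even though the terminals themselves reported as healthy.
    """
    alerts = []
    for terminal, current in this_week.items():
        previous = last_week.get(terminal)
        if previous is None or previous == 0:
            continue  # no baseline to compare against
        drop = (previous - current) / previous
        if drop > drop_threshold:
            alerts.append(terminal)
    return sorted(alerts)
```

The IT-side payoff is the conversation Trajman describes: the system is up, but this terminal is taking 60% less than last week, so the anomaly is a business question rather than an outage.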
>> To what extent does that actually allow you, not just to monitor, but to remediate? >> The sooner you notice that there's an issue, the sooner you can address that issue. The sooner you see how that issue impacts other systems, the more likely you are to identify the common root cause. An example is a customer we worked with prior to Rocana, who had spent an entire weekend isolating an issue, it was a ticket that had gotten escalated, they found the root cause, it was a core system, and they looked at it and said, "Well, if that core system was actually the root cause, these other four systems should have also had issues". They went back into the ticketing system, sure enough, there were tickets that just didn't get escalated. Had they seen all of those issues at the same time, had they been able to quickly spin the cube view of everything, they would have found it significantly faster. They would have drawn that commonality and seen the relationships much more quickly. It requires having all the data in the same place. >> Part of the actionable information is to help triage the tickets, in a sense that's the connection to remediation. >> Trajman: Context is everything. >> Okay. >> So how's it going? Rocana's kind of a heavy lift. (Trajman laughs) You're going after some pretty entrenched businesses that have been used to doing things a certain way. How's business? How you guys doing? >> Business is, it's amazing, I mean, the need is so severe. We had a prospective customer we were talking to, who's just starting to think about this digital transformation initiative and what they needed from an operational visibility perspective. We connected them with an existing customer that had rolled out a system, and the new prospect looked at the existing customer, called us up and said, "That," (laughs) "that's what we want, right there".
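The weekend-long hunt Trajman describes, four systems ticketing separately on one root cause, is at bottom a correlation problem: events that cluster in time across different systems probably share a cause. A toy sketch of that grouping, where the window size and the event shape are assumptions and not how Rocana implements it:

```python
def correlated_groups(events, window_seconds=300):
    """Cluster (timestamp, system) events that occur close together in time.

    Events within `window_seconds` of their predecessor join the same
    cluster. Only clusters touching more than one system are returned,
    since those are the ones that hint at a shared root cause.
    """
    groups, current = [], []
    for ts, system in sorted(events):
        if current and ts - current[-1][0] > window_seconds:
            groups.append(current)
            current = []
        current.append((ts, system))
    if current:
        groups.append(current)
    return [sorted({name for _, name in g})
            for g in groups
            if len({name for _, name in g}) > 1]
```

With tickets from a core database and its dependents landing minutes apart, this surfaces them as one multi-system incident instead of four unrelated escalations, which is the "cube view" shortcut in miniature.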
Everyone's got centralized log analytics, total operational visibility, people are recognizing these are necessary to support where the business has to go and businesses are now realizing they have to digitize everything. They have to have the same kind of experience that Amazon and Google and Facebook and everyone else has. Consumers have come to expect it. This is what is required from IT in order to support it, and so we're actually getting... You say it's a heavy lift, we're getting pulled by the market. I don't think we've had a conversation where someone hasn't said, "I need that"; that's what we're going through today, that is my number one pain. >> That's good. Heavy lifts are good if you've got the stomach for it. >> Trajman: That's what I do. >> If you got a tailwind, that's fantastic. It sounds like things are going well. Omer, congratulations on the success, we really appreciate you sharing it with our Cube audience. >> Thank you very much, thanks for having me. >> You're welcome. Keep it right there everybody. We'll be back with our next guest, this is the Cube, we're live, day four from NYC. Be right back.

Published Date: Sep 30 2016

