John Pollard, Zebra Technologies | Sports Data {Silicon Valley} 2018
>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're having a Cube conversation in our Palo Alto studio. The conference season hasn't gotten into full swing yet, so we can have a little bit more relaxed atmosphere here in the studio, and we're really excited, as part of our continuing coverage of the Data Makes Possible program, sponsored by Western Digital, looking at cool applications, really the impact of data and analytics; ultimately it gets stored, usually on a Western Digital hard drive someplace, and this is a great segment. Who doesn't like talking about sports, and football, and advanced analytics? And we're really excited, I have John Pollard here, he is the VP of Business Development for Zebra Sports. John, great to see you. >> Jeff, thanks for having me. >> Absolutely, so before we jump into the fun stuff, just a little bit of background on Zebra Sports and Zebra Technologies. >> Okay well, first, Zebra Technologies is a publicly traded company, we started in the late 1960s, and really what we do is we track enterprise assets in industries typically like healthcare, retail, travel and logistics, and transportation. And what we've done is take that heritage and bring that over into the world of sports, starting four years ago with our relationship with the NFL as the official player tracking technology. >> It's such a great story of an old-line company, right? Based in Illinois-- >> Yeah, Lincolnshire. >> Outside of Chicago, right? RFID tags, and inventory management, and all this kind of old-school stuff. But then to take that into this really dynamic world, A, of sports, but even more, advanced analytics, which is relatively new. And we've been at it for a few years, but what a great move by the company to go into this space. How did they choose to do that?
>> Well it was an opportunity that just came to them through an RFP. The NFL had investigated different technologies to track players, including optical and GPS-based technologies, and now of course with Zebra, our location technologies are based on RFID. And so we just took the heritage and our capabilities of really working at the edge of enterprises in those traditional industries, from transactional moments, to inventory control moments, to analytics at the end, and took that model and ported it over to football, and it's turned out to be a very good relationship for us in a couple of ways. We've matured as a sports business over the four years, we've developed more opportunities to take our solutions, not just in-game but moving them into the practice facilities for NFL teams, but it's also opened up the aperture for other industries to now appreciate how we can track minute types of information, like players moving around on the football field, and translate it into usable information. >> So, for the people that aren't familiar, they can do a little homework. But basically you have a little tag, a little sensor, that goes onto the shoulder pads, right? >> There's two chips. >> Two chips, and from that you can tell where that player is all the time and how they move, how fast they move, acceleration and all that type of stuff, right? >> Correct, we put two chips inside of the shoulder pads; for down linemen, or people who play with their hands on the ground, we put a third chip between the shoulder blades. Those chips communicate with receiver boxes that have been installed around the perimeter of a stadium, and they blink 12 times per second. And that does tell you who's on the field, where they are on the field, and in proximity to other players on the field.
And once the play starts itself, we can see how fast they're going, we can calculate change of direction, acceleration and deceleration metrics, and we can also see, as you know with football, interesting information like separation between a wide receiver and a defensive back, which is critical when you're evaluating players' capabilities. >> So, this started about four years ago, right? >> Yes, we started our relationship with the league in-game, four years ago. >> Okay, so I'd just love to kind of hear your take on how the evolution of the introduction of this data was received by the league, received by the teams, something they'd never had before, right? Kind of a look and feel, and you can look at film, but not to the degree and the tightness of tolerances that you guys are able to deliver. >> Well, like any new technology and information resource, it takes time to first of all determine what you want to do with that information; you have an idea when you start, and then it evolves over time. And so what we started with was tagging the players themselves, and during that time, what we've really enjoyed in working with the NFL is that the league has to be very pragmatic and thoughtful when introducing new technologies and information. So they studied and researched the information to determine how much of this information do they share with the clubs, how much do they share with the fans and the media, and then what type of information sharing, what does that mean in terms of impact on the integrity of the game and fair competition. So, for the first two years it was more of a research and testing type of process, and starting in 2016 you started to see more of an acceleration of that data being shared with the clubs. Each club would receive their own data for in-game, and then we would start to see some of that trickle out through the NFL's Next Gen Stats banner on their NFL.com site.
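To make the arithmetic behind those metrics concrete, here's a minimal sketch (with made-up coordinates, not Zebra's actual pipeline) of how speed and receiver-defender separation could be derived from tags that blink 12 times per second:

```python
import math

BLINK_HZ = 12  # each tag blinks 12 times per second

def speeds(positions):
    """Instantaneous speed (yards/sec) from consecutive (x, y) samples."""
    return [math.hypot(x1 - x0, y1 - y0) * BLINK_HZ
            for (x0, y0), (x1, y1) in zip(positions, positions[1:])]

def separation(receiver_xy, defender_xy):
    """Distance in yards between a receiver and a defender at one instant."""
    (rx, ry), (dx, dy) = receiver_xy, defender_xy
    return math.hypot(rx - dx, ry - dy)

# A receiver moving 0.5 yards per blink along the x-axis is doing 6 yards/sec.
path = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
print(speeds(path))                          # [6.0, 6.0]
print(separation((10.0, 5.0), (13.0, 9.0)))  # 5.0
```

Real tracking data would need smoothing and calibration on top of this; the sketch only shows the underlying geometry.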
And so then we start to see more of that and then what I think we've really seen pick up pace certainly in 2017 is more utilization of this information from a media perspective. We're seeing it more integrated into the broadcasts themselves, so you have like kind of a live tracking set of information that keeps you contextually involved in the game. >> Right. And you were involved in advanced analytics before you joined Zebra, so you've been kind of in this advanced stats world for a while. So how did it change when you actually had a real-time sensor on people's bodies? >> Yeah it does feel a bit like Groundhog Day, right? I started more in the stats and advanced analytics when I worked for STATS LLC. In 2007, I developed a piece of software for the New Orleans Saints that they used to track observational statistics to game video. And it was a similar type of experience in starting in 2009 and introducing that to teams where it took about three or four years where teams started to feel like that new information resource was not a nice to have but a need to have, a premium ingredient that they could use for game planning, and then player evaluation, and also the technology could provide them some efficiencies. We're seeing that now with the tracking data. We just returned from the NFL Combine a couple weeks ago, and what I felt in all the conversations that we had with clubs was that there was a high level of appreciation and a lot of interest in how tracking data can help facilitate their traditional scouting and player evaluation processes, the technology itself how can it make the teams more efficient in evaluating players and developing game plans, so there's a lot of excitement. We've kind of hit that tipping point, if I may, where there's general acceptance and excitement about the data and then it's incumbent upon us as a partner with the league and with the teams for our practice clients to teach them how to use the analytics and statistics effectively. 
>> So I'm just curious, some of the specific data points that you've seen evolve over time and also the uses. I think you were talking a little bit off camera that originally it was really more the training staff, and it was really more kind of the health of the player. Then I would imagine it evolved to now you can actually see what's going on in terms of better analysis, but I would imagine it's going to evolve to where coaches are getting that feedback in real-time on a per-play basis and are making in-game adjustments based on this real-time data. >> Well technically that's feasible today, but then there's the rules of engagement with the league itself, and so the teams themselves, and the coaches, and the sideline aren't seeing this tracking data live, whether it be in the booth or on the sidelines. Now in a practice environment, that's what teams are using our system for. Within three seconds they're seeing real-time information show up about players during practice. Let's take an example, a player during practice who's coming back from injury. You might want to monitor their output during the week as they come back, to make sure that they're ready for the game on a week to week basis. Trainers are now able to see that information and take that over to a position coach or a head coach and make them aware of the performance of the player during practice. And I think sometimes people think with tracking data it's all about managing the health of the player and making sure they don't overwork. Where really, the antithesis of that is you can actually also identify players who aren't necessarily reaching their maximum output, which will help them build throughout the week toward peak performance during a game. And so a lot of teams like to say okay, I have a wide receiver, I know their max miles per hour is, let's use an example, 20.5 miles an hour.
He hasn't hit his max yet during the entire week, so let's get him into some drills and some sessions where he can start hitting that max, so that we reduce the potential for injury on game day. >> Right, another area that probably a lot of people would never think of is you also put sensors on the refs. So you know not only where the refs are, but are they in the right positions, technically and kind of from a best-practices standpoint, to make the calls for the areas that they're trying to cover. >> Right. >> There's got to be, was there a union pushback on this type of stuff? I mean there's got to be some interesting kind of dynamics going on. >> Yeah, as far as the referees, I know that referees are tagged and the NFL uses that information and correlates that with the play calls themselves. We're not involved in that process, but I know they're utilizing the information. In addition to the referees I should add, we also have a tag in the ball itself. >> [Jeff] That's right. >> The 2017 season was the first year that every single game had a tagged ball. Now that tagged information in the ball was not shared with the clubs yet; the league is still researching the information, like they did with the players' data. A couple years of research, then they decide to distribute that to the teams and the media. So we are tracking a lot of assets. We also have tags in the first down markers and the pylons, and I'll just cut to the chase, there are people who will say okay, does that mean you can use these chips and this technology to identify first down marks, or when a ball might break the plane for a potential touchdown? Technically you can do that, and that's something the league may be researching, but right now that's not part of our charter with them. >> Right, so I'm just curious about the conversations about the data and the use of the data.
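Picking up John's practice-monitoring example, here's a toy sketch of the kind of check a staff might run: flag players who haven't reached, say, 90% of their known max speed during the practice week. All names and numbers here are hypothetical.

```python
# Hypothetical season maxima vs. this week's practice maxima (mph).
season_max = {"WR1": 20.5, "RB1": 19.2, "TE1": 17.8}
week_max   = {"WR1": 17.1, "RB1": 19.0, "TE1": 15.0}

def needs_ramp_up(season, week, threshold=0.90):
    """Players who haven't hit the given fraction of their max speed this week."""
    return [p for p in season
            if week.get(p, 0.0) < threshold * season[p]]

print(needs_ramp_up(season_max, week_max))  # ['WR1', 'TE1']
```

A real system would work from the per-play speed samples rather than weekly maxima, but the comparison logic is the same idea.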
'Cause as you said there's a lot of raw data, and there's kind of governance issues and rules of engagement, and then there's also what types of analytics get applied on top of that data, and then of course also it's about context, what's the context of the analytics? So I wonder if you could speak to kind of the evolution of that process, what were people looking at when you first introduced this four years ago, and how has it moved over time in terms of adding new analytics on top of that data set? >> That's one of my favorite topics to talk about. When we first started with the league and engaging teams for the practice solution or providing them analytics, they in essence got a large raw data file of XY coordinates, you can imagine (laughs) it was a gigantic hard drive-- >> Even better, XY coordinates. >> And put it into a spreadsheet and go. There was some of that early on, and really what we had to do, through the power of software, is develop an application platform that would help teams manage and organize this data appropriately, develop the appropriate reports, or interesting reports and analysis. And over the last two or three years I think we've really found our stride at Zebra in providing solutions to go along with the capabilities of the technology itself. So at first it was strength and conditioning coaches plowing through this information in great detail, or analytics staffs, and what we've seen over the last 24 months is directors of analytics now, personnel staff, coaches as well, a broadening group of people inside of a football organization starting to use this data, because the software itself allows them to do so.
I'll give an example: instead of just tabular information, and charts and graphs, we now take the data and we can plot it into a play field schematic, which, as we talked off camera, you're very familiar with football, that just automates the process of what teams do today manually, which is develop play cards so they can do self-study and advanced scouting techniques. That's all automated today, and not only that, it's animated, because we have the tracking information and we can merge that with game video. So we're just trying to make the tools with the software more functional so everybody in the organization can utilize it beyond strength and conditioning, which is important, but now we're broadening the aperture and appealing to everybody in the organization. >> Do you do, I can just see you can do play development too, if you plug in everybody's speeds and feeds, you have a certain duration of time, you can probably A/B test all types of routes, and timing on drops, and now you know how hard the guy throws the ball, to come up with a pretty wide array of options, I would imagine, within the time window. >> Exactly, a couple of examples I could give: when we meet with teams we have every player, let's say on a team, and we know all the routes they ran during an entire season. So you can imagine on a visualization tool, it's like a spaghetti chart of different routes, and then you start breaking down the scenarios of context like we talked about earlier: it's third down, it's in the red zone, it's receptions. And so that becomes a smaller set of lines that you see on the chart. I'll tell you Jeff, when we start meeting with teams at the Combine and we start showing them their X or primary receiver, or their slot receiver tendencies visually, they start leaning forward a bit: oh my goodness, we spend way too much time on the same route when we're targeting for touchdown passes. Or we're right-handed too much, we have to change that up.
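The situational filtering John describes, a season's worth of routes narrowed down to third down, red zone, receptions, can be sketched in a few lines. The route records below are hypothetical:

```python
# Hypothetical route records: one dict per route run during the season.
routes = [
    {"receiver": "X", "down": 3, "red_zone": True,  "reception": True,  "route": "slant"},
    {"receiver": "X", "down": 1, "red_zone": False, "reception": False, "route": "go"},
    {"receiver": "X", "down": 3, "red_zone": True,  "reception": True,  "route": "slant"},
]

def filter_routes(routes, **criteria):
    """Narrow the 'spaghetti chart' to routes matching every criterion."""
    return [r for r in routes
            if all(r.get(k) == v for k, v in criteria.items())]

subset = filter_routes(routes, down=3, red_zone=True, reception=True)
print([r["route"] for r in subset])  # ['slant', 'slant'] -> a tendency worth varying
```

The visualization layer then just draws each surviving record's tracked path on a field schematic.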
That's the most gratifying thing, is that you're taking a picture and you're really illuminating things for those coaches who intrinsically know it, but once they see a visual cue, it validates something in their head, that either they have to change or evolve something in their game plan or their practice regimen. >> Well, that's what I was going to ask, and you led right into it: what are some of the things that get the old-school person, or the people that just don't get it, they don't have the time, they don't believe it, or maybe believe it but they don't have the time, they're afraid to understand. What are some of those kind of light bulb moments when they go, okay, I get it? As you said, most of the time if they're smart, it's going to be kind of a validation of something they've already felt, but they've never actually had the data in front of them. >> Right, that's exactly right. So the first thing is just quantifying, providing a quantifiable, empirical set of evidence to support what they intrinsically know as professional evaluators or coaches. So we always say that the data itself and the technology isn't meant to be a silver bullet. It's a new premium ingredient that can help support the processes that existed in the past and hopefully provide some efficiency. And so that's the first thing. I think the visual, the example I gave about the wide receiver tendencies when they're thrown to in the red zone, that always gets people leaning forward a little bit. Also with running backs, third down and three plus yards, or third down in short situations, and my right-hander to left-hander when I'm on a certain hash. Again, the visualization just allows them to really mark something in their head-- >> Just in the phase. >> Where it makes them really understand.
Another example that's interesting is players who play on special teams who are also wide receivers. So as we know, linebackers and tight ends tend to be, and quarterbacks tend to be involved in special teams. Well, is there an effect when they've covered kickoffs and punts, a large amount of those in a game, did that affect them on the offensive side of the ball, for instance? Think about Julian Edelman two Super Bowls ago; he played 93 snaps against the Atlanta Falcons, and when you look at the route-- >> [Jeff] He played 93 snaps? >> Yeah, between special teams and offense, because it went into overtime, right? It was an offensive game-- >> And he's on all the-- >> He played a lot of snaps, he played 93 snaps. How does that affect his route integrity? Not only the types and quality of the route, but the depth and speed he gets to those points; those change over time. So this type of information can give the experts just a little bit more information to find that edge. And I have a great mentor of mine, I have to bring him up, Gil Brandt, former VP of Player Personnel for the Dallas Cowboys, with Tex Schramm and Tom Landry. He looks at this type of information and he says, what would a team pay for one more victory?
Or can you automate so much of it so it's not necessarily this additional burden that they have to take on? 'Cause I would imagine if the Cowboys are doing it, the Eagles have got to do it, the Giants have got to do it, and the Washington Redskins have got to do it, right? >> Right, right. Well, each team, as you might expect, their cultures are different. And I would say two or three years ago you started to see more teams hire, literally by title, a director of analytics or director of football information, instead of sharing that responsibility between two or three people that already existed in the organization. So that staffing occurred over the last two or three years. This becomes another element for those staffs to work with. But also along that process over the last two or three years, what I always try to say in talking to teams, and I'll be on the road again here soon talking to clubs after pro days conclude, is forget about staffs and analytics and that idea. Do you want to be information driven, and do you want to be efficient? And that's something everybody can grasp onto, whether you're the strength and conditioning coach, personnel staff or scout, or a position coach, or a head coach, or a coordinator. So we try to be information driven, and then that seems to ease the process of people thinking, I have to hire more people. What I really need to do is ask my people that are already in place to maybe be more curious about this information, and if we're going to invest in a resource that can help support them and make them more efficient, make sure we leverage it. And so that's the process that we work with; it varies by team. Some teams have large, expansive staffs. That doesn't necessarily mean, in my opinion, that they're the most effective staffs at using information. Sometimes it's the organizations that run very lean, with a small set of people, but very focused and moving in one direction.
>> I love it, data for efficiency, right? In God we trust, everybody else bring data. One of my favorite lines that we hear over and over and over at these shows. >> In fact, I might borrow that next week. >> You could take that one, alright. >> Thank you, Jeff. >> Well John, thanks for taking a few minutes and stopping by and participating in this Western Digital program, because it is all about the data and it is about efficiency, so it's not necessarily trying to kill people with more tools, but help them be better. >> That's what we're trying to do, I appreciate the opportunity and love to talk to you more. >> Absolutely, well hopefully we'll see you again. He's John Pollard, I'm Jeff Frick, you're watching theCUBE from Palo Alto studios, thanks for watching, we'll see you next time. (Upbeat music)
Atri Basu & Necati Cehreli | Zebrium Root Cause as a Service
>> Okay, we're back with Atri Basu, who is Cisco's resident philosopher, who also holds a master's in computer science. We're gonna have to unpack that a little bit. And Necati Cehreli, who's technical lead at Cisco. Welcome guys, thanks for coming on theCUBE. >> Happy to be here, thanks a lot. >> All right, let's get into it. We want you to explain how Cisco validated the Zebrium technology and the proof points that you have that it actually works as advertised. So first, Atri, tell us about Cisco TAC. What does Cisco TAC do? >> So TAC is an acronym for Technical Assistance Center; it's Cisco's support arm, the support organization. And at the risk of sounding like I'm spouting a corporate line, the easiest way to summarize what TAC does is provide world-class support to Cisco customers. What that means is we have about 8,000 engineers worldwide, and any of our Cisco customers can either go on our web portal or call us to open a support request. And we get about 2.2 million of these support requests a year. And what these support requests are, essentially, is the customer will describe something that they need done, some networking goal that they have that they wanna accomplish, and then it's TAC's job to make sure that that goal does get accomplished. Now, it could be something like they're having trouble with an existing network solution and it's not working as expected, or it could be that they're integrating with a new solution. They're, you know, upgrading devices, maybe there's a hardware failure, anything really to do with networking support and, you know, the customer's network goals. If they open up a case for a request for help, then TAC's job is to respond and make sure the customer's, you know, questions and requirements are met. About 44% of these support requests are usually trivial and, you know, can be solved within a call or within a day.
But the rest of TAC's cases really involve getting into the network device, looking at logs. It's a very technical role, a very technical job. You need to be conversant with network solutions, their designs, protocols, et cetera. >> Wow, so 56% non-trivial. And so I would imagine you spend a lot of time digging through logs. Is that true? Can you quantify, like, you know, every month, how much time you spend digging through logs, and is that a pain point? >> Yeah, it's interesting you asked that, because when we started on this journey to augment our support engineers' workflow with the Zebrium solution, one of the things that we did was we went out and asked our engineers what their experience was like doing log analysis. And the anecdotal evidence was that on average, an engineer will spend three out of their eight hours reviewing logs, either online or offline. So what that means is either with the customer live on a WebEx, they're going to be going over logs, network state information, et cetera, or they're gonna do it offline, where the customer sends them the logs, it's attached to a, you know, a service request, and they review it and try to figure out what's going on and provide the customer with information. So it's a very large chunk of our day. You know, I said 8,000 plus engineers, and so three hours a day, that's 24,000 man-hours a day spent on log analysis. Now, the struggle with logs, or analyzing logs, is there out of necessity. Logs are very terse; they try to pack a lot of information into very little space. And this is for performance reasons, storage reasons, et cetera, but the side effect of that is they're very esoteric. So they're hard to read if you're not conversant, if you're not the developer who wrote these logs or you aren't doing code deep dives.
And you're looking at where these logs are getting printed and things like that; it may not be immediately obvious, or even after a while it may not be obvious, what that log line means or how it correlates to whatever problem you're troubleshooting. So it requires tenure. It requires, you know, like I was saying before, a lot of knowledge about the protocol and what's expected, because when you're doing log analysis, what you're really looking for is a needle in a haystack. You're looking for that one anomalous event, that single thing that tells you this shouldn't have happened and this was a problem. Now, doing that kind of anomaly detection requires you to know what is normal. It requires, you know, knowing what the baseline is. And that requires a very in-depth understanding of, you know, the state changes for that network solution or product. So it requires time, tenure and expertise to do well. And it takes a lot of time even when you have that kind of expertise. >> Wow. So thank you, Atri. And Necati, that's almost two days a week for a technical resource. That's not inexpensive. So what was Cisco looking for to sort of help with this, and how'd you stumble upon Zebrium? >> Yeah, so, I mean, we have our internal automation system, which has been running more than a decade now. And what happens is when a customer attaches a log bundle or diagnostic bundle to the service request, we take that from the SR, we analyze it, and we present some kind of information, you know, it can be an alert or some tables, some graphs, to the engineer, so they can, you know, troubleshoot this particular issue. This is an incredible system, but it comes with its own challenges around maintenance, to keep it up to date and relevant with Cisco's new products or new versions of the product, new defects, new issues, and all kinds of things. And what I mean by those challenges is, let's say Cisco comes up with a product today.
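Atri's "needle in a haystack" framing, where anomaly detection hinges on knowing the baseline, can be made concrete with a toy example. This is a deliberately naive sketch (invented log lines, not Zebrium's actual algorithm): collapse variable fields so recurring lines share a template, then flag templates the healthy baseline never produced.

```python
import re
from collections import Counter

def template(line):
    """Collapse numbers/hex so recurring log lines share one template."""
    return re.sub(r"0x[0-9a-f]+|\d+", "<*>", line.lower())

def rare_events(baseline_lines, new_lines, max_seen=0):
    """Flag lines whose template the healthy baseline (almost) never produced."""
    normal = Counter(template(l) for l in baseline_lines)
    return [l for l in new_lines if normal[template(l)] <= max_seen]

baseline = ["link up on port 1", "link up on port 2", "heartbeat ok seq 10"]
incoming = ["heartbeat ok seq 11", "fan failure on tray 3"]
print(rare_events(baseline, incoming))  # ['fan failure on tray 3']
```

The hard part the sketch glosses over is exactly what Atri describes: building a trustworthy baseline across products and releases without hand-tuning.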
We need to come together with those engineers, we need to figure out how this bundle works, how it's structured. We need to select individual logs which are relevant, and then start modeling these logs and getting some values out of those logs, using parsers or some regexes, to come to a level where we can consume the logs. And then people start writing rules on top of that abstraction, so people can say, in this log I'm seeing this value together with this other value in another log, maybe I'm hitting this particular defect. So that's how it works. And if you look at it, the abstraction can fail the next time, in the next release, when the developer or the engineer decides to change that log line you wrote that regex for, or they can come up with a new version which completely changes the services or processes, and then whatever you have written needs to be rewritten for that new service. And we see that a lot with products like, for instance, WebEx, where you have a very short release cycle; things can change maybe the next week with a new release. So whatever you are writing, especially for that abstraction and for those rules, is maybe not relevant with that new release. With that being said, we have an incredible rule creation process and governance process around it, which starts with maybe a defect and then takes it to a level where we have an automation in place. But if you look at it, this really ties to human bandwidth. And our engineers are really busy working on, you know, customer-facing work, working on issues daily, and sometimes creating these rules or these parsers is not their biggest priority, so they can be delayed a bit. So we have this delay between a new issue being identified and the point where we have the automation to detect it the next time some customer faces it. So with all these questions and all these challenges in mind, we started looking into ways of actually how we can automate these automations.
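The brittleness Necati describes, where handwritten rules break as soon as a release rewords a log line, is easy to illustrate. The log format and rule below are invented for the example:

```python
import re

# A handwritten rule tied to one exact (hypothetical) log format.
RULE = re.compile(r"session (\d+) dropped: reason=(\w+)")

def match_defect(line):
    """Return extracted fields if the line matches the known defect signature."""
    m = RULE.search(line)
    return {"session": m.group(1), "reason": m.group(2)} if m else None

# Works against the release the rule was written for...
print(match_defect("session 42 dropped: reason=timeout"))
# ...but silently returns nothing once a release rewords the line.
print(match_defect("sess 42 was dropped (timeout)"))  # None
```

The failure mode is silent: nothing errors out, the rule simply stops firing, which is exactly the maintenance gap described above.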
>>So these things that we are doing manually, how can we move them a bit further and automate them? And we actually had a couple of things in mind that we were looking for, one of them being that this has to be product agnostic. If Cisco comes up with a product tomorrow, I should be able to take its logs, without writing complex regexes, parsers, whatever, and deploy it into this system, so it can embrace our logs and make sense of them. And we wanted this platform to be unsupervised, so that none of the engineers need to create rules, or label logs as "this is bad, this is good," or train the system, which requires a lot of computational power. And the other most important thing for us was that we wanted this to be not noisy at all, because what happens with noise is that when your level of false positives is really high, your engineers start ignoring the good things in between that noise. >>So the next time, they start thinking that this thing will not be relevant. So we wanted something with little or no noise. And ultimately, we wanted this new platform or framework to be easily adaptable to our existing workflows. So this is where we started. We started looking, first of all, at whether we could build this thing internally, and also started researching, and we came upon Zebrium. Actually, we came upon a presentation by Larry, one of the co-founders of Zebrium, where he clearly explained why this is different and how it works, and it immediately clicked. And we said, okay, this is exactly what we were looking for. We dived deeper. We checked the blog posts, where the Zebrium guys really explain everything very clearly; they are really open about it. And most importantly, there is a button in their system. >>So what happens usually with AI/ML vendors is they have this button where you fill in your details and the sales guys call you back to explain the system. Here, they were like, this is our trial system.
We believe in the system, you can just sign up and try it yourself. And that's what we did. We took one of our Cisco Live DNA Center wireless platforms, we started streaming logs out of it, and then we synthetically introduced errors; we broke things. And we realized that Zebrium was really catching the errors perfectly. And on top of that, it was really quiet unless you were really breaking something. And the other thing we realized during that first trial was that Zebrium was actually bringing a lot of context on top of the logs. During those failures, we worked with a couple of technical leaders, and they said, okay, if this failure happens, I'm expecting this individual log to be there. And we found that with Zebrium, apart from that individual log, there were a lot of other things that give a bit more context around the root cause, which was great. And that's where we wanted to take it to the next level. Yeah. >>Okay. So, you know, a couple of things to unpack there. I mean, you have the dartboard behind you, which is kind of interesting, because a lot of times it's like throwing darts at the board to try to figure this stuff out. But to your other point, Cisco actually has some pretty rich tools, with AppD and observability, and you've made acquisitions like ThousandEyes. And like you said, I'm presuming you've got to eat your own dog food, or drink your own champagne, so you've got to be tools agnostic. And when I first heard about Zebrium, I was like, wait a minute, really? I was kind of skeptical. I've heard this before. You're telling me all I need is plain text and a timestamp, and you've got my problem solved? So, I understand that you guys said, okay, let's run a POC. Let's see if we can cut that from, let's say, two days a week down to one day a week. In other words, 50%; let's see if we can automate 50% of the root cause analysis. And so you funded a POC. How did you test it?
You put, you know, synthetic errors and problems in there, but how did you test that it actually works, Necati? >>Yeah. So we wanted to take it to the next level, which means that we wanted to back-test it with existing SRs. And we decided, you know, we chose four different products from four different verticals: data center, security, collaboration, and enterprise networking. And we found SRs where the engineer had put some kind of log in the resolution summary. So they closed the case, and in the summary of the SR they put, I identified these log lines and they led me to the root cause. And we ingested those log bundles, and we tried to see if Zebrium could surface that exact same log line in its analysis. So we initially did it ourselves, Atri and I, and after 50 tests or so we were really happy with the results. I mean, in almost all of them, we saw the log line we were looking for. But that was not enough. >>And we brought it, of course, to our management, and they said, okay, let's try this with real users, because the log being there is one thing, but the engineer reaching that log is another thing. So we wanted to make sure that when we put it in front of our users, our engineers, they could actually come to that log themselves, because, you know, we know this platform, so we can make searches and find whatever we are looking for, but we wanted to verify that. So we extended our pilot to some selected engineers, and they tested it with their own SRs, and also did some back-testing on SRs which were closed in the past or recently. And with a sample set of, I guess, close to 200 SRs, we found that the majority of the time, almost 95% of the time, the engineer could find the log they were looking for in the Zebrium analysis. >>Yeah. Okay. So you were looking for 50%, you got to 95%.
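The back-test Necati describes reduces to a simple hit-rate measurement: for each closed SR, did the tool's output surface the "golden" log line the engineer cited in the resolution summary? Below is a toy sketch with a stand-in analyzer; the real exercise ran Zebrium against roughly 200 real SR bundles and measured about 95%:

```python
# Toy back-test harness: each case is (log bundle, golden line cited in
# the SR's resolution summary). Score = fraction of cases where the
# analyzer surfaced the golden line.
def backtest(cases, analyze):
    hits = sum(1 for bundle, golden in cases if golden in analyze(bundle))
    return hits / len(cases)

# Stand-in analyzer for illustration only (not Zebrium): surface lines
# that look like failures.
def toy_analyze(bundle):
    return [line for line in bundle if "FAIL" in line or "CRASH" in line]

cases = [
    (["IF_UP", "DISK_FAIL", "HEARTBEAT"], "DISK_FAIL"),
    (["AUTH_OK", "PROC_CRASH"], "PROC_CRASH"),
    (["IF_UP", "HEARTBEAT"], "LINK_FLAP"),  # miss: golden line not surfaced
]
print(backtest(cases, toy_analyze))  # 2 of 3 cases hit
```

The second phase Necati mentions, putting the tool in front of real engineers, matters because "golden line is somewhere in the output" and "engineer actually finds it" are different success criteria.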
And my understanding is you actually did it with four pretty well known Cisco products: the WebEx client, DNA Center, Identity Services Engine (ISE), and then UCS, the Unified Computing System. So you used actual real data, and that was kind of your proof point. But Atri, that sounds pretty impressive. And have you put this into production now, and what have you found? >>Well, yes, we've launched this with the four products that you mentioned. We're providing our TAC engineers with the ability, whenever a support bundle for one of those products gets attached to a support request, to have it processed with Zebrium, and then that analysis is provided to the TAC engineer for their review. >>So are you seeing the results in production? I mean, are you actually able to reclaim that time that people are spending? It was literally almost two days a week; is it down to, you know, part of a day? Is that what you're seeing in production, and what are you able to do with that extra time? Are people getting their weekends back? Are you putting them on more strategic tasks? How are you handling that? >>Yeah. So what we're seeing, and I can tell you from my own personal experience using this tool, is that troubleshooting any one of these cases takes me no more than 15 to 20 minutes to go through the Zebrium report. And within that time, I know either what the root cause is, or that Zebrium doesn't have the information I need to solve this particular case. So we've definitely seen, well, it's been very hard to measure exactly how much time we've saved per engineer, right?
What we've heard from our users, again anecdotally, is that out of those three hours they were spending per day, we're definitely able to reclaim at least one of those hours. And, even more importantly, in terms of the kind of feedback we've gotten, I think one statement that really summarizes how Zebrium has impacted our workflow came from one of our users.

>>And they said, well, you know, until you provided us with this tool, log analysis was a very black and white affair, but now it's become really colorful. And I mean, if you think about it, log analysis is indeed black and white. You're looking at it on a terminal screen where the background is black and the text is white, or you're looking at it as text where the background is white and the text is black. But what they're really trying to say is, there are hardly any visual cues that help you navigate these logs, which are so esoteric, so dense, et cetera. But what Zebrium does is provide a lot of color and context to the whole process. So now, using their word cloud, using their interactive histogram, using the summaries of every incident, you're very quickly able to summarize what might be happening and what you need to look into.

Like, what are the important aspects of this particular log bundle that might be relevant to you? A really great use case that kind of encapsulates all of this came very early on in our experiment. There was this support request that had been escalated to the business unit, the development team. And the TAC engineer really had an intuition about what was going wrong, because of their experience, because of the symptoms they'd seen. They kind of had an idea, but they weren't able to convince the development team, because they weren't able to find any evidence to back up what they thought was happening.
It was entirely happenstance that I happened to pick up that case and did an analysis using Zebrium. And then I sat down with the TAC engineer, and very quickly, within 15 minutes, we were able to get down to the exact sequence of events, evidence of what the TAC engineer (not the customer) thought was the root cause. And then we were able to share that evidence with our business unit and redirect their resources so that we could chase down what the problem was. And that really shows you how that color and context helps in log analysis. >>Interesting. You know, we do a fair amount of work in theCUBE in the RPA space, robotic process automation, and the narrative in the press when RPA first started taking off was, oh, it's machines replacing humans, or we're gonna lose jobs. And what actually happened was people were just eliminating mundane tasks, and the employees were actually very happy about it. But my question to you is, was there ever a reticence amongst your team? Like, oh, wow, I'm gonna lose my job if the machine's gonna replace me? Or have you found that people were excited about this, and what's been the reaction amongst the team? >>Well, I think, you know, every automation and AI project gets that immediate gut reaction of, you're automating away our jobs, and so forth. And initially there's a little bit of reticence, but, like you said, once you start using the tool, you realize that it's not your job that's getting automated away. It's just that your job becomes a little easier to do, and it's faster and more efficient, and you're able to get more done in less time. That's really what we're trying to accomplish here. At the end of the day, Zebrium will identify these incidents, they'll do the correlation, et cetera.
But if you don't understand what you're reading, then that information is useless to you. So you need the human, you need the network expert, to actually look at these incidents. But what we are able to skim away, or get rid of, is all of the fat that's involved in our process: having to download the bundle, which, when it's many gigabytes in size, and now that we're working from home with the pandemic and everything, means pulling massive amounts of logs from the corporate network onto your local device, and that takes time; then opening it up, loading it in a text editor, and that takes time.

All of these things are what we're trying to get rid of. Instead, we're trying to make it easier and quicker for you to find what you're looking for. So it's like you said, you take away the mundane, you take away the difficulties and the slog, but you don't really take away the work. The work still needs to be done. >>Yeah. Great, guys. Thanks so much. Appreciate you sharing your story. It's quite fascinating, really. Thank you for coming on. >>Thanks for having us. >>You're very welcome. Okay. In a moment, I'll be back to wrap up with some final thoughts. This is Dave Valante and you're watching theCUBE.

>>So today we talked about the need not only to gain end-to-end visibility, but also to automate the identification of root cause problems, and how doing so with modern technology and machine intelligence can dramatically speed up the process and identify the vast majority of issues right out of the box, if you will. And this technology can work with log bundles in batches, or with real-time data. As long as there's plain text and a timestamp, it seems Zebrium's technology will get you the outcome of automating root cause analysis with very high degrees of accuracy. Zebrium is available on-prem or in the cloud.
Now, on-prem is important for some companies, because there's really some sensitive data inside logs that, for compliance and governance reasons, companies have to keep inside their four walls. Now, Zebrium has a free trial. Of course they'd better, right? So check it out at zebrium.com. You can book a live demo and sign up for a free trial. Thanks for watching this special presentation on theCUBE, the leader in enterprise and emerging tech coverage. I'm Dave Valante.
SUMMARY :
Thanks for coming on the cube. Happy to be here. and the proof points that, that you have, that it actually works as advertised. Cisco's support arm, the support organization, and, you know, to do with networking support and, you know, the customer's network goals. And so I would imagine you spend a lot of where the customer sends them the logs, it's attached to a, you know, a service request and And that requires a very in-depth understanding of, you know, to sort of help with this and, and how'd you stumble upon zebra? some graph to the engineer, so they can, you know, troubleshoot this particular issue. And if you look at it, the abstraction, it can fail the next time. And our engineers are really busy working on, you know, customer facing, So none of the engineers need to create rules, you know, label logs. So they start the next time, you know, thinking that this thing will So what happens usually with AI ML vendors is they have this button where you fill in your And like you said, I'm, you know, we, we chose four different products from four different verticals, And we brought it of course, to our management and they said, okay, let's, let's try this with And my understanding is you actually did it with Well, yes, we're, we've launched this with the four products that you mentioned. and what, even more importantly, you know, what the kind of feedback that we've gotten in terms And they said, well, you know, until you provide us with this tool, And that really has been, that that really shows you how that color and context helps But my question to you is, was there ever a reticence amongst or get rid of is all of the fat that's involved in our, you know, So it's like you said, you take away the mundane, Appreciate you sharing your story. This is Dave Valante and you're watching the, it seems Zebra's technology will get you the outcome of automating root cause analysis with
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ari Basu | PERSON | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
one day | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
95% | QUANTITY | 0.99+ |
Zeum | ORGANIZATION | 0.99+ |
eight hours | QUANTITY | 0.99+ |
SARS | ORGANIZATION | 0.99+ |
Najati | PERSON | 0.99+ |
56% | QUANTITY | 0.99+ |
Larry | PERSON | 0.99+ |
three hours | QUANTITY | 0.99+ |
UCS | ORGANIZATION | 0.99+ |
50 tests | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
a week | QUANTITY | 0.98+ |
about 8,000 engineers | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
next week | DATE | 0.97+ |
about 2.2 million | QUANTITY | 0.97+ |
three | QUANTITY | 0.97+ |
one statement | QUANTITY | 0.97+ |
first trial | QUANTITY | 0.97+ |
WebEx | ORGANIZATION | 0.97+ |
three hours a day | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
Seebri | ORGANIZATION | 0.96+ |
15 minutes | QUANTITY | 0.96+ |
SBRI | ORGANIZATION | 0.95+ |
tomorrow | DATE | 0.95+ |
more than a decade | QUANTITY | 0.95+ |
about 44% | QUANTITY | 0.95+ |
Outre | ORGANIZATION | 0.93+ |
single thing | QUANTITY | 0.93+ |
more than 15 | QUANTITY | 0.93+ |
two days a week | QUANTITY | 0.93+ |
AppD | TITLE | 0.92+ |
a day | QUANTITY | 0.91+ |
Necati Cehreli | PERSON | 0.91+ |
four products | QUANTITY | 0.9+ |
couple | QUANTITY | 0.89+ |
Ari | PERSON | 0.89+ |
pandemic | EVENT | 0.87+ |
one thing | QUANTITY | 0.87+ |
SRSS | TITLE | 0.86+ |
almost 95% | QUANTITY | 0.86+ |
20 minutes | QUANTITY | 0.85+ |
two days a week | QUANTITY | 0.85+ |
Zebra | ORGANIZATION | 0.85+ |
a year | QUANTITY | 0.85+ |
8,000 plus engineers | QUANTITY | 0.83+ |
almost two days a week | QUANTITY | 0.82+ |
WebEx | TITLE | 0.82+ |
ISE | ORGANIZATION | 0.81+ |
Zebrium | ORGANIZATION | 0.81+ |
24,000 man hours a day | QUANTITY | 0.8+ |
thousand eyes | QUANTITY | 0.79+ |
Atri Basu | PERSON | 0.79+ |
DNA | ORGANIZATION | 0.76+ |
zebra | ORGANIZATION | 0.74+ |
out@zebra.com | OTHER | 0.74+ |
BEC | ORGANIZATION | 0.72+ |
four | QUANTITY | 0.72+ |
Zebra | TITLE | 0.71+ |
one anomalous event | QUANTITY | 0.71+ |
one of our users | QUANTITY | 0.67+ |
Najati | ORGANIZATION | 0.65+ |
200 | QUANTITY | 0.63+ |
Carl Krupitzer, ThingLogix | AWS Marketplace 2018
>> From the ARIA Resort in Las Vegas, it's theCube. Covering AWS Marketplace. Brought to you by Amazon Web Services. >> Hey, welcome back everybody. Jeff Frick here with theCube. We are at AWS Reinvent 2018. We've got to get a number, I don't know how many people are here, but Vegas is packed. I think it's in six different venues tonight. We're at the ARIA at the hub with the AWS Marketplace & Service Catalog Experience, kicking everything off. We're excited to be joined by a cube alumni. Last we saw him, I think it was at San Francisco Summit 2017. Carl Krupitzer, the CEO of ThingLogix. Carl, great to see you. >> Thank you, it's great to be here. >> So I think you were saying before we turned the cameras on, you came in the early days. This whole piece here was not even as big as the room we're in. >> Right, well we were part of the service launch for IoT, and that was just a few years ago, and it's exponentially bigger. Yeah. Just the expo, this is not even the expo floor, right? And this is bigger than what we had originally. So excited to see it grow. >> So IoT keeps growing, growing, growing. That's all we hear about. In Industrial IoT, we did the Industrial IoT launch with GE back in better days. For them, huge opportunity. Really seeing a lot of momentum. What are some of the observations you're seeing actually out in the marketplace? >> You know, it's interesting. When we first started with the IoT service offering for AWS, there were a lot of proofs of concept going on, a lot of people kind of hacking their way through understanding what IoT is and how it could impact their business. And I think we've gotten to the point now where we're seeing more production roll-outs with very considerate business drivers behind them. >> Right. I think it's funny, you were talking about doing some research for this, and you guys are really specific. I love it. It's not Greenfield projects, you know?
Have specific design objectives, have specific KPIs, have specific kinds of ideas about the functionality you want, before you just kind of jump into the IoT space with two feet. >> Right. Yeah, we strongly discourage companies from just jumping in with both feet, right? IoT is an expensive undertaking, and it has the potential to really change your business for the better if you do it well. >> So where are you seeing the most uptake? Or maybe what surprises you the most in these early days? Kind of industry wise? >> We see a lot of creative use cases starting to come up. Kind of that secondary use of data, and one of the things that we've-- we kind of describe our customers as having a life cycle of IoT, right? They come in to solve a specific problem with us, which is usually a scalability or a go-to-market issue. And then very quickly, they kind of get to the art of the possible. What can we do next? And we see a lot of companies really getting creative with the way they do things. From charging for water using RFID tags in sub-Saharan Africa, to solar power and things like that. It's interesting to see companies that didn't exist a few years ago, and couldn't have existed a few years ago, really kind of getting a lot of traction now. >> Right. It's funny, we did an interview with Zebra Sports a few years ago actually now. And they're the ones with the old RFID technology that put the tags in the shoulder pads for all the NFL players. They're on the refs, they're in the balls. It is such a cool way to apply an old technology to a new application and then really open up this completely different kind of consumer experience in watching sports. When you've got all this additional data about how fast they are running and what their acceleration is. And I think they had one example where they showed a guy on an interception. They had the little line tracker. Before he'd gotten all the way back in, it was a pick six. It's unbelievable now with this data.
>>Our Middle Eastern group is actually doing a pilot right now for camel racing. So we're doing telemetry attached to the camels that are running around the tracks. We're getting speed and heart rate and those sorts of things. So it's everywhere, right? >> I love it. Camel racing. So we're here at the AWS Marketplace Experience. So tell us a little bit about how it's working with AWS. How does the marketplace fit within your entire kind of go-to-market strategy? >> Well, for us, the marketplace is really key to our go-to-market strategy, right? I mean, we're a small company, and our sales team is really kind of focused on helping customers solve problems, and the marketplace really offers us the ability to not have to deal with a lot of the infrastructure work of servicing a customer, right? They can go there, they can self sign up, they can implement the platform, our technology platform, on their own, and then billing is taken off of our plate. So it's not something that we have to have a bunch of resources dedicated to. >> Is there still a big services component though, where you still have to come in to help them, as you say, kind of define nice projects and good KPIs and kind of good places to start? Or do they oftentimes, on the marketplace purchase, just go off to the races on their own? >> So it's a combination. If companies are looking to solve a specific problem with an IoT platform like Foundry, it's definitely a self-implementable thing, and it's becoming more and more self-implementable. Foundry really deploys into a customer's account using CloudFormation, and CloudFormation templates allow us to kind of create these customized solutions that can then be deployed. So we're getting a combination of both. >> Yeah, and I would imagine it's taken you into all kinds of markets that you just don't have the manpower to cover, when you have a distribution partner like AWS.
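Deploying into a customer's account with CloudFormation, as Carl describes, means shipping a template the customer can launch themselves. Below is a deliberately minimal, hypothetical template assembled in Python: an IoT Topic Rule forwarding device telemetry to a Lambda function. The resource names, parameter names, and topic filter are illustrative only, not ThingLogix Foundry's actual artifacts:

```python
import json

# Hypothetical, minimal CloudFormation template for an IoT-style solution:
# an AWS IoT Topic Rule that forwards device telemetry to a Lambda function.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative IoT solution stack (not ThingLogix Foundry)",
    "Parameters": {
        # The customer supplies the ARN of their handler function at launch.
        "HandlerArn": {"Type": "String"},
    },
    "Resources": {
        "TelemetryRule": {
            "Type": "AWS::IoT::TopicRule",
            "Properties": {
                "TopicRulePayload": {
                    "Sql": "SELECT * FROM 'devices/+/telemetry'",
                    "Actions": [{"Lambda": {"FunctionArn": {"Ref": "HandlerArn"}}}],
                }
            },
        }
    },
}

# The template body would be handed to CloudFormation in the customer's own
# account, e.g. via boto3's cloudformation.create_stack(TemplateBody=body, ...).
body = json.dumps(template, indent=2)
print(sorted(template["Resources"]))  # ['TelemetryRule']
```

Because the stack lands in the customer's account, the vendor never touches the customer's infrastructure directly, which is what makes the self-implementable, marketplace-driven model Carl describes possible.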
Yeah, it's made things a lot faster for us, to be able to spin up vertical solutions or specific offerings for a particular large customer. Marketplace can take care of all of the infrastructure on that. >> Alright, so what are you looking for here at Reinvent 2018? You've been coming to these things for a while. I know Andy's tweeting out that his keynote is ready; they had the chicken wing contest, I think, last night at midnight. Too late for me, I didn't make it. (laughs) >> For us, I mean, some of the more exciting things out there are the emergence of serverless, right? You see serverless, all of those AWS services, really taking off. >> Right. >> But there's also Sumerian; the AR/VR space is really kind of exploding. So for us it's really about, this is a great place for us to see the direction AWS is heading, and then make sure that our offering and our technology are layered on top of that appropriately. >> And what are you hearing from your customers about Edge? All the talk about Edge, and there's some FUD, I think, going around about how Edge works with Cloud, and to me it's like two completely separate technology applications, but then you know what you're trying to accomplish. As Edge gets beyond the buzzwords and actually starts to be implemented, what are you kind of seeing, and how's that working together with some of the services that Amazon's got? >> I mean, Edge architectures are an important component of a solution. Especially solutions that require real-time data processing and decision making at the shop floor, or wherever you have it. AWS has taken very big strides toward creating service offerings and products down at the Edge that interface well with the Cloud. So for us, our perspective on it is that the Edge is really a reflection of the business logic and the processes and things that we define and build for a customer.
Because ultimately those Edge processes have to feed the enterprise processes, which is what we really focus on, right? How do we get machine data into enterprise systems? So Edge technology for us is definitely a consideration, and when we build our technology solutions, we look at Edge as a component in that architecture, and we try to meet the needs of the customer's specific use case when it comes to Edge. >> Right. Yeah, it's not killing the Cloud. Who said that? - Right. >> So silly. >> Yeah, it can't kill it. >> It's not slowing down, this thing. >> Right. Alright Carl, well thanks for taking a few minutes, and have a great Reinvent. >> Yeah, thank you. - [Jeff] Hydrate. >> Thanks for your time. Definitely. - They say hydrate. Alright, he's Carl, I'm Jeff. You're watching theCube. We're at the AWS Marketplace & Service Catalog Experience. We're at the Aria in the quads. Stop on by. Thanks for watching, we'll see you next time.
SUMMARY :
Brought to you by Amazon Web Services. We're at the ARIA at the hub with the So I think you were saying and that was just a few years ago, What are some of the observations you're seeing When we first started with the IoT service and you guys are really specific. and it has the potential to really change your business and one of the things that we've-- that put the pads in the shoulder pads that are running around the tracks. How's the the marketplace fit the ability to not have to deal with a lot and it's becoming more and more self implementable. all kinds of markets that you just don't-- all of the infrastructure on that. the chicken wing contest I think, some of the more exciting things that are out there the ARVR's really kind of exploding. and actually starts to be implemented, and the processes and things that we define Yeah it's not killing the Cloud. and have great Reinvent. Yeah thank you. We're at the Aria in the quads.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
Carl Krupitzer | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Carl | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
both feet | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
two feet | QUANTITY | 0.99+ |
ThingLogix | ORGANIZATION | 0.99+ |
ThingLogix | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
EWS | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Edge | TITLE | 0.98+ |
one example | QUANTITY | 0.98+ |
Zebra Sports | ORGANIZATION | 0.98+ |
San Francisco Summit 2017 | EVENT | 0.98+ |
AWS Marketplace | ORGANIZATION | 0.98+ |
last night | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
few years ago | DATE | 0.97+ |
Reinvent 2018 | EVENT | 0.97+ |
six different venues | QUANTITY | 0.96+ |
sub-Saharan Africa | LOCATION | 0.96+ |
Foundry | ORGANIZATION | 0.95+ |
Cloud | TITLE | 0.95+ |
2018 | DATE | 0.94+ |
first | QUANTITY | 0.94+ |
Aria | ORGANIZATION | 0.94+ |
tonight | DATE | 0.93+ |
ARIA Resort | ORGANIZATION | 0.88+ |
pick six | QUANTITY | 0.85+ |
AWS Reinvent 2018 | EVENT | 0.84+ |
two completely separate technology applications | QUANTITY | 0.81+ |
Sumarian | TITLE | 0.72+ |
FID | OTHER | 0.72+ |
Greenfield | LOCATION | 0.7+ |
theCube | ORGANIZATION | 0.67+ |
ARIA | ORGANIZATION | 0.67+ |
Middle Eastern | LOCATION | 0.66+ |
Industrial IoT | EVENT | 0.65+ |
NFL | ORGANIZATION | 0.59+ |
CEO | PERSON | 0.51+ |
theCube | COMMERCIAL_ITEM | 0.43+ |
Marketplace | TITLE | 0.42+ |
ARVR | TITLE | 0.34+ |