Phillip Adams, National Ignition Facility | Splunk .conf18
>> Narrator: Live from Orlando, Florida, it's theCUBE covering .conf18. Brought to you by Splunk.
>> Welcome back to Orlando, everybody, of course home of Disney World. I'm Dave Vellante with Stu Miniman. We're here covering Splunk's .conf18, #conf, sorry, #splunkconf18, I've been fumbling that all week, Stu. Maybe by day two I'll have it down. But this is theCUBE, the leader in live tech coverage. Phillip Adams is here, he's the CTO and lead architect for the National Ignition Facility. Thanks for coming on.
>> Thanks for having me.
>> Super-interesting off-camera conversation. You guys are basically responsible for keeping the country's nuclear arsenal functional and secure. Is that right?
>> Phillip: And effective.
>> And effective. So talk about your mission and your role.
>> So the mission of the National Ignition Facility is to provide data to scientists on how matter behaves under high pressures and high temperatures. What we do is take the 192 laser beams of the world's largest laser, in a facility about the size of three football fields, and fire them into a target the size of a BB that's filled with deuterium and tritium. We have diagnostics around the facility that collect what's going on in that implosion for each experiment, and that data goes off to the scientists.
>> Wow, okay. And what do they do with it? They model it? I mean, that's real data, but then they use it to model the real-world nuclear stockpile?
>> Some time back, if you look on Google Earth over Nevada, you'll see a lot of craters in the desert. We aren't able to do underground nuclear testing anymore, so this replaces that. By having a small burning plasma in a lab, you can simulate what happens when you detonate a nuclear warhead, or, if you're an astrophysicist, understand what happens from the birth of a star to full supernova.
You can understand what happens to materials as they get subjected to, you know, 100 million degrees. (laughs)
>> Dave: For real?
>> Phillip: For real.
>> Well, so now some countries, North Korea in particular, up until recently were still doing underground testing.
>> Correct.
>> Are you able to, I don't know, in some way, shape or form, monitor that? Or maybe there's intelligence that you can't talk about, but do you learn from those? Or do you already know what's going on there because you've been through it decades ago?
>> There are groups at the lab that know things about things, but I'm not at liberty to talk about that. (laughs)
>> Dave: (chuckles) I love that answer.
>> Stu: Okay.
>> Go ahead, Stu.
>> Maybe you could talk a little bit about the importance of data. Your group's part of Lawrence Livermore Labs. I've loved geeking out in my career talking to your team, really smart people, you know, some sizeable budgets, and you build, you know, supercomputers and the like. So how important is data, and how has the role of data been changing the last few years?
>> So, data's very critical to what we do. That whole facility is designed around getting data out. And there are two aspects of data for us: there's data that goes to the scientists, and there's data about the facility itself. It's just amazing the tremendous amount of information we collect about the facility in trying to keep it running. And we have a line out the door and around the corner of scientists trying to get time on the laser, so the last thing IT wants to be is the reason they can't get their experiment off. Some of these experimentalists are waiting three, four years to get their chance to run their experiment, which could be the basis of the scientific career they're working toward.
And so, with a facility that large, 66,000 control points, you can consider it 66,000 IoT points, that's a lot of data. And it's amazing some days that it all works. So, you know, by collecting all that information into a central place, we can figure out which devices are starting to misbehave, which need servicing, and make sure the environment is functional as well as reproducible for the next experiment.
>> Yeah, well, you're a case in point. When you talk about 66,000 devices, you can't have somebody going around manually checking everything. Just the power of IoT, are there predictive things that let you know if something's going to break? How do you do things like break-fix?
>> So we collect a lot of data about those end-point devices. We have been collecting it and pulling that data into Splunk and plotting it over time, all the way from capacitors to motor movements and robot behavior in the facility. You can then start getting trends for what average looks like and when things start deviating from the norm, and send a crew of technicians in on our maintenance days to replace components.
>> Phillip, what are you architecting? Is it the data model, kind of the ingest, the analyze, the dissemination, the infrastructure, the collaboration platform, all of the above? Maybe you could take us inside.
>> I am the lead infrastructure architect, so I have other architects that work with me, for database, network, sys admin, et cetera.
>> Okay, and then so the data, presumably, informs what the infrastructure needs to look like, right, i.e. where the data is, is it centralized, de-centralized, how much is it, et cetera. Is that a fair assertion?
>> I would say the machine defines what the architecture needs to look like. The business processes change around that, you know, in terms of, well, how do you protect and secure a SCADA environment, for example.
And then for the nuances of trying to keep a machine like that continually running, and separated and segregated as need be.
>> Is what?
>> As need be.
>> Yeah, what are the technical challenges of doing that?
>> Definitely, you know, one challenge is that the Department of Energy never really shares data with the public. It's not like NASA, where you take a picture and say, here you go, right. So when you have sensitive information, it's a matter of being able to dissect that out and say, okay, we've now got a community of folks that want to come in remotely, take their data and go. We want to make sure we do that in a secure manner, and in a way that protects scientists working on one experiment from other scientists working on theirs. You know, we want to keep the swim lanes very separated and segregated. Then you get into all of these different components. The general IT environment likes to age things out every five years, but our project is looking at things on a scale of 30 years. So the challenges we deal with on a regular basis are, for example, protocols getting decommissioned. And a protocol change doesn't mean you want to spend the money to redesign that IoT device, especially when you might have a warehouse full of them as back-up.
>> So obviously you're trying to provide access to those who have the right to see it, like you say, swim lanes, get data to the scientists. But you also have a lot of bad guys who would love to get their hands on that data.
>> Phillip: That's right.
>> So how do you use, I presume you use Splunk at least in part in a security context, is that right?
>> Yeah, we have a pretty sharp cyber security team that's always looking at the perimeter and, you know, making sure that we're doing the right things, because there are those of us that are builders and there are those that want to destroy that house of cards. So we're doing everything we can to keep the nation's information safe and secure.
>> So what's the culture like there? I mean, do you have to be like a PhD to work there? Do you have to have like 15 degrees, CS expert? I mean, what's it like? Is it a diverse environment? Describe it to us.
>> It is a very diverse environment. You've got PhDs working with engineers, working with, you know, IT people, working with software developers. I mean, it takes an army to make a machine like this work, and it takes a rigid schedule, a lot of discipline, but also, you know, everybody's involved in making the mission happen. They believe in it strongly. For myself, I've been there 15 years. Some folks have been working at the lab 35 years plus.
>> All right, so you're a Splunk customer, but what brings you to .conf? You know, what do you look to get out of this? Have you been to these before?
>> Ah yes, you know, so at .conf I really enjoy the interactions with other folks that have similar issues and missions to ours, and learning what they have been doing to address those challenges. In addition, staying very close to the technology, figuring out how we can leverage the latest and greatest in our environment, is what's going to make us not only successful but a great payoff for the American taxpayer.
>> So we heard from Doug Merritt this morning that data is messy, and that what you want to be able to do is organize the data when you need to. Is that how you guys are looking at this? Is your data messy? You know, this idea of schema on read.
And what was life like, and you may or may not know this, kind of before Splunk and after Splunk?
>> Before Splunk, you know, we spent a lot of time in traditional data warehousing. We spent a lot of time trying to figure out what content we wanted to go after, doing ETL, and putting those data sets into rows and tables, and that took a lot of time. If there was a change that needed to happen, or data that wasn't on-boarded, you couldn't get the answer you needed. So it took a long time to actually deliver an answer about what was going on in the environment. And today, you know, one of the things that resonated with me is that we are putting data in now, throwing it in, getting it into an index, almost at the speed of thought, and then being able to say, okay, even though I didn't properly on-board that data item, I can do that now, I can grab that, and now I can deliver the answer.
>> Am I correct that, I mean, we talk to a lot of practitioners, they'll tell you that when you go back a few years, their EDW, they would say, was like a snake swallowing a basketball. They were trying to get it to do things it really wasn't designed to do, so they would chase Intel every time Intel came up with a new chip, hey, we need that because we're starved for horsepower. At the same time, big data practitioners would tell you, we didn't throw out our EDW, you know, it has its uses. But it's the right tool for the right job, horses for courses, as they say.
>> Phillip: Correct.
>> Is that a fair assessment?
>> That is exactly where we are. We're very much in a hybrid mode where we're doing both. One thing I wanted to bring up is that the message before was always that, you know, log data was unstructured content. And I think Splunk turned that idea on its head and basically said there is structure in log data, there is no such thing as unstructured content.
And because we're able to raise that information up from all these devices in our facility, and take relational data and marry it together, through DB Connect for example, it really changed the game for us and allowed us to gain a lot more information and insight from our systems.
>> When they talked about the enhancements coming in 7.2, they talked about scale, performance and manageability. You've got quite a bit of scale, and I'm sure performance is pretty important. How's Splunk doing? What are you looking for them to enhance down the road, maybe with some of the things they talked about in Splunk Next, that would make your job easier?
>> One of the things I was really looking forward to, and I see the signs are there for it, is being able to roll buckets off into the cloud. So, you know, the concept of being able to use S3 is great news for us. Another thing we'd like to be able to do is store longer-lived, longer time-series data sets in our environment. And also annotate a little bit more, so that a scientist who sees a certain feature can annotate what that feature meant, so that when you go through the process of actually doing a machine-learning algorithm, or trying to train on a data set, you know what data you're looking for and what that pattern looks like.
>> Why S3? Because you need a simple object store, with the GET/PUT kind of model, and S3 is sort of a de facto standard, is that right?
>> Pretty much, yeah, that and also, you know, if there was a path to, let's say, Glacier, so all the frozen buckets have a place to go. Because, again, you never know how far back you'll have to go for a data set to really start looking for a trend, and that would be key.
>> So are you using Glacier?
>> Phillip: Not very much right now.
>> Yeah, okay.
>> There are certain areas where my counterparts are using AWS quite a bit.
So Lawrence Livermore has a pretty big Splunk implementation out on AWS right now.
>> Yeah, okay, cool. All right, well, Phillip, thank you so much for coming on theCUBE and sharing your knowledge. And last thoughts on .conf18, things you're learning, things you're excited about, anything you can talk about.
>> (laughs) No, this is a great place to meet folks, to network, to also learn different techniques for, you know, data analysis, and it's been great to just be in this community.
>> Dave: Great, well thanks again for coming on. I appreciate it.
>> Thank you.
>> All right, keep it right there, everybody. Stu and I will be right back with our next guest. We're in Orlando, day one of Splunk's .conf18. You're watching theCUBE.