Horizon3.ai Partner Program Expands Internationally

Hello, I'm John Furrier with theCUBE, and welcome to this special presentation of theCUBE and Horizon3.ai. They're announcing a global partner-first approach, expanding their successful pen testing product, NodeZero. You're going to hear from leading experts on their staff and their CEO, positioning themselves for a successful channel distribution expansion internationally in Europe, the Middle East, Africa, and Asia Pacific. In this CUBE special presentation you'll hear about the expanded partner program, giving partners a unique opportunity to offer NodeZero to their customers. Innovation in pen testing is going international with Horizon3.ai. Enjoy the program.

Welcome back everyone to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're here with Jennifer Lee, head of channel sales at Horizon3.ai. Jennifer, welcome to theCUBE. Thanks for coming on.

Great, well, thank you for having me.

So, big news around Horizon3.ai driving a channel-first commitment. You guys are expanding the channel partner program to include all kinds of new rewards, incentives, and training programs to help educate partners and really drive more recurring revenue. Certainly cloud and cloud scale have done that. You've got a great product that fits into that kind of channel model, great services you can wrap around it. Good stuff. So let's get into it: what are you guys doing with this news, and why is it so important?

Yeah, for sure. Like you said, we recently expanded our channel partner program. The driving force behind it was really to align with our channel-first commitment and create awareness around the importance of our partner ecosystem. That's really how we go to market: through the channel.

And a great international focus. I've talked with the CEO about the solution, and he broke down all the action on why it's important on the product side, but why now on the go-to-market change? What's the why behind this big news on the channel?

Yeah, for sure. We are doing this now to align our business strategy, which is built on the concept of enabling our partners to create a high-value, high-margin business on top of our platform. We offer a solution called NodeZero: it provides autonomous pen testing as a service, and it allows organizations to continuously verify their security posture. Our company vision has a tagline that states that our pen testing enables organizations to see themselves through the eyes of an attacker, and we use the attacker's perspective to identify exploitable weaknesses and vulnerabilities. So we created this partner program from the partner's perspective; we built it through the eyes of our partner. We're prioritizing what the partner is looking for, and that will ensure mutual success for us.

Yeah, partners always want to get in front of the customers and bring new stuff to them. Pen tests have traditionally been really expensive, so bringing it down to a service level that's affordable and has flexibility allows a lot of capability, so I imagine people are getting excited by it. I have to ask you about the program: what specifically are you guys doing? Can you share any details around what it means for the partners, what they get, what's in it for them? Can you break down some of the mechanics and mechanisms, or details?
Yep. We're really looking to create business alignment and, like I said, establish mutual success with our partners. There are two key elements we're focused on that we bring to the partners: profit margin expansion is one of them, and the other is a way for our partners to really differentiate themselves and stay relevant in the market.

We've restructured our discount model, really highlighting and maximizing profitability. This includes deal registration: we've created a deal registration program, we've increased discounts for partners who take part in our partner certification trainings, and we've created some other partner incentives that are going to help out there.

We've also recently gone live with our partner portal. It's a consolidated experience for our partners where they can access our sales tools. We really view our partners as an extension of our sales and technical teams, so we've taken all of the training material that we use internally and made it available to our partners through the partner portal. We've got our partner certification information in there, so all the content that's delivered during that training can be found in the portal, along with deal registration, co-branded marketing materials, and pipeline management. This portal gives our partners a one-stop place to go to find all that information.

Then, really quickly, on the second part that I mentioned: our technology is really disruptive to the market. Like you said, autonomous pen testing is still a relatively new topic for security practitioners, and it's proven to be really disruptive. On top of that, we recently found an article mentioning a MarketsandMarkets report that the global pen testing market is really expanding; it's expected to grow to about 2.7 billion dollars by 2027.
So the market's there. The market's expanding, it's growing, and for our partners it really allows them to grow their revenue across their customer base, expand their customer base, and offer this high profit margin while getting in early to market on this disruptive technology.

Big market, a lot of opportunities to make some money, and people love to put more margin on those deals, especially when you can bring a great solution that everyone knows is hard to do, so I think that's going to provide a lot of value. Is there a type of partner that you see emerging, or that you're aligning with? You mentioned the alignment with the partners; I can see how the training and the incentives are all there, and it sounds like it's all going well. Is there a type of partner that's resonating the most, or categories of partners that can take advantage of this?

Yeah, absolutely. We work with all different kinds of partners: our traditional resale partners, systems integrators, and we have a really strong MSP/MSSP program. We've got consulting partners, and with the consulting partners, especially the ones that offer pen test services, we act as a force multiplier, really offering them a profit margin expansion opportunity there. We've got some technology partners that we work with for co-sell opportunities, and then we've got our cloud partners. You mentioned that earlier: we're in AWS Marketplace, so there are our CPPO partners, and we're part of the ISV Accelerate program, so we're doing a lot there with our cloud partners. And of course we go to market with distribution partners as well.

Gotta love the opportunity for more margin expansion; every kind of partner wants to put more gross profit on their deals. Is there a certification involved? I have to ask: do people get certified, or is it just training? Is it self-paced training, is it in person? How are you guys doing the whole training and certification thing, and is it a requirement?

Yeah, absolutely, we do offer a certification program, and it's been very popular. It includes a seller's portion and an operator portion, and it's at no cost to our partners. We operate it virtually, but live; it's not self-paced. We also have in-person sessions as well, and we can customize these for any partner that has a large group of people: we can do one in person or virtually just for that partner.

Well, any kind of incentive opportunities and marketing opportunities? Everyone loves to get the deals just kind of rolling in, leads. From what we can see in our early reporting, this looks like a hot product, price-wise and service-level-wise. What incentives are you guys thinking about, and joint marketing? You mentioned co-sell earlier, and pipeline, so I was kind of honing in on that piece.

Sure, yes. To follow along with our partner certification program, we do incentivize our partners there: if they have a certain number of people certified, their discount increases. So that's part of it. We have our deal registration program that increases discount as well, and then we have some partner incentives that are wrapped around meeting setting and moving opportunities along to proof of value.
Gotta love the education driving value. I have to ask you: you've been around the industry, you've seen the channel relationships out there, you're seeing companies old school and new school. Horizon3.ai is kind of that new school: very cloud-specific, a lot of leverage, as we mentioned, with AWS and all the clouds. Why is the company so hot right now? Why did you join them, and why are people attracted to this company? What's the attraction, what's the vibe, what did you see in this company?

Well, like I said, it's very disruptive and really in high demand right now, and that's because it's new to market, a newer technology. We can collaborate with a manual pen tester, we can allow our customers to run their pen tests with no specialty teams, and we can allow our partners to actually build profitable businesses. They can use our product to increase their services revenue and build their business model around our services.

What's interesting about the pen test thing is that it's very expensive and time consuming, and the people who do them are very talented people who could be working on bigger things for customers. So bringing this into the channel: if you look at the price delta between a pen test and what you guys are offering, that's a huge margin gap between the street price of, say, today's pen test and what you guys offer. When you show people that, do they say it's too good to be true? What are some of the things people say when you show them? Do they scratch their heads, like, come on, what's the catch here?

Right, so the cost savings is huge for us, and then also, like I said, we work as a force multiplier with a pen testing company that offers the services. They can do the annual manual pen tests that may be required around compliance regulations, and then we can act as the continuous verification of their security, which they can run weekly. So it's just an addition to what they're offering already, and an expansion.

So Jennifer, thanks for coming on theCUBE. Really appreciate you coming on and sharing the insights on the channel. What's next? What can we expect from the channel group? What are you thinking?

Right, so we're really looking to expand our channel footprint, very strategically. We've got some big plans for Horizon3.ai.

Awesome. Well, thanks for coming on, really appreciate it. You're watching theCUBE, the leader in high tech enterprise coverage.

Hello, and welcome to theCUBE's special presentation with Horizon3.ai, with Rainer Richter, vice president of EMEA (Europe, Middle East, and Africa) and Asia Pacific (APAC) for Horizon3. Welcome to this special CUBE presentation. Thanks for joining us.

Thank you for the invitation.

So, Horizon3.ai driving global expansion, big international news with a partner-first approach. You guys are expanding internationally; let's get into it. You guys are driving this new expanded partner program to new heights. Tell us about it. What are you seeing in the momentum? Why the expansion? What's all the news about?

Well, I would say in international we have a similar situation to the US.
There is a global shortage of well-educated penetration testers on the one hand; on the other side, we have a rising demand for network and infrastructure security, and with our approach of autonomous penetration testing, I believe we are totally on top of the game. Especially as we are now also starting with an international instance. That means, for example, if a customer in Europe is using our service NodeZero, he will be connected to a NodeZero instance which is located inside the European Union, and therefore he doesn't have to worry about the conflict between the European GDPR regulations and the US CLOUD Act. I would say there we have a very good package for our partners, so that they can provide differentiators to their customers.

You know, we've had great conversations here on theCUBE with the CEO and founder of the company around the leverage of the cloud and how successful that's been for the company, and honestly I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market here, because you've got great cloud scale with the security product, and you guys are having success with great leverage there. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation, is it the economics? Why the momentum?

Well, there are multiple issues. First of all, there is a rising demand for penetration testing, and don't forget that internationally we have a much higher number, or percentage, of SMB and mid-market customers. Most of these customers typically didn't even have a pen test done once a year, because for them pen testing was just too expensive. Now, with our offering together with our partners, we can provide different ways customers can get an autonomous pen test done more than once a year, at even lower cost than they had with a traditional manual pen test.

That is because we have our Consulting Plus package, which is typically for pen testers: they can go out and do a much faster, much quicker pen test at many customers, one after the other, so they can do more pen tests at a lower, more attractive price. On the other side, there are others, or even the same ones, who are providing NodeZero as an MSSP service, so they can go after SMB customers, saying: okay, you only have a couple of hundred IP addresses, no worries, we have the perfect package for you. And then you have, let's say, the mid-market, the companies with thousands and more employees; they might even have an annual subscription, very traditional. But for all of them it's all the same: the customer or the service provider doesn't need a piece of hardware. They only need to install a small Docker container, and that's it. That makes it so smooth to go in and say, okay, Mr. Customer, we just put this virtual attacker into your network, and that's it; all the rest is done. Within three clicks, they can act like a pen tester with 20 years of experience.
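To make that deployment story concrete: "a small Docker container" means the virtual attacker is just a short-lived container on one host inside the network. Below is a minimal sketch of what launching such a runner could look like using the Docker SDK for Python. The image name, token, and network settings are hypothetical placeholders for illustration; the conversation doesn't specify Horizon3.ai's actual artifacts or flags.

```python
# Minimal sketch: launching a pen-test runner container from Python.
# The image name, environment variable, and one-time token below are
# hypothetical illustrations, not Horizon3.ai's actual artifacts.
import docker

client = docker.from_env()  # talks to the local Docker daemon

container = client.containers.run(
    image="example.registry.io/pentest-runner:latest",  # hypothetical image
    detach=True,                       # run in the background
    network_mode="host",               # the "virtual attacker" needs LAN visibility
    environment={
        "OP_TOKEN": "one-time-token-from-portal",  # hypothetical per-test token
    },
    name="virtual-attacker",
)

print(container.status, container.id)
# When the test window is over, tear it down; nothing persists on the host:
# container.stop(); container.remove()
```

The point the deployment story makes is that the attacker lives in a disposable container rather than on dedicated hardware, so tearing it down after the test leaves nothing behind in the customer environment.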
That's going to be very channel friendly and partner friendly, I can almost imagine. So I have to ask you, and thank you for calling out that breakdown and segmentation, that was very helpful for me to understand, but I want to follow up if you don't mind: what type of partners are you seeing the most traction with, and why?

Well, I would say at the beginning you typically have the innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation, so those are the ones who start first. We have a wide range of partners, mostly even managed by the owner of the company, so they immediately understand: okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other services. Or we have those who offer pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test done at a particular customer. Now, with NodeZero, they're totally independent: they can go out and say, okay, Mr. Customer, here's the service, that's it, we turn it on, and within an hour you're up and running.

Totally. And those pen tests are usually expensive and hard to do; now it's right in line with the sales delivery. Pretty interesting for a partner.

Absolutely. But on the other hand, we are not killing the pen testers' business. With NodeZero we're providing something like the foundational work: the ongoing penetration testing of the infrastructure and the operating systems. The pen testers themselves can concentrate in the future on things like application pen testing, for example, those services which we're not touching. So we're not killing the pen tester market; we're just taking over the ongoing, let's say, foundational work, call it that.

Yeah, that was one of my questions. I was going to say there's a lot of interest in this autonomous pen testing, one, because it's expensive to do, and because those skills that are required are in demand and they're expensive. So you kind of cover the entry level and the blockers that are in there. I've seen people say to me that the pen test becomes a blocker for getting things done, so there's been a lot of interest in autonomous pen testing, and for organizations to have that posture. And it's an ongoing issue too, because now you have that continuous thing. So can you explain that particular benefit for an organization: having that continuous verification of the organization's posture?

Yep, certainly. Typically you have to do your patches, you have to bring in new versions of operating systems, of different services and components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero we are telling the customer, or the partner package, which vulnerabilities are executable. Previously they might have had a vulnerability scanner, and this vulnerability scanner brought up hundreds or even thousands of CVEs, but didn't say anything about which of them are really executable. Then you need an expert digging into one CVE after the other, finding out: is it really executable, yes or no? And that is where you need highly paid experts, of which we have a shortage. With NodeZero, we can now say: okay, we tell you exactly which ones you should work on, because those are the ones which are executable. We rank them according to the risk level and how easily they can be used.
And the good thing is, in contrast to a traditional penetration test, they don't have to wait a year for the next pen test to find out if the fixing was effective. They just run the next scan and see: yes, closed, the vulnerability is gone.

The time is really valuable, and if you're doing any devops or cloud native, you're always pushing new things, so ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really interesting solution, and bringing that global scale is going to be a new coverage area for us, for sure. I have to ask you, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth?

Well, at this moment we are concentrating on the countries inside the European Union, plus the United Kingdom. I'm based in the Frankfurt area, so logically we cover more or less the countries just around: the DACH region, Germany, Switzerland, Austria, plus the Netherlands. But we also already have partners in the Nordics, like in Finland and Sweden, and we have partners already in the UK, and it's rapidly growing. We are now starting with some activities in Singapore, for example, and also in the Middle East area. Very importantly, depending on the way business is done, we currently try to concentrate on those countries where we can have at least English as an accepted business language.

Great. Is there any particular region you're having the most success with right now? It sounds like the European Union is kind of the first wave. What's the uptake?

Yes, that's definitely the first wave, and now we're also getting the European instance up and running. It's clearly our commitment to the market, saying: okay, we know there are certain dedicated requirements, and we take care of them. We're just launching it; we're building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware, in a data center in Frankfurt where we have, with DE-CIX by the way, the highest internet interconnection bandwidth on the planet. So we have very short latency to wherever you are on the globe.

That's a great call-out, and a benefit too. I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific?

Well, I would say the benefit for them is clearly that they can talk with customers and offer them penetration testing that those customers didn't even think about before, because penetration testing in the traditional way was simply too expensive for them, too complex, the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now, with this service, you can go in and say: Mr. Customer, we can do a test with you in a couple of minutes. Once we have installed the Docker container, within 10 minutes we have the pen test started. That's it, and then we just wait. And I would say we are seeing so many aha moments now, because on the partner side, when they see NodeZero working for the first time, it's like: wow, that is great. Then they go out to customers and show it, typically at the beginning to mostly friendly customers, and it's: wow, that's great, I need that. The feedback from the partners is that this is a service where I do not have to evangelize the customer. Everybody understands penetration testing; I don't have to describe what it is.
The customer understands immediately: yes, penetration testing, good, I know I should do it, but it was too complex, too expensive. Now, with NodeZero, for example as an MSSP service provided by one of our partners, it's getting easy.

Yeah, it's a great benefit there. I mean, I gotta say, I'm a huge fan of what you guys are doing. I like this continuous automation; that's a major benefit to anyone doing devops or any kind of modern application development. This is just a godsend for them, this is really good. And like you said, the pen testers that are doing it were kind of coming down from their expertise to do things that should have been automated. They get to focus on the bigger ticket items. That's a really big point.

So we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically application testing, which is currently far away from being automated.

Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation. I really appreciate it. Thank you.

You're welcome.

Okay, this is theCUBE special presentation. Check out pen test automation, international expansion, Horizon3.ai, a really innovative solution. In our next segment, Chris Hill, sector head for strategic accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high tech enterprise coverage.

Welcome back everyone to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, sector head for strategic accounts and federal at Horizon3.ai, a great innovative company. Chris, great to see you. Thanks for coming on theCUBE.

Yeah, like I said, great to meet you John, long time listener, first time caller, so excited to be here with you guys.

Yeah, we were talking before camera: you were at Splunk back in 2013, and I think 2012 was our first Splunk .conf, and boy, man, talk about being in the right place at the right time. Now we're at another inflection point, and Splunk continues to be relevant, continuing to have that data driving security and that interplay. And your CEO, former CTO of Splunk as well, at Horizon, who's been on before: really innovative product you guys have. But you know, don't wait for a breach to find out if you're logging the right data; this is the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us, what are some of the challenges that you see where this is relevant for Splunk and Horizon3.ai, as you guys expand NodeZero out internationally?

Yeah, well, so in my role within Splunk, I was working with our most strategic accounts, and I look back to 2013 and think about the sales process, like working with our small customers. It was still very siloed back then: I was selling to an IT team that was using this for IT operations, and we generally would even say, yeah, although we do security, we weren't really designed for it, we're a log management tool. And I'm sure you remember back then, John, we were sort of stepping into the security space, and in the public sector domain that I was in, security was 70 percent of what we did.
When I look back at the digital transformation I was witnessing, from 2019 to today, you look at how the IT team and the security teams have been forced to break down the barriers they used to silo themselves behind and not communicate across. You know, the security guys would be like, oh, this is my box, IT, you're not allowed in. Today you can't get away with that, and of course Splunk has been a huge leader in that space and continues to do innovation across the board. But I think what we're seeing in the space, and I was talking with Patrick Coughlin, the SVP of security markets, about this, is that what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. Splunk itself, as you know, is an ingest engine. The great reason people bought it was that you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything, right? So how do you drive and bring more data in, and most importantly, from a customer perspective, how do you bring the right data in?

If you think about what NodeZero is and what we're doing at Horizon3: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of, oh crud, we've got a pen test coming up, it's going to be six weeks, everyone's going to sit on their hands, call me back in two months, Chris, we'll talk to you then: that's not a real efficient way to test your environment. And shoot, we saw that with Uber this week, right? That's a case where we could have helped.

Oh, just explain the Uber thing, because it was a contractor. Just give a quick highlight of what happened so you can connect the dots.

Yeah, no problem. It was one of those games where they try and test an environment, and what the attacker did was keep on calling the help desk about MFA, saying I need to reset my password, I need to reset my password. Eventually the customer service guy said, okay, I'm resetting it. Once he had it reset and had bypassed the multi-factor authentication, he was able to get in and gain access to a partial part of that network. He then pivoted over to what I would assume was a VMware or some virtual machine that had notes with all of the credentials for logging into various domains, and so within minutes they had access.

And that's the sort of stuff that we do. You know, you think about the cacophony of tools that are out there in a zero trust architecture: I'm going to get a Zscaler, I'm going to have an Okta, I have a Splunk, I've got a SOAR system, I mean, I don't mean to name names, we have a CrowdStrike or a SentinelOne in there. It's a cacophony of things that don't work together; they weren't designed to work together. And we have seen so many times in our business, through our customer support and just working with customers when we do their pen tests, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door. Because remember: the hacker only needs to be right once, the defender needs to be right all the time. That's the challenge.
And that's what I'm really passionate about, what we're doing here at Horizon3. I see this digital transformation migration and security going on, where we're at the tip of the spear. It's why I joined Snehal on this journey, and I'm just super excited about where the path is going, and super excited about the relationship with Splunk. I'll get into more details on some of the specifics of that.

Well, you're nailing it. I mean, we've been doing a lot of things on super cloud and this next gen environment, we're calling it next gen. You're really seeing devops: obviously devsecops has already won, the IT role has moved to the developer, shift left is an indicator of that, one of the many examples. Higher velocity code, software supply chain: you hear these things, and that means it is now in the developer's hands, replaced by the new ops, the data ops teams and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter, and the attacker only needs to be right one time: they get in there once, and then they can hang out, move around, move laterally. Big problem. Okay, so we get that. Now, the challenge for these teams, as they transition organizationally, is how do they figure out what to do. This is the next step. They already have Splunk, so now they're kind of in transition while protecting for a hundred percent ratio of success. So how would you look at that and describe the challenges? What do the teams face with their data, and what action do they take?

So let's use some vernacular that folks will know. If I think about devsecops, we both know what that means: I'm going to build security into the app. It normally also talks about sec devops: how am I building security around the perimeter of what's going on inside my ecosystem, and what is it doing? So if you think about what we're able to do with somebody like Splunk: we can pen test the entire environment from soup to nuts. I'm going to test the endpoints all the way through, I'm going to look for misconfigurations, I'm going to look for exposed credentials, I'm going to look for anything I can in the environment, and again, I'm going to do it at light speed.

And what we're doing for that sec devops space is this: did you detect that we were in your environment? Did we alert Splunk, or the SIEM, that there's someone in the environment laterally moving around? More importantly, did they log us in their environment, and when they detected that log, did it trigger an alert on us? And then finally, most importantly for every CISO out there: did they stop us?

So that's how we do this, and speaking with Snehal before, we've come up with what we call find, fix, verify. What we do when we go in is act as the attacker. We act in a production environment, so we're a passive attacker, and we go in uncredentialed, with no agents, but we do use an assumed breach model, which means we're going to put a Docker container in your environment, and then we're going to fingerprint the environment: we're going to go out and do an asset survey. Now, that's not something that Splunk does super well. So: can Splunk see all the assets, do the same assets marry up? We're going to log all that data and load it into the Splunk SIEM or logging tools, just to have it in the enterprise; that's an immediate feature add that they've got.
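Before moving on to the fix step, the did-we-detect, did-we-log, did-we-alert questions above can be made concrete with a small check against Splunk's REST search API. Here is a minimal sketch, assuming the pen test ran from a known source IP; the host, credentials, index, and field names are hypothetical placeholders, not values prescribed by Splunk or Horizon3.ai.

```python
# Sketch: ask Splunk whether activity from the pen-test window was logged.
# Host, credentials, index, and field names below are hypothetical.
import requests

SPLUNK = "https://splunk.example.com:8089"   # Splunk management port
AUTH = ("svc_pentest", "change-me")          # or use a bearer token instead

# Count events from the attacker container's source IP during the test window.
query = (
    'search index=main src_ip="10.0.5.23" earliest=-24h '
    '| stats count by sourcetype'
)

resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",  # streaming export endpoint
    auth=AUTH,
    data={"search": query, "output_mode": "json"},
    verify=False,   # lab setting; use real certificates in production
    timeout=60,
)
resp.raise_for_status()

# The export endpoint streams one JSON object per result line;
# zero results means the attack traffic never made it into the SIEM.
hits = [line for line in resp.text.splitlines() if line.strip()]
print("logged" if hits else "blind spot: pen-test traffic was never logged")
```

The same idea extends to the alert and stop questions: search whatever index holds notable events for entries tied to the test window, and treat zero results as a detection gap to fix.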
Then we've got the fix. Once we've completed our pen test, we're going to generate a report, and we can talk about these in a little bit, but the reports will show an executive summary, the assets that we found (which is your asset discovery aspect), and a fix report. The fix report, I think, is probably the most important one: it will go down and identify what we did, how we did it, and then how to fix it. From that, the pen tester or the organization should fix those issues, then go back and run another test, and validate, like a change detection environment, that those fixes took place. And Snehal, when he was the CTO of JSOC, shared with me a number of times that there would be 15 more items on next week's punch sheet that they didn't know about, and it had to do with how they were prioritizing the CVEs, because they would take all CVEs as either critical or non-critical. We are able to create context in that environment, and that feeds better information into Splunk, which brings up the efficiency for Splunk and specifically for the teams out there.

By the way, the burnout thing is real. This whole "I just finished my list and I got 15 more," the list just keeps growing. So how does NodeZero specifically help Splunk teams be more efficient? That's the question I want to get at, because this seems like a very scalable way for Splunk customers and service teams to be more efficient.

So today, in our early interactions with customers, we've seen five things. I'll start with identifying the blind spots, which is kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop NodeZero? To put that in more layman's terms: we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people and just wants to know where am I exposed. By creating and generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and bring them in.

Number two is: how do we prioritize those logs? How do we create visibility into the logs that have critical impact? As I mentioned earlier, not all CVEs are high impact, and also not all are low. If you daisy chain a bunch of low CVEs together, boom, I've got a mission-critical attack path that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it. That would be very bad.

Third would be verifying that you have all of the hosts. One of the things Splunk's not particularly great at, and they'll say so themselves, is asset discovery. So: what assets do we see, and what are they logging from them?

Fourth, for every event that they're able to identify, one of the cool things we can do is create this low-code, no-code environment, where Splunk customers can use Splunk SOAR to actually triage and prioritize events and route them, to optimize the SOC team's time to triage any given event, obviously reducing MTTR.

And then finally, I think one of the neatest things you'll see us develop is our ability to build glass tables. Behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on it with a glass table, which is very familiar to the community. In the not-too-distant future we're going to have the ability to let people search on those IOCs, and if people aren't familiar with the term, an IOC is an indicator of compromise; it's a vector that we want to drill into. And of course, who's better at drilling into data than Splunk?
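The daisy-chain point in that list is the heart of the prioritization argument, and it reduces to a small graph computation: individually low findings become critical when they connect along a path to a crown-jewel asset. Here is a toy sketch of that idea using the networkx graph library; the hosts, weaknesses, and scoring rule are illustrative assumptions, not Horizon3.ai's actual ranking logic.

```python
# Toy sketch of the "daisy chain" idea: individually low findings escalate
# when they connect into a path that reaches a critical asset.
# Illustrative logic only, not Horizon3.ai's actual ranking algorithm.
import networkx as nx

g = nx.DiGraph()
# Each edge means "attacker can move from A to B using this weakness"
# and each weakness, taken alone, would be rated low.
g.add_edge("printer", "admin-workstation", weakness="creds on print server")
g.add_edge("admin-workstation", "file-server", weakness="reused local admin")
g.add_edge("file-server", "domain-controller", weakness="cleartext passwords file")

CROWN_JEWELS = {"domain-controller"}

def rank(graph, entry="printer"):
    """Rate an entry point by whether low findings chain to a crown jewel."""
    for target in CROWN_JEWELS:
        if graph.has_node(entry) and nx.has_path(graph, entry, target):
            chain = nx.shortest_path(graph, entry, target)
            return "critical", chain   # chained lows == critical attack path
    return "low", [entry]

severity, path = rank(g)
print(severity, "->", " -> ".join(path))
# critical -> printer -> admin-workstation -> file-server -> domain-controller
```

A scanner scores each edge in isolation and reports three lows; a path-based view reports one critical chain, which is the context the conversation says gets fed into Splunk.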
Yeah, this is an awesome synergy. I can see a Splunk customer going, man, this just gives me so much more capability and actionability, and also real understanding. And I think this is what I want to dig into, if you don't mind: understanding that critical impact. That's kind of where I see this coming in. You've got the data, data ingest, and data's data, but the question is what not to log, and where things are misconfigured. These are critical questions, so can you talk about what it means to understand critical impact?

Yeah. So going back to the things I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy chain them together and suddenly it's, oh, this is high now. But then there's the other impact. If you're a Splunk customer, and I had several of them, I had one customer with terabytes of McAfee data being brought in, and there was a lot of other data they probably also wanted to bring in, but they could only afford, or wanted to do, certain data sets, and they didn't know how to prioritize or filter those data sets. So we provide that opportunity to say: hey, these are the critical ones to bring in, but there are also ones that you don't necessarily need to bring in, because a low CVE in this case really does mean low. An iLO server would be one, or the print server, where your admin credentials are sitting on a printer; there will be credentials on that, and that's something a hacker might go in to look at. So although the CVE on it is low, if you daisy chain it with somebody that's able to get into it, you might say, ah, that's high, and we would then potentially rank it, using our AI logic, as a moderate and put it on the scale. We prioritize those, versus all of these scanners that are just going to give you a bunch of CVEs and say good luck translating that.
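Underneath that "these are the critical ones to bring in" answer is a plumbing step: pushing only the prioritized, context-enriched findings into Splunk. A minimal sketch over Splunk's HTTP Event Collector (HEC), a standard Splunk ingestion API, is shown below; the host, token, index, sourcetype, and finding fields are hypothetical placeholders.

```python
# Sketch: forward only context-enriched, high-priority findings into Splunk
# via the HTTP Event Collector (HEC). Host, token, and fields are hypothetical.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"   # hypothetical HEC token

findings = [
    {"host": "print-01", "cve": "CVE-2021-0000", "base": "low",
     "chained": True, "context": "harvested admin credential reused on DC"},
    {"host": "web-07", "cve": "CVE-2022-1111", "base": "low",
     "chained": False, "context": "no onward path found"},
]

# Send only findings that chain into a real attack path; skip the noise.
for f in (x for x in findings if x["chained"]):
    event = {
        "event": f,
        "sourcetype": "pentest:finding",   # hypothetical sourcetype
        "index": "security",               # hypothetical index
    }
    r = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(event),
        verify=False,   # lab setting only; use real certificates in production
        timeout=30,
    )
    r.raise_for_status()
```

Filtering at this step is what keeps the "terabytes of McAfee data" problem from repeating itself: the SIEM gets the chained, exploitable findings rather than every raw scanner hit.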
If I can, and tell me if I'm wrong, that kind of speaks to that whole lateral movement challenge, right? The print server is a great example: it looks stupid, low end, who's going to want to deal with the print server? Oh, but it's connected into a critical system; there's a path. Is that kind of what you're getting at?

Yeah. I use "daisy chain," I think that came from the community, but it's just lateral movement. Those low-level, low-criticality lateral movements are exactly where the hackers are getting in. That's the beautiful thing about the Uber example: who would have thought? I've got my multi-factor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're fallible, right? The reality is, once they were in the environment, they could have protected themselves by running enough pen tests to know that they had certain exposed credentials. That would have stopped the breach, and they had not done that in their environment. And I'm not poking at them.

Yeah, but it's an interesting trend, though. I mean, it's obvious that sometimes those low-end items are also not protected well, so they're easy to get at from a hacker standpoint, but also the people in charge of them can be phished easily, or spearphished, because they're not paying attention, because they don't have to. No one ever told them, hey, be careful.

Yeah. For the community that I came from, John, that's exactly how they would do it: they would meet you at an international event, introduce themselves as a graduate student (these are nation-state actors), and say, would you mind reviewing my thesis on such and such? I was at Adobe at the time I was working on this. Instead of having to go get the PDF, the target opens the PDF, and whatever that payload was, it launches. I don't know if you remember, back in the 2008 time frame there were a lot of issues around IP being stolen from the United States by nation states, and that's exactly how they did it.

And, John, or the LinkedIn one: hey, we want to hire you, double the salary. Oh, I'm going to click on that for sure.

Yeah, right, exactly. The one thing I would say to you is this: because I think we did 10,000 pen tests last year, and it's probably over that now, we have these sort of top 10 ways that we find people coming into the environment, and the funniest thing is that only one of them is a CVE-related vulnerability. It's like two percent of the attacks are occurring through the CVEs, but there's all that attention spent on those, and very little attention spent on this pen testing side, this continuous threat monitoring space and this vulnerability space, where I think we play such an important role. I'm so excited to be a part of the tip of the spear on this one.

Yeah, I'm old enough to know the movie Sneakers, which I loved: professional hackers testing, always testing the environment. I love this. I've got to ask you as we kind of wrap up here, Chris, if you don't mind: the benefits to professional services from this alliance. Big news, Splunk and you guys working well together, we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon3.ai alliance?

So I think, for both of our partners, as we bring these guys together, and many of them are already the same partner: first off, the licensing model is probably one of the key areas where we really excel. If you're an end user, you can buy for the enterprise by the number of IP addresses you're using. But if you're a partner working with this, there are solution ways that you can go in, and we'll license to MSPs, with a business model for what that looks like for MSPs. But the unique thing that we do here is the Consulting Plus license. The Consulting Plus license allows somebody small to midsized, up to some very large, Fortune 100 consulting firms, to buy into a license where they can have unlimited access to as many IPs as they want,
but you can only run one test at a time. As you can imagine, when we're going in and hacking passwords and checking and decrypting hashes, that can take a while, but for the right customer it's a perfect tool. So I'm so excited about our ability to go to market with our partners, so that we understand not just how to sell to them or sell through them, but how to sell with them as a good vendor partner. I think that's one thing that we've done a really good job of building as we bring this to market.

Yeah, and I think Splunk has also had great success in how they've enabled partners and professional services.

Absolutely. You know, the services that layer on top of Splunk are multi-fold, tons of great benefits, so you guys vector right into that, ride that wave without friction. And the cool thing is that one of our reports, which can be totally customized with someone else's logo, we're going to generate for them. I used to work at another organization, it wasn't Splunk, where we did pen testing for customers, and my pen testers would come on site, do the engagement, and leave. Then somewhere else someone would go, oh shoot, we got another sector that was breached, and they'd call you back four weeks later. By August, our entire pen testing team would be sold out, and it would be like, well, maybe in March, and they're like, no, no, I've got a breach now. And then, when they do go in, they go through, do the pen test, hand over a PDF, pat them on the back, and say, there's where your problems are, you need to fix them. The reality is that what we're going to generate, completely autonomously with no human interaction, is all the permutations of anything we found, and the fix for those permutations. Then, once you've fixed everything, you just go back and run another pen test. For what people pay for one pen test, they can have a tool that does it after every Patch Tuesday: patch on Tuesday, pen test on Wednesday, and then triage throughout the week, green, yellow, red. I want to see the colors; show me green, green is good, right? Not red.

And what CIO doesn't want that dashboard, right?

It's exactly it, and we can help bring it. I'm really excited about helping drive this with the Splunk team, because they get it. They understand that it's the green, yellow, red dashboard, and how do we help them find more green, so that the other guys are in red.

Yeah, and get in the data and do the right thing, be efficient with how you use the data, know what to look at. So many things to pay attention to, the combination of both, and then the go-to-market strategy. Real brilliant. Congratulations, Chris. Thanks for coming on and sharing this news, with the detail around Splunk in action around the alliance.

Thanks for sharing, John, my pleasure. Look forward to seeing you soon.

All right, great. We'll follow up and do another segment on devops and IT and security teams as the new ops, and super cloud, a bunch of other stuff. So thanks for coming on. In our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high tech enterprise coverage.

Yeah, the partner program for us has been fantastic. I think, prior to that, most organizations, most partners, most MSSPs might not necessarily have a bench at all
for penetration testing. Maybe they subcontract this work out, or maybe they do it themselves, but trying to staff that kind of position can be incredibly difficult. For us, this was a differentiator: a new partnership that allowed us to not only perform services for our customers, but also provide a product by which they can do it themselves. So we work with our customers in a variety of ways. Some of them want more routine testing and perform it themselves, but we're also a certified service provider of Horizon3, able to perform penetration tests, help review the data, and provide color and analysis for our customers in a broader sense: not just the black-and-white elements of what's critical, what's high, what's medium, what's low, and what you need to fix, but whether there are systemic issues. This has allowed us to onboard new customers and to migrate some penetration testing services to us from competitors in the marketplace. But ultimately this is occurring because the product and the outcome are special; they're unique and they're effective. Our customers like what they're seeing, they like the routineness of it, and many of them, again, like doing this themselves, being able to pen test parts of their networks. And then there are the new use cases: I'm a large organization, I have eight to ten acquisitions per year; wouldn't it be great to have a tool to perform a penetration test, both internal and external, of an acquisition before we integrate the two companies and maybe bring on some risk? It's a very effective partnership, one that has really taken our engineers and our account executives by storm. This is a partnership that's been very valuable to us.

A key part of the value and business model at Horizon3 is enabling partners to leverage NodeZero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and that 95 percent of our revenue next year will be originated by partners, so a key to that strategy is making us an integral part of your business models as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. So let's talk about that in a little more detail.

First, if you have a pen test consulting business, take Deloitte as an example: what was six weeks of human labor at Deloitte per pen test has been cut down to four days of labor, using NodeZero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and assess the entire organization, with all of those details served up to the human to look at, understand, and determine where to probe deeper. So what you see in that pen test consulting business is that NodeZero becomes a force multiplier, where those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that directly leads to profit margin expansion for the pen testing business itself, because NodeZero is a force multiplier.

The second business model: if you're an MSSP, you're already making money providing defensive cyber security operations for a large volume of customers, so what they do is license NodeZero and use us as an upsell to their MSSP business, to start to deliver
continuous red teaming, continuous verification, or purple teaming as a service. In that particular business model they've got an additional line of revenue, where they can increase the spend of their existing customers by bolting on NodeZero as a purple-team-as-a-service offering.

The third business model, or customer type, is the IT services provider. As an IT services provider, you make money installing and configuring security products like Splunk or CrowdStrike or Humio; you also make money reselling those products; and you make money generating follow-on services to continue to harden your customer environments. What those IT service providers will do is use us to verify that they've installed Splunk correctly, prove to their customer that Splunk, or CrowdStrike, was installed correctly using our results, and then use our results to drive follow-on services and revenue.

And then finally we've got the value-added reseller, which is just a straight-up reseller. Because of how fast our sales cycles are, these VARs are typically able to go from cold email to deal close in six to eight weeks. At Horizon3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales and post-sales activity. As a result, we're able to have a small number of sellers driving a lot of revenue and volume for us, and the same thing applies to VARs: there isn't a lot of effort to sell the product or prove its value, so VARs are able to sell a lot more Horizon3 NodeZero product without having to build up a huge specialist sales organization.

So what I'm going to do is talk through scenario three here, the IT service provider, and just how powerful NodeZero can be in driving additional revenue. Think of it this way: for every one dollar of NodeZero license purchased by the IT service provider to do their business, it'll generate ten dollars of additional revenue for that partner. In this example, Kidney Group uses NodeZero to verify that they have installed and deployed Splunk correctly. Kidney Group is a Splunk partner; they sell IT services to install, configure, deploy, and maintain Splunk. As they deploy Splunk, they're going to use NodeZero to attack the environment and make sure that the right logs, alerts, and monitoring are being handled within the Splunk deployment. It's a way of doing QA, of verifying that Splunk has been configured correctly, and that's going to be used internally by Kidney Group to prove the quality of the services they've just delivered. Then they're going to show, and leave behind, that NodeZero report with their client, and that creates a resell opportunity for Kidney Group to resell NodeZero to their client, because the client is seeing the reports and the results and saying, wow, this is pretty amazing. Those reports can be co-branded: it's a pen testing report branded with Kidney Group, but it says "powered by Horizon3" under it. From there, Kidney Group is able to take the fix actions report that's automatically generated with every pen test through NodeZero and use it as the starting point for a statement of work to sell follow-on services to fix all of the problems NodeZero identified: fixing LLMNR misconfigurations, fixing or patching VMware, updating credential policies, and so on. So what happens is: NodeZero has found a bunch of problems, and the client
often lacks the capacity to fix them, so Kidney Group can use that lack of capacity at the client as an opportunity to sell follow-on services. And finally, based on the findings from NodeZero, Kidney Group can look at that report and say to the customer: you know, customer, if you bought CrowdStrike, you'd be able to prevent NodeZero from attacking and succeeding in the way that it did; or if you bought Humio, or Palo Alto Networks, or some privileged access management solution, because of what NodeZero was able to do with credential harvesting and attacks. As a result, Kidney Group is able to resell other security products within their portfolio (CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on) based on the gaps that were identified by NodeZero in that pen test. And that creates another feedback loop, where Kidney Group will then go use NodeZero to verify that the CrowdStrike product has actually been installed and configured correctly. This becomes the cycle: using NodeZero to verify a deployment, using that verification to drive a bunch of follow-on services and resell opportunities, which then further drives more usage of the product.

Now, the way that we license is a usage-based licensing model, so the partner grows their NodeZero Consulting Plus license as they grow their business. For example, if you're Kidney Group, then in week one you're going to use NodeZero to verify your Splunk install; in week two, if you have a pen testing business, you're going to use NodeZero as a force multiplier for your pen testing client opportunity; and then, if you have an MSSP business, in week three you're going to use NodeZero to execute a purple team MSSP offering for your clients. And not necessarily just a Kidney Group: if you're a Deloitte or an AT&T, these larger companies with multiple lines of business, or if you're an Optiv, for instance, all you have to do is buy one Consulting Plus license, and you're going to be able to run as many pen tests as you want, sequentially. So you can buy a single license and use that one license to meet your week-one client commitments, then meet your week two, and then meet your week three. As you grow your business, you start to run multiple pen tests concurrently: if in week one you've got to verify a Splunk install, and you've got to run a pen test, and you've got to do a purple team opportunity, you simply expand the number of Consulting Plus licenses from one license to three licenses. And so, as you systematically grow your business, you're able to grow your NodeZero capacity with you, giving you predictable COGS, predictable margins, and, once again, a 10x additional revenue opportunity for that investment in the NodeZero Consulting Plus license.
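The licensing math in that walkthrough reduces to two numbers: licenses needed equals peak concurrent tests, and follow-on revenue is claimed at roughly ten times the license spend. A back-of-the-envelope sketch, with a hypothetical license price, just to make the model explicit:

```python
# Back-of-the-envelope model of the Consulting Plus licensing math described
# above. The license price is a hypothetical placeholder; the 10x follow-on
# multiplier is the figure quoted in the talk itself.

LICENSE_PRICE = 50_000   # hypothetical annual price per Consulting Plus license
FOLLOW_ON_MULT = 10      # "$1 of NodeZero license ... $10 of additional revenue"

# Peak number of pen tests that must run at the same time, by week.
# One license runs unlimited tests sequentially; concurrency adds licenses.
concurrent_tests = {"week1": 1, "week2": 2, "week3": 3}

licenses_needed = max(concurrent_tests.values())      # -> 3
license_cost = licenses_needed * LICENSE_PRICE        # -> 150,000
follow_on_revenue = license_cost * FOLLOW_ON_MULT     # -> 1,500,000

print(f"licenses: {licenses_needed}, cost: ${license_cost:,}, "
      f"follow-on revenue: ${follow_on_revenue:,}")
```

The design point of the model is that sequential work never forces a second license; only growing concurrency does, which is what makes the cost of goods predictable as a partner scales.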
My name is Snehal, and I'm the co-founder and CEO here at Horizon 3. I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had when I was a CIO in banking, the CTO at Splunk, and serving within the Department of Defense is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are the security tools I've paid millions of dollars for actually working together to defend me? The answer is: I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know. I've got to wait for the bad guys to show up. So the challenge I had was: how do we proactively verify our security posture?

I tried a variety of techniques. The first was the use of vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable. I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine. If you've got a thousand machines in your environment, or more, a vulnerability scanner will tell you that you have a problem on machine one and, separately, a problem on machine two. But what they can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. And what attackers do in their tactics is chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines.

So to address the attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is that when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest and verify that you've actually fixed the problem. So you end up with pen test reports that are incomplete snapshots, quickly going stale.

To mitigate that problem, I tried using breach and attack simulation tools, and the struggle with those tools was: one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for and also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which was the majority of my attack surface. So that's why we went off to start Horizon 3.
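To make the low-plus-low chaining point above concrete, here is a minimal, illustrative Python sketch. The machines, findings, and the "two lows compose into a critical" rule are invented for illustration and are not how node zero actually scores paths.

```python
# Two findings that each score "low" in isolation...
findings = {
    "machine1": ("SMB signing not required", "low"),
    "machine2": ("reused local admin credential", "low"),
}

# ...but machine1's weakness lets an attacker harvest a credential that
# also works on machine2, which holds domain admin tokens.
pivots = {("machine1", "machine2"): "harvested credential"}

def severity_of_path(path):
    # Toy rule: a multi-hop path ending at the high-value host is critical,
    # regardless of the per-machine scores a scanner would assign.
    if len(path) > 1 and path[-1] == "machine2":
        return "critical"
    return findings[path[0]][1]

print(severity_of_path(["machine1"]))              # low
print(severity_of_path(["machine1", "machine2"]))  # critical
```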
Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was: how do we do infrastructure security testing at scale, by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer in just three clicks? The whole idea is that we enable these fixers, the blue team, to run node zero, our pen testing product, to quickly find problems in their environment. The blue team will then go off and fix the issues that were found, and they can quickly rerun the attack to verify that they fixed the problem. And the whole idea is delivering this without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring the use of external third-party consulting services or professional services: self-service pen testing to quickly drive find, fix, verify.

There are three primary use cases that our customers use us for. The first is the SOC manager who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM; to verify that their managed security services provider is able to quickly detect and respond to an attack, and to hold them accountable for their SLAs; or that the SOC understands how to quickly detect and respond, and to measure and verify that; or that the variety of tools in their stack (most organizations have 130-plus cybersecurity tools, none of which are designed to work together) are actually working together.

The second primary use case is proactively hardening and verifying your systems. This is when the IT admin or the network engineer is able to run self-service pen tests to verify that their Cisco environment is installed, hardened, and configured correctly, or that their credential policies are set up right, or that their vCenter or WebSphere or Kubernetes environments are actually designed to be secure. What this allows IT admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month, and you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger pen tests every time there's a change in your environment (a sketch of that trigger pattern follows below).

The third primary use case is for those organizations lucky enough to have their own internal red team. They'll use node zero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. So those are the three primary use cases, and what we'll do is zoom into the find-fix-verify loop, because what I've found in my experience is that find, fix, verify is the future operating model for cybersecurity organizations.

What I mean here is: in the find, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't only stay in one place; they will find ways to chain together a perimeter breach and a credential from your on-prem environment to gain access to your cloud, or some other permutation. And the third part of continuous pen testing is that attackers don't focus on critical vulnerabilities anymore. They know we've built vulnerability management programs to reduce those vulnerabilities, so attackers have adapted: what they do instead is chain together misconfigurations in your infrastructure, software, and applications with dangerous product defaults, with exploitable vulnerabilities, and with credentials collected through a mix of techniques, at scale.
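As referenced above, here is a deliberately hypothetical sketch of wiring a pen test into a deployment pipeline. The endpoint, payload fields, and token are invented placeholders standing in for whatever trigger interface a pen testing platform exposes; they are not a real Horizon3.ai or CI API.

```python
import os
import requests

# Hypothetical post-deploy hook: after a change ships, ask the pen test
# platform to re-attack the affected scope. Endpoint and fields are invented.
PENTEST_API = "https://pentest.example.com/api/v1/trigger"

def trigger_pentest(scope: str, reason: str) -> None:
    resp = requests.post(
        PENTEST_API,
        headers={"Authorization": f"Bearer {os.environ['PENTEST_TOKEN']}"},
        json={"scope": scope, "reason": reason},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # e.g. called as the last step of a deployment job
    trigger_pentest(scope="10.0.12.0/24", reason="deploy build 2041")
```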
Once you've found those problems, the next question is: what do you do about it? Well, you want to prioritize fixing the problems that are actually exploitable in your environment and that truly matter, meaning they're going to lead to domain compromise, or domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown jewels data is exposed to. Where is your crown jewels data? Is it in the cloud? Is it on-prem? Has it been copied to a share drive you weren't aware of? If a domain user were compromised, could they access that crown jewels data? You want to be able to use the attacker's perspective to secure the critical data in your infrastructure. And finally, as you fix these problems, you want to quickly remediate and retest to confirm you've actually fixed the issue, and this find-fix-verify cycle becomes the accelerator that drives purple team culture.

The third part here is verify. What you want to be able to do in the verify step is verify that your security tools, processes, and people can effectively detect and respond to a breach. You want to integrate that into your detection engineering processes, so that you know you're catching with the right security rules, or that you've deployed the right configurations. You also want to make sure your environment is adhering to best practices around systems hardening and cyber resilience. And finally, you want to be able to prove your security posture over time to your board, to your leadership, and to your regulators.

So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example, using node zero and autonomous pen testing. What an attacker will do is find a way to break through the perimeter. In this example, it's very easy to misconfigure Kubernetes in a way that allows an attacker to gain remote code execution in your on-prem Kubernetes environment and break through the perimeter. From there, the attacker is going to conduct network reconnaissance and then find ways to gain code execution on other machines in the environment. As they get code execution, they start to dump credentials, collect a bunch of NTLM hashes, crack those hashes using open source and dark-web-available data as part of those attacks, and then reuse those credentials to log in and laterally maneuver throughout the environment. As they laterally maneuver, they can reuse those credentials, use credential spraying techniques, and so on, to compromise your business email or to log in as admin into your cloud. This is a very common attack, and rarely is a CVE actually needed to execute it; often it's just a misconfiguration in Kubernetes, a bad credential or password policy, combined with bad practices of credential reuse across the organization.

Here's another example of an internal pen test, and this is from an actual customer. They had 5,000 hosts within their environment, they had EDR and UBA tools installed, and they initiated an internal pen test from a single machine. From that single initial access point, node zero enumerated the network, conducted reconnaissance, and found that five thousand hosts were accessible.
What node zero does under the covers is organize all of that reconnaissance data into a knowledge graph that we call the cyber terrain map, and that cyber terrain map becomes the key data structure we use to efficiently maneuver through, attack, and compromise your environment. So node zero will try to find ways to get code execution, reuse credentials, and so on. In this customer example, they had Fortinet installed as their EDR, but node zero was still able to get code execution on a Windows machine. From there, it successfully dumped credentials, including sensitive credentials, from the LSASS process on the Windows box, and then reused those credentials to log in as domain admin on the network. And once an attacker becomes domain admin, they have the keys to the kingdom; they can do anything they want.

So what happened here? Well, it turns out Fortinet was misconfigured on three out of 5,000 machines: bad automation. The customer had no idea this had happened; they would have had to wait for an attacker to show up to realize it was misconfigured. The second question is: why didn't Fortinet stop the credential pivot and the lateral movement? It turned out the customer hadn't bought the right modules or turned on the right services within that particular product. And we see this not only with Fortinet but with Trend Micro and all the other defensive tools, where it's very easy to miss a checkbox in the configuration that would do things like prevent credential dumping.

The next story I'll tell you is: attackers don't have to hack in, they log in. Another infrastructure pen test. A typical technique attackers use is man-in-the-middle attacks that collect hashes. In this case, an attacker will leverage a tool or technique called Responder to collect NTLM hashes that are being passed around the network. There is a variety of reasons why these hashes get passed around, and it's a pretty common misconfiguration. As an attacker collects those hashes, they start to apply techniques to crack them. They'll pass the hash, and from there they'll use open source intelligence, common password structures and patterns, and other techniques to try to crack those hashes into cleartext passwords. So here, node zero automatically collected hashes, automatically passed the hashes and cracked those credentials, and from there started to take the domain user IDs and passwords it had collected and try to access different services and systems in the enterprise. In this case, node zero was able to successfully gain access to the Office 365 email environment, because three employees didn't have MFA configured. So now node zero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques.

What's especially insightful here is that 80 percent of the hashes collected in this pen test were cracked in 15 minutes or less. Eighty percent. Twenty-six percent of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other interesting thing is that 10 percent of service accounts had a user ID that was the same as the password: VMware admin / VMware admin, WebSphere admin / WebSphere admin, and so on and so forth. So attackers don't have to hack in; they just log in with credentials that they've collected.
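A tiny illustration of the two weak-credential patterns just described (initials plus four digits, and user ID equal to password). The accounts below are made up, and real cracking works against hashes rather than plaintext; this just shows the pattern check itself.

```python
import re

# First initial + last initial + four digits, e.g. "js4821".
PATTERN = re.compile(r"^[a-z]{2}\d{4}$", re.IGNORECASE)

accounts = [
    {"user": "jsmith", "password": "js4821"},
    {"user": "vmware_admin", "password": "vmware_admin"},  # user ID == password
]

for acct in accounts:
    if PATTERN.match(acct["password"]):
        print(f"{acct['user']}: initials-plus-digits pattern")
    if acct["password"].lower() == acct["user"].lower():
        print(f"{acct['user']}: password equals user ID")
```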
The next story is becoming AWS admin. In this example, once again an internal pen test, node zero gets initial access and discovers 2,000 hosts are network reachable from that environment. It fingerprints and organizes all of that data into a cyber terrain map, and from there it fingerprints that HP iLO, the integrated lights-out service, is running on a subset of hosts. HP iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; as a result, attackers know this and immediately go after those types of services. In this case, that iLO service was exploitable, and we were able to get code execution on it. iLO stores all the user IDs and passwords in cleartext in a particular set of processes, so once we gained code execution, we were able to dump all of the credentials and then laterally maneuver to log in to the Windows box next door as admin. On that admin box, we were able to gain access to the share drives, and we found a credentials file saved on a share drive. It turned out that was the AWS admin credentials file, giving us full admin authority over their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service, and every step thereafter was a valid login in the environment. So what do you do? Step one, patch the server. Step two, delete the credentials file from the share drive. And step three, get better instrumentation on privileged access users and logins.

The final story I'll tell is a typical pattern we see across the board that combines the various techniques I've described. An attacker will use open source intelligence to find all of the employees that work at your company. From there, they'll look those employees up in dark web breach databases and other sources of information, and then use that as a starting point to password spray and compromise a domain user. All it takes is one employee to reuse a breached password for their corporate email, or a single employee to have a weak, easily guessable password. All it takes is one. And once the attacker gains domain user access, in most shops the domain user is also the local admin on their laptop. Once you're local admin, you can dump SAM and get local admin NTLM hashes; you can use those to reuse credentials and become local admin on neighboring machines, and attackers will rinse and repeat. Eventually they get to a point where they can dump LSASS, whether by unhooking the antivirus, defeating the EDR, or finding a misconfigured EDR as we talked about earlier, and compromise the domain.

What's consistent is that the fundamentals are broken at these shops. They have poor password policies; they don't have least-privilege access implemented; Active Directory groups are too permissive, where the domain admin or domain user is also the local admin; AV or EDR solutions are misconfigured or easily unhooked; and so on. And what we found in 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it becomes very difficult to establish a baseline of normal versus abnormal credential login usage.

Another interesting insight: there were several marquee, brand-name MSSPs defending our customers' environments, and for them it took seven hours to detect and respond to the pen test. Seven hours. The pen test was over in less than two hours. So what you had was an egregious violation of the service level agreements that the MSSP had in place, and the customer was able to use us to get service credit and drive accountability of their SOC and of their provider.
The third interesting thing is that in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in 7 minutes and 19 seconds node zero started as an unauthenticated member of the network and was able to escalate privileges, through chaining misconfigurations, lateral movement, and so on, to become domain admin. If it's seven minutes today, we should assume it'll be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that type of blitzkrieg attack.

So that's the find. It's not just about finding problems, though; the bulk of the effort should be what to do about it, the fix and the verify. As you find those problems, back to Kubernetes as an example, we will show you the path: here is the kill chain we took to compromise that environment. We'll show you the impact: here is the proof of exploitation we were able to use to compromise it, and there's the actual command we executed, so you could copy and paste that command and compromise that kubelet yourself if you want. Then we show the impact: we got code execution, and we'll tell you this is a critical, here's why (it enabled perimeter breach), the affected applications, the specific IPs where you've got the problem, how it maps to the MITRE ATT&CK framework, and then exactly how to fix it. We'll also show you what this problem enabled, so you can accurately prioritize why it is, or isn't, important.

The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not to fix. Take SMB signing not required as an example: by default, its CVSS score is a one out of 10. But this misconfiguration (it's not a CVE, it's a misconfig) enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, plus access to a ton of data. Because of that context, this is really a 10 out of 10, and you'd better fix it as soon as possible.
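A sketch of that context-based rescoring idea; the rule set and thresholds below are invented for illustration and are not node zero's actual scoring logic.

```python
def contextual_score(base_score: float, enabled: set) -> float:
    """Adjust a finding's base score by what it actually enabled here."""
    if "domain_admin_credential" in enabled:
        return 10.0                      # direct path to domain compromise
    if "credential_access" in enabled:
        return max(base_score, 8.0)      # credentials harvested downstream
    return base_score                    # nothing of consequence enabled

# SMB signing not required: a ~1.0 base score, but in this environment it
# exposed 19 credentials including a domain admin, so it rescores to a 10.
print(contextual_score(1.0, {"credential_access", "domain_admin_credential"}))
```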
However, of the seven occurrences we found, it's only a critical on three out of the seven, and these are the three specific machines; we'll tell you the exact way to fix it, and you'd better fix those as soon as possible. For these four machines over here, the issue didn't allow us to do anything of consequence. Because the hardest part is deciding what not to fix, you can justifiably choose not to fix those four issues right now, add them to your backlog, and surge your team to fix these three as quickly as possible. And once you've fixed those three, you don't have to re-run the entire pen test: you can select those three, click verify once, and run a very narrowly scoped pen test that tests only those specific issues. What that creates is a much faster cycle of finding and fixing problems.

The other part of fixing is verifying that you don't have sensitive data at risk. Once we become a domain user, we're able to use those domain user credentials to try to gain access to databases, file shares, S3 buckets, git repos, and so on, and help you understand what sensitive data you have at risk. In this example, a green checkbox means we logged in as a valid domain user and got read/write access on the database; this is how many records we could have accessed. We don't actually look at the values in the database, but we'll show you the schema so you can quickly characterize that PII data was at risk, and we'll do the same for your file shares and other sources of data. So now you can accurately articulate the data you have at risk and prioritize cleaning it up, especially data that would lead to a fine or a big news story.

So that's the find and the fix; now we'll talk about the verify. The key part of verify is embracing and integrating with detection engineering practices. When you think about your layers of security tools, you've got lots of tools in place (on average, 130 tools at any given customer), but these tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? And from there, what you want to see is which techniques are commonly used to actually compromise an environment. If you look at the top 10 techniques we use (and there are far more than just these 10, but these are the most often executed), nine out of ten have nothing to do with CVEs. They have to do with misconfigurations, dangerous product defaults, and bad credential policies, and it's how we chain those together that lets us become a domain admin or compromise a host.

So what customers do is this: every single attacker command we executed is provided to you as an attack activity log, so you can see every command we ran, the timestamp it was executed, the host it executed on, and how it maps to MITRE ATT&CK tactics. Our customers will have these attacker logs on one screen, then go look into Splunk, or Exabeam, or SentinelOne, or CrowdStrike, and ask: did you detect us, did you log us, did you alert on us, or not? And to make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host? Because that's when node zero was able to dump credentials, and that allows you to identify and fix your logging blind spots. To make that easier still, we've got app integration: this is an actual Splunk app in the Splunk App Store.
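A sketch of that "what did you see at this time on that host" check, using the Splunk Python SDK (splunk-sdk). The connection details, index, host name, and time window are placeholders, and this is not the Horizon 3 app itself, just the underlying kind of query it automates.

```python
import splunklib.client as client
import splunklib.results as results

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# What did Splunk log on the attacked host during the credential dump?
query = 'search index=* host="vmware-esx-01" | stats count by sourcetype'
stream = service.jobs.oneshot(query,
                              earliest_time="2022-09-28T14:00:00",
                              latest_time="2022-09-28T14:10:00")

for event in results.ResultsReader(stream):
    print(event)  # empty output here suggests a logging blind spot
```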
Inside the Splunk console itself, you can fire up the Horizon 3 node zero app; all of the pen test results are there, so you can see everything in one place without jumping out of the tool. As I skip forward, you'll see: here's a pen test, here are the critical issues we identified, and for a given weakness, here are the exact commands we executed. Then we will automatically query Splunk for all terms between those times on that endpoint that relate to this attack, so you can quickly, within the Splunk environment itself, figure out whether you're missing logs or appropriately catching this issue. That becomes incredibly important in the detection engineering cycle I mentioned earlier.

So how do our customers end up using us? They shift from running one pen test a year to 30 or 40 pen tests a month, oftentimes wiring us into their deployment automation to automatically run pen tests. As they run more pen tests, they find more issues, but eventually they hit an inflection point where they're able to rapidly clean up their environment, and that inflection point comes because the red and the blue teams start working together in a purple team culture, proactively hardening the environment.

The other thing our customers do is run us from different perspectives. They'll first run an RFC 1918 scope: once the attacker gained initial access in a part of the network with wide access, what could they do? Then they'll run us within a specific network segment: from within that segment, could the attacker break out and gain access to another segment? Then they'll run us from their work-from-home environment: could someone traverse the VPN and do something damaging, and once they're in, could they traverse the VPN and get into my cloud? Then they'll break in from the outside. All of these perspectives are available to you in Horizon 3 and node zero as a single SKU, and you can run as many pen tests as you want. If you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior, you can then inject their credentials and show the end-to-end story of how an attacker phished, gained the credentials of an intern, and used them to access sensitive financial data. So what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time.

I'll leave you with two things. One is: what is the AI in Horizon 3 AI? Those knowledge graphs are the heart and soul of everything we do, and we use machine learning, reinforcement learning techniques, Markov decision models, and so on to efficiently maneuver through and analyze the paths in those really large graphs. We also use context-based scoring to prioritize weaknesses, and we're able to drive collective intelligence across all of the operations: the more pen tests we run, the smarter we get. All of that is based on the knowledge graph analytics infrastructure that we have.
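A toy illustration of analyzing paths over such a graph, using the networkx library. The nodes, edge probabilities, and the log-cost trick below are invented for illustration; they are not Horizon 3's actual models.

```python
import math
import networkx as nx

# Toy "terrain map": edges carry an assumed probability that the hop succeeds.
G = nx.DiGraph()
G.add_edge("kubelet", "win-host-1", p=0.9)       # RCE via misconfiguration
G.add_edge("win-host-1", "domain-admin", p=0.7)  # LSASS dump, credential reuse
G.add_edge("kubelet", "domain-admin", p=0.1)     # direct hop, unlikely

# Converting probabilities to -log costs makes the cheapest path the most
# likely one, so a standard shortest-path search prioritizes attack paths.
for u, v, d in G.edges(data=True):
    d["cost"] = -math.log(d["p"])

print(nx.shortest_path(G, "kubelet", "domain-admin", weight="cost"))
# -> ['kubelet', 'win-host-1', 'domain-admin']
```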
Finally, I'll leave you with the decision criteria I used when I was a buyer for my own security testing strategy. What I cared about was coverage: I wanted to be able to assess my on-prem, cloud, perimeter, and work-from-home environments, and be safe to run in production. I wanted to do that as often as I wanted, and to run pen tests in hours or days, not weeks or months, so I could accelerate that find-fix-verify loop. I wanted my IT admins and network engineers, with limited offensive experience, to be able to run a pen test in a few clicks through a self-service experience, without having to install agents or write custom scripts. And finally, I didn't want to get nickel-and-dimed on buying different types of attack modules or different types of attacks; I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. So I hope you found this talk valuable. We're easy to find, and I look forward to seeing you use the product and letting our results do the talking.

When you look at the way our pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become domain admin, compromise a host, compromise domain users, find ways to encrypt data, steal sensitive data, and so on. But when you look at the top 10 techniques we end up using to compromise environments, the first nine have nothing to do with CVEs. And that's the reality: CVEs are, yes, a vector, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some sort of credential collection, credential cracking, or credential pivoting, using that to become an admin and compromising environments from that point on. I'll leave this up for you to read through, and you'll have the slides available, but I found it very insightful that organizations (ourselves included, when I was at GE) invested heavily in just standard vulnerability management programs. When I was at DOD, all DISA cared about asking us about was our CVE posture. But attackers have adapted to not rely on CVEs to get in, because they know organizations are actively looking at and patching those CVEs; instead, they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment.

A concrete example: by default, vCenter backups are not encrypted, so if an attacker finds vCenter, what they'll do is find the backup location, and there are specific vCenter MTD files where the admin credentials are stored in the binaries. As an attacker, you can find the right MTD file, parse out the binary, and now you've got the admin credentials for the vCenter environment and can start to log in as admin. There's also a bad habit among signal officers and signal practitioners, in the Army and elsewhere, where the VM notes section of a virtual image has the password for the VM. Those VM notes are not stored encrypted, and attackers know this; they find the VMs that are unencrypted, find the notes section, pull out the passwords for those images, and then reuse those credentials across the board.

So I'll pause here. Patrick, I'd love to get some commentary from you on these techniques and other things you've seen, and what we'll do in the last, say, 10 to 15 minutes is roll through a little bit more on what to do about it.

Yeah, no, I love it. I think this is pretty exhaustive, and what I like about what you've done here is, you know, we've seen double-digit increases in the number of organizations reporting actual breaches year over year for the last three years, and often, in the zeitgeist, we peg that on ransomware, which of course is incredibly important and very top of mind.
But what I like about what you have here is that we're reminding the audience that the attack surface, the vectors that matter, have to be thought about more comprehensively than just ransomware scenarios.

Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got that redundancy if a control fails. But the reality is that these security tools aren't designed to work together, so when you run a pen test, what you want to ask yourself is: did you detect node zero, did you log node zero, did you alert on node zero, and did you stop node zero? When you think about how to do that, every single attacker command executed by node zero is available in an attacker log, so you can see, at the bottom here, a vCenter exploit, at that time, on that IP, and how it aligns to MITRE ATT&CK. What you want to be able to do is figure out whether your security tools caught this or not, and that becomes very important in using the attacker's perspective to improve your defensive security controls.

The way we've tried to make this easier (back to, you know, the fact that I still bleed green in many ways from my background) is what our customers do: they'll look at the attacker logs on one screen, look at what Splunk saw or missed on another screen, and then use that to figure out where their logging blind spots are. Where that becomes really interesting is that we've built an integration into Splunk. There's a Splunk app you can download off Splunkbase, and you'll get all of the pen test results right there in the Splunk console. From that Splunk console, you're going to be able to see all the pen tests that were run and the issues that were found. You can look at a particular pen test, see all of the weaknesses identified for it and how they categorize out, and for each of those weaknesses you can click on any one that's critical, in this case, and then (this is where the punch line comes in, so I'll pause the video here) for that weakness, these are the commands that were executed on these endpoints at this time. Then we'll actually query Splunk for that IP address, or for events containing that IP, and these are the source types that surfaced any sort of activity. What we try to do is help you, as quickly and efficiently as possible, identify the logging blind spots in your Splunk environment based on the attacker's perspective.
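A companion sketch of that source-type check, again with the Splunk Python SDK; the IP address, time window, and credentials are placeholders, and the actual app automates this kind of lookup inside the Splunk console.

```python
import splunklib.client as client
import splunklib.results as results

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Which source types contain events mentioning the attacked IP during the
# attack window? An empty list for a known-active host is a blind spot.
stream = service.jobs.oneshot(
    'search index=* "10.0.12.34" | stats count by sourcetype | sort -count',
    earliest_time="2022-09-28T14:00:00",
    latest_time="2022-09-28T14:10:00",
)
for row in results.ResultsReader(stream):
    print(row["sourcetype"], row["count"])
```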
So as this video plays through, you can see it. Patrick, I'd love to get your thoughts, having seen so many Splunk deployments and the effectiveness of those deployments, on how this is going to help elevate the effectiveness of all of your Splunk customers.

Yeah, I'm super excited about this. I think these kinds of purpose-built integrations really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago. It was an on-prem piece of software, and at the time it sold on perpetual and term licenses, but what made it special was that it could eat data at a speed nothing else I'd ever seen could. You could ingest massively scalable amounts of data, it did cool things like schema-on-read which facilitated that, there was this language called SPL that you could nerd out about, and you went to a conference once a year and talked about all the cool things you were splunking.

But now, as we think about the next phase of our growth, we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding. As you look at the role of the CISO, it's mind-blowing to me the number of sources, services, and apps that have come into the CISO's span of influence in the last three years. We're seeing things like infrastructure-level visibility and application performance monitoring, stuff that just never made sense for the security team to have visibility into, at least not at the size and scale we're demanding today. That's different, and it's why it's so important that we have these joint, purpose-built integrations that really provide more prescription to our customers about how to walk that journey toward maturity: what does zero to one look like, what does one to two look like? Whereas ten years ago customers were happy with platforms, today they want integration, they want solutions, and they want to drive outcomes. I think this is a great example of how, together, we are stepping up to the evolving nature of the market, the ever-evolving nature of the threat landscape, and, I would say, the maturing needs of the customer in that environment.

Yeah, for sure. Especially as we all anticipate budget pressure over the next 18 months, due to the economy and elsewhere: while security budgets are not going to get cut, I don't think they're going to grow as fast, and there's a lot more pressure on organizations to extract more value from their existing investments, as well as more value and impact from their existing teams. So security effectiveness, fierce prioritization, and automation become, I think, the three key themes of security over the next 18 months.

What I'll do very quickly is run through a few other use cases. Every host we identified in the pen test, we're able to score: this host allowed us to do something significant, therefore it's really critical and you should increase your logging here; these hosts down here, we couldn't really do anything with as an attacker, so if you do have to make trade-offs, you can reduce your logging resolution at the lower end in order to increase logging resolution at the upper end. So you've got that level of justification for where to increase or adjust your logging resolution.

Another example: every host we've discovered as an attacker, we expose, and you can export it, and what we want to make sure is that every host we found as an attacker is being ingested from a Splunk standpoint. A big issue I had as a CIO and a user of Splunk and other tools was that I had no idea whether there were rogue Raspberry Pis on the network, or whether a new box had been installed and Splunk was collecting from it or not. Now you can quickly correlate what hosts we saw and how that reconciles with what you're logging.
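On that reconciliation use case, a hedged sketch with the same SDK. The discovered-host list would come from a pen test report, and every value below is a placeholder.

```python
import splunklib.client as client
import splunklib.results as results

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Hosts the pen test reached, versus hosts Splunk has recently ingested from.
discovered = {"web-01", "db-02", "rogue-raspberry-pi"}  # from the pen test

stream = service.jobs.oneshot("| tstats count where index=* by host",
                              earliest_time="-7d")
ingested = {row["host"] for row in results.ResultsReader(stream)}

print(discovered - ingested)  # hosts the attacker saw but Splunk never did
```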
The second-to-last use case here on the Splunk integration side is that for every single problem we've found, we give multiple options for how to fix it, and this becomes a great way to prioritize which fix actions to automate in your SOAR platform. What we want to get to eventually is being able to automatically trigger SOAR actions to fix well-known problems, like automatically invalidating the poor passwords found in our credential attacks, among a whole bunch of other things we could do.
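A deliberately hypothetical sketch of that trigger idea; the mapping and the trigger_playbook call below stand in for a SOAR platform's API and do not reference any real product's interface.

```python
# Map well-known weakness types to the playbook that remediates them.
WELL_KNOWN_FIXES = {
    "weak_password": "invalidate_password_playbook",
    "smb_signing_not_required": "harden_smb_playbook",
}

def trigger_playbook(name: str, target: str) -> None:
    # Stub standing in for a real SOAR API call.
    print(f"would run {name} against {target}")

def on_pentest_finding(finding: dict) -> None:
    playbook = WELL_KNOWN_FIXES.get(finding["type"])
    if playbook:
        trigger_playbook(playbook, target=finding["host"])

on_pentest_finding({"type": "weak_password", "host": "db-02"})
```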
And then finally, if there is a well-known kill chain or attack path: one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain, one that actually shows a path to domain admin that I'm sincerely worried about, and use it as a glass table over which I could start to layer possible indicators of compromise. Now you've got a great starting point for glass tables and IOCs for actual kill chains that we know are exploitable in your environment, and that becomes some super cool integrations that we've got on the roadmap between us and the Splunk security side of the house.

So, what I'll leave with... actually, Patrick, before I do that, I'd love to get your comments, and then I'll leave with one last slide on this wartime security mindset, assuming there are no other questions.

No, I love it. I think this glass-tables approach to visualizing these workflows, and then using things like SOAR, orchestration, and automation to operationalize them, is exactly where we see all of our customers going: getting away from what I think is an over-engineered approach to SOAR, where it has to be super technical-heavy with Python programmers, and moving to a more visual view of workflow creation that really demystifies the power of automation and also democratizes it, so you don't have to have those programming languages on your resume in order to start really moving the needle on workflow creation, policy enforcement, and ultimately driving automation coverage across more and more of the workflows your team is seeing.

Yeah. I think that between us being able to visualize the actual kill chain or attack path, and the SOAR market going toward this no-code, low-code, configurable SOAR versus coded SOAR, that's going to be a real game changer in giving security teams a force multiplier.

So what I'll leave you with is this: the peacetime mindset of security is no longer sustainable. We really have to get out of checking the box and then waiting for the bad guys to show up to verify whether our security tools are working or not. And the reason we've got to do that quickly is that over a thousand companies withdrew from the Russian economy over the past nine months due to the war in Ukraine. You should expect every one of them to be punished by the Russians for leaving, and punished from a cyber standpoint. This is no longer about the financial extortion that is ransomware; this is about punishing and destroying companies. And you can punish any one of these companies by going after them directly, or by going after their suppliers and their distributors. Suddenly your attack surface is no longer just your own enterprise; it's how you bring your goods to market and how you get your goods created. While I may not be able to disrupt your ability to harvest fruit, if I can get those trucks stuck at the border, I can increase spoilage and have the same effect. What we should expect to see is this idea of cyber-enabled economic warfare: if we issue a sanction, like banning the Russians from traveling, there is a cyber-enabled counterpunch, which is to corrupt and destroy the American Airlines database. That is below the threshold of war; it's not going to trigger the 82nd Airborne to be mobilized, but it's going to achieve the right effect. Ban the sale of luxury goods, and the counterpunch is to disrupt the supply chain and create shortages. Ban Russian oil and gas, and the counterpunch is to attack refineries to cause a 10x spike in gas prices three days before an election. This is the future, and therefore I think we have to shift toward a wartime mindset: don't trust your security posture, verify it; see yourself through the eyes of the attacker; build that incident response muscle memory; and drive better collaboration between the red and the blue teams, your suppliers and distributors, and the information sharing organizations you have in place.

What was really valuable for me as a Splunk customer was this: when a router crashes, at that moment you don't know whether it's due to an IT administration problem or an attacker. What you want to have are different people asking different questions of the same data, and an integrated triage process: an IT lens on the problem, a security lens on the problem, and from there figuring out whether this is an IT workflow to execute or a security incident to execute. You want to have all of that as an integrated team, an integrated process, and an integrated technology stack, and this is something I cared about very deeply as both a Splunk customer and a Splunk CTO, and that I see time and time again across the board. So, Patrick, I'll leave you with the last word, the final three minutes here, and I don't see any open questions, so please take us home.

Oh man. And here you'd think we spent hours and hours prepping for this together; that last 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now. I think NIST has done some really interesting work here around building cyber-resilient organizations, work that has really helped the industry see that incidents can come from adverse conditions (stress, performance taxation in the infrastructure, service, or app layer) and they can come from malicious compromises: insider threats, external threat actors. The more we look at this from the perspective of a broader cyber resilience mission, in a wartime mindset, the better off I think we're going to be. And when you talk about operationally minded ISACs, information sharing and intelligence sharing become so important in these wartime situations, and we know not all ISACs are created equal, but we're also seeing a lot more ad hoc information sharing groups popping up. So look, I think you framed it really, really well. I love the concept of the wartime mindset, and I like the idea of applying a cyber resilience lens: if you add one more layer on top of that bottom-right cake, the IT lens and the security lens roll up to this concept of cyber resilience, and I think NIST has done some great work there for us.

Yeah, you're spot on, and that is apt. That's going to be, I think, the next terrain that you're going to see vendors try to get after, but one that I think Splunk is best positioned to win.

Okay, that's a wrap for this special Cube presentation. You heard all about the global expansion of Horizon3.ai's partner program, where partners have a unique opportunity to take advantage of the node zero product,
its international go-to-market expansion, North America channel partnerships, and overall relationships with companies like Splunk, to make things more comprehensive in this disruptive cybersecurity world we live in. I hope you enjoyed this program. All the videos are available on thecube.net, and check out Horizon3.ai for their pen test automation and, ultimately, the defense system they use for continuously testing the environment you're in. A great, innovative product, and I hope you enjoyed the program. Again, I'm John Furrier, host of theCUBE. Thanks for watching.

Published Date : Sep 28 2022

Morgan McLean & Danielle Greshock | AWS Partner Showcase S1E2


 

(gentle music)

>> Hello, welcome to theCUBE's presentation of the AWS Showcase, season one, episode two, with the ISV startup partners. I'm John Furrier, your host of theCUBE. We're joined by Morgan McLean, director of product management at Splunk, and Danielle Greshock, director of ISV solution architects at AWS. Welcome to the show. Thanks for coming on.

>> Thanks for having us.

>> And, great. Thanks for having us.

>> Great to see both of you, both theCUBE alumni. The Splunk-AWS relationship has been going very, very well. You guys are doing great business enabling this app revolution, and cloud scale has been going extremely well. So let's get into it. You're involved in a lot of action around the application revolution, around OpenTelemetry and open source. So let's get into it. What's the latest?

>> Danielle, you go ahead.

>> Well, I'll just jump in first. Obviously, not last year but in 2020, we launched the AWS Distro for OpenTelemetry. The idea being, essentially, that we're able to bring in data from partners, from infrastructure running on AWS, and from apps running on AWS, to really increase observability across all cloud assets, across your entire cloud platform. So, Morgan, if you want to chime in on how Splunk

>> Morgan: Certainly.

>> has worked with OpenTelemetry.

>> Yeah. I mean, OpenTelemetry is super exciting. Obviously, there are a lot of partnership points between Amazon and Splunk, but OpenTelemetry is probably the one that's most visible to people who aren't already using these two products together. As Danielle mentioned, Amazon has their own distribution of OpenTelemetry, Splunk has their own as well, and of course there's the main open source distribution that everybody knows and loves. Just for our viewers, for clarity's sake: the separate distributions are fundamentally very similar to, almost identical to, what's offered in the open source space, but they come preconfigured and they come with support guarantees from each company, meaning you can actually get paid, full support for an open source project, which is really fantastic for customers. And as Danielle mentioned, it's a great demonstration of the alliance between Splunk and Amazon Web Services. For example, the AWS Distro, when you use it, can export data to Amazon CloudWatch, to various Amazon-backed open source initiatives like Prometheus and others, and to Splunk Observability Cloud and Splunk Enterprise. So it's a place where we've worked very closely together, and it's something that we're very excited about.
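As a concrete, hedged illustration of what these distributions set up for you, here is a minimal Python sketch that exports traces over OTLP to a locally running collector; the endpoint and span names are placeholders, not a prescribed setup.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Send spans to an OpenTelemetry Collector over OTLP/gRPC. The endpoint is a
# placeholder for wherever your distribution's collector is listening.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317")))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo-service")
with tracer.start_as_current_span("checkout"):
    pass  # the collector forwards this span to your configured backend(s)
```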
>> So, Morgan, I want to get your take on the product management side, and also how products are built these days. One of the big things we're seeing in cloud is that open source has been the big enabler for a lot of refactoring. You've got multiple distributions, but can you talk about the innovations on top of that, how you see the productization of new innovations with open source as you go into this market? Because this is the new dynamic with cloud. We're seeing examples all over the place; obviously Amazon's going next-level with what they're doing, and with open source it's not one game for all of it, you can mix and match. Take us through the product angle.

>> In many ways, this is just another wave of the same thing, right? If you think back in time, we all used, and in many cases still use, virtual machines, and most of those are based on Linux, another large open source project. And so, open source software has been accelerating innovation in the cloud space, and in the computing space generally, for a very long time, which is fantastic. Our excitement with something like OpenTelemetry comes from both the project's capabilities and what we can do with it. For those who aren't already familiar, OpenTelemetry allows you to extract really critical system telemetry, application signals, and everything else you need from your own applications, from new services, from your infrastructure, from everything you're running in a cloud environment. You can then send that data to another location for processing. And so, John, you ask how this accelerates innovation, what it unlocks: well, the insight you can gain from this data means you can become so much more efficient as a development organization. You can make your applications so much more effective, because when you send that data to something like Splunk Observability Cloud, something like Amazon CloudWatch, or various other solutions on the market, they can give you deep, deep insight into your application's performance and structure, and they can help you reduce outages. It's very, very powerful, because it allows organizations to use tools like Splunk, like Amazon, and others to innovate so much more effectively.

>> Danielle, can you comment on the AWS side? Because this, again, is the big point: you're going next-level, and you're starting to see patterns in the ISV world, certainly on the architecture side, of partners doing things differently now on top of what they've already done. Could you share how AWS is helping customers accelerate?

>> Well, just as Morgan was talking about what OpenTelemetry provides, you can see how, from a partnership perspective, this is so valuable, right? What the partner team here at AWS is in the business of doing is really enabling customer choice. Having the ability to plug in and pull data from different sources, post it to different destinations, and make it available for visibility across all of your resources is very powerful, and it's something from the partner community that we really value, because we want customers to be able to select best-of-breed solutions, whatever works for their business. Businesses are different and may have different needs, and that also fosters true innovation: a small company is going to develop and release software a lot differently than a large enterprise. So being able to support something like OpenTelemetry enables that for all different kinds of customers.

>> Morgan, add to that, because the velocity of releases, and certainly operational stability, is key, and security and uptime are top concerns. And you mention data, too.

>> And you mention challenges.

>> You've got the data in here. So you've got a lot of data moving around, a lot of value. What's your take?

>> Yeah. So, I'll speak with some specifics. A challenge that developers have had for years, when developing large services, which you can now do with platforms like AWS (it's very easy to go develop huge deployments), is that you go and build a mess, right? I worked earlier in my career in web services, and I remember in one of the first orgs I was in, I was one of the five people who really understood our ecommerce stack, right?
And so I would get dragged into all these meetings, and I'd have to go draw the 50 services we had, how they interacted, and the changes that were made in the last week. Without observability tools like Splunk Observability Cloud, like the ones offered by Amazon, like the ones backed by the data that comes with OpenTelemetry, organizations basically rely on people like that to draw out their deployments so they understand what it is they've built. As you can imagine, this crimps your development velocity, because most of your engineers, most of your tech leads, and most of everyone else don't actually understand what they've built and what they're running, because they lack that global context. You get something like OpenTelemetry, and the solutions that consume its data, and suddenly all your developers have that context: all of them, when adding functionality to a service or updating their infrastructure, can actually understand how it interacts with the rest of the broader application. This lets you speed up your time to development, and it lets you ship more safely and securely. And finally, when things do go wrong, which will be less frequent, you can fix them super rapidly.

>> If I'm a customer, let me ask a question. I'm a customer and I say, "Okay, I love AWS, I love Splunk, I love OpenTelemetry. I've got to have open source; technology innovation is happening." What's the integration? What are some of the standards? Can you take us through how that's working together with you guys as a shared platform?

>> Yeah. So let's take the Amazon distribution for OpenTelemetry, or even the Splunk one. One of the first things they do is include all of the receivers, all of the data-capture components you need, out of the box, for platforms like AWS. So right away you get that power and flexibility, where you're getting access to all of these data sources, and that's part of the partnership. Additionally, once the data comes into OpenTelemetry, you can send it to various destinations, including, as Danielle mentioned, multiple at the same time, so you can use whatever tools you want. So when you talk about what the partnership is actually providing to you as a customer (and this is still just within the context of OpenTelemetry; obviously there's a much broader partnership between these two companies than that), it means you can download one of these distributions, it's fully supported, it works with both solutions, and everything is just great out of the box. To be clear, OpenTelemetry is a batteries-included project: even the standard distributions of OpenTelemetry include the components you need, though you have to go directly reference them and ensure they're packaged in. The nice thing about these vendor distributions is that it's done for you, out of the box; you don't have to worry about whether something is missing, or whether you need to include new exporters or new receivers. It's all there. It's preconfigured. It just works. And if something goes wrong and you have a support contract, you pick up the phone and talk to someone to get it fixed.
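On the point about sending data to multiple destinations at once, a small sketch: two span processors deliver every span to both an OTLP collector (placeholder endpoint) and the console, the latter standing in for a second backend.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Two span processors: every span recorded by this provider goes to both.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://collector.example.com:4317")))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```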
>> Danielle, what's the Amazon side, 'cause agility and scale is one of the highlights you guys are seeing. How does this tie into that and how are you guys working backwards from the customers to support the partners? >> Well, I think just to add on essentially to what Morgan said, I think that AWS, as a cloud platform, has always really had a focus on developers. And, we talk a lot about how AWS and Amazon as a whole really embrace these continuous integration and continuous deployment methods inside of our organization. And we talk about services, and observability is a huge part of that. The only way that you're actually able to release hundreds, thousands of times a day like Amazon does, is by having an observability platform, to be able to measure metrics, see changes in the environment, to be able to roll back if you need to, and to be able to quickly mitigate any challenges or anything that goes wrong at any part of the process. And so, when we preach that to our customers, I think it's something that we do because we live it and breathe it. And so, things such as OpenTelemetry and such as the products that Splunk builds, those are also ways in which we believe our customers can achieve that. >> Yeah. And we can... I mean, as I mentioned before, this partnership goes well beyond OpenTelemetry, right? And so, if you go use like Splunk Enterprise, Enterprise Cloud, Splunk Observability Cloud, and you're running on AWS, you have excellent support and excellent visibility into your Amazon infrastructure, into the services and applications you've deployed on top of that infrastructure. We try and give you, and I think we do succeed in this, we give you the best possible experience, the deepest possible visibility, into what it is you've deployed on AWS, so that you can be even more successful as a business, and so that you can be even more successful on AWS as a platform. >> Yeah. This is a great conversation, Morgan. You mentioned the early days of Web Services. AWS stands for Amazon Web Services, built on web services. So interesting throwback there, but it made me think about the early days of web services. And if you look at data, what's going on now with the top partners in AWS, you're seeing a lot of people thinking about data differently, they're refactoring, a lot of machine learning, a lot of AI going on at scale. So then, you got cloud native, things like Kubernetes and these new services being stood up and torn down with automation. A whole new operating model's coming. And so when you think about observability, the importance of it, I mean, can you share your perspective on this whole 'nother level? I mean, I always say that whole 'nother level sounds cliche, but it is next level. I mean, this is completely different. What's your reaction? >> Yeah. There's a ton of factors here, right? So as you point out, companies are totally shifting how they use their cloud infrastructure. And part of this you see during their cloud migrations, a part of it you see after, and they're shifting from the sort of stateful VMs that they may have had in the past to infrastructure that they tear down and put up regularly. And there's a lot more automation. With this comes, as I mentioned before, complexity, right? And also, with this comes more and more businesses becoming even more reliant on their digital infrastructure.
And so, not having observability into your applications, into your services, into your infrastructure, to me, is akin to running a business, say running a large warehousing or distribution company, but not having any idea where you're shipping products or where things are, or not having any accounting or CFO, right? Like, business has become so digital. Business is so reliant on technology, and that's unlocked a ton of new things. It's great. But not having visibility into how that technology works, or what it is that's deployed, or how to fix it, is akin to having no visibility into anything else in your business. It's nuts. And so, observability is super, super critical, particularly for customers who are adopting this new wave of cloud technologies on platforms like AWS. >> Danielle, on your side too, you're enabling this new capability so that businesses can do it, the partners do it, we're calling it super cloud. We've been calling it super cloud, kind of a dynamic where new things are happening with the data. And you guys are evolving with that. Can you share what you're seeing on your side as your partners start to go to the next level? What are you guys doing? How does it all come together? >> Well, we always talk about what has happened with data in the last couple of years, which the cloud has really enabled, around, you know, variety and velocity, and there's one other "V" that's escaping me right now, but essentially, all of this data is coming in and providing the ability for us to make better decisions, to build better products, to provide better experiences for customers. And so, I just think the OpenTelemetry project, as well as what Splunk is doing, is just another example of how we're taking this massive amount of data and being able to provide better experiences and outcomes for customers. >> And you guys have been working together for a long time, Splunk and AWS, and it's been a great partnership; we've been covering it on theCUBE and SiliconANGLE. So, we know that the key change is observability. Can you imagine a company without a CFO, Morgan? That just boggles your mind, but that's what it's like right now. So... >> It is, yeah. >> And the people who take advantage of that are winning, right? So it's like, that's the key. >> Yeah, I know. I mean, even in my own career, right, I've moved between different companies. And I remember, when I joined Google in particular, which is where I worked previously, I was very impressed with their internal observability tools. And I haven't worked at Amazon, but I just assume inside of Amazon they're excellent as well, as at a lot of the large cloud firms these days. But it was so refreshing going from an organization where, if we had some outage or something went wrong, there was like a very small set of people who could actually understand what was going on. And then you would just have to manually dive through logs and correlate requests manually between services. It's very challenging. And so, when things went wrong, they went wrong for a long, long time. And so, the companies that understood this even in the past are already very successful as a result. I think now, the rest of the industry is really in the midst of adopting these observability practices and the tools that are required to implement them, because you're right. Otherwise your development velocity slows down. Now you're getting outcompeted by your competition. And then, when you have a problem, it blows up for ages.
And once again, your competition can take advantage of it. >> And, can you just summarize the observability piece relative to OpenTelemetry? Where is that going to go? Where do you see that evolving? >> Sure. >> I see open source is growing like crazy, we all know that. >> Of course. >> But OpenTelemetry in particular and open source, 'cause this is a big hot area. >> Yes. So to set the stage for people, OpenTelemetry unlocks observability in many ways. As I mentioned earlier, OpenTelemetry is how you capture data out of your application. It doesn't process it. It's not a replacement for something like Amazon CloudWatch or any of Splunk's products, but it's how we get the data out of your system, which is a remarkably difficult problem. I won't dive into it today, but those who work in this space are very aware. That's why this project exists and it's so big: actually extracting information, metrics, logs, distributed traces, profiles, everything else, from your applications and from your infrastructure is very, very difficult. So for OpenTelemetry, where it's going is just continually getting better at extracting more types of data from more sources, and doing that more effectively for people in a more standardized way. That will unlock firms like Splunk, firms like Amazon, and others to better process this data. In terms of where that's going, the sky's the limit, right? Like, everyone's familiar with APM, people are familiar with infrastructure monitoring, but there's a lot more capability coming there for security analytics, for network performance monitoring, for getting down all the way to single lines of code in your application and how they impact everything. There's just so much power that's coming to the industry right now. I'm really excited to see where things go in the next few years. >> And Danielle, you're in the middle of all the action as a solution architect, you really set the stage for these companies and the ISVs, and this is a big, hot area. What are the patterns you're seeing and what are some of the best practices that you're doing that will help companies? >> Right. So I think, summarizing our entire conversation, the big thing that we're seeing in the market is essentially more and more companies are looking to move to a continuous deployment and a continuous integration environment. And they're looking to innovate faster and spend less time hot patching or hot fixing their environments, and they want to spend more time innovating. And so, you know, the patterns that we're seeing... What I see, and what I actually experienced firsthand at re:Invent when I talked to probably over 40 or 50 ISVs, is customers want to know, in their environment, where are their changes? Where are their security vulnerabilities? Where are their data changes, and what are customers really experiencing, whether it's latency, poor experience throughout their products, those types of things? So security, data, and observability are just key to all of that experience, and that's what we're definitely seeing as patterns, what we're seeing with our customers and also what value our ISVs are providing in that space. >> That's awesome. And the other thing I would observe is that there's more of an integration story going on around joint projects, whether it's open source. >> Absolutely. >> Because this is where we want to get those services connected. And it's mutually beneficial. I mean, this is really >> Exactly. >> whole 'nother, new kind of interoperable cloud scale.
>> Yeah, if I could add one thing there, I think that a lot of the customers who are trying to move into the cloud now are maybe not technology-forward companies, and they really need that solution. And that's very important. I think COVID has pushed a lot of companies into the cloud maybe very quickly. And that has been something else we've observed in the market. So, solutions and full solutions between ISVs and ISVs, or ISVs and AWS, is just becoming a more and more common thing that we see. >> And, as you mentioned, John, in the open source space as well, certainly from Amazon to Splunk, we're talking a lot about those, but there's a lot of other firms involved in projects like OpenTelemetry. And I think it's very endearing, very heartening to see how well they cooperate in this community and how, when their interests are aligned, how effective they can be. And it's been very exciting to work in the space and very pleasant, honestly, to see everything come together with this huge set of customers and partners. >> Yeah. The pleasant surprise of the pandemic has been that people come into the cloud and they like it, and they say, "Hey, this works," and they double down on it. Then they realize there's more there and they refactor. So, you're seeing real examples of that. So, this is a great discussion, great success story. Congratulations Morgan, Danielle. >> Thank you. >> Great partnership between Splunk and AWS. We've been following it for a long time. And again, this highlights this whole 'nother level of integrating, a super cloud kind of experience where people are getting more capabilities and doing more together, so great stuff. >> And this is just one facet of that, right? Like, there's all the other connections of Splunk Enterprise, Splunk security analytics products, and others. It's a deep, deep partnership between these firms. >> Yeah. And the companies that innovate and get that new capability are going to have an advantage. And you're seeing... >> Yes. >> Right? >> Agreed. >> And this is awesome, and great stuff, thank you for coming on and sharing that insight. >> Thank you. >> Congratulations Morgan over there at Splunk, great stuff. And Danielle, thanks for coming on and sharing the AWS perspective. >> Thanks for having me. >> And you guys are going to the next level. You're moving up the stack, as they say, all good stuff for customers. Thanks. >> Thank you. >> Okay. >> Thank you. >> This is season one, episode two of the AWS Partner Showcase. I'm John Furrier with theCUBE. Thanks for watching. (gentle music)

Published Date : Mar 2 2022

Garth Fort, Splunk | Splunk .conf21


 

(upbeat music) >> Hello everyone, welcome back to theCUBE's coverage of Splunk .conf21 virtual. We're here live in the Splunk studios. We're all here getting all the action, all the stories. Garth Fort, senior vice president, Chief Product Officer at Splunk is here with me. CUBE alumni. Great to see you. Last time I saw you, we were at AWS, now here at Splunk. Congratulations on the new role. >> Thank you. Great to see you again. >> Great keynote and great team. Congratulations. >> Thank you. Thank you. It's a lot of fun. >> So let's get into the keynote a little bit on the product. You're the Chief Product Officer. We interviewed Shawn Bice, who's also working with you as well. He's your boss. Talk about the next level, 'cause you're seeing some new enhancements. Let's get to the news first. Talk about the new enhancements. >> Yeah, this was actually a really fun keynote for me. So I think there was a lot of great stuff that came out of the rest of it. But I had the honor to actually showcase a lot of the product innovation. You know, since we did .conf last year, we've actually closed four different acquisitions. We shipped 43 major releases and we've done hundreds of small enhancements, like we're shipping code in the cloud every six weeks and we're shipping new versions twice a year for our Splunk Enterprise customers. And so this was kind of like, if you've seen that movie Sophie's Choice, you know, where you have to pick one of your children, like this was a really hard, hard thing to pick. 'Cause we only had about 25 minutes, but we did like four demos that I think landed really well. The first was what we call ingest actions, and you know, there's customers that are using, they start small with gigabytes and they go to terabytes and up to petabytes of data per day. And so they wanted tools that allow them to kind of modify, filter, and then route data to different sort of parts of their infrastructure. So that was the first demo. We did another demo on our visual playbook editor for SOAR, which has improved quite a bit. You know, a lot of the analysts that are in the SOC trying to figure out how to automate responses and reduce sort of time to resolution, like they're not Python experts. And so having a visual playbook editor that lets them drag and drop and sort of with a few simple gestures create complex playbooks was pretty cool. We showed some new capabilities in our APM tool. Last year, we announced we acquired a company called Plumbr, which has expertise in basically like code level analysis, and we're calling it "Always On" profiling. So we did that demo and gosh, we did one more... four, but four total demos. I think, you know, people were really happy to see, you know, the thing that we really tried to do was ground all of our sort of like tech talk in stuff that was real and available today, like this is not some futuristic vision. I mean, Shawn did lay out some great visionary kind of pillars. But what we showed in the keynote, it's all shipping code. >> I mean, there's plenty of headroom in this market when it comes to data as value and data in motion, all these things. But we were talking before you came on camera earlier in the morning about how good the Splunk product actually is and how broad and deep the product portfolio is as well. >> Garth: Yeah. >> I mean, it's not just a utility and tooling, it's a platform with tools and utilities. >> Garth: Yeah >> It's a fully blown out platform. >> Yeah. Yeah.
It is a platform and, you know, it's one that's quite interesting. I've had the pleasure to meet a couple of big customers and it's kind of amazing, like what they do with Splunk. Like I was meeting with a large telco on the east coast, and you know, for their set-top boxes, they actually have to figure out in real time which ads to display, and the only tool they could find to process 15 million events in real time, to decide what ad to display, was Splunk. So that was like really cool to hear. Like we never set out to be like an ad tech kind of platform, and yet we're the only tool that operates at that level of scale and that kind of data. >> You know, it's funny, Doug Merritt mentioned this in my interview with him earlier today, and he wasn't shy about it, which was great. He was like, we're an enabling platform. We don't have to be experts in all these vertical industries >> Garth: Yep >> because AI takes care of that. That's where the machine learning >> Garth: Yeah >> and the applications get built. So others are trying to build fully vertically integrated stacks into these verticals when in reality they don't have to, if they don't want to. >> Yeah, and Splunk's kind of, it's quite interesting when you look across our top 100 customers, you know, Doug talks about like the, you know, 92 of the Fortune 100 are kind of using Splunk today, but the diversity across industries, and, you know, we have government agencies, we have, you know, you name the retail or the vertical, you know, we've got really big customers, they're using Splunk. And the other thing that I kind of, I was excited about, we announced the last demo I forgot, was TruSTAR integration with Enterprise Security. That's pretty cool. We're calling that Splunk Threat Intelligence. And so that was really fun, and we only closed the acquisition of TruSTAR in May, but the good news is they'd been a partner with us for like 18 months before we actually bought 'em. And so they'd already done a lot of the work to integrate. And so they had a running start in that regard. But one other one that was kind of a small thing, I didn't get to demo it, but we talked about the content pack for application performance monitoring. And so, you know, in some ways we compete at the APM level, but in many ways there's a ton of great APM vendors out there that customers are using. But what they wanted us to do was like, hey, if I'm using APM for that one app, I still want to get data out of that and into Splunk, because Splunk ends up being like the core repository for observability, security, IT ops, DevSecOps, et cetera. It's kind of like where the truth, the operational truth of how your systems work, lives in Splunk. >> It's so funny. The Splunk business model has actually been replicated. They call it data lake, whatever you want to call it. People are bringing up all these different metaphors. But at the end of the day, if you guys can create a value proposition where you can have data just be, you know, dumped into whatever they call it and stored in a way >> Garth: We call it ingest >> Ingested, ingested. >> Garth: Not dumped. >> Data dump. >> Garth: It's ingested. >> Well, I mean, you get my point, you don't have to do a lot of work to store it, just, okay, we can get to it later, >> Garth: Yep. >> But let the machines take over >> Garth: Yep. >> With the machine learning. I totally get that.
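The ingest actions idea Garth demoed earlier, filtering, masking, and routing events before they land anywhere, is configured inside Splunk rather than written by hand, but the underlying logic is easy to picture. The following is a minimal Python sketch of that filter-mask-route pattern; the field names, patterns, and destinations are all hypothetical, not Splunk's actual configuration syntax.

```python
import re

DEBUG = re.compile(r"\bDEBUG\b")            # noise we choose to drop
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # PII we mask before routing

def route_event(event):
    """Return (destination, event), or None to drop the event entirely."""
    raw = event.get("_raw", "")
    if DEBUG.search(raw):                        # filter: low-value events never get indexed
        return None
    event["_raw"] = SSN.sub("***-**-****", raw)  # mask: scrub sensitive fields in flight
    if event.get("sourcetype") == "aws:cloudtrail":
        return ("index:security", event)         # route: security data to hot indexers
    return ("s3:archive", event)                 # route: everything else to cheap storage

# Usage with a synthetic event:
print(route_event({"_raw": "user=alice ssn=123-45-6789 login ok",
                   "sourcetype": "aws:cloudtrail"}))
```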
Now, as a product leader, I have to ask you about your mindset around optimization. What do you optimize for? Because a lot of times these use cases are emerging. They just pop out of nowhere. It's a net new use case that you want to operationalize. So balancing the headroom >> Yep. >> or not foreclosing those new opportunities for customers. How are customers deciding what's important to them? How do you, because you're trying to read the tea leaves for the future >> Garth: A little bit, yeah. >> and then go, okay, what do our customers need, but you don't want to foreclose anything. How do you think about product strategy around that? >> There's a ton of opportunity to interact with customers. We have this thing called the Customer Advisory Board. We run, I think, four of them, and we run them monthly. And so we get an opportunity to kind of get that anecdotal data and the direct contact. We also have a portal called ideas.splunk.com where customers can come tell us what they want us to build next. And we look at that every month, you know, and there's no way that we could ever build everything that they're asking us to, but we look at that monthly and we use it in sort of our sprint planning to decide where we're going to prioritize engineering resources. And it's just, it's kind of like customers say the darndest things, right? Sometimes they ask us for stuff we never imagined building in a million years, >> John: Yeah. >> Like that use case around ads on the set-top box. But it's kind of a fun place to be. Like we just, before this event, we kind of laid out internally, you know, Shawn and I kind of put together this doc, actually Shawn wrote the bulk of it, but it was about sort of, what do we think? Where can we take Splunk over the next three to five years? And we talked about these, we referred to them as waves of innovation. 'Cause you know, like when you think about waves, there's multiple waves that are heading towards the beach >> John: Yeah. >> in parallel, right? It's not like a series of phases that are going to be serialized. It's about making a set of investments that'll kind of land over time. And the first wave is really about, you know, what I would say is sort of, you know, really delivering on the promise of Splunk, and some of that's around integration, single sign-on, things about like making all of the Splunk products work together more easily. We've talked a lot in the Q&A about like edge and hybrid. And that's really where our customers are. If you watched Koby Avital's sort of customer keynote, you know, Walmart by necessity, given their geographic breadth and the customers they serve, has to have their own infrastructure. They use Google, they use Azure, and they have this abstraction layer that Koby's team has built on top. And they use Splunk to manage, kind of operate basically all of their infrastructure across those three clouds. So that's the hybrid edge scenario. We were thinking a lot about, you mentioned data lakes. You know, if you go back to 2002, when Splunk was founded, you know, the thing we were trying to do is help people make sense of log files. But now if you talk to customers that are moving to cloud, everybody's building a data lake, and there's like billions of objects flowing into millions of these S3 buckets all over the place. And we're kind of trying to think about, hey, is there an opportunity for us to point our indexing and analytics capability against structured and unstructured data in those data lakes. So that'll be something we're going to >> Yeah. >> at least start prototyping pretty soon.
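Garth doesn't describe how that prototype would work, so the following is purely an illustrative sketch: a few lines of Python with boto3 that walk objects in an S3 data lake and pull out matching lines, the kind of brute-force scan that a real indexing and analytics capability would replace. The bucket name, prefix, and search term are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

def scan_data_lake(bucket, prefix, needle):
    """Brute-force scan: stream each object under a prefix and yield lines containing `needle`."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            for line in body.iter_lines():  # lines arrive as bytes
                if needle in line:
                    yield obj["Key"], line.decode("utf-8", errors="replace")

# Hypothetical bucket, prefix, and search term.
for key, line in scan_data_lake("my-data-lake", "logs/2021/", b"ERROR"):
    print(key, line)
```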
And then lastly, machine learning. You know, I'd say, to use a baseball metaphor, in terms of like how we apply machine learning, we're like in the bottom of the second inning, >> Yeah. >> you know, we've been doing it for a number of years, but there's so much more. >> There's so, I mean, machine learning is only as good as the data you put into the machine learning. >> Exactly, exactly. >> And so if you have a gap in the data, the machine learning is going to have gaps in it. >> Yeah. And we have, we announced a feature today called auto detect. And I won't go into the gory details, but effectively what it does is it runs a real-time analytics job over whatever metrics you want to look at, and you can do what I would consider more statistics versus machine learning. You can say, hey, if in a 10 minute period, like, you know, we see more errors than we see on average over the last week, throw an alert so I can go investigate and take a look. Imagine if you didn't have to figure out what the right thresholds were, if we could just watch those metrics for you and automatically understand the seasonality, the timing, is it a weekly thing? Is it a monthly thing? And then use machine learning to do the anomaly detection, but do it in a way that's more intelligent than just the static threshold. >> Yeah. >> And so I think you'll see things like auto detect, which we announced this week, evolve to take advantage of machine learning kind of under the covers, if you will. >> Yeah. It was interesting, with cloud scale and the data velocity, automation becomes super important. >> Oh yeah. >> Now you have a lot of new disciplines emerging, like explainable AI is hot right now. So you got, the puck is coming. You can see where the puck is going. >> Yeah >> And that is automation at the app edge or the application layer, where the data has got to be free-flowing or addressable. >> Garth: Yeah. >> This is something that is being talked about. And we talked about data divide with Chris earlier, about the policy side of things. And now data is part of everything. It's part of the apps. >> Garth: Yeah. >> It's not just stored stuff. So it's always in flight. It should be addressable. This is what people want. What do you think about all of that? >> No, I think it's great. I'll actually just quote from Steve Schmidt in sort of the keynote. He said, look, like security at the end of the day is a human problem, but it kind of manifests itself through data. And so being able to understand what's happening in the data will tell you, like, is there a bad actor, like, wreaking havoc inside of my systems? And like, you can use that, the data trail if you will, of the bad actor to chase them down and sort of isolate 'em. >> The digital footprints, if you will, looking at a trail. >> Yeah.
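Garth doesn't walk through auto detect's internals, so the sketch below is only the statistics-flavored version of the idea he describes: learn a baseline from a sliding window of a metric and alert on large deviations, with no hand-set threshold. A real implementation would also model seasonality; the window size, sigma multiplier, and sample data here are all illustrative.

```python
from collections import deque
from statistics import mean, stdev

class AutoDetectSketch:
    """Flag metric values that deviate sharply from a learned rolling baseline."""

    def __init__(self, window=60, sigmas=3.0, warmup=10):
        self.history = deque(maxlen=window)  # sliding window of recent observations
        self.sigmas = sigmas                 # deviations beyond this many std devs are unusual
        self.warmup = warmup                 # observations needed before judging anything

    def observe(self, value):
        """Record one sample; return True if it looks anomalous against the baseline."""
        anomalous = False
        if len(self.history) >= self.warmup:
            mu, sd = mean(self.history), stdev(self.history)
            anomalous = sd > 0 and abs(value - mu) > self.sigmas * sd
        self.history.append(value)
        return anomalous

detector = AutoDetectSketch()
errors_per_minute = [2, 3, 2, 4, 3, 2, 3, 2, 4, 3, 2, 3, 40]  # synthetic data
for minute, count in enumerate(errors_per_minute):
    if detector.observe(count):
        print(f"minute {minute}: {count} errors/min looks anomalous")
```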
>> All right, what's the coolest thing that you like right now, when you look at the treasure trove of value? As you look at it, and this is a range of value, Splunk has had customers come in with the early product, but they keep the customers and they always do new things and they operationalize it >> Garth: Yep. >> and another new thing comes, they operationalize it. What's the next new thing that's coming? That's the next big thing. >> Dude, that is like asking me which one of my daughters do I love the most, like that is so unfair. (laughing) I'm not going to answer that one. Next question please. >> Okay. All right. Okay. What are your goals for the next year or two? >> Yeah, so I just kind of finished roughly my first 100 days, and it's been great. You know, I had a whole plan, 30, 60, 90, and I had a bunch of stuff I wanted to do. Like I'm really hoping, sort of, we get past this current kind of COVID scare and we get back to normal. 'Cause I'm really looking forward to getting back on the road and sort of meeting with customers. You know, you can meet over Zoom and that's great, but what I've learned over time, you know, I used to go, I'd fly to Wichita, Kansas and actually go sit down with the operators like at their desk and watch how they use my tools. And that actually teaches you. You come up with things when you see your product in the hands of your customer that you don't get from like a CAB meeting or from a Zoom call, you know? >> John: Yeah, yeah. >> And so being able to visit customers where they live, where they work, and kind of like understand what we can do to make their lives better, like that's going to, I'm actually really excited about getting back to travel. >> If you could give advice to a CTO, CISO, or CIO, or a practitioner out there who's sitting at their virtual desk or their physical desk thinking, okay, we're coming through the pandemic. I want to come out with a growth strategy, with a plan that's going to be expansive, not restrictive. The pandemic has shown what works and what doesn't work. >> Garth: Sure. >> So there are going to be some projects that might not get renewed, but there's doubling down on, certainly with cloud scale. What advice would you give that person when they start thinking about, okay, I got to get my architecture right. >> Yeah. >> I got to get my playbooks in place. I got to get my people aligned. >> Yeah >> What do you see as a best practice for kind of the mindset to actual implementation of data, managing the data? >> Yeah, and again, this is not an original Garth thought. It actually came from one of our customers. You know, I think we all, like, you think back to March and April of 2020 as this thing was really getting real. Everybody moved as fast as they could to either scale up or scale down operations. If you were in travel and hospitality, you know, you had to figure out how to scale down quickly and like what you could shut down safely. If you were like in the food delivery business, you had to figure out how you could scale up, like Chipotle hit two, what is it? $2 billion run rate on delivery last year. And so people scrambled as fast as they could to sort of adapt to this new world. And I think we're all coming to the realization that as we sort of exit and get back to some sense of new normal, there's a lot of what we're doing today that's going to persist. Like, I think we're going to have like flexible rules. I don't think everybody's going to want to come back into the office. And so I think the thing to do, as you think about returning to whatever this new normal looks like, is, what did we learn that was good? And like the pandemic had a silver lining for folks in many ways. And it sucked for a lot.
I'm not saying it was a good thing, but you know, there were things that we did to adapt that I think actually made like the workplace, like stronger and better. And, and sort of. >> It showed that data's important, internet is important. Didn't break, the internet didn't break. >> Garth: Correct. >> Zoom was amazing. And the teleconferencing with other tools. >> But that's kind of, just to sort of like, what did you learn over the last 18 months that you're going to take for it into the next 18 years? You know what I mean? Cause there was a lot of good and I think people were creative and they figured out like how to adapt super quickly and take the best of the pandemic and turn it into like a better place to work. >> Hybrid, hybrid events, hybrid workforce, hybrid workflows. What's what's your vision on Splunk as a tier one enterprise? Because a lot of the news that I'm seeing that's, that's the tell sign to me in terms of this next growth wave is big SI deals, Accenture and others are yours working with and you still got the other Partnerverse going. You have the ecosystems emerging. >> Garth: Yep. >> That's a good, that means your product's enabling people to make money. >> Garth: Yeah. Yeah, yeah, yeah. >> And that's a good thing. >> Yeah, BlueVoyant was a great example in the keynote yesterday and they, you know, they've really, they've kind of figured out how, you know, most of their customers, they serve customers in heavily regulated industries kind of, and you know, those customers actually want their data in a Splunk tenant that they own and control and they want to have that secure boundary around that. But BlueVoyant's figured out how they can come in and say, hey, I'm going to take care of the heavy lifting of the day-to-day operations, the monitoring of that environment with the security. So, so BlueVoyant has done a great job sort of pivoting and figuring out how they can add value to customers and do, you know, because they they're managing not just one Splunk instance, but they're managing 100s of Splunk cloud instances. And so they've got best practices and automation that they can play across their entire client base. And I think you're going to see a lot more of that. And, and Teresa's just, Teresa is just, she loves Partners, absolutely loves Partners. And that was just obvious. You could, you could hear it in her voice. You could see it in her body language, you know, when she talked about Partnerverse. So I think you'll see us start to really get a lot more serious. Cause as big as Splunk is like our pro serve and support teams are not going to scale for the next 10,000, 100,000 Splunk customers. And we really need to like really think about how we use Partners. >> There's a real growth wave. And I, and I love the multiples wave in parallel because I think that's what everyone's consensus on. So I have to ask you as a final question, what's your takeaway? Obviously, there's been a virtual studio here where all the Splunk executives and, and, and customers and partners are here. TheCUBE's here doing all the presentations, live by the way. It was awesome. What would you say the takeaway is for this .conf, for the people watching and consuming all the content online? A lot of asynchronous consumption would be happening. >> Sure. >> What's your takeaway from this year's Splunk .conf? 
>> You know, I, it's hard cause you know, you get so close to it and we've rehearsed this thing so many times, you know, the feedback that I got and if you look at Twitter and you look at my Slack and everything else, like this felt like a conf that was like kind of like a really genuine, almost like a Splunk two dot O. But it's sort of true to the roots of what Splunk was true to the product reality. I mean, you know, I was really careful with my team and to avoid any whiff of vaporware, like what were, what we wanted to show was like, look, this is Splunk, we're acquiring companies, you know, 43 major releases, you know, 100s of small ones. Like we're continuing to innovate on your behalf as fast as we can. And hopefully this is the last virtual conf. But even when we go back, like there was so much good about the way we did this this week, that, you know, when we, when we broke yesterday on the keynote and we were sitting around with the crew and it kind of looking at that stage and everything, we were like, wow, there is a lot of this that we want to bring to an in-person event as well. Cause so for those that want to travel and come sit in the room with us, we're super excited to do that as soon as we can. But, but then, you know, there may be 25, 50, 100,000 that don't want to travel, but can access us via this virtual event. >> It's like a time. It's a moment in time that becomes a timeless moment. That could be, >> Wow, did you make that up right now? >> that could be an NFT. >> Yeah >> We can make a global cryptocurrency. Garth, great to see you. Of course I made it up right then. So, great to see you. >> Air bump, air bump? Okay, good. >> Okay. Garth Fort, senior vice president, Chief Product Officer. In theCUBE here, we're live on site at Splunk Studio for the .conf virtual event. I'm John Furrier. Thanks for watching. >> All right. Thank you guys. (upbeat music)

Published Date: Oct 20 2021

SUMMARY:

Garth Fort, Splunk's Senior Vice President and Chief Product Officer, joins John Furrier at .conf21 after roughly his first 100 days in the role. He discusses the new auto detect feature and how it will evolve toward machine-learning-based anomaly detection, early prototyping of indexing and analytics against data lakes, partners such as BlueVoyant running Splunk Cloud at scale for regulated industries, what enterprises should carry forward from the pandemic, and why this year's virtual .conf stayed true to the product and may shape future hybrid events.

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Shawn | PERSON | 0.99+
Steve Schmidt | PERSON | 0.99+
John | PERSON | 0.99+
Doug Merritt | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Garth Fort | PERSON | 0.99+
Chris | PERSON | 0.99+
Teresa | PERSON | 0.99+
Garth | PERSON | 0.99+
Sophie's Choice | TITLE | 0.99+
March | DATE | 0.99+
Doug | PERSON | 0.99+
25 | QUANTITY | 0.99+
10 minute | QUANTITY | 0.99+
Last year | DATE | 0.99+
100s | QUANTITY | 0.99+
Shawn Bice | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
Splunk | ORGANIZATION | 0.99+
May | DATE | 0.99+
four | QUANTITY | 0.99+
$2 billion | QUANTITY | 0.99+
2002 | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
BlueVoyant | ORGANIZATION | 0.99+
Chipotle | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
last year | DATE | 0.99+
30 | QUANTITY | 0.99+
TruSTAR | ORGANIZATION | 0.99+
43 major releases | QUANTITY | 0.99+
ideas.splunk.com | OTHER | 0.99+
first demo | QUANTITY | 0.99+
this week | DATE | 0.99+
CUBE | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
two | QUANTITY | 0.99+
next year | DATE | 0.99+
60 | QUANTITY | 0.99+
18 months | QUANTITY | 0.99+
Plumbr | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
90 | QUANTITY | 0.98+
first 100 days | QUANTITY | 0.98+
50 | QUANTITY | 0.98+
last week | DATE | 0.98+
pandemic | EVENT | 0.98+
today | DATE | 0.98+
Partnerverse | ORGANIZATION | 0.98+
four demos | QUANTITY | 0.98+
this week | DATE | 0.97+
millions | QUANTITY | 0.97+
second inning | QUANTITY | 0.97+
Python | TITLE | 0.97+
.conf | EVENT | 0.97+
Google | ORGANIZATION | 0.97+
Azure | TITLE | 0.97+

Claire Hockin, Splunk | Splunk .conf21


 

(soft music) >> Hi, everyone. Welcome back to the Cube's coverage of Splunk's dot conf virtual event, their annual summit. I'm John Furrier, host of the cube. We've been covering dot conf since twenty twelve. Usually a physical event in person. This year it's virtual. I'm here with Claire Hockin, the CMO of Splunk. She's been here three and a half years. Your first year as CMO, and you got to go virtual from physical. Welcome to the cube. Good to see you. >> Thank you very much, John. Great. >> I got to ask you, I mean, this has been the most impressive virtual venue, you've taken over the hotel here in Silicon valley. Your entire team's here. It feels like there's a dynamic of like the teamwork. You can kind of feel the vibe. It's almost like a little VIP Splunk event, but you're broadcasting it to the world. Tell us what's happening. >> Yeah, it's been, I think for everyone a year where we really hoped to be back to having a hybrid event, so having a big virtual component, but running dot conf as we had before from Las Vegas, which wasn't possible. So what we thought in the last six weeks is that we would actually bring the Splunk studio to a physical location. So we've been live all of this week from California, where we're sitting today and really thought through bringing the best of that programming to our, you know, our amazing audience of twenty six thousand people. So we were sitting here in a studio, we have a whole live stage and we've activated the best of dot conf to bring as many Splunkers as we can. And as many external guests to make it feel as real and as vibrant as possible. So. >> I have to say I'm very impressed. Since twenty twelve we've been watching the culture evolve. Splunk has always been that next big thing. And then the next big thing again, it seems to be the theme as data becomes bigger and more important than ever. There's a new Splunk emerging, another kind of next big thing. And this kind of says the pattern's like do something big, that's new, operationalize it and do something new again. This is a theme, big part of this culture here. Can you share more about how you see this evolving? >> Sure. And I think that's what makes Splunk such a great place to be. And I think it attracts people who like to continually challenge and reinvent. And I think we've spent a lot of time this year building out our portfolio, going through this cloud transformation. It just gives you a whole new landscape of how you unlock that power of data and how customers use it. So we've had a lot of fun, always building on top of that building, you know, our partnerships, what customers do and really having fun with it. I think one of the best things about Splunk is we do have this incredibly fun and playful brand and as data just becomes something that is more and more powerful, it's really relatable. And we have fun with activating that and storytelling. So, yeah. >> And you have a new manager, Teresa Carlson came in from Amazon web services. You have a lot more messaging kind of building on previous messaging. How are you handling and looking at the aperture of, that's growing from a messaging standpoint, you have Partnerverse, which is a rebranding of your ecosystem, kind of a lot of action going on in your world. What's the update? >> Yeah. It keeps us busy. And I think at one end, you know, the number of people that are using Splunk inside any customer base is just growing. So you have different kinds of users.
And this year we're really working hard on how to partner and position Splunk with developers, but at the top end of that, the value of data and the idea of having a data foundation is something that's incredibly compelling for CTOs. So working really hard about looking at Splunk and data from that perspective, as well as the individual uses across areas like security and observability. So. >> You know, one of the things I wanted to ask you is, I was thinking about this when I was driving in this morning, Splunk has a lot of customers and you keep your customers and you've have a lot of customers that organically came into the Splunk through the product leadership and just great product. And then as security became more important, Splunk kind of takes that territory now. Now mainstream enterprise with the platform are leaning into Splunk solutions, and now you've got an ecosystem. So it's just becoming bigger and bigger just seems that the scale of the Splunk is growing radically bigger than it was, Is that happening? And what's your take on that? >> I think that's definitely a thing, John. So I think that the power of the ecosystem is amazing. We have customers, partners, as you've seen and everything just joins up. So we're seeing more and more dot joining through data. And we're just seeing this incredible velocity in terms of what's possible and how we can co-build with our partners and do more and more with our customers. So Splunk moves incredibly quickly. And I think if anything, we're just, gaining velocity, which is fun and also really challenging. >> Cloud-scale. And certainly during the pandemic, you guys had a tailwind on the business side, talk about the journey that you've had with Splunk as in your career and also for the customers. How are they reacting and what can they expect as Splunk continues to evolve? >> I think we're working really hard to make sure that Splunk is easier to use. Everything gets every more integrated. And I think our goal and our vision is you just capture your data and you can apply it to any use case using Splunk. And to make it sort of easier see that data in action. And one of the things I love from today was the dashboard studio. They're just these beautiful visualizations that really are inspiring around how data is working in your organization. And for me, I've been a Splunker for three and a half years. And I just think there is just so much to do, and there's so much of our story ahead of us and so much potential. So just really enjoying working with customers on the next data frontier, really. >> You have the Jedi Knight from Star Wars speaking, you had the F1 car racing. Lando was here, kind of the young Jedi, the old Jedi. The generations are coming together. You're seeing that old IT world, which relied on Splunk. And now you have this new developer real-time shifting left with security DevOps now going mainstream, you kind of have the confluences of these cultures coming together. It's not really clashing. It's kind of jelling. How are you handling that? How do you see that? What's Splunk kind of doing? Because I can see the themes, am I right? >> No, no. One of the stories from this morning that really struck me is we have Cal Poly and we worked with Cal Poly on their security and they actually have their students using Splunk and they run their whole security environment. And at the very top end, you have Walmart, the Fortune one, just using Splunk at a massive, incredible scale. And I think that's the power of data. 
I mean, data is something that everyone should and can be able to use. And that's what we're really seeing is unlocking the ability to bring, you know, bring all of your data in service of what you're trying to do, which is fun. And it just keeps growing. >> We had Zach Brown, the CEO of F1 McLaren Racing Team, here on the Cube earlier. And it was interesting cause it was like driving the advantage with data, you know, kind of cliche, but they're using data very specifically, highly competitive. It almost kind of feels like a cloud kind of scale model because they've got thousands of people working on the team. They're on the track, they're competing, they're using data, they got to be agile and they got to be fast real time. Kind of sounds like the current enterprises these days. >> Absolutely. And I think what's interesting about McLaren, the thing I love is they have hundreds of terabytes of data moving at just incredible speed through Splunk Enterprise, but it all goes back to their mission control in the UK. And there are 32 people that look at all that data. And I think it's got a half second delay and they make all the decisions for the car on the track. And that I think is a great lesson to any enterprise: you have to, you know, you have to bring all that data together and you have to look at it and take decisions centrally for the benefit of your whole team. And I think McLaren is a really good example of when you do that it pays dividends and the team has had a really, really great season. >> Well, I want to say congratulations for pulling off a great virtual event. I know your physical event was on track and literally canceled at the last minute because of the pandemic with the Delta virus. But it was amazing, a made for digital TV kind of event. >> Absolutely, >> This is the future of media. >> Absolutely. And it is a lot of fun. And I think I'm really proud. We have done all of this with our in-house team, the brand, the experiences that you see, which is really fantastic. And it's given us a lot of ideas for sort of, you know, digital media and how we storytell, and really connect to our twenty thousand customers or two hundred and thirty thousand community members and keep everyone connected through digital. So this has been a lot of fun and a really nice moment for us this week. >> You know it's interesting, I was saying to the team here on one of our breaks, is that when you have this kind of agility with media to tell your own story directly, you're almost telling more stories than before. And there's a lot to tell. You have a lot of successful customers, the new partners. What's the coolest story that you've seen? What would you share that you think is your favorite? If you could pick one or a few of them, what are your top stories that you see happening? >> So I've talked about Cal Poly, which I love because it's students and you know, the scale of Walmart, but there are so many stories. And I think the ones that I love most are the data heroes. We talk about the data heroes a lot at Splunk, and the people that are able to harness that data and to take action on that data and make something amazing happen. And we just see that time and time again, across all kinds of organizations where data heroes are surfacing those insights, those red flags, if you like, and helping organizations stay one step ahead. And Conf is really a celebration of that. I think that's why we do this every year. And we really celebrate those data heroes.
So across the program, probably too many to mention, but in every industry and at every scale, people are, you know, making things happen with data and that's an incredibly exciting place to be. >> Well you have a lot of great customers to use as references. But I got to ask you, as you go forward this year in marketing, what are your plans to take on this new dynamic? You've got hybrid events, you've got the community, always popular and thriving with Splunk, at large-scale enterprises, global system integrators doing business deals with you guys, as you guys are continuing to grow and grow and grow, what's the strategy? How do you keep the Splunk coolness going? Cause that's, you know, you guys are growing so fast. That's your job, is to keep things on track. What's your strategy? >> I think I look at that and just, we put the customer at the heart of that. And we think, you know, who are the personas, who are the people that use Splunk? What's their experience? What are they trying to do? What are those challenges? And we design those moments to help them move forward faster. And so that I think is just a really good north star. It is really unifying, and our partners and customers and every Splunker get really behind that. So stay focused on that. >> Thanks for coming on the Cube, really appreciate it. Congratulations on a great event. And thanks for having the Cube. We love coming in and sharing our media partnership with you. Thank you for coming. >> Thank you so much. And next year is your tenth year, John. So we look forward to celebrating that as well. Thank you very much. >> Thank you. Thanks for coming on. Okay it's the Cube coverage here live in the Splunk studios. We are a virtual event, but it's turning out to be a hybrid event. It's like a VIP event, a lot of great stories. Check them out online. They'll be recycling through so much digital content. This is truly a great digital event. John Furrier, host of the Cube. Thanks for watching. (soft music)

Published Date: Oct 20 2021

SUMMARY:

Claire Hockin, CMO of Splunk, talks with John Furrier about producing .conf21 from a live studio in California after the planned Las Vegas event had to go virtual. She covers Splunk's cloud transformation and playful brand, the growing partner ecosystem under Teresa Carlson, customer stories ranging from Cal Poly and Walmart to McLaren's F1 mission control, the company's celebration of "data heroes," and a customer-first marketing strategy heading into the tenth year of theCUBE's .conf coverage.

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
Claire Hockin | PERSON | 0.99+
Zach Brown | PERSON | 0.99+
Claire Hockin | PERSON | 0.99+
John | PERSON | 0.99+
Teresa Carlson | PERSON | 0.99+
John Ferry | PERSON | 0.99+
California | LOCATION | 0.99+
Jeffery | PERSON | 0.99+
Walmart | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
Splunk | ORGANIZATION | 0.99+
UK | LOCATION | 0.99+
Cal Poly | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Cal Poly | ORGANIZATION | 0.99+
Silicon valley | LOCATION | 0.99+
32 people | QUANTITY | 0.99+
McLaren | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Star Wars | TITLE | 0.99+
twenty six thousand people | QUANTITY | 0.99+
tenth year | QUANTITY | 0.99+
Lando | PERSON | 0.99+
F1 McLaren Racing Team | ORGANIZATION | 0.99+
three and a half years | QUANTITY | 0.99+
This year | DATE | 0.99+
one | QUANTITY | 0.98+
this year | DATE | 0.98+
first year | QUANTITY | 0.98+
twenty thousand customers | QUANTITY | 0.97+
Splunk Enterprise | ORGANIZATION | 0.97+
Cube | COMMERCIAL_ITEM | 0.96+
this week | DATE | 0.95+
today | DATE | 0.95+
thousands of people | QUANTITY | 0.94+
one end | QUANTITY | 0.91+
this morning | DATE | 0.9+
pandemic | EVENT | 0.9+
last six weeks | DATE | 0.89+
Fortune | ORGANIZATION | 0.89+
Jedi Knight | PERSON | 0.87+
two hundred and thirty thousand community members | QUANTITY | 0.87+
Splunker | ORGANIZATION | 0.86+
Jedi | PERSON | 0.85+
half second | QUANTITY | 0.84+
Delta virus | OTHER | 0.83+
a year | QUANTITY | 0.81+
hundreds of terabytes of data | QUANTITY | 0.81+
twenty twelve | QUANTITY | 0.71+
One of the | QUANTITY | 0.66+
Splunk .conf21 | OTHER | 0.62+
Conf | ORGANIZATION | 0.55+
Splunk | TITLE | 0.51+

Jill Cagliostro, Anomali | Splunk .conf19


 

>> Announcer: Live from Las Vegas, it's theCUBE, covering Splunk .conf19 , brought to you by Splunk. >> Okay, welcome back, everyone. It's theCUBE's live coverage of, we're on day three of our three days of coverage of .conf from Splunk. This is their 10th anniversary, and theCUBE has been there along the way, riding the data wave with them, covering all the action. Our next guest is Jill Cagliostro, who's a product strategist at Anomali, who also has a sister in cyber. So she's got the cyber sisters going on. Jill, great to have you on. Looking forward to hearing about your story. >> Great, thanks. I'm glad to be here. I've been in the security industry for about seven years now. I started when I was 19, and my sister had started before me. She's a few years older than me, and she started out doing defense contracting on the cyber side. And she just kind of ended up in the internship looking for a summer job, and she fell in love. And as I got to kind of learn about what she was doing and how it all worked together, I started to pursue it at Georgia Tech. And I joined our on campus hacker's group club, Grey Hat. I was the first female executive. That was fun. I ended up getting an internship from there with ConocoPhillips and Bishop Fox, and moved on to the vendor side eventually with a brief stop in security operations. >> And so you have a computer science degree from Georgia Tech, is that right? >> I do, and I'm actually pursuing my master's in their online master's in cyber security program right now as well. >> Awesome. Georgia Tech, great school. One of the best computer science programs. Been following it for years. Amazing graduates come out of there. >> Yeah, we've got some pretty impressive graduates. >> So you just jumped right into cyber, okay. Male-dominated field. More women are coming in, more than ever now because there's a big surface area in security. What's your-- What attracted you to cyber? So, I love that it's evolving, and it allows you to think about problems in different ways, right. It's a new problem, there's new issues to solve, and I've been exposed to technology from a young age. I went to an all girls high school which had a really strong focus on STEM. So, I took my first computer science class at 15, and it was in an environment of all women that were incredibly supportive. I actually started a scholarship at our high school to get more women to look at technology longer term as career options, and I go back and speak and teach them that technology is more than coding. There's product management, there's, you know, customer success, there's sales engineering, there's marketing, there's so much more in the space than just coding. So, I really try to help the younger generation see that and explore their options. >> You know that's a great point, and, you know, when I was in the computer science back in the '80s, it was coding. And then it was--well, I got lucky it was systems also, a lot of operating systems, and Linux revolution was just begun coming on the scene. But it's more than that. There's data, data analytics. There's a whole creative side of it. There's a nerdy math side. >> The user experience. >> John: There's a huge area. >> Work flows and processes is something that is so needed in the security industry, right. It's how you do everything. It's how you retain knowledge. It's how you train your new staff. And even just building processes, is something that can be tedious, but it can be so powerful. 
And if that's something you're used to doing, it can be a great field to build. >> Well, you're here. It's our third day at the .conf, our seventh year here. What's your take of Splunk, because you're coming in guns blazing in the industry. You've got your cyber sister; she's at AWS. You see Splunk now. They've got a lot of capabilities. What's the security conversations like? What are people talking about? What's the top story in your mind here at .conf for security and Splunk? >> Yeah, so I'm actually a Splunk certified architect as well. Splunk was one of the first security tools that I really got to play with, so it's near and dear to my heart. And I get to work with-- I'm over at Anomali, which is a threat intelligence company, and I get to work with our own Splunk integration. So, what we do is we enable you to bring your intelligence into Splunk to search against all of the logs that you're bringing there to help you find the known data in your environment. And so, that's if you're a Splunk Enterprise customer or Splunk Core. But if you're an Enterprise Security customer, they have the threat intel component of their product, which we integrate with seamlessly. So, the components are really easy to work with, and we help you manage your intelligence a little bit more effectively, so you can significantly reduce your false positive rate while working within the framework you're comfortable in. And one of the-- >> What's the problem-- What's the problem statement that you guys solve? Is there one specific thing? >> God, there's--Yes there's quite a few issues, right. I would say the biggest thing that we solve is enabling our customers to operationalize their intelligence. There's so much information out there about the known bad, and CISOs and CEOs are sending emails every day, "Are we impacted? "Are we safe?" And we enable you to answer those questions very easily and very effectively. One of the other big trends we see is there is an issue in knowledge gaps, right. The industry is evolving so quickly. There's so much to know. Data on everything, right. So, we have another way that we can work with Splunk that isn't a direct integration, and it's our product called Anomali Lens. And what it does is it uses natural language processing to interpret the page that you're on and bring the threat intelligence to you. So, if you're looking at a Splunk search page, you know, investigating an incident on brute force, and you have a seemingly random list of IPs in front of you, and you need to know what does everyone else know about these, to make your job easier, you can scan it with Lens, and it'll bring the information right there to you. You don't have to go anywhere else. You can stay in the Splunk UI that you love. >> What are some exciting things you're working on now that you think people should know about, that maybe isn't covered in the press or in the media or in general? What are some exciting areas that are happening? >> Yeah, so Lens is pretty exciting for us. We just launched that last month. We're doing a lot. So, we also have a product called Anomali Match, which is purpose built for threat intel because often what we see is when a breach happens, the indicators that you need to know if they're in your environment, they don't come to light until six months to a year later. And then being able to go backwards in time to answer that question of were you impacted can be very difficult and very expensive, right. Anomali Match is purpose built to answer those questions.
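Both halves of what Jill describes here reduce to set membership over events: enrich live data against known-bad indicators, and replay newly published indicators over stored history. A minimal Python sketch of the idea follows — purely conceptual, not the Anomali or Splunk API, and every field name and indicator below is made up for illustration:

```python
# Conceptual sketch of threat-intel matching -- not the Anomali or Splunk API.
# The indicator set would come from a feed such as ThreatStream; here it is a
# literal using documentation-range IPs.
indicators = {"203.0.113.7", "198.51.100.23"}

def enrich(event):
    """Forward path: tag an incoming event that touches a known indicator."""
    hit = event.get("src_ip") in indicators or event.get("dest_ip") in indicators
    return {**event, "threat_match": hit}

def retro_search(historical_events, new_indicators):
    """Retro path (the Anomali Match idea): when indicators surface months
    after a breach, replay them against stored events to answer
    'were we impacted?' without re-ingesting anything."""
    return [e for e in historical_events
            if e.get("src_ip") in new_indicators or e.get("dest_ip") in new_indicators]

events = [{"src_ip": "10.0.0.5", "dest_ip": "203.0.113.7"}]
print([enrich(e) for e in events])            # forward enrichment
print(retro_search(events, {"203.0.113.7"}))  # retroactive match
```

The retro path is the expensive one at scale, which is why a purpose-built index over historical events matters.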
So, as the indicators become available, you know immediately was I impacted, on the order of seconds. So, it just enables you to answer your CEOs a little faster, right, and get better visibility into your environment. >> So when you look at data to everything, how do you see it evolving as more volume comes in? There's more threat surface area out there. >> Right, and it continues to increase its bounds. >> How should people be thinking about it as they zoom out and think architecturally, "I got to lay out my enterprise strategy. "I bought a few tools that try to be platforms, "but I need a broader playbook. "I need something bigger to help me." >> You've got to take a step back and get a little altitude, right? >> John: Yeah, take a little step back, yeah. >> Yeah, so threat intelligence should really be driving your whole security practice. We already know, for the most part, who's attacking who and what they're trying to do. And so, threat intelligence shouldn't just be an integration into Splunk, although that is a critical component of it. It should be informing, you know, your security practices, where you stand up offices. There may be locations that are higher risk for you as a particular type of entity. And all this information is available, but you have to just get access to it. You need one place to stop where you can google the threat intel, and that's what Anomali ThreatStream, our flagship product, aims to do. And Lens just makes it more accessible than ever. Rather than having to go look it up yourself, it brings it to you. And so, we're trying to augment the knowledge base without having to memorize everything. That's what we need to do is we need to find ways to bring this information and make it more accessible so you don't have to look in three tools to find it. >> So, I got to ask you and change topics. As the younger generation comes into the industry, one of the things that I'm seeing as a trend is more developers are coming in. And it's not just so much devops, cloud's great, we love devops, but ops, network ops and security ops, are also a big part of it. People are building applications now. So, like, you're seeing startups that have been tech for good startups coming out, where you're seeing great examples of people literally standing up applications with data. What's the young generation-- because there's a hacker culture out there that can move fast, solve a problem, but they don't have to provision a lot of stuff. That's what cloud computing does. But now Splunk's the world. Data's becoming more accessible. Data's the raw materials to get that asset or that value. What are developers-- how do you see the developers programming with data? >> So, they're looking at their jobs and saying, "What am I bored doing "that I have to do over and over every day, "and how can I automate it?" So, there's a lot of SOAR technology. Splunk also has Phantom, and that's enabling our developers, our younger generation who grew up around Python and coding, to quickly plug a few pieces together and automate half their jobs, which gives them the time to do the really interesting stuff, the stuff that requires human intervention and interpretation, and analysis that can't be coded. And it's just giving us more time and more resources to put-- >> What kind of things are they doing with that extra time? Creative things, pet projects, or critical problems? >> Oh, God, so many pet projects. God, what are you interested in?
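The Phantom-style automation Jill mentions usually takes the shape of a small playbook: an alert arrives, a few enrichment and response steps run in sequence, and a human only sees the interesting cases. A generic Python sketch of that pattern follows — deliberately not the real Phantom API; enrich_ip and open_ticket are hypothetical stubs standing in for whatever services a real playbook would call:

```python
# Generic SOAR-style playbook sketch. This shows the pattern only; it is not
# the Splunk Phantom API, and enrich_ip()/open_ticket() are hypothetical stubs.
def enrich_ip(ip):
    # A real playbook would query a threat-intel service here.
    return {"ip": ip, "reputation": "suspicious"}

def open_ticket(summary, details):
    # A real playbook would call a ticketing system's API here.
    print(f"TICKET: {summary} -> {details}")

def on_alert(alert):
    """The triage steps an analyst used to do by hand, chained automatically."""
    intel = enrich_ip(alert["src_ip"])
    if intel["reputation"] != "benign":
        open_ticket(f"Investigate {alert['src_ip']}", intel)
    return intel

on_alert({"src_ip": "198.51.100.23", "rule": "brute_force_login"})
```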
I've seen things being done to like mine bitcoin on the side, right, to make a little extra cash. That's always fun. I've seen people automate their social media profile. I've seen threat researchers use scripting to help them find new information on the internet and reshare it to build their public brand. That's a really big component of the younger generation that I don't think was as big in previous generations, where your public brand matters more than ever. And so, we're bringing that into everything we do. It's not just a job, it's a lifestyle. >> Sharing's a big ethos, too, sharing data. How important is sharing data in the security culture? >> Oh, it's critical. So, I mean, sharing data's been happening for forever, right. Company A has always been calling up their friend at company B, "Hey, we see this thing. "You might want to take a look, "but you didn't hear it from me," right. But threat intel platforms, not just ThreatStream but all of them, allow you to share information at a larger scale than ever before. But it also, it gives you the ability to remain anonymous. Everyone's really scared to put into writing, "Hey, we saw this at our company," 'cause there's the risk of attribution, there's legal requirements, right. But with automated sharing you can retain a little bit of-- you can be a little bit anonymous. So, you can help the others be protected without exposing yourself to additional risk. >> Jill, you're awesome to have on theCUBE. Love to get the perspective of the young, up and coming, computer science, cyber, cyber sister. >> Cyber sister. >> John: You can just, other--where does she work? Amazon? >> She's over at AWS now. She just moved over a couple of weeks ago. We actually used to work together at Anomali. She did presales, and I did post sales. It was a lot of fun. >> And she hooked you into security, didn't she? >> Oh, she did, for better or worse, although I hope she's not watching. >> She will. She'll get a clip of this, I'll make sure. Jill, final question. The Splunk .conf this year, what's your takeaway? What are you going to take back to the office with you or share with your friends if they say, "Hey, what was the big story happening at Splunk this year?" What's going on here this year? >> The big thing is the data. The data is more accessible than ever before, so we're being challenged by Splunk to find new ways to use it, to innovate new ways. And I think that's kind of been their messaging the whole time, "Hey, we're giving you the power to do what you want. "What are you going to do with it?" This is my third Splunk conference in a row, and every year it just gets more and more exciting. I can't wait to see what next year holds. >> They allow people to deal with data, messy data to good data. >> Clean it up. >> John: Clean it up. >> Make it easy to search across multiple data sources from one command line. Their user experience is the most intuitive I've used in terms of the log management solutions. >> Jill, great to have you, great insights. Thanks for sharing the data. >> Thanks so much, John. >> John: here on theCUBE. Sharing data on theCUBE, that's what we do. We bring the data, the guests, we try to create it for you. Of course, we're data-driven, we're CUBE-driven. I'm John Furrier, here from .conf, the 10th anniversary. We've been here from the beginning, riding the data tsunami waves. Waves plural 'cause there's more waves coming. I'm John Furrier. Thanks for watching. (upbeat music)

Published Date: Oct 24 2019

SUMMARY:

Jill Cagliostro, product strategist at Anomali, joins John Furrier on day three of .conf19. She traces her path into security — a sister in cyber, an all-girls STEM high school, Georgia Tech and its Grey Hat club — and explains how Anomali operationalizes threat intelligence with Splunk: ThreatStream for managing intel, Lens for bringing context to any page via natural language processing, and Match for retroactively answering "were we impacted?" She also discusses SOAR automation with Phantom, anonymous data sharing across companies, and her takeaway that Splunk is challenging users to find new ways to use increasingly accessible data.

SENTIMENT ANALYSIS:

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Jill Cagliostro | PERSON | 0.99+
Jill | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Grey Hat | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Georgia Tech | ORGANIZATION | 0.99+
Python | TITLE | 0.99+
Anomali | ORGANIZATION | 0.99+
three days | QUANTITY | 0.99+
seventh year | QUANTITY | 0.99+
three tools | QUANTITY | 0.99+
15 | QUANTITY | 0.99+
ConocoPhillips | ORGANIZATION | 0.99+
last month | DATE | 0.99+
third day | QUANTITY | 0.99+
this year | DATE | 0.99+
next year | DATE | 0.99+
Las Vegas | LOCATION | 0.99+
Linux | TITLE | 0.99+
10th anniversary | QUANTITY | 0.99+
Splunk | ORGANIZATION | 0.98+
a year later | DATE | 0.98+
theCUBE | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
about seven years | QUANTITY | 0.97+
One | QUANTITY | 0.96+
third | QUANTITY | 0.96+
19 | QUANTITY | 0.96+
Anomali | PERSON | 0.96+
day three | QUANTITY | 0.95+
one place | QUANTITY | 0.95+
Bishop Fox | ORGANIZATION | 0.94+
couple of weeks ago | DATE | 0.94+
first female | QUANTITY | 0.92+
one specific thing | QUANTITY | 0.86+
first computer science | QUANTITY | 0.85+
ThreatStream | TITLE | 0.84+
Splunk .conf19 | OTHER | 0.81+
Lens | ORGANIZATION | 0.8+
Splunk Enterprise | ORGANIZATION | 0.79+
'80s | DATE | 0.74+
half | QUANTITY | 0.73+
Anomali ThreatStream | ORGANIZATION | 0.73+
Match | COMMERCIAL_ITEM | 0.73+
one command | QUANTITY | 0.72+
six | QUANTITY | 0.71+
.conf | TITLE | 0.7+
first security tools | QUANTITY | 0.68+
Splunk | TITLE | 0.64+
God | PERSON | 0.61+
intel | ORGANIZATION | 0.59+
tsunami waves | EVENT | 0.56+
months | DATE | 0.54+
jobs | QUANTITY | 0.54+
.conf | OTHER | 0.52+
years | QUANTITY | 0.52+
.conf | EVENT | 0.49+

Bina Khimani, Amazon Web Services | Splunk .conf18


 

>> Announcer: Live from Orlando, Florida, it's theCUBE, covering .conf2018. Brought to you by Splunk. >> Welcome back to .conf2018 everybody, this is theCUBE the leader in live tech coverage. I'm Dave Vellante with Stu Miniman, wrapping up day one and we're pleased to have Bina Khimani, who's the global head of Partner Ecosystem for the infrastructure segments at AWS. Bina, it's great to see you, thanks for coming on theCUBE. >> Thank you for having me. >> You're very welcome. >> Pleasure to be here. >> It's an awesome show, everybody's talking data, we love data. >> Yes. >> You guys, you know, you're the heart of data and transformation. Talk about your role, what does it mean to be the global head of Partner Ecosystems, infrastructure segments, a lot going on in your title. >> Yes. >> Dave: You're busy. (laughing) >> So, in the infrastructure segment, we cover dev apps, security, networking as well as cloud migration programs, different types of cloud migration programs, and we got segment leaders who really own the strategy and figure out where are the best opportunities for us to work with the partners as well as partner development managers and solution architects who drive adoption of the strategy. That's the team we have for this segment. >> So everybody wants to work with AWS, with maybe one or two exceptions. And so Splunk, obviously, you guys have gotten together and formed an alliance. I think AWS has blessed a lot of the Splunk technology, vice versa. What's the partnership like, how has it evolved? >> So Splunk has been an excellent partner. We have really joined hands together on many fronts. They are a fantastic AWS Marketplace partner. We have many integrations of Splunk and AWS services, whether it is Kinesis Data Firehose, or Macie, or WAF. So many services Splunk and AWS really are well integrated together. They work together. In addition, we have joint go-to-market programs. We have field engagement, we have demand generation campaigns. We join hands together to make sure that our customers, joint customers, are really getting the best value out of it. So speaking of partnership, we recently launched a migration program for getting Splunk on prem, Splunk Enterprise customers to Splunk Cloud while, you know, they are on their journey to Cloud anyway. >> Yeah, Bina let's dig into that some, we know AWS loves talking about migrations, we dig into all the databases that are going and we talk at this conference, you know Splunk started out very much on premises but we've talked to lots of users that are using the Cloud and it's always that right. How much do they migrate, how much do they start there? Bring us inside, you know, what led to this and what are the workings of it. >> So what, you know if you look at the common problems customers have on prem, they are the same problems that customers have with Splunk Enterprise on prem, which is, you know, they are looking for resiliency. Their administrator goes on vacation. They want to keep it up and running all the time. They have people making some changes that shouldn't have been made. They want the experts to run their infrastructure. So Splunk Cloud is run by Splunk which is, you know, they are the best at running that. Also, you know I just heard a term called lottery proof. So Splunk Cloud is lottery proof, what that means, the funny thing is, that you know, your administrator wins the lottery, you're not out of business. (laughs) At the same time if you look at the time to value.
I was talking to a customer last night over dinner and they were saying that if they wanted to get on Splunk Enterprise, for their volume of data that they needed to be ingested in Splunk, it would take them six months to just get the hardware in place. With Splunk Cloud they were running in 15 minutes. So, just the time to value is very important. Other things, you know, you don't need to plan for your peak performance. You can stretch it, you can get all the advantages of scalability, flexibility, security, everything you need. As well as running Splunk Cloud, you know, you are truly cost optimized. Also Splunk Cloud is built for AWS so it's really cost optimized in terms of infrastructure costs, as well as the Splunk licensing cost. >> Yeah it's funny you mentioned the joke, you know you go to Splunk cloud you're not out of a job, I mean what we've heard, the Splunk admins are in such high demand. Kind of running their instances probably isn't, you know, a major thing that they'd want to be worrying about. >> Yes, yes, so-- >> Dave: Oh please, go. >> So Splunk administrators are in such a high demand and because of that, you know, not only are customers struggling with having the right administrators in place, but also with retaining them. And when they go to Cloud, you know, this is a SaaS version, they don't need administrators, nor do they need hardware. They can just trust the experts who are really good at doing that. >> So migrations are a tricky thing and I wonder if we can get some examples because it's like moving a house. You don't want to move, or you actually do want to move, but it's, you have to be planful, it's a bit of a pain, but the benefits, a new life, so. In your world, you got to be better, so the world that you just described of elastic, you don't have to plan for peaks, or performance, the cost, capex, the opex, all that stuff. It's 10 X better, no debate there. But still there's a barrier that you have to go through. So, how does AWS make it easier or maybe you could give us some examples of successful migrations and the business impact that you saw. >> Definitely. So like you said, right, migration is a journey. And it's not always an easy one. So I'll talk about different kinds of migration but let me talk about Splunk migration first. So Splunk migration, unlike many other migrations, is actually fairly easy because the Splunk data is transient data, so customers can just point all their data sources to Splunk Cloud instead of Splunk Enterprise and it will start pumping data into Splunk Cloud, which is productive from day one. Now if some customers want to retain 60 to 90 days of data, then they can run Splunk Enterprise on prem for 60 more days. And then they can move on to Splunk Cloud. So in this case there was no actual data migration involved. And because this is log data that people want to see only for 60 to 90 days, and then it's not valuable anymore, they don't really need to do a large migration; in this case it's practically just configure your data sources and you are done. That's the simplest part of the migration, which is Splunk migration to Splunk Cloud. Let's talk about different migrations. So... you have heard many customers, you know, like Capital One or many others like Dow Jones, they are saying that we are going all in on AWS and they are shutting down their data centers, they are, you know, migrating hundreds of thousands of applications and servers, which is not as simple as Splunk Cloud, right? So, what AWS, you know, AWS does this day in and day out.
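The "just configure your data sources" step usually amounts to repointing forwarders at the new destination. A minimal sketch of what that can look like on a Splunk universal forwarder is below — the hostname is a placeholder, and in practice Splunk Cloud onboarding typically uses a preconfigured credentials app supplied by Splunk rather than a hand-edited file:

```
# outputs.conf on a forwarder -- illustrative only. The server value is a
# placeholder; real Splunk Cloud stacks ship a credentials app with the
# correct endpoints and certificates.
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
```

Once forwarders point at the new group, fresh data lands in Splunk Cloud immediately, which is why the on-prem instance only needs to stay up for the retention window.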
So we have figured it out again and again and again. In all of our customer interactions and migrations we are acquiring a ton of knowledge that we are building into our migration programs. We want to make sure that our customers are not reinventing the wheel every time. So we have migration programs like the migration acceleration program, which is for custom large scale migrations for larger customers. We have partner migration programs, which is entirely focused on working with SI partners, consulting partners to lead the migrations. As well as our workload migration program, where we are standardizing migrations of standard applications like Splunk or Atlassian, or many other such standard applications, how we can provide kind of an easy button to migrate. Now, when customers are going through this migration journey, you know, it's going to be 10 X better like you said, but initially there is a hump. They are probably needing to run two parallel environments, there is a cost element to that. They are also optimizing their business processes, there is some delay there. They are doing some technical work, you know, discovery, prioritization, landing zone creations, security, and networking aspects. There are many elements to this. What we try to do is, if you look at the graph, their cost is right now here, and it's going to go down, but before that it goes up and then goes down. So what we try to do is really provide all the resources to take that hump out in terms of technical support, technical enablement, you know, partner support, funding elements, marketing. There are all types of elements as well as a lot of technical integrations and quick starts to take that hump out and make it really easy for our customers. >> And that was our experience, we're an Amazon customer and we went through a migration about, I don't know, five or six years ago. We had, you know, server racks and a cage and we were like, you know, moving wires over and you'd get an alert, you'd have to go down and fix things. And so it took us some time to get there, but it is 10 X better now though. >> The developers were so excited and I wanted to ask you about, sort of the dev-ops piece of it because that's really, it became, we just completely eliminated all the operational pieces of it and integrated it and let the developers take care of it. Became, truly became infrastructure as code. So the dev-ops culture has permeated our small organization, can't imagine the impact on a larger company. Wonder if you could talk about that a little bit. >> Definitely. So... As customers are going through this cloud migration journey they are looking at their entire landscape of applications and they're discovering things that they never did. When they discover, they are trying to figure out, should I go ahead and migrate everything to AWS right now, or should I refactor and optimize some of my applications. And there I'm seeing both types of decisions, where some customers are taking most of their applications, shifting it to cloud and then pausing and thinking, now it is phase two where I am on cloud, I want to take advantage of the best of breed, whatever technology is there. And I want to transform my applications and I want to really be more agile. At the same time there are customers who are saying that I'm going to discover all my workloads and applications and I'm going to prioritize a small set of applications which we are going to take through transformation right now.
And for the rest of it we will lift and shift and then we will transform. But as they go through this transformation they are changing the way they do business. They are changing the way they are utilizing different technology. Their core focus is on how do I really compete with my competition in the industry, and for that how can IT provide me that agility that I need to roll out changes in my business day in day out. And for that, you know, Lambda, the entire Code portfolio, CodeBuild, CodeCommit, CodeDeploy, as well as CloudTrail, and you know all the things that, all the services we have as well as our partners have, they provide them truly that edge on their industry and market. >> Bina, how has the security discussion changed? When Stu and I were at the AWS public sector summit in June, the CIO of the CIA stood up on stage in front of 10,000 people and said, "The cloud on my worst day from a security perspective "is better than my client server infrastructure "on a best day." That's quite an endorsement from the CIA, who's got some chops in security. How has that discussion changed? Obviously it's still fundamental, critical, it's something that you guys emphasize. But how has the perception and reality changed over the last five years? >> Cloud is, you know, security in cloud is a shared responsibility. So, Amazon is really, really good at providing all the very, very secure infrastructure. At the same time we are also really good at providing customers and business partners all of the tools and hand-holding them so that they can make their application secure. Like you said, you know, AWS, many of the analysts are saying that AWS is far more secure than anything they can have within their own data center. And as you can see in this journey, customers are not now thinking about is it secure or not. We are seeing the conversation that, how in fact, speaking of Splunk right, one customer that I talked to, he was saying, when I asked them why did you choose Splunk cloud on AWS, his take was that, "I wanted near instantaneous SOC compliance "and by moving to Splunk cloud on AWS "I got that right away." Even when I'm talking to public sector customers they are saying, you know, I want FedRAMP; in the healthcare industry, I want HIPAA compliance. Everywhere we are seeing that we are able to keep up with security and compliance requirements much faster than what customers can do on their own. >> So they, so you take care of, certainly from the infrastructure standpoint, those certifications and that piece of the compliance so the customer can worry about maybe some of the things that you don't cover, maybe some of their business processes and other documentation, ITIL stuff that they have to do, whatever. But now they have more time to do that presumably 'cause that's a check box, AWS has that covered for me, right? Is that the right thinking? >> Yes, plus we provide them all the tools and support and knowledge and everything so that they, and even partner support who are really good at it, so that not only do they understand that the application and infrastructure will come together as an entire secure environment, but also they have everything they need to be able to make applications secure. And Splunk is another great example, right? Splunk helps customers get application level security and AWS is providing them infrastructure, and together we are working to make sure our customers' application and infrastructure together are secure.
>> So speaking about migrations, database is a hot topic at a high level anyway, I wonder if you could talk about database migrations. Andy Jassy obviously talks a lot about, well let's see, we saw RDS on-prem at VMworld, big announcement. Certainly Aurora, DynamoDB is one of the databases we use. Redshift obviously. How are database migrations going, what are you doing to make those easier? >> So what we do in a nutshell, right, for everything we try to build a programmatic, repeatable, scalable approach. That's what Amazon does. And what we do is that for each of these standard migrations for databases, we try to figure out, let's take a few examples, and let's figure out playbooks, let's figure out runbooks, let's make sure technical integrations are in place. We have quick starts in place. We have consulting partners who are really good at doing this again and again and again. And we have all the knowledge built into tools and services and support so that whenever customers want to do it they don't run into hiccups and they have a really pleasant experience. >> Excellent. Well I know you're super busy, thanks for making some time to come on theCUBE. I always love to have AWS on. So thanks for your time Bina. >> Thank you, very nice to meet you both. >> Alright you're very welcome. Alright so that's a wrap for day one here at Splunk .conf 2018, Stu and I will be back tomorrow. Day two, more customers, we got senior executives coming on tomorrow, of course Doug Merritt, always excited to see Doug. Go to siliconangle.com you'll see all the news, theCUBE.net is where all these videos live and wikibon.com for all the research. We're out day one at Splunk, you're watching theCUBE, we'll see you tomorrow. Thanks for watching. >> Bina: Thank you. (electronic music)

Published Date: Oct 10 2018

SUMMARY:

Bina Khimani, global head of Partner Ecosystem for AWS's infrastructure segments, wraps up day one of .conf2018 with Dave Vellante and Stu Miniman. She describes the AWS-Splunk partnership — Marketplace, service integrations, and joint go-to-market — the newly launched program for moving Splunk Enterprise customers to Splunk Cloud, AWS's broader migration programs for taking the cost "hump" out of cloud journeys, the shared responsibility model for security and compliance, and how AWS standardizes database migrations with playbooks, runbooks, and quick starts.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
AWS | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Doug Merritt | PERSON | 0.99+
Dave | PERSON | 0.99+
60 | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Andy Jassy | PERSON | 0.99+
Doug | PERSON | 0.99+
Bina Khimani | PERSON | 0.99+
CIA | ORGANIZATION | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Stu | PERSON | 0.99+
Splunk | ORGANIZATION | 0.99+
June | DATE | 0.99+
six months | QUANTITY | 0.99+
15 minutes | QUANTITY | 0.99+
90 days | QUANTITY | 0.99+
Orlando, Florida | LOCATION | 0.99+
tomorrow | DATE | 0.99+
10,000 people | QUANTITY | 0.99+
siliconangle.com | OTHER | 0.99+
60 more days | QUANTITY | 0.99+
10 X | QUANTITY | 0.98+
five | DATE | 0.98+
both | QUANTITY | 0.98+
one customer | QUANTITY | 0.98+
Capital One | ORGANIZATION | 0.98+
Bina | PERSON | 0.98+
Lambda | TITLE | 0.98+
theCUBE.net | OTHER | 0.96+
Splunk Cloud | TITLE | 0.96+
hundreds of thousands | QUANTITY | 0.96+
.conf2018 | EVENT | 0.96+
six years ago | DATE | 0.96+
day one | QUANTITY | 0.95+
Splunk | PERSON | 0.95+
VMworld | ORGANIZATION | 0.95+
two exceptions | QUANTITY | 0.95+
Day two | QUANTITY | 0.95+
last night | DATE | 0.94+
both types | QUANTITY | 0.94+
applications | QUANTITY | 0.93+
Partner Ecosystem | ORGANIZATION | 0.93+
Partner Ecosystems | ORGANIZATION | 0.9+
each | QUANTITY | 0.9+

Jon Rooney, Splunk | Splunk .conf18


 

>> Announcer: Live from Orlando, Florida. It's theCUBE. Covering .conf18, brought to you by Splunk. >> We're back in Orlando, Dave Vellante with Stu Miniman. Jon Rooney is here. He's the vice president of product marketing at Splunk. Lots to talk about, Jon, welcome back. >> Thank you, thanks so much for having me back. Yeah we've had a busy couple of days. We've announced a few things, quite a few things, and we're excited about what we're bringing to market. >> Okay well let's start with yesterday's announcements. Splunk 7.2. >> Yup. >> What are the critical aspects of 7.2? What do we need to know? >> Yeah I think first, Splunk Enterprise 7.2, a lot of what we wanted to work on was manageability and scale. And so if you think about the core key features, the smart storage, which is the ability to separate the compute and storage, and move some of that cool and cold storage off to blob. Sort of API level blob storage. A lot of our large customers were asking for it. We think it's going to enable a ton of growth and enable a ton of use cases for customers, and that's just sort of smart design on our side. So we've been real excited about that. >> So that's simplicity and it's less costly, right? Free storage. >> Yeah and you free up the resources to just focus on what you're asking out of Splunk. You know, running the searches and the saved searches. Move the storage off to somewhere else, and you pull it back when you need it. >> And when I add an index, I don't have to add both compute and storage, I can add whatever I need in granular increments, right? >> Absolutely. It just enables more graceful and elastic expansion. >> Okay that's huge, what else should we know about? >> So workload management, which again is another manageability and scale feature. The great thing about Splunk is you put your data in there and multiple people can ask questions of that data. It's just like an apartment building: if you only have one hot water heater and a bunch of people are taking a shower at the same time, maybe you want to give some privileges to say, you know, the penthouse, they're going to get the hot water first. Other people not so much. And that's really the underlying principle behind workload management. So there are certain groups and certain people that are running business critical, or mission critical, searches. We want to make sure they get the resources first, and then maybe people that are experimenting or kind of kicking the tires. We have a little bit of a gradation of resources. >> So that's essentially programmatic SLAs. I can set those policies, I can change them. >> Absolutely, it's the same level of granular control that, say, you have on access control. It's the same underlying principle. >> Other things? Go ahead. >> Yeah John, you guys always have some cool, pithy statements. One of the things that jumped out to me in the keynotes, because it made me laugh, was the end of metrics. >> John: Yes. >> You've been talking about data. The line I heard today was Splunk users are at the crossroads of data, so give us a little insight about what you're doing that's different in the way you manage data, 'cause every company can interact with the same data. Why is the Splunk user different, what do they do differently, and how is your product different? >> Yeah I mean absolutely.
I think the core of what we've always done, and Doug talked about it in the keynote yesterday, is this idea of this expansive, investigative search. The idea that you're not exactly sure what the right question is, so you want to go in, ask a question of the data, which is going to lead you to another question, which is going to lead you to another question, and that's that finding a needle in a pile of needles that Splunk's always been great at. And we think of that as more the investigative, expansive search. >> Yeah so when I think back, I remember talking with companies five years ago when they'd say, okay, I've got my data scientists, and finding which is the right question to ask once I'm swimming in the data can be really tough. Sounds like you're getting answers much faster. It's not necessarily a data scientist, maybe it is. We saw BMW on stage. >> Yeah. >> But help us understand why this is just so much simpler and faster. >> Yeah I mean again it's the idea for the IT and security professionals to not necessarily have to know what the right question is or even anticipate the answer, but to find that in an evolving, iterative process. And the idea that there's flexibility, you're in no way penalized, you don't have to go back and re-ingest the data or do anything when you're changing exactly what your query is. You're just asking the question which leads to another question, and that's how we think about it on the investigative side. From a metrics standpoint, we do have additional ... The third big feature that we have in Splunk Enterprise 7.2 is an improved metrics visualization experience. The idea is that our investigative search, which we think is the best in the industry, is for when you're not exactly sure what you're looking for and you're doing a deep dive; but if you know what you're looking for from a monitoring standpoint, you're asking the same question again and again and again, over and over. You want to be able to have an efficient and easy way to track that if you're just saying I'm looking for CPU utilization or some other metric. >> Just one last follow up on that. I look ... the name of the show is .conf >> Yes. >> Because it talks about the config file. You look at everywhere, people are in the code versus GUI and graphical and visualization. What are you hearing from your user base? How do you balance between the people that want to get in there versus being able to point and click? Or ask a question? >> Yeah this company was built off of the strength of our practitioners and our community, so we always want to make sure that we create a great and powerful experience for those technical users and the people that are in the code and in the configuration files. But you know that's one of the underlying principles behind Splunk Next, which was a big announcement part of day one, is to bring that power of Splunk to more people. So create the right interface for the right persona and the right people. So the traditional Linux sys admin person who's working in IT or security, they have a certain skill set. So the SPL and those things are native to them. But if you are a business user and you're used to maybe working in Excel or doing pivot tables, you need a visual experience that is more native to the way you work. And the information that's sitting in Splunk is valuable to you, we just want to get it to you in the right way. And similar to what we talked about today in the keynote with application developers.
The idea of saying, well, everything that you need is going to be delivered in a payload of JSON objects makes a lot of sense if you're a modern application developer. If you're a business analyst somewhere that may not make a lot of sense, so we want to be able to service all of those personas equally. >> So you've made metrics a first class citizen. >> John: Absolutely. >> Opening it up to more people. I also wanted to ask you about the performance gains. I was talking to somebody and I want to make sure I got these numbers right. It was literally like three orders of magnitude faster. I think the number was 2000 times faster. I don't know if I got that number right, it just sounds ... Implausible. >> That's specifically what we're doing around the data fabric search which we announced in beta on day one. Simply because of the approach to the architecture and the approach to the data ... I mean Splunk is already amazingly fast, amazingly best in class in terms of scale and speed. But you realize that what's fast today, because of the pace and growth of data, isn't quite so fast two, three, four years down the road. So we're really focused looking well into the future and enabling those types of orders of magnitude growth by completely reimagining and rethinking through what the architecture looks like. >> So talk about that a little bit more. Is that ... I was going to say, is that the source of the performance gain? Is it sort of the architecture, is it tighter code, was it a platform do over? >> No I mean it wasn't a platform do over, it's just the idea that in some cases, like I'm federating a search between one index here and one index there, to have a virtualization layer that also taps into compute. Let's say living in Apache Kafka, taking advantage of those sorts of open source projects and open source technologies to further enable and power the experiences that our customers ultimately want. So we're always looking at what problems our customers are trying to solve, how do we deliver to them through the product, and that constant iteration, that constant self evaluation, is what drives what we're doing. >> Okay now today was all about the line of business. We've been talking about, I've used the term land and expand about a hundred times today. It's not your term but others have used it in the industry and it's really the template that you're following. You're in deep in sec ops, you're in deep in IT operations management, and now we're seeing just big data permeate throughout the organization. Splunk is a tool for business users and you're making it easier for them. Talk about Splunk Business Flow. >> Absolutely, so Business Flow is the idea that we had ... Again we learned from our customers. We had a couple of customers that were essentially tip of the spear, doing some really interesting things where, as you described, let's say the IT department said well we need to pull in this data to check out application performance and those types of things. The same data that's flowing through is going to give you insight into customer behavior. It's going to give you insight into coupons and promotions and all the things that the business cares about. If you're a product manager, if you're sitting in marketing, if you're sitting in promotions, that's what you want to access, and you want to be able to access that in real time. So the challenge that we're now solving with things like Business Flow is how do you create an interface?
How do you create an experience that again matches those folks and how they think about the world? The magic, the value that's sitting in the data, is we just have to surface it in the right way for the right people. >> Now the demo, Stu knows I hate demos, but the demo today was awesome. And I really do, I hate demos because most of them are just so boring, but this demo was amazing. You took a bunch of log data and a business user ingested it and looked at it and it was just a bunch of data. >> Yeah. >> Like you'd expect, and you go, eh, what am I supposed to do with this? And then he pushed a button and all of a sudden there was a flow chart and it showed the flow of the customer through the buying pattern. Now maybe that's a simpler use case but it was still very powerful. And then he isolated on where the customer actually made a phone call to the call center, because you want to avoid that if possible, and then he looked at the percentage of drop outs, which was like 90% in that case, versus the percentage of drop outs in a normal flow, which was 10%. Oop, something's wrong, drilled in, fixed the problem. He showed how he fixed it, all graphically, beautiful. Is it really that easy? >> Yeah I mean I think if you think about what we've done in computing over the last 40 years. If you think about even the most basic word processor, the most basic spreadsheet work, that was done by trained technicians 30-40 years ago. But the democratization of data created this notion of the information worker, and we're a decade or so plus into big data and the idea that, oh, that's only highly trained professionals and scientists and people that have PhDs. There's always going to be an aspect of the market or an aspect of the use cases that is of course going to be that level of sophistication, but ultimately this is all work for an information worker. If you're an information worker, if you're responsible for driving business results and looking at things, it should be the same level of ease as your traditional sort of office suite. >> So I want to push on that a little if I can, and just test this, because it looked so amazingly simple. Doug Merritt made the point yesterday that business processes used to be codified. Codifying business processes is a waste of time because business processes are changing so fast. The business process that you used in the example was a very linear process, admittedly. I'm going to search for a product, maybe read a review, I'm going to put it in my cart, I'm going to buy it. You know, very straightforward. But business processes as we know are unpredictable now. Can that level of simplicity work when the data feeds some kind of unpredictable business process? >> Yeah and again that's our fundamental difference, how we've done it differently than everyone in the market. It's the same thing we did with IT Service Intelligence when we launched that back in 2015, because it's not a top-down approach. We're not dictating, taking sort of a central planning approach to say this is what it needs to look like, the data needs to adhere to this structure. The structure comes out of the data, and that's how we think about it. It's a bit of a simplification, but I'm a marketing guy and I can get away with it. But that's where we think we do it differently, in a way that allows us to reach all these different users and all these different personas. So it doesn't matter. Again, that business process emerges from the data.
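The "structure comes out of the data" idea Rooney describes is commonly called schema-on-read, and it can be seen in miniature outside of Splunk. A toy Python sketch with an invented log line and field names: the raw event is stored untouched, and each question imposes its own structure at read time, so changing the question never requires re-ingesting the data.

```python
import re

# A raw, unmodeled event, stored as-is at ingest time (hypothetical log line).
raw = '10.2.3.4 - - [03/Oct/2018:10:15:32] "GET /cart/checkout HTTP/1.1" 500 1043'

# Question 1: which paths are returning server errors? Structure is imposed only now.
errors = re.compile(r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>5\d\d)')
match = errors.search(raw)
if match:
    print(match.groupdict())  # {'method': 'GET', 'path': '/cart/checkout', 'status': '500'}

# Question 2: who is talking to us? A different schema over the same raw event.
clients = re.compile(r"^(?P<ip>\S+)")
print(clients.search(raw).groupdict())  # {'ip': '10.2.3.4'}
```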
>> And Stu, that's going to be important when we talk about IOT, but jump in here. >> Yeah so I wanted to have you give us a bit of insight on the natural language processing. >> John: Yeah, natural language processing. >> You've been playing with things like the Alexa. I've got a Google Home at home, I've got Alexa at home, my family plays with it. Certain things it's okay for, but I think about the business environment. The requirements in what you might ask Alexa to ask Splunk seem like they would be challenging. You've got a global audience. You know, languages are tough, accents are tough, syntax is really, really challenging. So give us the why, and where are we. Is this nascent? Do you expect customers to really be strongly using this in the near future? >> Absolutely. The notion of natural language search or natural language computing has made huge strides over the last five or six years, and again we're leveraging work that's done elsewhere. To Dave's point about demos ... Alexa, it looks good on stage. What do we think? If you were to ask me, we'll see. We'll always learn from the customers, and the good thing is I like to be wrong all the time. These are my hypotheses, but my hypothesis is the most actually relevant use of that technology is not going to be speech, it's going to be text. It's going to be in Slack or HipChat, where you have a team collaborating on an issue or project and they say I'm looking for this information, and they're going to pass that search via text into Splunk and back via Slack in a way that's very transparent. That's where I think the business cases are going to come through, and if you were to ask me, again, we're starting the betas, we're going to learn from our customers. But my assumption is that's going to be much more prevalent within our customer base. >> That's interesting because the quality of that text presumably is going to be much much better, at least today, than what you get with speech. We know that well from the transcriptions we do of theCUBE interviews. Okay so that's it. ML and NLP, I thought I heard 4.0, right? >> Yeah so we've been pushing really hard on the machine learning toolkit for multiple versions. That team is heavily invested in working with customers to figure out what exactly they want to do. And as we think about the highly skilled users, our customers that do have data scientists, that do have people that understand the math, to go in and say no, we need to customize or tweak the algorithm to better fit our business, how do we allow them essentially bare metal access to the technology. >> We're going to leave dev cloud for Skip if that's okay. I want to talk about industrial IOT. You said something just now that was really important and I want to just take a moment to explain to the audience. What we've seen from IOT, particularly from IT suppliers, is a top down approach. We're going to take our IT framework and put it at the edge. >> Yes. >> And that's not going to work. IOT, industrial IOT, these process engineers, it's going to be a bottoms up approach, and the standards are going to be set by OT, not IT. >> John: Yes. >> Splunk's advantage is you've got the data. You're sort of agnostic to everything else. Wherever the data is, we're going to have that data, so to me your advantage with industrial IOT is you're coming at it from a bottoms up approach as you just described, and you should be able to plug into the IOT standards. Now having said that, a lot of data is still analog, but that's okay, you're pulling machine data.
You don't really have tight relationships with the IOT guys, but that's okay, you've got a growing ecosystem. >> We're working on it. >> But talk about industrial IOT and we'll get into some of the challenges. >> Yeah so interestingly we first announced the Industrial Asset Intelligence product at the Hannover Messe show in Germany, which is this massive show, like 300,000 people, it's a city, it's amazing. >> I've been, Hannover. One hotel, huge show, 400,000 people. >> Lot of schnitzel (laughs) I was just there. And the interesting thing is it's the first time I'd been at a show really, first of all in years, where people ... You know if you go to an IT or security show they're like oh we know Splunk, we love Splunk, what's in the next version. It was the first time we were having a lot of people come up to us saying yeah, I'm a process engineer in an industrial plant, what's Splunk? Which is a great opportunity. And as you explain the technology to them, their mindset is very different in the sense they think of very custom connectors for each piece. They have a very, almost bespoke or matched-up notion of a sensor to a piece of equipment. So for example they'll say, oh do you have a connector for, and again, I don't have the machine numbers, but like the Siemens 123 machine. And I'll be like, well, as long as it's textual, structured to semi-structured data, ideally with a timestamp, we can ingest and correlate that. Okay, but then what about the Siemens ABC machine? Well, the idea, the notion, is that we don't care where the source is as long as there's a sensor sending the data in a format that we can consume. And if you think back to the beginning of the data stream processor demo that Devani and Eric gave yesterday, that showed the history over time, the purple boxes that were built, we can now ingest data via multiple inputs and via multiple ways into Splunk. And that hopefully enables the IOT ecosystems and the machine manufacturers, but more importantly, the sensor manufacturers, because it feels like, in my understanding of the market, we're still at a point of a lot of folks getting those sensors instrumented. But once it's there and essentially the faucet's turned on, we can pull it all in and we can treat it and ingest it just as easily as we can data from AWS Kinesis or Apache access logs or MySQL logs. >> Yeah and so instrumenting the windmill, to use the metaphor, is not your job. Connectivity to the windmill is not your job, but once those steps have been taken, and the business takes those steps because there's a business case, once that's done then the data starts flowing and that's where you come in. >> And there's a tremendous amount of incentive in the industry right now to do that level of instrumentation and connectivity. So it feels like that notion of instrument, connect, then do the analytics; we're sitting there well positioned, once all those things are in place, to be one of the top providers for those analytics. >> John I want to ask you something. Stu and I were talking about this at our kickoff and I just want to clarify it. >> Doug Merritt said that he didn't like the term unstructured data. I think that's what he said yesterday, it's just data. My question is how do you guys deal with structured data, because there is structured data. Bringing transaction processing data and analytics data together for whatever reason. Whether it's fraud detection, to give the buyer an offer before you lose them, better customer service.
How do you handle that kind of structured data that lives in IBM mainframes or whatever. USS mainframes in the case of Carnival. >> Again we want to be able to access data that lives everywhere. And so we've been working with partners for years to pull data off mainframes. Again, the traditional ins and outs aren't necessarily there, but there are incentives in the market. We work with our ecosystem to pull that data, to give it to us in a format that makes sense. We've long been able to connect to traditional relational databases, so I think when people think of structured data they think about, oh, it's sitting in a relational database somewhere in Oracle or MySQL or SQL Server. Again, we can connect to that data, and that data is important to enhance things, particularly for the business user. Because the log says, okay, whatever, product ID 12345, but the business user needs to know what product ID 12345 is and has a lookup table. Pull it in and now all of a sudden you're creating information that's meaningful to you. But structure, again, there's fluidity there. Coming from my background, a JSON object is structured. You can see, the same way Theresa Vu in the demo today unfurled in the dev cloud what a JSON object looks like, there's structure there. You have key value pairs. There's structure to key value pairs. So all of those things, that's why I think, to Doug's point, there's fluidity there. It is definitely a continuum and we want to be able to add value and play at all ends of that continuum. >> And the key is, your philosophy is to curate that data in the moment when you need it, and then put whatever schema you want on it at that time. >> Absolutely. Going back to this bottoms up approach and how we approach it differently from basically everyone else in the industry. You pull it in, we take the data as is, we're not transforming or changing or breaking the data or trying to put it into a structure anywhere. But when you ask it a question we will apply a structure to give you the answer. If that data changes when you ask that question again, it's okay, it doesn't break the question. That's the magic. >> Sounds like magic. 16,000 customers will tell you that it actually works. So John, thanks so much for coming to theCUBE, it was great to see you again. >> Thanks so much for having me. >> You're welcome. Alright keep it right there everybody. Stu and I will be back. You're watching theCUBE from Splunk .conf18, #splunkconf18. We'll be right back. (electronic drums)
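Rooney's product ID example is essentially a lookup-table join performed at search time. A small Python sketch of the same enrichment pattern; the IDs, product names, and events are invented for illustration:

```python
# Hypothetical lookup table: opaque machine identifiers -> business-friendly names.
product_lookup = {
    "12345": "Trail Running Shoe",
    "67890": "Insulated Water Bottle",
}

# Raw events as they might arrive in a log; only the opaque ID is present.
events = [
    {"ts": "2018-10-03T10:15:32", "action": "purchase", "product_id": "12345"},
    {"ts": "2018-10-03T10:16:04", "action": "return", "product_id": "67890"},
]

# Enrich at read time, the way a Splunk lookup maps IDs onto meaningful fields.
for event in events:
    event["product_name"] = product_lookup.get(event["product_id"], "unknown")
    print(event)
```

Because the join happens when the question is asked, updating the lookup table changes future answers without touching the stored events, which is the fluidity being described.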

Published Date : Oct 3 2018

SUMMARY :

Dave Vellante and Stu Miniman talk with Jon Rooney, Vice President of Product Marketing at Splunk, about the .conf18 announcements: Splunk Enterprise 7.2's smart storage separating compute from storage, workload management as programmatic SLAs, improved metrics visualization, the Data Fabric Search beta and its performance gains, Splunk Business Flow for line-of-business users, natural language interfaces, the machine learning toolkit, and Splunk's bottoms-up approach to industrial IoT and structured data.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Doug Merritt | PERSON | 0.99+
Dave | PERSON | 0.99+
John | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Orlando | LOCATION | 0.99+
John Rooney | PERSON | 0.99+
90% | QUANTITY | 0.99+
Jon Rooney | PERSON | 0.99+
Germany | LOCATION | 0.99+
2015 | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
Doug | PERSON | 0.99+
Excel | TITLE | 0.99+
Splunk | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Orlando, Florida | LOCATION | 0.99+
yesterday | DATE | 0.99+
Stu | PERSON | 0.99+
Theresa Vu | PERSON | 0.99+
2000 times | QUANTITY | 0.99+
BMW | ORGANIZATION | 0.99+
400,000 people | QUANTITY | 0.99+
each piece | QUANTITY | 0.99+
today | DATE | 0.99+
Hannover | LOCATION | 0.99+
Eric | PERSON | 0.99+
three | QUANTITY | 0.99+
Devani | PERSON | 0.99+
one index | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
16,000 customers | QUANTITY | 0.99+
two | QUANTITY | 0.99+
300,000 | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
one | QUANTITY | 0.98+
One hotel | QUANTITY | 0.97+
Siemens | ORGANIZATION | 0.97+
SQL Server | TITLE | 0.97+
30-40 years ago | DATE | 0.96+
five years ago | DATE | 0.96+
both | QUANTITY | 0.96+
One | QUANTITY | 0.95+
Linux | TITLE | 0.95+
Hannover Messe | EVENT | 0.95+
one hot water heater | QUANTITY | 0.94+
first | QUANTITY | 0.94+
Splunk | TITLE | 0.94+
Kafka | TITLE | 0.94+
Alexa | TITLE | 0.92+
three orders | QUANTITY | 0.92+
Oracle | ORGANIZATION | 0.92+
day one | QUANTITY | 0.91+
.conf | OTHER | 0.87+
#splunkconf18 | EVENT | 0.86+
MySequel | TITLE | 0.86+
third big feature | QUANTITY | 0.85+

Colin Gallagher & Cory Minton, Dell EMC | Splunk .conf 2017


 

>> Narrator: Live from Washington D.C. it's theCUBE, covering .conf2017. Brought to you by Splunk. (techno music) >> Well welcome back here on theCUBE as we continue our coverage at .conf2017, Splunk's get-together here in the nation's capital, Washington D.C. We are live here on theCUBE along with Dave Vellante. I'm John Walls. Glad to have you with us here for two days of coverage. We're joined now by Team Dell EMC, I guess you could say. Colin Gallagher, who's the Senior Director of VxRail Product Marketing. Colin, good to see you, sir. >> Likewise. >> And Cory Minton, many time Cuber. Colin, you're a Cuber, as well. Principal Engineer, Data Analytics Leader at Dell EMC, and BigDataBeard.com, right? >> Yes, sir. >> Alright, and just in case, you have a special session going on. They're going to be handing these out a little bit later. So, I'm going to let you know that I'm prepared >> Cory: I love that, that's perfect. >> With you and your many legions of fans, allow me to join the club. >> That's awesome. Well welcome, we're so glad to have you. You've got a big data beard. You don't have to have a beard to talk big data at Dell EMC, but it certainly is not frowned upon if you do. >> John: Alright, well this would be the only way I'd ever grow one. >> There you go. >> I can promise you that. >> Looks good on you. >> I like the color, though, too. Anyway, they'll be handing these out at the special session. That'll be a lot of fun. Fellows, big announcement last week where you've got a marriage of sorts with Splunk technology and what Dell EMC is offering on VxRail. Tell us a little bit about that. Ready Systems is how you're branding this new offer. >> So we announced our Ready Systems for Splunk. These are turnkey offerings of Dell EMC technology, pre-certified and pre-validated with Splunk, and pre-sized. So we give you the option to buy from us both your Splunk solution and the underlying infrastructure that's been certified and validated, in a wide variety of flavors: based on top of VxRail, based on top of VxRack, based on top of some of our other storage products as well, that gives you a full turnkey implementation for Splunk. So as Splunk is moving from the land of the hoodies and the experimenters to more mainstream running the business, these are the solutions that IT professionals can trust from both brands that IT professionals (mumbles). >> So you're both a Splunk reseller and a seller of infrastructure, is that right? >> Indeed. So we actually, we joined Splunk in a partnership as a strategic alliance partner a little over a year ago. And that gave us the opportunity to act as a reseller for Splunk. And we've recently gone through a rationalization of their catalog, so we actually have now an expanded offering. So, customers have more choice with us in terms of the offers that we provide from Splunk. And then part of our alliance relationship is that not only are we a reseller, but because of our relationship they now commit engineering and resources to us to help validate our solutions. So we actually work hand in hand with their partner engineering team to make sure that the solutions that we're designing from an infrastructure perspective at least meet or exceed the hardware requirements that Splunk wants to see their platform run on top of. >> Dave: Okay, cool. So you're a data guy. >> Indeed. >> You've been watching the evolution of things like Hadoop.
When I look at the way in which customers deal with Hadoop, you know, ingest, you know, clean or transform, analyze, etc., etc., operationalize, there seem to be a lot of parallels between what goes on in that big data world and then the Splunk world, although Splunk is a package, it seems to be an integrated system. What are the similarities? What are the differences? And what are the requirements for infrastructure? >> I think that the ecosystems, like you said, it's open source versus a commercial platform with a specific objective. And if you look at Splunk's deployment and their development over the years, they've really gone from what was really a Google search for logs, as Doug talked about today in the kickoff, to really being a robust analytics platform. So I think there's a lot of parallels in terms of technology. It's designed to do many of the same things, which is I need to ingest data into somewhere, I need to make sense of it. So we index it or do some sort of curation process to where then I can ask questions of it. And whether you choose to go the open source route, which is a very popular route, or you choose to go a commercial platform like Splunk, it really depends on your underlying, call it ethos, right? It's that fundamental buy versus build, right? For somebody to achieve some of the business outcomes of deploying a security information and event management tool like Splunk can, to do that in open source may require some development, some integration of disparate open source platforms. I think Splunk is really good about focusing specifically on the business outcome that they're trying to drive and speeding their customers' time to value with that specific outcome in mind, whereas I think the open source community, like the Hadoop community, offers maybe some ability to do some things that Splunk maybe wouldn't be interested in, things like rich media analytics, things that aren't good for Splunk indexing. >> Are there unique attributes of a data rich workload that you've accommodated, that are maybe different from a traditional enterprise workload, and what are those? >> Yeah, so at the end of the day any application is going to have specific bottlenecks, right? One of the bases of performance engineering is to move the bottleneck, right? In enterprise applications we had this evolution where originally they were kind of deployed in a server, and then we saw virtualization and shared storage really come in vogue for a number of years. And that's true in these applications, these data rich applications, as well. I think what we're starting to see is that regardless of what the workload is, whether it's a traditional business application like Oracle, SAP, or Microsoft, or it's a data application like Splunk, anytime it becomes critical to the operation of a business, organizations have to start to do things that we've done to every enterprise IT app in the past, which is we align it to our strategy. Is it highly available? Is it redundant? Is it built on hardware that we can be confident in, that's going to be up and running when we need it? So I think from a performance and an engineering perspective, we treat each workload special, right? So we look at what Splunk's requirements are and we understand that their requirements may be slightly different than running SAP or Oracle, and that's why we build the bespoke systems like our Ready System for Splunk specifically, right? It's not a catch-all that, hey, it works for everything.
It is a specifically designed platform to run Splunk exceptionally well. >> So Colin, a lot of the data practitioners that I talk to at this show and other data oriented shows are like, "Ah, infrastructure. I don't care about infrastructure." Why should they care about infrastructure? Why does infrastructure matter, and what are the things that they should know? >> Infrastructure does matter. I mean, if your infrastructure isn't there, if your infrastructure isn't highly available, as Cory said, if it lets you down in the middle of something, your business is going to shut down, right? Ask any user: talk about what happened the last time you had a data center event, how long were you offline, and what did that really mean for your business? What's the cost of downtime for you? And everything we build at an application level and a software level really rests on an infrastructure foundation, right? Infrastructure is the foundation of your data center and the foundation of your IT, and so infrastructure does matter in the sense that, as Cory said, as you build mission critical platforms on it, the infrastructure needs to be highly reliable, highly available, and trusted, and that's what we really focus on bringing. And as applications like Splunk evolve more into that mainstream world, they need to be built on that mission critical, reliable, managed infrastructure, right? It's one thing for infrastructure development, and this kind of happens in the history of IT, as well. It happened in client server back in the day. You know, new applications ... Even in the web environment, I remember one of my clients was running a web server under their secretary's desk, and she was administering it half time. You would never have a large company doing that. >> They'd be back up (mumbles). Before you leave. >> As it becomes more important it becomes more central, but also it becomes more important to centrally manage those, right? I'm a 15 year storage veteran, for good or for worse, and what we really sell in storage is centralized management of that storage. That's the value that we bring from centralized infrastructure, versus a bunch of servers sitting distributed around the environment under someone's desk: that centralized management, the ability to share the resources across them, the ability to take one down while the others keep running, shift that workload over and shift it back. And that's what we can do with our Ready Systems. We can bring that level of shared management, shared performance management, to the Splunk world. >> I'll tell you, one of the things that we talked about, we talked about in a number of sessions this week, is application owners, specifically the folks that are here at this conference, need to understand that when they decide to make changes at the application level, whether they like infrastructure or they think it's valuable or not, what they need to understand is that there are impacts, and that if you look at the exciting things that were announced today around Enterprise Security updates, right?
Enterprise Security is an interesting app from Splunk, but if a customer goes from just having Splunk Enterprise to running Enterprise Security as a premium application, there are significant downstream impacts on infrastructure that, if the application team doesn't account for them, can basically put them in a corner from a performance and a capacity perspective, cause serious problems, and slow down the business outcome that they're trying to achieve, because they didn't think about the infrastructure impacts. >> Well, and what they want really is infrastructure that they can code, right? And we talked about this at VMworld, we were talking about it off camera, that cloud model, bringing that cloud model to your data as opposed to trying to force your business into the cloud. So what about Ready Systems mimics that cloud model? Is it a cloud like infrastructure? Wondering if you could talk-- >> Yeah, I think it's that cloud like experience. Because we know we're in a multi cloud world, right? Cloud is not a place, cloud is an operating model, right? And so I think that the Ready Systems specifically provide a couple of things that make up that cloud like experience, which is simple ordering and configuration and consumption that is aligned to the application, right? So we actually align the sizing of the system to the license size and the expected experience that this one customer would have, so they get that very curated, bespoke system that's designed specifically for them, but in a very easy to consume fashion that's also validated by the software vendor, in this case Splunk, saying, "These are known good configurations that you will be successful with." So we give customers that comfort that, "Hey, this is a proven way to deploy this application successfully, and you don't have to go through a significant architecture design exercise to get to that cloud like experience." Then you layer in the fact that what makes up the Ready System is a platform powered by, in the VxRail case, VMware, right, ESX and vSAN, and obviously if you look at any of the cloud providers, everything is virtualized at the end of the day for the most part, or at least most of the environments are. And VMware has been focused for years and years on giving that cloud like experience to their customers. >> You talk about, you mentioned selling, sort of reseller, you've got this partnership growing, you're a customer. So, you have all these hats, right, and connections with Splunk. What does that do for you, do you think, just in general? What kind of value do you put on having these multiple perspectives on how they operate, whether it's in your environment or what you're doing for your customers using their insights? >> Yeah, I think at the end of the day we're here to make it simpler for customers. So if we do the work, and we invest the time and energy and resources in this partnership, and we go do the validation, we do the joint engineering, we do the joint certification, that's work that customers don't have to do, and that's value that we can deliver to them: whatever workload or business outcome they're trying to achieve with Splunk, we accelerate it. That's one of the biggest values, right? And then you look at who do they interact with in the field? Well, it's engineers from our awesome presales team from around the world that we've actually trained in Splunk.
So we have now north of 25 folks that have Splunk SE certifications, that are actually Dell EMC employees, that are out working with Splunk customers to build platforms and achieve that value very, very quickly. And then they understand that, "Oh, by the way, Dell EMC is also a user of Splunk, a great customer of Splunk, with a number of interesting use cases that we're actually replatforming now, drinking our own Kool-Aid, so to say." I think it just lends credibility to it. And that's a lot of the reason why we've made the investments in being part of this awesome show, but also in doing things like providing the applications. So we actually have four apps in Splunkbase that are available to monitor Dell EMC platforms using Splunk. So I think customers just get a holistic experience: they've got a technology partner that wants to see them be successful deploying Splunk. >> I wonder if we could talk about stacks, because I've heard Chad Sakac talk about stack wars, tongue in cheek, but his point is that customers have to make bets. You've heard him talk about this. You've got the cloud stacks, whether it's Azure or AWS or Google. Obviously VMware has a prominent stack, maybe the most prominent stack. And there's still the open source, whether it's Hadoop or OpenStack. Should we be thinking about the Splunk stack? Is that emerging as a stack, or is it a combination of Splunk and these others? >> You know, we actually had that conversation today with some of the partner engineering team, and I don't know that I would today. I think Splunk continues to be its own application in many cases. And I actually think that a lot of what Splunk is about is actually making sure that those stacks all work. So there were even announcements made today about a new app. So they have a new app for Pivotal Cloud Foundry, right? So if you think about stacks for application development, if you're going to hit push on a new application, you're going to need to monitor it. Splunk is one of those things that's persistent. The data is persistent. You want to keep large amounts of data for long periods of time so that you can build your models, understand what's really going on in the background, but then you need that real time reporting of, "Hey, if I hit push on a Cloud Foundry app and all of a sudden I have an impact to the service that's underlying it because there's some microservice that gets broken, if I don't have that monitoring platform that can tell me that and correlate that event and give me the guidance to not only alert against it but actually go investigate it and act against it, I'm in trouble." The stacks, I think many of them have their own monitoring capabilities, but I think Splunk has proven that they are invested in being the monitoring and the data fabric that wants to help all the stacks be successful. So I don't necessarily put it in a stack. And I kind of don't put Hadoop in its own stack, either, because I think at the end of the day Hadoop needs a stack for deployment models. So you may see it go from a physical construct, a bit of trying to be its own software that controls the underlying hardware, but I think you're seeing abstraction layers happen everywhere. They're containerizing Hadoop now. Virtualization of Hadoop is legit. Most of the big cloud providers talk about the decoupling of compute from storage in Hadoop for persistent and transient clusters.
So I think the stacks will be interesting for application development, and applications like Splunk will be one of two things. They'll either consume one of those stacks for deployment or they'll be a standalone monitoring tool that makes us successful. >> So you don't see, in the near term anyway, Splunk becoming an application development platform the way that a lot of the-- >> Cory: They may have visions of it. That's not, yeah. >> They haven't laid that out there. It's something that we've been bouncing around here. >> Yeah, I think it's interesting. Again, I think it goes back to ... the flexibility in what you can do with Splunk. I mean, we've developed some of our own applications to help monitor Dell EMC storage platforms, and that's, it's interesting. But in terms of building what I guess we'd consider traditional twelve-factor app development, I don't know that it provides it. >> Yeah, well it's interesting because, I'm noodling here, Doug Merritt said, "Hey, we think we're going to be the next five billion, 10 billion, 20 billion dollar ecosystem slash company," and so you start to wonder, "Okay, how does that TAM grow to that point?" That's one avenue that we considered. I want to talk about the anatomy of a transaction and how that's evolved. Colin, you mentioned client server, and you think about data rich applications going from sort of systems of record and the transactions associated with that. And while there were many going to client server and HTTP, and then now mobile apps really escalated that. And now with containers, with microservices, the amount of data and the complexity of transactions is greater and greater and greater. As a technologist, I wonder if you could sort of add some color to that. >> Yeah, I think as we kind of go down a path, application stacks are interesting, but at the end of the day we're still delivering a service, right? At the end of the day it's always about how do I deliver a service, whether it's a business service, or it's a mobile application, which is a service where I could get closer to my customer, I could transact business with them on a different model. I think all of it ... Because everything has gone digital, everything we do is digital, you're seeing more and more machines get created, there's more and more IP addressed devices out there on the planet that are creating data, and this machine generated data deluge that we're under right now, it ain't slowing down, right? And so as we create these additional devices, somebody has got to make sense of this stuff. And if you listen to a lot of the analysts, they talk about machine data as the most target rich in terms of business value, and the fastest growing. And it's now at a scale, because we've now created so many devices that are creating their own logs, creating their own transactional data, right, and there's just not that many tools that out of the box make it simple to collect the data, search the data, and derive value from it in the way that Splunk does. You can get to a lot of the things that Splunk can deliver from an outcome perspective other ways with other platforms, but the simplicity and the ability to do it with a platform that out of the box does it, and has a vibrant community of folks that will help you get there, it's a pretty big deal. So I think it's, you know, it's interesting. I don't know, like under the covers microservices are certainly interesting. They're still services.
They're just smaller and packaged slightly differently and shared in a different way. >> And a lot more of them. >> Yeah, and scaled differently, right? And I totally get that, but at the end of the day, from a Splunk perspective and from a data perspective, we've still got to make sense of all of it. >> Right, well, I think the difference is just the amount of data. You talked about kind of new computing models, serverless sort of, stateless, IoT coming into play. It's just the data curve is reshaping. >> Well, it's not just the amount of data, it's the number of sources. The data is exploding, but also, as Cory mentioned, it's exploding because it's coming from so many places. Your refrigerator can generate data for you now, right? Every single ... Everything that connects to the Internet, anything doing anything now, really has a microprocessor in it. I don't know if you guys saw my escape room at VMworld. There were 12 microprocessors running this escape room. So one of the things we played around with was bringing it here and trying to Splunk the escape room, to actually see real time what the data was doing. And we weren't able to ship it back from Barcelona in time, but it would've been interesting to see, because you can see just the sensors that are in that room in real time and being able to correlate all that. And that's the value of Splunk, being able to pull that from those disparate sources all together and give you those analytics. >> Yeah, it's funny you talk about an IoT use case. So we've got these... Our partner, who's a joint partner of both Dell EMC and Splunk, we actually have these Misfit devices that are activity trackers. And we're actually-- >> Misfit device? >> Misfit. Yeah, it's a brand. >> John: Love it. >> It's fitting, I think. But we have these devices that we gave away to a number of the attendees here, and we actually asked them if they're willing to participate. They can actually use the app on their phone to grab the data. And by simply going to a website they can allow us to pull the data from their device, about their activity, about their sleep. And so we actually have in our booth and in Arrow's booth, we're Splunking Conf and it's called How Happy is Conf? And so you can actually see Splunk running, and by the way, it's running in Arrow's lab. It's running on top of Dell EMC infrastructure designed for Splunk. You can actually see us Splunking how happy conf attendees are. And we're measuring happiness by their sleep. How much sleep-- >> John: Sleep quality and-- >> The exercise, the number of steps, right? So we have a little battle going between-- >> Is more sleep or less sleep happy? >> Are consumption behaviors also tracked on that? I just want to know. I'm curious. >> It's voluntary. You'd have to provide that. >> Alright, because that's another measure of happiness. >> It certainly is. But it's just a great use case where we talk about IoT and the number of sources of data that Splunk as a platform can handle ... It's very, very simple to deploy that platform, have a web service that's able to pull that data from an API, from a platform that's not ours, right, but bring that data into our environment, use Splunk to ingest and index that data, then actually create some interesting dashboards. It's a real world use case, right? Now, how much people really want to (mumbles) Splunk health devices we'll determine, but in the IoT context it's an absolute analog for what a lot of organizations are trying to do.
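Minton doesn't spell out the plumbing behind How Happy is Conf?, but the pattern he describes, pulling readings from a device vendor's REST API and ingesting them into Splunk, is commonly done through Splunk's HTTP Event Collector (HEC). A hedged Python sketch; the vendor API URL and both tokens are placeholders:

```python
import requests

# Placeholder endpoints and credentials; substitute real values for your environment.
DEVICE_API = "https://api.example-wearable.com/v1/activity"  # hypothetical vendor API
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder HEC token

# Pull activity readings from the wearable vendor's REST API.
readings = requests.get(
    DEVICE_API, headers={"Authorization": "Bearer <device-api-token>"}
).json()

# Forward each reading to Splunk's HTTP Event Collector for indexing.
for reading in readings:
    requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": reading, "sourcetype": "wearable:activity"},
    )
```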
Gentlemen, thanks for being with us. We appreciate that. Cory, it's probably not the real deal, but as close as I'm going to go. Good luck with your session. We appreciate the time to both of you, and you and your Misfit. Back with more here on theCUBE coming up in just a bit here in Washington D.C. (techno music)

Published Date : Sep 26 2017

SUMMARY :

John Walls and Dave Vellante talk with Colin Gallagher and Cory Minton of Dell EMC about the newly announced Ready Systems for Splunk: turnkey, pre-sized, pre-certified infrastructure built on VxRail, VxRack, and other Dell EMC platforms, the strategic alliance that makes Dell EMC both a Splunk reseller and joint engineering partner, why reliable, centrally managed infrastructure still matters for data rich workloads, and IoT use cases ranging from an instrumented escape room to Splunking attendee activity trackers at the show.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Doug Merritt | PERSON | 0.99+
Dave | PERSON | 0.99+
Colin | PERSON | 0.99+
Splunk | ORGANIZATION | 0.99+
Cory Minton | PERSON | 0.99+
John Walls | PERSON | 0.99+
15 year | QUANTITY | 0.99+
Colin Gallagher | PERSON | 0.99+
John | PERSON | 0.99+
12 microprocessors | QUANTITY | 0.99+
Cory | PERSON | 0.99+
Washington D.C. | LOCATION | 0.99+
10 billion | QUANTITY | 0.99+
Doug | PERSON | 0.99+
Barcelona | LOCATION | 0.99+
Enterprise Security | TITLE | 0.99+
Dell EMC | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
two days | QUANTITY | 0.99+
today | DATE | 0.99+
last week | DATE | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
VMworld | ORGANIZATION | 0.99+
both brands | QUANTITY | 0.99+
Chad Sack | PERSON | 0.99+
this week | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
Hadoop | TITLE | 0.98+
Ready Systems | ORGANIZATION | 0.98+
one thing | QUANTITY | 0.98+
Oracle | ORGANIZATION | 0.98+
VMware | ORGANIZATION | 0.98+
.conf2017 | EVENT | 0.98+
Arrow | ORGANIZATION | 0.98+
Google | ORGANIZATION | 0.97+
Dell | ORGANIZATION | 0.97+

Josh Rogers, Syncsort | Splunk .conf2017


 

>> Narrator: Live from Washington D.C., it's theCUBE. Covering .conf2017. Brought to you by Splunk. >> And welcome back to the nation's capital. theCUBE, continuing our coverage of .conf2017, Splunk's annual get together, coming to Washington D.C. for the first time. Huge success, 7,000 plus attendees, 65 countries. I forget the millions of miles. Was it three million miles traveling? >> Let's see, was it three million? It was 30 million. >> Maybe 30 million. >> Yeah. It's a big number. >> 30 million miles. Dave Vellante and John Walls here on theCUBE. I'd say off to a roaring start here, to say the least. Josh Rogers joins us, he's the CEO of Syncsort. And Josh, good to have you on theCUBE. Good to see you, sir. >> Thanks, sir. Thanks for having me. >> Good week for you, big week for you. Couple of announcements that you made here recently. Go ahead and share with us a little bit about those. >> Sure, so we made two announcements yesterday. The first is a new product, it's called Transaction Tracing; it's an add-on to our Ironstream product. Ironstream is a solution that delivers mainframe machine data to Splunk Enterprise, and has integration points on the security and on the IT Service Intelligence components within Splunk. What Transaction Tracing does, the new product introduction, is it adds additional capabilities to understand and trace a transaction that could begin on a mobile device, and follow it all the way through the multiple hops it will take to ultimately transact against a mainframe. And when that transaction hits the mainframe, there are several things that you want to understand. One is, you want to understand how it is performing, how it is affecting my mainframe environment. Is it causing problems in other places? And you want to be able to look at that transaction, or that application, as a service. And so you want to be able to track that whole service end to end. And so what we've done with Transaction Tracing is created an ability for Splunk customers to surface all of that data, collate it together, and get a unified view of both how the service is behaving, the performance characteristics it's delivering to the customers that are utilizing the service, and then the impacts that it's having on the mainframe. All of which are core components of understanding how your IT operations are performing. And kind of all about what Splunk is supporting. We're just adding on additional capabilities for Splunk customers. >> So I wonder if I could follow up on Transaction Tracing. So I remember about 20 years ago, David Floyer did a piece of research, when we were working together at a former company, and I was struck at the time by the number of subsequent transactions that had to occur just to get an outcome of a check process. >> Right, right, right. >> I mean it was like some orders of magnitude >> Right. >> greater. Add to that mobile transactions, I can't imagine with all the internet traffic and other activities going on, now add to that big data, and security, and fraud detection, and all the other things that we're doing with the data. The number of ancillary transactions >> Right. >> has got to be enormous. Hence the need presumably for Transaction Tracing. >> Absolutely. >> So maybe talk about the market need, and why Syncsort? You would think, doesn't the mainframe have all this stuff integrated into it? Maybe talk about that. >> Yeah sure, so I think one of the things to understand is that mainframe compute volumes continue to go up.
I think people just tend to think about mainframes as an environment that perhaps isn't growing, but in fact, it is growing. And one of the key drivers is this new transaction workload that is driven in part by mobile, and other devices. And so what you have if you're running a mainframe is, I'm experiencing an increase in my transaction workloads, I need to figure out how to kind of support that. But I also have a lot more characteristics I care about, security, performance, et cetera. And so I need deeper analytics. And of course, they are difficult systems. You need to understand the mainframe, you need to understand how CICS and DB2 interact and support a transaction. But you also need to understand kind of this next generation analytic environment, how can I leverage that to actually get the insight I want. And that's really what we call, it's an example of, a big iron to big data challenge. And so what Syncsort's been incredibly focused on is helping customers understand the very specific use cases that are included in that big iron to big data space, and providing very differentiated solutions with very deep differentiation to solve those specific use cases. And Transaction Tracing is a good example of that. It sounds fairly narrow, but it's incredibly important if you're a bank and you want to give your customers an ability to kind of check account balances, interact with you in a way that they haven't in the past. >> Well, it's one of those things that we talk about, you know, depth apps, in-depth apps; this is a depth app. >> Right. >> Alright, okay. And then in terms of the Splunk relationship, where does that fit in, and what are the swim lanes between you and Splunk? >> Well we view Splunk as a key platform in the world today for kind of understanding IT operations and security. We view them as incredibly powerful from a platform perspective. And we also view them as a partner that we can add value to. That we can provide access to data that enriches their platform and allows their customers to get more value out of it, and that we can do that in a unique way. And so we have a very close relationship with Splunk. And that's not just at a go-to-market level, it's also at a product management and engineering level. We work very closely to make sure that our products integrate well with Splunk. So we've got deep integration with IT service intelligence, we've got deep integration with enterprise security, and we'll continue to drive deeper integration into the Splunk platform. So when a customer comes across a scenario where they want to ingest mainframe data, they can be assured that they will get no better product on the marketplace than Syncsort Ironstream and associated modules, in terms of both how it will perform on its own, but also how it will integrate with Splunk. >> So that deep integration is something that's always interesting to us on theCUBE. Lot of times you see Barney deals. Barney, I love you, you love me, let's do a press release. And so one of the ways in which we measure, or try to measure, the intensity of the integration is the engineering that's involved. So I wonder if you could, sort of, double click on that. >> Sure. >> Is it kind of just making sure you're familiar with the APIs? Are you actually doing integration and engineering on both sides? Maybe you could talk about that. >> Well, so I'll talk about our integration with enterprise security and IT service intelligence. >> Dave: Great.
>> And those are, you can think of those as specific applications to support deep analytics. And these are Splunk offerings. Deep analytics around those two areas of competence. Such that a user can rapidly build a set of dashboards that would allow them to answer the questions you want to answer if you're focused on IT service intelligence or understanding security. Fundamentally they're data models. They've gone out and mapped what are all the data elements that you need, what's the structure that you need of that data model, to be able to answer the questions that a security-minded analyst would want to answer. That allows you to, if you map the data sources into those data models, rapidly build the dashboards that support those types of roles in the enterprise. What we've done is taken the very large amount of mainframe machine data that gets produced, generally it's an SMF record, so there are 260 types of SMF records, and each one has its subtypes. We've mapped it into those two data models that Splunk has created. Nobody else has done that. And what that does is it allows those customers to get a complete end-to-end view of how can I rapidly enhance my IT service intelligence application, or my enterprise security application, with mainframe data. Which just happens to run my most sensitive applications and most voluminous applications, from a transaction perspective, in my enterprise. So we think that deep integration is a really powerful capability, and it's just an example of where we like to go deeper with our partners than what we see other companies doing. >> You know, when you talked about the mobile environment a little while ago, and complexities and that, I'm always just kind of curious. With everybody talking about what that does in terms of when you're harvesting data, now you're in a non-stationary environment. And that comes with a whole different set of characteristics and challenges. I mean, what layer of complexity do you take on when all of a sudden you can be anywhere and feeding data at any time from any machine? >> Sure, well I mean what it creates is a lot more interaction points. So I probably interact with my bank a lot more today than I did 10 years ago, 'cause I don't have to find an ATM, or go by a branch, >> John: You never walk into a branch. >> And I did this over the weekend. I had to kind of transfer some money, right. So I just transferred it and I was in Colorado hiking, and I transferred funds between accounts. And then later on the golf course I did a wire, literally. >> John: You didn't have to transfer money on the golf course for a reason, did you? >> No, no, no, those were unrelated events. >> Just making sure. >> Lost a few, Josh? >> But that type of interaction. So you get more frequent interaction, which creates an operational challenge. Particularly when you think about the mainframe and how customers pay for that, right. They pay for it based on how much CPU they use on a monthly basis. And so what we want to do is help customers run that system as efficiently as possible. It also creates a massive analytic opportunity, because now I have a lot more data that I can start to analyze to understand trends, because I have more touchpoints. But the trick is I've got to get that data into a repository and into an analytic environment that can handle that data. And that's where I think Splunk creates such an interesting opportunity.
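As a hedged illustration of the query side of that mapping, the sketch below uses the Splunk SDK for Python to run a one-shot search over hypothetical SMF-derived events. The index, sourcetype, and field names are assumptions rather than Ironstream's real schema; SMF type 110 is chosen only because it is the record type that carries CICS monitoring data.

```python
# Illustrative only: querying hypothetical SMF-derived events with the
# Splunk SDK for Python (splunk-sdk). Index, sourcetype, and field names
# are assumptions, not Syncsort's actual mappings.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com", port=8089,
    username="admin", password="changeme",
)

# Assume SMF 110 (CICS monitoring) records were indexed with a parsed
# response_ms field; summarize response time by transaction name.
query = (
    "search index=mainframe sourcetype=smf:110 earliest=-1h "
    "| stats avg(response_ms) AS avg_response_ms, count BY transaction_name"
)

for item in results.ResultsReader(service.jobs.oneshot(query)):
    if isinstance(item, dict):  # skip informational messages
        print(item["transaction_name"], item["avg_response_ms"], item["count"])
```

In the scenario Rogers describes, dashboards built on Splunk's data models would issue equivalent searches automatically once the SMF fields are mapped in.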
And what we're trying to do is just add value to that, make it easy for customers to leverage all of their data. Does that make sense? >> Yeah. >> It does. How 'about the government marketplace? We're here in the District. You guys have an announcement around new partners. >> Yes. >> Maybe talk about the importance of government, and what you do in there. >> Sure, so we signed a distribution relationship with Carahsoft, also a big Splunk partner. And that is going to allow government customers to more easily take advantage of Ironstream and Transaction Tracing in these use cases. The federal government is an enormous market opportunity, and it's also a big mainframe environment. There are a lot of core government applications that still run on mainframe environments. In fact, I would tell you most do. IRS, Social Security, CIA, and other agencies. And so we think giving ourselves an easy route to market for these customers is a great opportunity for us, and it's also a good opportunity for Splunk's customers who are in the government, 'cause they can go and buy additional capabilities that are relevant to their environment through the same partners that they've been working with on Splunk. >> But is there a difference with how you deal with public and private sector then? I mean, governance and compliance, and all those things. I would assume you have different hurdles. >> They're different contract vehicles, which have different kinds of requirements in them. And that's one of the values that we get with the Carahsoft relationship, is just giving us access to those various contract vehicles. Yeah. >> Talk to me a little bit about life. I mean, you've always been a private company. But you don't have the 90 day shot clock, you have new owners; what's the objective? Maybe talk about the patience of the capital, what your priorities are with regard to these owners. Maybe discuss that a little bit. >> Yeah, sure. So just to give a little background, in early July we announced, and in mid August we closed, a transaction whereby Centerbridge Partners acquired Syncsort and another company, Vision Solutions, from our previous owner, Clearlake Capital. And we combined the companies under the Syncsort umbrella, and myself and our leadership team are going to take the company forward. So the 90 day shot clock, I would say definitely we still care about the 90 day shot clock. We are very focused on growing this business and doing that in a consistent way on a quarterly basis. I guess the difference is I get to talk to my investors every day rather than once a quarter. But they've been great partners. The Centerbridge guys have a lot of resources, they've been incredibly helpful in helping us start to think through kind of the strategies, some of the integration work we're doing with Vision. But we think there's an opportunity to build a big business. We've employed a dual strategy of organic growth, focused largely in the big iron to big data spaces, as described earlier, combined with M&A. And you know, over the last 24 months we've tripled the size of Syncsort. So it's grown 3X-- >> So you are growing, that was one of my questions, were you growing. >> And in revenue, >> Substantially. >> we've doubled in employees. >> So, say that again. >> We've tripled revenue. >> You've tripled revenue. Doubled head count. >> And doubled head count. >> Okay, so you've increased profitability in theory then. >> So, and we will continue to run the same play.
We're seeing acceleration in our organic plays, with a focus on the big iron to big data market. And we also believe there are additional data management capabilities that are relevant to our customers, that we can acquire and help point towards that big iron to big data play. And so we'll continue to look at various spaces that are interesting adjacencies that are relevant to our customers. >> And some of that revenue growth obviously is through acquisition. >> Josh: Right. >> Right, and so when you think about, you know, it used to be the classic private equity play was to suck all the money out of the company, leave the carcass for somebody else to deal with. It seems like there's a new thinking. Not seems like, there is a new thinking here. Invest, acquire, increase the value; the money guys are realizing, wow, there's a lot more money to be made. >> Absolutely. I definitely-- >> The technology business. >> We have an eye towards profitable growth. But we are absolutely making investments. And as you get larger scale you can make meaningful investments in these specific areas that can help deliver really great innovation to customers. And Transaction Tracing is an example of that. And certainly I can give you others. But for sure, we are trying to build value. This is not a traditional kind of private equity play. And I also think that private equity is generally understanding there's an opportunity to create value after the catch, if you will, in the tech industry. And I was looking at an analysis last week that financial investors, private equity, for the first time ever will do more deals in technology than strategics, in 2017. And so I think that's a statement that says that there's certainly an opportunity to create long term sustained value in a private equity backed kind of model. And I think to some extent, Syncsort's been pioneering that. With a dual approach on organic growth, and on additional acquisitions. >> Well, and you've seen it, coming out of the downturn, or sort of in the downturn, a lot of these public companies were struggling. >> Right. >> I mean you certainly saw it with Dell, BMC, Riverbed, Infor, all examples of private equity where there's investment going on and, I think, a longer term vision. >> Right. >> With some, as I call it, patient capital. Syncsort is obviously part of that. Syncsort, interestingly, spun out its storage business, you know, as a successful company. Catalogic is doing its thing. So Syncsort was able to monetize that. And then really focus on the core knitting. >> Yeah. >> And then figure out where in the big data space you can make money. Which, not a lot of people were making money in the big data space. So, that's good, congratulations on that. >> I like to tell folks that we've had a really good run, but it's really the first couple of innings. The Centerbridge team is going to be incredibly supportive, and I can't wait to get started on the next leg of the journey. I think there's going to be a lot more innovation to come and I'm looking forward to it. >> Dave: Great. >> So, you're in the middle of the game. We appreciate the time here. Good luck with that, the long term plan down the road. I hope the show's going well for you. >> It's going great. >> And it's good seeing you. >> Great, thanks John. >> Thanks, Josh. >> See you Dave. >> Josh Rogers from Syncsort with us today here. Syncsort, rather, here on theCUBE. Back with more from Washington D.C., theCUBE live at .conf2017, right after this. (upbeat music)

Published Date : Sep 26 2017


Day One Kick Off | Splunk .conf2017


 

>> Announcer: Live, from Washington, D.C., it's theCUBE. Covering .conf2017, brought to you by Splunk. >> Welcome to the District everybody, this is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host for the opening session of Splunk .conf2017, George Gilbert. This is theCUBE's seventh year of doing Splunk .conf. We have seen the evolution of this company from a pre-IPO startup into a 1.2 billion dollar, rapidly growing player in the big data sphere. Interestingly George, Splunk in its early days never really glommed on to the big data meme. They let others, sort of, run with that. Meanwhile, Splunk was analyzing machine data, helping people solve, you know, operational problems, security problems, et cetera, growing very rapidly as a company. Getting a passionate user group together and a community together, expanding on that community. And now today, you see Splunk is at the heart of big data. As you wrote recently in one of your pieces, you need big data and big data techniques to analyze all this data. So give us your take; where are we at in this evolution of Splunk and the intersection of big data? >> Alright so, I guess the best way to frame it is, we had several years of talk, mainly from the open source big data community, which of course came out of the big tech companies, about how they were going to solve problems with essentially instrumenting the new era of applications. These are the web and mobile apps, and the big data repositories around them. And I'm going to walk through four, sort of, categories. Like, define this class of apps very crisply, so we can say who fits where. >> Well let me just ask you, so we're seeing the expansion of Splunk from sort of a narrow log analysis platform, into one that is becoming really more of a platform for big data apps and big data application development. >> Okay, let me give you the crisp answer, then. For years Hadoop said, we're the platform for big data apps. But the problem was, it was built by and for big tech companies. So it was a lot of complexity; it's something you and I have talked about for a while. And that sort of choked its adoption beyond the very most sophisticated enterprises. Splunk started analyzing, you know, basically log data, machine data. But as that platform grew, they built it not so that they were sourcing really innovative pieces from all over the ecosystem, but so that the repository, the analytics, the user interface, the application development environment, were all built to cohere and to fit together. Which meant it was immensely easier for admins and developers to use. And if you look at their results, they're, as you said, a 1.2 billion dollar company, and that's bigger than all the Hadoop vendors combined and they're growing just as fast.
So what's happening here, is this is the gathering, the annual gathering of the Splunk community; the conference is called .conf. And when you listen to Splunk, and when they talk about their transformation as a company, and their opportunity as a company, it's really going from security incident and event management, to an organization that's really starting to focus on bringing analytics and big data to the security business. So security is a huge opportunity for Splunk. It's something that they've always been pretty fundamental in, and so George, part of Splunk's evolution as a platform is to really, as you're pointing out, get more into either apps, or allowing the ecosystem to develop apps on top of their platform, right? >> Okay, so that's sort of a great segue to the question of, are they dessert topping or floor wax? Are they a platform or an app? >> The answer is yes. >> Yes. Now, what they're doing, they're taking a page out of Microsoft's playbook, and very few others have made the transition from platform to app; they started really as an app platform. But what's going on now, is they basically can take machine data about your applications and your infrastructure from wherever; across the cloud, on-prem, out at the edge, and then they give you end-to-end visibility because you've got all that data. And they have some advanced visualization techniques; they make it now, in this release, much easier to monitor the performance metrics. But then what they're doing, when you do this end-to-end visibility, you have a greater burden on the admins to say, well, when there's an alert, correlate this problem with this problem and try and figure out where it really came from. What they're starting to do, which is really significant, is build the apps on top which go deep. The apps, like Splunk User Behavior Analytics, Splunk Enterprise Security. What that means is, those apps come pre-trained to know how to read the customer's landscape, put a map together. And then also how to figure out, so when services are not acting quite right, what to investigate. So in other words, they come with administrator knowledge baked in. >> So Splunk has all this data across its 15,000 customers; you know, billions and billions of data points, if not trillions. And they are able to infer from that data and identify the patterns, so that they can deliver, essentially, prepackaged insights to customers. >> Yes, you're actually putting your finger on two things that are important. First, like the applications, like user behavior analytics, which is basically for looking for bad actors and intrusion, and enterprise security, which is sort of a broader look. Those come so that they're trained to figure out your landscape and what's normal behavior. But they announced something else just this morning, which was sort of a proactive support, where they take all the telemetry data from customers as they opt in, and they learn from that about what's normal and abnormal, and what's best practice and what is not. And so then they can push out proactive support. >> Okay, let's do a quick rundown. We don't have much time here, but let's talk about the cloud strategy. Splunk has a relationship with AWS. Where does Splunk, in your view, fit with the whole cloud, hybrid cloud play; on-prem, in the public cloud? I know they've said publicly that 50% of their customers, or at least maybe it's their new business, is cloud only.
And then the other 50% is either on-prem, or cloud; either all on-prem, or on-prem and cloud, so some kind of mix. So where do they fit in the whole cloud, hybrid cloud mix? >> Okay, you also touch again on a couple key things. One is, where can they run so that customers can have the same development platform and admin experience wherever the customer data may be; whether it's on-prem, on the edge, or in multiple clouds? That, they've addressed: because they're a self-contained environment, they can run on different platforms, different locations. But at the same time, when you're working with Splunk on-prem, you're really in a very different ecosystem than when you're using it in the cloud. Because in the cloud, you might want to take advantage of special purpose machine learning tools, or special purpose analytic databases that have capabilities that are there -- >> Dave: AWS services, for example, yeah. >> Yes, that are there in the cloud. >> Is that a friction point for Splunk? Is that the point of ... You know, are there clear swim lanes, or does it start to get fuzzy? >> I would call it less a friction point, and more a set of trade-offs that their customers will encounter that are different. >> Okay, like the integrated iPhone versus other third party; so, the tooling. >> And it's worth mentioning that, you know, to stay in that self-contained and compatible sort of platform sphere, this little biosphere wherever it may be, you lose out on the platform-specific specialized services that might be on any particular platform. And the fact that you have that trade-off is goodness, as opposed to ... >> Okay, a couple other things. So we talked a little bit about, and you and I, as you say, have talked about this forever, admin and developer complexity. What's Splunk's recipe for simplifying that, and how does machine learning fit in? >> Okay, so on the issue of admin complexity and developer complexity, I'm going to pull up a cheat sheet here that I started pulling together. Probably the complexity is going to freak out our video support guys. But if you look at the typical open source analytic application and the pipeline that's underneath it, it's got a process phase, it's analyzing the data, it's running predictions, it's serving the data -- >> Dave: Sounds like the Hadoop pipeline. >> It is; whether it's Splunk or Hadoop, it's the same set of -- >> Dave: It's a big data workflow when you're dealing with large volumes, right? >> And whether you're dealing with Splunk or Hadoop, you have to deal with stuff like data governance, performance monitoring, scheduling, authentication, authorization, resource -- >> Dave: All the enterprise level stuff that we've grown to understand and love.
>> But, if in the open source ecosystem, each stage of the pipeline is a different product, and each of those admin steps is implemented differently because they're coming from different Apache projects, you've got what I call, potentially, a Frankenstein kind of product. You know, like its creator might love it, but -- >> Dave: Okay, so you're saying Splunk's strategy will be to integrate those and be in a simplified, almost like the cloud guys who would aspire to do -- >> Well, that's the other thing. See, Splunk had this wonderful thing on-prem where they were really the only one who was unifying big data; in the cloud, it hasn't happened yet. Like Amazon's answer to customers is, we take any and all comers, you can use our services, you can use others. But you will see over time, probably first by Azure and then later by Amazon -- >> Okay, so we're out of time, but these are some of the things we're tracking. Watching Splunk's TAM expansion, the whole cloud, hybrid cloud strategy, simplifying big data complexity, where does machine learning fit in? Some of the things we didn't get into were breadth versus depth; Splunk is kind of doing both. Going deep with certain applications, but also horizontally across its platform. And then, of course, we haven't talked about IOT but we will this week. IOT and Edge processing, what's the right strategy there? We'll be unpacking that all week. Splunk is a fun crowd; I mean, you can see the t-shirts. The t-shirts are fantastic; Drop Your Breaches, The End of Meh-trix, taking the S-H out of IT. These are some of the t-shirts that you see, some of the slogans that you see around here. So Splunk, really fun company. The other thing that you note about this ecosystem, this audience, is when Splunk makes an announcement, you get genuine applause; you know, laughter, applause, really, really passionate customer base. A lot of these conferences we come to, it's sort of golf claps; not here, it's really heartfelt. So George, great analysis. Thanks very much for helping us kick off. Keep it right there, everybody; we'll be back with our next guest. It's theCUBE, we're live from the District, at Splunk .conf2017. (upbeat techno-music)

Published Date : Sep 26 2017


Brian Goldfarb, Splunk - AWS Summit SF 2017 - #AWSSummit - #theCUBE


 

>> Narrator: Live from San Francisco, it's the Cube, covering AWS Summit 2017. Brought to you by Amazon Web Services. (upbeat music) >> Hi, welcome back to the Cube, live from the AWS Summit San Francisco. Jeff Frick and I are here with the CMO of Splunk, Brian Goldfarb. Hey Brian, welcome to the Cube. >> Thanks, thanks for having us, we're really glad to be here. >> You've been the CMO at the Cube, the Cube, congratulations! >> Brian: Promotion, this is amazing. (laughs) >> You've been promoted. Let me start again: you've been the CMO with Splunk, am I red yet, for about six months. Talk to us about the new role that you have there. What's exciting, what's happening? >> Yeah, it has been almost six months now. It's been an amazing experience. Splunk was super attractive to me as I was looking at opportunities, because it has both an amazing product and customers who love it. And that combination, particularly in technology, is rare in the first place. That's a marketer's dream. You're not creating champions, you're not convincing anyone that it's great. And so I've been coming in focusing on how do we take that incredible asset, our community and our users, and really expand it. And that's been a big focus for me over the last five months. It's an amazing company. I'm very honored and lucky to be working at such a great place. And in fact, we won "Best Places to Work" >> Lisa: Congratulations. >> For the tenth year in a row. >> The Santana Row offices are pretty nice. I was lucky enough to go down there and check those out when you opened them. >> Oh yeah, that's awesome. Our headquarters is in San Francisco, but as you think about the expansion of the area, having facilities down in San Jose is super great as we grow our company. >> So I guess it's a match made in heaven, but the word on the street is you're a data guy. You want data to support everything. Data driven solutions. Data backed decision making. What a perfect fit, because the essence of Splunk is basically sitting on that machine data that's flowing through the system. >> That's right. You think about where our roots are: it's really how do you take big data and make it useful for people. Like, machine data is often forgotten. All the information flowing from sensors and hardware and servers. And as we sit here, at the Amazon Web Services show in San Francisco, all of that infrastructure is core to creating machine data. And we want to make it accessible and usable for everyone to get insights. And what we see is that manifest itself in a lot of interesting ways. I'll give you an example: Yelp. Think about food, think about reviews, but they're using Splunk for a couple things. One, make sure that their core infrastructure is up and running, obviously important. Because we need that restaurant review, you need it now. That's a very San Francisco thing. But more importantly, as they've rolled out their new food delivery capabilities, all of the business analytics required to make sure that operations business runs tip-top is critical. So they're using Splunk for all those pieces. >> So I wonder if you can speak a little bit about the relationship with AWS? I know you're relatively new, but Doug Merritt is relatively new. And of all of the logos that Werner went through, which were numerous and hard to see, (Brian laughs) he picked Doug to come up and really help out with the keynote. Obviously, Cloud, big deal, AWS, big deal.
What is the relationship, how has it evolved over time, and how is this cloud-enabled delivery impacting the way Splunk does business? >> Yeah, we're very fortunate to have a wonderful partnership with Amazon Web Services. We've been a strategic partner of theirs for almost five years. And we made a big bet of our business on using their product to deliver our product in the cloud. Our business started 14 years ago with Splunk Enterprise, an on-premises software solution that's been adopted by over 13,000 customers around the globe. And we heard time and time again, as the cloud became more important in the decisions people were making, how do we get the visibility that we need both across our on-premises assets and our cloud assets? And so the relationship with Amazon has been predicated on how do we deliver Splunk in the cloud and, more importantly, how do we give everyone who's now adopting Amazon at this amazing clip the visibility into all the components that they're using, so they can maintain their solutions, they can make sure things are running, they can optimize their spend, et cetera. >> And it's even a billing partner, right? So it's an infrastructure partner, it's a delivery slash sales channel partner, and there you can even bill directly through Amazon, if I heard right today. >> That's right. So we're both a customer and a partner is one way to think about it. And today in the keynote, we announced, with their new AWS Marketplace SaaS Contracts API release, that we're one of their first partners delivering our product through that new delivery model. And what's really interesting about it is, today enterprises are trying to innovate faster. They get stuck sometimes on things that shouldn't matter. Procurement, legal: how do you actually get the assets that you need in order to do the things you need to do? Speed is such an important part of being successful. And now that we can deliver Splunk through the AWS Marketplace, customers can easily find it. They can now easily buy it using their existing billing relationship with Amazon. They can use friendly terms that are defined there. And they can buy on one year, two year or three year contracts with the appropriate term-based discount. So the longer you buy, the cheaper it is. So, procurement's happy, legal's happy, the technical user's happy 'cause they can move faster than they ever have before. >> One of the things that we're hearing in a lot of enterprises is directives coming down from the board to the CIO. You've got to move more legacy applications to the cloud, but you've also got to try to find more value from digital assets. With that respect, what are some of the core functions that Splunk Enterprise on AWS is delivering to customers from a value-out-of-assets perspective? >> There are assets across so many different categories, so we look at: what are we doing across the infrastructure side of the business? What are we doing across the security side of the business, and now this emerging category of IOT; how do we get all of the assets working together? And one of the things that we think about a lot with our customers is, we have all this data; how do you apply different lenses so that different people can ask different questions of this same data and get the key insights back?
So if I'm a security investigator trying to prevent fraud, that's something that we can do, but that's also helping the people in IT maintain systems faster, and it's also doing business process management, working with supply chain, and we see that happening everywhere. We were talking just before we started about this mental model that enterprises have where they're stuck in this reactive place. Something breaks, then you fix it. Or a customer complains and you deal with it, and everyone's on this journey to being more proactive. How do I get notified that something broke so that I can fix it, or better yet, be predictive? So we're taking machine learning and artificial intelligence concepts, baking them directly into the Splunk platform, and using that to help people go from that reactive state that they're in to this forward state of predictive intelligence and being able to fix things before they even become a problem. >> I would love to dig in a little bit deeper on IOT, 'cause you guys were into IOT when it was called machines. Machines are just a subset of the things, and now the IOT thing is really taking off. Obviously, we go to the GE shows, and also people are things, too, which sometimes gets forgotten in the conversations, and we all throw off a ton of digital data, so you guys are pretty well positioned to apply your technology techniques, processes now to a whole giant new set of data flows coming off all these things. >> You put the words in my mouth. People forget about people being things. We talk about machine data; the word machine can mean anything, really. It's how do you take all of this data, correlate it together in interesting ways, then do something with it. Think about the retail use case. Customers now have an expectation of the experience that they're going to have, higher than ever before. You just expect more; you know they have the information, so you want it. You think about beacons and knowing your preferences, so retailers need to take advantage of that, and they can use technology like Splunk to really get there. Another example around customer expectation: think about travel. We all travel here, you guys probably flew in or drove in, and we have mediocre experiences at the airport in particular. We have a customer, Gatwick Airport in the UK, who's completely Splunked everything they're doing at the airport, with a goal of reducing the amount of time that it takes to go from the front desk to your gate to less than five minutes. So on a dashboard, they can see wait times at any particular security terminal, they can redeploy assets, they get alerts, and they can monitor all the different data streams, whether it's weather data, air traffic control data, airline data, sensors from all the different parts of the airport, and pull all that together into a people-based experience to drive up that engagement.
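To make the reactive-to-predictive idea concrete, here is a toy sketch of the kind of baseline-and-deviation check that underlies many such alerting features; it is illustrative only and not Splunk's actual machine learning implementation. A rolling window of recent readings (say, security-line wait times in seconds) establishes a baseline, and a reading that drifts several standard deviations away raises a flag before any hard target, like Gatwick's five-minute goal, is breached.

```python
# Toy anomaly detector: flag readings far outside the recent baseline.
# Illustrative only -- not Splunk's actual ML implementation.
from collections import deque
from statistics import mean, stdev

def make_detector(window=30, threshold=3.0):
    history = deque(maxlen=window)  # rolling baseline of recent readings

    def check(value):
        alert = False
        if len(history) >= 10:  # need enough baseline before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alert = True  # statistically unusual -- investigate now
        history.append(value)
        return alert

    return check

check_wait = make_detector()
for seconds in [95, 102, 98, 110, 99, 104, 101, 97, 103, 100, 290]:
    if check_wait(seconds):
        print(f"anomaly: {seconds}s wait is far outside the recent baseline")
```

A production system would be more careful (for example, keeping flagged readings out of the baseline), but the shape of the idea, alerting on deviation from learned normal rather than on a fixed threshold, is the same.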
Most of the enterprises that exist in the world have investments in things from mainframes to existing infrastructure and data centers and even as they consolidate more and more into the cloud, we're going to be in a world where people have assets in many different places. What we've seen with Amazon and why I think our partnership has been so successful is we're helping a lot of these enterprises justify and control how they're able to get to the cloud faster. We talked about innovation and speed. Being able to adopt services in the cloud in addition to what we're doing on premise is critical. And with Splunk, they get insight across all their different components. They feel that they can manage the security across both on premises and the cloud and they get the peace of mind that they have that operational visibility because they're going to be hybrid, they're going to be running in the cloud, they're always going to have their existing investments. That's kind of the state of the world for the foreseeable future. >> So, looking forward, you've been in the job about six months or so, what are your priorities for the next six months? Doug says, "alright, warm up time's over, "get to work, Brian." >> He said that on the third day. >> (laughs) On the third day. So what are some of your priorities? >> As a business, we have a collection of priorities. One is the cloud, full stop. We know that the journey to the cloud is coming full speed and what we can do around Splunk cloud and being able to fulfill and delivers services for our customers there is absolutely critical and continuing to grow that capability. And second for us is customer success. How we get people beyond single use case to multi use case. Using it in IT, how to take advantage of it in security. How do you take advantage of it in supply chain? Because that magic moment that customers have is really when they have the same data in and they get value across their entire business. For me, as the CMO, my priority is piggy back on that. First and foremost is digital. It's kind of trite, everyone's talking about it but I came from Google and sales force. I'm a performance guy and so I'm looking at how we can reconstitute the entire buyer journey from the moment someone says, "I'm interested "in a topic that's relevant to our product" to "I transact online" and that's a big initiative for what we're doing across web and sales team to work through all those pieces, and then second, I now am the chief t-shirt officer. >> Jeff: That's not an easy job. >> It's the hardest job I've ever had, 'cause I'm not in my strength and always innovating on what's next. I hear I was trending on Twitter, Doug's t-shirt versus Werner's t-shirt today. >> Jeff: That's right. >> I think we were winning. >> And you guys have the biggest t-shirt booth installation, device at trade shows than anyone rather than just giving away, in the back, the entire booth is basically built around the t-shirts. >> Oh, and we're Splunking everything, too. >> Impressive. >> And we saw a spike in traffic, too. Our store this morning after we went on stage. >> I put the picture up, so I sent the link, hopefully it will get me some Amazon affiliate money back. I don't know. >> The t-shirts match the buyer's journey. >> Of course. >> Of course, as a marketer, of course. >> Stop chasing your tail dash f. You got to connect to your logs and always keep watching. >> Before we let you go, let's get a plug in for splunk.conf. 
The Cube has been going, I think this will be our fifth or sixth year, I can't count that high, I'm out of fingers and toes. >> Eighth. >> Your eighth, our sixth there, I think. >> There you go, you're a regular. >> So where is it, what's the highlights this year? It's always a great event. >> Much like AWS, we're doing events all across the world all the time. We have a series called Splunk live, we just did one in San Francisco last week which are super great ways to come and learn about the product and get hands-on keyboard to improve your skills, but it all culminates in .conf which is our leading event in the category. It's going to be in D.C. this year, September 25th to 28th and that's the best place to come, learn about Splunk, get hands-on with the product, meet the product team, learn from your peers, which to me, is the thing that matters the most. To see all the innovative ideas that everyone is doing, 'cause one of the great things about Splunk is the use cases for the product are basically infinite, and so you hear more and more stories, whether it's the city of San Francisco or shazaam or Yelp or Gatwick or thousands of others and .conf is the place, so you guys are going to be there, I'm going to be there, which is the reason everyone should come, obviously. >> Exactly, t-shirts for all. >> Brian: T-shirts for everybody. >> Well, Brian Goldfarb, CMO of Splunk, I got that right this time, thank you so much. >> Brian: And the Cube. >> And the Cube, apparently. (laughs) >> Jeff: Watch out, John, we've got a new CMO. >> Lisa: Thank you so much for joining us. Great, your passion is evident, we wish you the best of luck and continued success in your role. For my co-host Jeff Frick, I'm Lisa Martin. We are live at the AWS Summit San Francisco. Stick around, we'll be right back. (upbeat music) Is changing and this entire process, you started to mention a little bit, how is-- (upbeat music)

Published Date : Apr 19 2017
