

Clint Sharp, Cribl | AWS re:Invent 2022


 

(upbeat music) (background crowd chatter) >> Hello, fantastic cloud community and welcome back to Las Vegas where we are live from the show floor at AWS re:Invent. My name is Savannah Peterson. Joined for the first time. >> Yeah, Doobie. >> VIP, I know. >> All right, let's do this. >> Thanks for having me Dave, I really appreciate it. >> I appreciate you doing all the hard work. >> Yeah. (laughs) >> You know. >> I don't know about that. We wouldn't be here without you and all these wonderful stories that all the businesses have. >> Well, when I host with John it's hard for me to get a word in edgewise. I'm just kidding, John. (Savannah laughing) >> Shocking, I've never wanted that experience. >> We're like knocking each other, trying to, we're elbowing. No, it's my turn to speak, (Savannah laughing) so I'm sure we're going to work great together. I'm really looking forward to it. >> Me too Dave, I feel very lucky to be here and I feel very lucky to introduce our guest this afternoon, Clint Sharp, welcome to the show. You are with Cribl. Yeah, how does it feel to be on the show floor today? >> It's amazing to be back at any conference in person and this one is just electric, I mean, there's like a ton of people here, love the booth. We're having like a lot of activity. It's been really, really exciting to be here. >> So you're a re:Invent alumni? Have you been here before? You're a Cube alumni. We're going to have an OG conversation about observability, I'm looking forward to it. Just in case folks haven't been watching theCUBE for the last nine years that you've been on it. I know you've been with a few different companies during that time period. Love that you've been with us since 2013. Give us the elevator pitch for Cribl. >> Yeah, so Cribl is an observability company which we're going to talk about today. Our flagship product is a telemetry router. So it just really helps you get data into the right places.
We're very specifically in the observability and security markets, so we sell to those buyers and we help them work with logs and metrics and OpenTelemetry, lots of different types of data to get it into the right systems. >> Why did observability all of a sudden become such a hot thing? >> Savannah: Such a hot topic. >> Right, I mean it just came on the scene so quickly and now it's obviously a very crowded space. So why now, and how do you guys differentiate from the crowd? >> Yeah, sure, so I think it's really a post-digital transformation thing Dave, when I think about how I interact with organizations you know, 20 years ago when I started this business I called up American Airlines when things weren't working and now everything's all done digitally, right? I rarely ever interact with a human being and yet if I go on one of these apps and I get a bad experience, switching is just as easy as booking another airline or changing banks or changing telecommunications providers. So companies really need an ability to dive into this data at very high fidelity to understand what Dave's experience with their service or their applications is. And for the same reasons on the security side, we need very, very high fidelity data in order to understand whether malicious actors are working their way around inside of the enterprise. And so that's really changed the tooling that we had, which, in prior years, it was really hard to ask arbitrary questions of that data. You really had to deal with whatever the vendor gave you or you know, whatever the tool came with. And observability is really an evolution, allowing people to ask and answer questions of their data that they really weren't planning for in advance. >> Dave: Like what kind of questions are people asking? >> Yeah sure so what is Dave's performance with this application? I see that a malicious actor has made their way on the inside of my network. Where did they go? What did they do? What files did they access?
What network connections did they open? And the scale of machine data of this machine to machine communication is so much larger than what you tend to see with like human generated data, transactional data, that we really need different systems to deal with that type of data. >> And what would you say is your secret sauce? Like some people come at it from search, some come at it from security. What's your sort of superpower as Lisa likes to say? >> Yeah, so we're a customers-first company. And so one of the things I think that we've done incredibly well is go look at the market and look for problems that are not being solved by other vendors. And so when we created this category of an observability pipeline, nobody was really marketing an observability pipeline at that time. And really the problem that customers had is they have data from a lot of different sources and they need to get it to a lot of different destinations. And a lot of that data is not particularly valuable. And in fact, one of the things that we like to say about this class of data is that it's really not valuable until it is, right? And so if I have a security breach, if I have an outage and I need to start poring through this data, suddenly the data is very, very valuable. And so customers need a lot of different places to store this data. I might want that data in a logging system. I might want that data in a metric system. I might want that data in a distributed tracing system. I might want that data in a data lake. In fact AWS just announced their security data lake product today. >> Big topic all day. >> Yeah, I mean like you can see that the industry is going in this way. People want to be able to store massively greater quantities of data than they can cost effectively do today. >> Let's talk about that just a little bit. The tension between data growth, like you said it's not valuable until it is or until it's providing context, whether that be good or bad.
Let's talk about the tension between data growth and budget growth. How are you seeing that translate in your customers? >> Yeah, well so data's growing at a 25% CAGR per IDC, which means we're going to have two and a half times the data in five years. And when you talk to CISOs and CIOs and you ask them, is your budget growing at a 25% CAGR, absolutely not, under no circumstances am I going to have, you know, that much more money. So what got us to 2022 is not going to get us to 2032. And so we really need different approaches for managing this data at scale. And that's where you're starting to see things like the AWS security data lake, Snowflake is moving into this space. You're seeing a lot of different people kind of moving into the database for security and observability type of data. You also have lots of other companies that are competing in broad spectrum observability, companies like Splunk or companies like Datadog. And these guys are all doing it from a data-first approach. I'm going to bring a lot of data into these platforms and give users the ability to work with that data to understand the performance and security of their applications. >> Okay, so carry that through, and you guys are different how? >> Yeah, so we are this pipeline that's sitting in the middle of all these solutions. We don't care whether your data was originally intended for some other tool. We're going to help you in a vendor-neutral way get that data wherever you need to get it. And that gives them the ability to control cost because they can put the right data in the right place. If it's data that's not going to be frequently accessed let's put it in a data lake, the cheapest place we can possibly put that data to rest. Or if I want to put it into my security tool maybe not all of the data that's coming from my vendor, my vendor has to put all the data in their records because who knows what it's going to be used for.
But I only use half or a quarter of that information for security. And so what if I just put the pared-down results in my more expensive storage but I kept full fidelity data somewhere else? >> Okay so you're observing the observability platforms basically, okay. >> Clint: We're routing that data. >> And then creating- >> It's meta observability. >> Right, observability pipeline. When I think of a data pipeline, I think of highly specialized individuals, there's a data analyst, there's a data scientist, there's a quality engineer, you know, et cetera. Do you have specific roles in your customer base that look at different parts of that pipeline and can you describe that? >> Yeah, absolutely, so one of the things I think that we do differently is we sell very specifically to the security tooling vendors. And so in that case we are, or not to the vendors, but to the customers themselves. So generally they have a team inside of that organization which is managing their security tooling and their operational tooling. And so we're building tooling very specifically for them, for the types of data they work with for the volumes and scale of data that they work with. And that is giving, and no other vendor is really focusing on them. There's a lot of general purpose data people in the world and we're really the only ones that are focusing very specifically on observability and security data. >> So the announcement today, the security data lake that you were talking about, it's based on the Open Cybersecurity Schema Framework, which I think AWS put forth, right? And said, okay, everybody come on. >> Savannah: Yeah, yeah they did. >> So, right, all right. So what are your thoughts on that? You know, how does it fit with your strategy, you know. >> Yeah, so we are again a customers-first, neutral company. So if OCSF gains traction, which we hope it does, then we'll absolutely help customers get data into that format.
But we're kind of this universal adapter so we can take data from other vendors, proprietary schemas, maybe you're coming from one of the other SIEM vendors and you want to translate that to OCSF to use it with the security data lake. We can provide customers the ability to change and reshape that data to fit into any schema from any vendor so that we're really giving security data lake customers the ability to adapt the legacy, the stuff that they have that they can't get rid of 'cause they've had it for 10 years, 20 years and nothing inside of an enterprise ever goes away. That stuff stays forever. >> Legacy. >> Well legacy is working right? I mean somebody's actually, you know, making money on top of this thing. >> We never get rid of stuff. >> No, (laughing) we just add it to the toolkit. It's like all the old cell phones we have, it's everything. I mean we even do it as individual users and consumers. It's all a part of our little personal library. >> So what's happening in the field, company momentum? >> Yeah let's talk trends too. >> Yeah so the company's growing crazily fast. We're north of 400 employees and we were only a hundred and something, you know, a year ago. So you can kind of see we're tripling, you know, year over year. >> Savannah: Casual, especially right now when a lot of companies are feeling that scale back. >> Yeah so obviously we're keeping our eye closely on the macro conditions, but we see such a huge opportunity because we're a value player in this space that there's a real flight to value in enterprises right now. They're looking for projects that are going to pay themselves back and we've always had this value prop, we're going to come give you a lot of capabilities but we're probably going to save you money at the same time. And so that's just really resonating incredibly well with enterprises today and giving us an opportunity to continue to grow in the face of some challenging headwinds from a macro perspective.
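The pipeline idea discussed here — a full-fidelity copy of every event to cheap storage, a pared-down, schema-translated subset to the expensive security tool — can be sketched in a few lines of Python. To be clear, the field names, the mapping, and the severity filter below are all hypothetical illustrations of the concept, not Cribl's product behavior or the actual OCSF schema.

```python
# Illustrative sketch of the routing-and-reshaping idea from the interview:
# archive everything cheaply at full fidelity, send a slim, renamed subset
# to the SIEM. Field names and the mapping are invented for this example.

def reshape(event, mapping):
    """Rename a vendor's proprietary fields toward a target schema."""
    return {mapping.get(k, k): v for k, v in event.items()}

def route(events, keep_fields, mapping):
    archive, siem = [], []
    for event in events:
        archive.append(event)  # full fidelity -> data lake / object storage
        if event.get("severity", "info") != "debug":  # drop low-value noise
            slim = {k: v for k, v in event.items() if k in keep_fields}
            siem.append(reshape(slim, mapping))  # pared down -> SIEM
    return archive, siem

events = [
    {"src_ip": "10.0.0.5", "severity": "high", "msg": "login failed"},
    {"src_ip": "10.0.0.9", "severity": "debug", "msg": "heartbeat"},
]
archive, siem = route(events, keep_fields={"src_ip", "severity"},
                      mapping={"src_ip": "source.ip"})
# archive keeps both events in full; siem holds one slim, renamed record
```

The point of the sketch is the asymmetry Clint describes: the expensive destination only ever sees the fraction of each event it actually uses, while nothing is lost for later replay.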
>> Well, so, okay, so people think okay, security is immune from the macro. It's not, I mean- >> Nothing, really. >> No segment is immune. CrowdStrike announced today the CrowdStrike rocket ship's still growing ARR 50%, but you know, stock's down, I don't know, 20% right now after our- >> Logically doesn't make- >> Okay stuff happens, but still, you know, it's interesting, the macro, because it was like, to me it's like a slingshot, right? Everybody was like, wow, pandemic, shut down. All of a sudden, oh wow, need tech, boom. >> Savannah: Yeah, digitally transformed today. >> It's like, okay, tap the brakes. You know, when you're driving down the highway and you get that slingshotting effect and I feel like that's what's going on now. So, the premise is that the real leaders, those guys with the best tech that really understand the customers are going to, you know, get through this. What are your customers telling you in terms of, you know, their spending patterns, how they're trying to maybe consolidate vendors and how does that affect you guys? >> Yeah, for sure, I mean, I think, obviously, back to that flight to value, they're looking for vendors who are aligned with their interests. So, you know, as their budgets are getting pressure, what vendors are helping them provide the same capabilities they had to provide to the business before especially from a security perspective 'cause they're going to get cut along with everybody else. If a larger organization is trimming budgets across, security's going to get cut along with everybody else. So is IT operations. And so since they're being asked to do more with less that's you know, really where we're coming in and trying to provide them value. But certainly we're seeing a lot of pressure from IT departments, security departments all over in terms of being able to live and do more with less. >> Yeah, I mean, Selipsky's got a great quote today. "If you're looking to tighten your belt the cloud is the place to do it."
I mean, it's probably true. >> Absolutely, elastic scalability in this, you know, our new search product is based off of AWS Lambda and it gives you truly elastic scalability. These changes in architectures are what's going to allow, it's not that cloud is cheaper, it's that cloud gives you on-demand scalability that allows you to truly control the compute that you're spending. And so as a customer of AWS, like this is giving us capabilities to offer products that are scalable and cost effective in ways that we just have not been able to do in the cloud. >> So what does that mean for the customer, that you're using serverless, using Lambda? What does that mean for them in terms of what they don't have to do that they maybe had to previously? >> It offers us the ability to try to charge them like a truly cloud native vendor. So in our cloud product we sell a credit model whereby you deduct credits for usage. So if you're streaming data, you pay for gigabytes. If you're searching data then you're paying for CPU consumption, and so it allows us to charge them only for what they're consuming which means we don't have to manage a whole fleet of servers, and eventually, will we go to managing our own compute? Quite possibly, as we start to get to scale at certain customers. But Lambda allowed us to not have to launch that way, not have to run a bunch of infrastructure. And we've been able to align our charging model with something that we think is the most customer friendly which is true consumption, pay for what you consume. >> So for example, you're saying you don't have to configure the EC2 instance or figure out the memory sizing, you don't have to worry about any of that. You just basically say go, it figures that out and you can focus on upstream, is that right?
>> Yep, and not only from a cost perspective but also from a people perspective, it's allowed us a velocity that we did not have before, which is we can go and prototype and build significantly faster because we're not having to worry, you know, in our mature products we use EC2 like everybody else does, right? And so as we're launching new products it's allowed us to iterate much faster and will we eventually go back to running our own compute, who knows, maybe, but it's allowed us a lot faster velocity than we were able to get before. >> What I like, what I've heard you discuss a lot, is the agility and adaptability. We're going to be moving and evolving, choosing different providers. You're very outspoken about being vendor-agnostic and I think that's actually a really unique and interesting play because we don't know what the future holds. So we're doing a new game on that note here on theCUBE, new game, new challenge, I suppose I would call it to think of this as your 30-second thought leadership highlight reel, a sizzle of the most important topic or conversation that's happening, the theme here at the show this year. >> Yeah, I mean, for me, as I think, as we're looking, especially like security data lake, et cetera, it's giving customers ownership of their data. And I think that once you, and I'm a big fan of this concept of open observability, and security should be the same way which is, I should not be locking you in as a vendor into my platform. Data should be stored in open formats that can be analyzed by multiple places. And you've seen this with AWS's announcement, data stored in open formats the same way other vendors store that. And so if you want to plug out AWS and you want to bring somebody else in to analyze your security lake, then great. And as we move into our analysis product, our search product, we'll be able to search data in the security data lake or data that's raw in S3.
And we're really just trying to give customers back control over their future so that they don't have to maintain a relationship with a particular vendor. They're always getting the best. And that competition fuels really great product. And I'm really excited for the next 10 years of our industry as we're able to start competing on experiences and giving customers the best products, the customer wins. And I'm really excited about the customer winning. >> Yeah, so customer focused, I love it. What a great note to end on. That was very exciting, very customer focused. So, Clint, I have really enjoyed talking to you. Thanks. >> Thanks Clint. >> Thanks so much, it's been a pleasure being on. >> Thanks for enhancing our observability over here, I feel like I'll be looking at things a little bit differently after this conversation. And thank all of you for tuning in to our wonderful afternoon of continuous live coverage here at AWS re:Invent in fabulous Las Vegas, Nevada with Dave Vellante. I'm Savannah Peterson. We're theCUBE, the leading source for high tech coverage. (bright music)
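The consumption-based credit model Clint described earlier (pay per gigabyte streamed, pay per unit of compute searched) can be sketched as a small billing function. The rates and credit amounts below are invented purely for illustration; they are not Cribl's actual pricing.

```python
# Sketch of a consumption-based credit model like the one described in the
# interview: streaming deducts credits per gigabyte moved, search deducts
# credits per CPU-second consumed. Rates here are hypothetical.

STREAM_CREDITS_PER_GB = 2.0       # hypothetical rate
SEARCH_CREDITS_PER_CPU_SEC = 0.5  # hypothetical rate

def credits_used(gb_streamed, cpu_seconds_searched):
    """Total credits deducted for a billing period, pure pay-per-use."""
    return (gb_streamed * STREAM_CREDITS_PER_GB
            + cpu_seconds_searched * SEARCH_CREDITS_PER_CPU_SEC)

# A customer who streams 100 GB and burns 40 CPU-seconds of search pays
# only for that usage; an idle customer pays nothing at all.
bill = credits_used(100, 40)
idle = credits_used(0, 0)
```

The alignment Clint points to is visible in the zero case: with no fleet of always-on servers to amortize, there is no floor charge to pass on, so an idle workload really can cost nothing.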

Published Date: Nov 30, 2022



Clint Sharp, Cribl | Cube Conversation


 

(upbeat music) >> Hello, welcome to this CUBE conversation, I'm John Furrier your host here in theCUBE in Palo Alto, California, featuring Cribl, a hot startup taking over the enterprise when it comes to data pipelining, and we have a CUBE alumni who's the co-founder and CEO, Clint Sharp. Clint, great to see you again, you've been on theCUBE, you were on in 2013, great to see you, congratulations on the company that you co-founded, and leading as the chief executive officer, over $200 million in funding, doing this really strong in the enterprise, congratulations, thanks for joining us. >> Hey, thanks John it's really great to be back. >> You know, remember our first conversation the big data wave coming in, Hadoop World 2010, now the cloud comes in, and really the cloud native really takes data to a whole nother level. You're seeing the old data architectures being replaced with cloud scale. So the data landscape is interesting. You know, Data as Code you're hearing that term, data engineering teams are out there, data is everywhere, it's now part of how developers and companies are getting value whether it's real time, or coming out of data lakes, data is more pervasive than ever. Observability is a hot area, there's a zillion companies doing it, what are you guys doing? Where do you fit in the data landscape? >> Yeah, so what I say is that Cribl and our products, we solve the problem for our customers of the fundamental tension between data growth and budget. And so if you look at IDC's data, data's growing at a 25% CAGR, you're going to have two and a half times the amount of data in five years that you have today, and I talk to a lot of CIOs, I talk to a lot of CISOs, and the thing that I hear repeatedly is my budget is not growing at a 25% CAGR so fundamentally, how do I resolve this tension?
We sell very specifically into the observability and security markets, we sell to technology professionals who are operating, you know, observability and security platforms like Splunk, or Elasticsearch, or Datadog, Exabeam, like these types of platforms. They're moving protocols like syslog, they have lots of agents deployed on every endpoint and they're trying to figure out how to get the right data to the right place, and fundamentally you know, control cost. And we do that through our product called Stream which is what we call an observability pipeline. It allows you to take all this data, manipulate it in the stream and get it to the right place and fundamentally be able to connect all those things that maybe weren't originally intended to be connected. >> So I want to get into that new architecture if you don't mind, but let me first ask you on the problem space that you're in. So cloud native obviously, instrumenting everything is a key thing. You mentioned data got all these tools, is the problem that there's been a sprawl of things being instrumented and they have to bring it together, or it's too costly to run all these point solutions and get it to work? What's the problem space that you're in? >> So I think customers have always been forced to make trade-offs, John. So the, hey I have volumes and volumes and volumes of data that's relevant to securing my enterprise, that's relevant to observing and understanding the behavior of my applications but there's never been an approach that allows me to really onboard all of that data. And so where we're coming at is giving them the tools to be able to, you know, filter out noise and waste, to be able to, you know, aggregate this high fidelity telemetry data.
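The "filter out noise and waste, aggregate high-fidelity telemetry" step described above can be sketched as a small in-stream function. The log fields and drop rules here are hypothetical stand-ins for illustration, not Stream's actual configuration language.

```python
# Minimal sketch of in-stream filtering and aggregation: drop noisy,
# low-value events before they reach a costly index, and roll the rest
# up into a compact metric. Fields and rules are invented for this example.
from collections import Counter

def filter_and_aggregate(events, drop_levels=frozenset({"debug", "trace"})):
    # Filter: discard levels that rarely justify expensive storage.
    kept = [e for e in events if e["level"] not in drop_levels]
    # Aggregate: one counter per (service, level) instead of raw log volume.
    counts = Counter((e["service"], e["level"]) for e in kept)
    return kept, counts

events = [
    {"service": "checkout", "level": "error"},
    {"service": "checkout", "level": "debug"},
    {"service": "checkout", "level": "error"},
]
kept, counts = filter_and_aggregate(events)
```

In practice the dropped events need not vanish: as discussed elsewhere in these interviews, the full-fidelity stream can land in cheap object storage for later replay while only the filtered, aggregated view feeds the expensive tool.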
There's a lot of change going on, you talk about cloud native, but digital transformation, you know, the pandemic itself and remote work, all these are driving significantly greater data volumes, and vendors unsurprisingly haven't really been all that aligned to giving customers the tools in order to reshape that data, to filter out noise and waste because, you know, for many of them they're incentivized to get as much data into their platform as possible, whether that's aligned to the customer's interests or not. And so we saw an opportunity to come out and fundamentally as a customers-first company give them the tools that they need, in order to take back control of their data. >> I remember those conversations even going back six years ago, the whole cloud scale, horizontally scalable applications, you're starting to see data now being stuck in the silos. Now, to have high-quality, good data you have to be observable, which means you've got to be addressable. So you now have to have a horizontal data plane if you will. But then you get to the question of, okay, what data do I need at the right time? So is the Data as Code, data engineering discipline changing what new architectures are needed? What changes in the mind of the customer once they realize that they need this new way to pipe data and route data around, or make it available for certain applications? What are the key new changes? >> Yeah, so I think one of the things that we've been seeing in addition to the advent of the observability pipeline that allows you to connect all the things, is also the advent of an observability lake as well. Which is allowing people to store massively greater quantities of data, and also different types of data. So data that might not traditionally fit into a data warehouse, or might not traditionally fit into a data lake architecture, things like deployment artifacts, or things like packet captures.
These are binary types of data that, you know, aren't designed to work in a database but yet they want to be able to ask questions like, hey, during the Log4Shell vulnerability, which of all my deployment artifacts actually had Log4j in it, in an affected version? These are hard questions to answer in today's enterprise. Or they might need to go back to full fidelity packet capture data to try to understand, you know, a malicious actor's movement throughout the enterprise. And we're not seeing, you know, we're seeing vendors who have great log indexing engines, and great time series databases, but really what people are looking for is the ability to store massive quantities of data, five times, 10 times more data than they're storing today, and they're doing that in places like AWS S3, or in Azure Blob Storage, and we're just now starting to see the advent of technologies that can help them query that data, and technologies that are generally more specifically focused at the type of persona that we sell to which is a security professional, or an IT professional who's trying to understand the behaviors of their applications, and we also find that, you know, general-purpose data processing technologies are great for the enterprise, but they're not working for the people who are running the enterprise, and that's why you're starting to see the concepts like observability pipelines and observability lakes emerge, because they're targeted at these people who have a very unique set of problems that are not being solved by the general-purpose data processing engines. >> It's interesting as you see the evolution of more data volume, more data gravity, then you have these specialty things that need to be engineered for the business. So sounds like observability lake and pipelining of the data, the data pipelining, or stream you call it, these are new things that they bolt into the architecture, right? Because they have business reasons to do it. What's driving that?
Sounds like security is one of them. Are there others that are driving this behavior? >> Yeah, I mean it's the need to be able to observe applications and observe end-user behavior at a fine-grained level of detail. So, I mean I often use examples of like bank teller applications, or perhaps, you know, the app that you're using to, you know, I'm going to be flying in a couple of days. I'll be using their app to understand whether my flight's on time. Am I getting a good experience in that particular application? Answering the question of is Clint getting a good experience requires massive quantities of data, and your application and your service, you know, I'm going to sit there and look at, you know, American Airlines which I'm flying on Thursday, I'm going to be judging them based off of my experience. I don't care what the average user's experience is I care what my experience is. And if I call them up and I say, hey, and especially for the enterprise usually this is much more for, you know, in-house applications and things like that. They call up their IT department and say, hey, this application is not working well, I don't know what's going on with it, and they can't answer the question of what was my individual experience, they're living with, you know, data that they can afford to store today. And so I think that's why you're starting to see the advent of these new architectures is because digital is so absolutely critical to every company's customer experience, that they're needing to be able to answer questions about an individual user's experience which requires significantly greater volumes of data, and because of significantly greater volumes of data, that requires entirely new approaches to aggregating that data, bringing the data in, and storing that data. >> Talk to me about enabling customer choice when it comes to controlling their data. You mentioned before we came on camera that you guys are known for choice.
How do you enable customer choice and control over their data? >> So I think one of the biggest problems I've seen in the industry over the last couple of decades is that vendors come to customers with hugely valuable products that make their lives better but it also requires them to maintain a relationship with that vendor in order to be able to continue to ask questions of that data. And so customers don't get a lot of optionality in these relationships. They sign multi-year agreements, and when they want to go try out another vendor, or add new technologies into their stack, they're often left with a choice of well, do I roll out another agent, do I go touch 10,000 computers, or 100,000 computers in order to onboard this data? And what we have been able to offer them is the ability to reuse their existing deployed footprints of agents and their existing data collection technologies, to be able to use multiple tools and use the right tool for the right job, and really give them that choice, and not only give them the choice once, but with the concepts of things like the observability lake and replay, they can go back in time and say, you know what? I wanted to rehydrate all this data into a new tool, I'm no longer locked into the way one vendor stores this, I can store this data in open formats and that's one of the coolest things about the observability lake concept is that customers are no longer locked into any particular vendor, the data is stored in open formats and so that gives them the choice to be able to go back later and choose any vendor, because they may want to do some AI or ML on that type of data and do some model training. They may want to be able to forward that data to a new cloud data warehouse, or try a different vendor for log search or a different vendor for time series data.
And we're really giving them the choice and the tools to do that in a way which was simply not possible before. >> You know, you bring up a point that's a big part of the upcoming AWS startup series, Data as Code. The data engineering role has become so important, and the word engineering is a key word in that, but there's not a lot of them, right? So like, how many data engineers are there on the planet? And hopefully more will come in from these great programs in computer science, but you've got to engineer something, and you're talking about developing on data, you're talking about doing replays and rehydrating, this is developing. So Data as Code is now a reality. How do you see Data as Code evolving from your perspective? Because it implies DevOps; Infrastructure as Code was DevOps, and if it's Data as Code then you've got DataOps; AIOps has been around for a while. What is Data as Code? And what does that mean to you, Clint? >> I think for our customers, one, it means a number of, I think, sort of after-effects that maybe they have not yet been considering. One you mentioned, which is it's hard to acquire that talent. I think it is also increasingly more critical that people who were working in jobs that used to be purely operational are now being forced to learn, you know, developer-centric tooling, things like Git, things like CI/CD pipelines. And that means that there's a lot of education that's going to have to happen, because the vast majority of the people who have been doing things in the old way for the last 10 to 20 years, you know, they're going to have to get retrained and retooled.
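The Git and CI/CD tooling Clint mentions maps naturally onto "Data as Code": the data pipeline itself becomes plain, version-controllable data, and a CI job validates it on every commit, just as Infrastructure as Code is linted before deploy. Everything in this sketch is hypothetical, the op names and config shape are invented for illustration only.

```python
# A hypothetical routing pipeline expressed as plain, version-controllable data.
pipeline = {
    "input": "syslog",
    "steps": [
        {"op": "drop",  "where": "level == 'debug'"},
        {"op": "mask",  "field": "password"},
        {"op": "route", "to": ["observability_lake", "siem"]},
    ],
}

VALID_OPS = {"drop", "mask", "route"}

def validate(p):
    """The kind of check a CI/CD job would run before the config ships."""
    if not p.get("input"):
        raise ValueError("pipeline needs an input")
    for step in p["steps"]:
        if step["op"] not in VALID_OPS:
            raise ValueError(f"unknown op: {step['op']}")
    return True

print(validate(pipeline))  # True
```

The point of the design is that a bad commit fails in CI rather than in production, which is exactly the DevOps discipline being borrowed.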
And I think that's a huge opportunity for people who have that skillset, and I think that they will find that their compensation will be directly correlated to their ability to have those types of skills, but it also represents a massive opportunity for people who can catch this wave and find themselves in a place where they're going to have a significantly better career and more options available to them. >> Yeah, and I've been thinking about what you just said about your customer environment having all these different things, like Datadog and other agents. Those people that rolled those out can still work there, they don't have to rip and replace and then get new training on the new multi-year enterprise service agreement that some other vendor will sell them. You come in and it sounds like you're saying, hey, stay as you are, use Cribl, we'll have some data engineering capabilities for you. Is that right? >> Yup, you got it. And I think one of the things that's a little bit different about our product and our market, John, from kind of general-purpose data processing, is that our users are often responsible for many tools, and data engineering is not their full-time job, it's actually something they just need to do now. And so we've really built a tool that's designed for your average security professional, your average IT professional. Yes, we can utilize the same kind of DataOps techniques that you've been talking about, CI/CD pipelines, GitOps, that sort of stuff, but you don't have to, and if you're already familiar with administering a Datadog or a Splunk, you can get started with our product really easily, and it is designed to be approachable to anybody with that type of skillset. >> It's interesting, when you were talking you reminded me of the big wave that was coming, it's still here: shift left meant security from the beginning. What do you do with data, shift up, right, down?
Like, what does that mean? Because what you're getting at here is that if you're a developer, you have to deal with data, but you don't have to be a data engineer, though you can be, right? So we're getting into this new world. Security had that same problem: you had to wait for that group to do things, creating tension on the CI/CD pipelining, so the developers who were building apps had to wait. Now you've got shift left. What is the data version of shift left? >> Yeah, so we're actually doing this right now. We just announced a new product a week ago called Cribl Edge. And this is enabling us to move processing of this data, rather than doing it centrally in the stream, to actually push this processing out to the edge, and to utilize a lot of unused capacity that you're already paying AWS, or paying Azure for, or maybe in your own data center, and utilize that capacity to do the processing rather than having to centralize and aggregate all of this data. So I think we're going to see a really interesting shift, and "left" from our side is towards the origination point rather than anything else, and that allows us to really unlock a lot of unused capacity and continue to drive the cost down to make more data addressable, back to the original thing we talked about, the tension with data growth: if we want to offer more capacity to people, if we want to be able to answer more questions, we need to be able to cost-effectively query a lot more data.
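The shift-toward-the-origination-point idea reduces to something like the following sketch: drop the noise and pre-aggregate where the data is produced, so only a fraction of it ever crosses the network. The event mix and the rules are invented for illustration; this shows the general technique, not Cribl Edge's actual processing language.

```python
from collections import Counter

# Invented telemetry mix as it might look at the origination point (the edge).
raw_events = (
    [{"level": "debug", "msg": "cache hit"}] * 900
    + [{"level": "info", "msg": "request served"}] * 95
    + [{"level": "error", "msg": "upstream timeout"}] * 5
)

def process_at_edge(events):
    """Filter noise and pre-aggregate before anything is shipped centrally."""
    shipped = [e for e in events if e["level"] != "debug"]  # eliminate waste
    summary = Counter(e["level"] for e in shipped)          # cheap local rollup
    return shipped, summary

shipped, summary = process_at_edge(raw_events)
reduction = 1 - len(shipped) / len(raw_events)
print(f"shipped {len(shipped)} of {len(raw_events)} events, {reduction:.0%} less")
```

With this invented mix, nine tenths of the volume never leaves the machine, which is the cost lever the conversation keeps returning to.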
How would you describe that ideal persona, or environment, or problem, that the customer may have, where they say, man, Cribl's a perfect fit? >> Yeah, this is a person who's working on tooling. So they administer a Splunk, or an Elastic, or a Datadog, they may be in a network operations center or a security operations center, they are struggling to get data into their tools, they're always at capacity, their tools are always at the redline, and they really wish they could do more for the business. They're kind of tired of being this department of no, where everybody comes to them and says, "hey, can I get this data in?" And they're like, "I wish, but you know, we're all out of capacity, and we wish we could help you, but we frankly can't right now." We help them by routing that data to multiple locations, we help them control costs by eliminating noise and waste, and we've been very successful at that with, you know, logos like a Shutterfly, and, I'm blanking on names, but we've been very successful in the enterprise, and we continue to be successful with major logos inside of government, inside of banking, telco, et cetera. >> So basically it used to be the old hyperscalers, the ones with the data-full problem; now everyone's full of data, and they've got to really expand capacity and have more agility and more engineering around contributions to the business. Sounds like that's what you guys are solving.
>> Clint, great to see you on theCUBE here, thanks for coming on. Quick plug for the company: you guys looking for hiring, what's going on? Give a quick update, take 30 seconds to give a plug. >> Yeah, absolutely. We are absolutely hiring, cribl.io/jobs, we need people in every function from sales, to marketing, to engineering, to back office, G&A, HR, et cetera. So please check out our job site. If you are interested in learning more you can go to cribl.io. We've got some great online sandboxes there which will help you educate yourself on the product, our documentation is freely available, you can sign up for up to a terabyte a day on our cloud, go to cribl.cloud and sign up free today. The product's easily accessible, and if you'd like to speak with us we'd love to have you in our community, and you can join the community from cribl.io as well. >> All right, Clint Sharp, co-founder and CEO of Cribl, thanks for coming on theCUBE. Great to see you, I'm John Furrier, your host, thanks for watching. (upbeat music)

Published Date : Mar 31 2022



Bill Sharp, EarthCam Inc. | Dell Technologies World 2020


 

>> From around the globe, it's theCUBE, with digital coverage of Dell Technologies World, Digital Experience, brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the digital coverage. I'm Lisa Martin, and I'm excited to be talking with one of Dell Technologies' customers, EarthCam. Joining me is Bill Sharp, the senior VP of product development and strategy from EarthCam. Bill, welcome to theCUBE. >> Thank you so much. >> So talk to me a little bit about what EarthCam does. This is very interesting webcam technology. You guys have tens of thousands of cameras and sensors all over the globe. Give our audience an understanding of what you guys are all about. >> Sure thing. The world's leading provider of webcam technologies, content and services, we're leaders in live streaming and time-lapse imaging, with a primary focus in vertical construction. So a lot of these, the most ambitious, largest construction projects around the world where you see these amazing time-lapse movies, we're capturing all of that imagery basically around the clock. These cameras are sending all of that image content to us, and we're generating these time-lapse movies from it. >> You guys are headquartered in New Jersey, and I was commenting before we went live about your great background. So you're actually getting to be on site today? >> Yes, yes, we're live from our headquarters in Upper Saddle River, New Jersey. >> Excellent. So in terms of the types of information that you're capturing, I was looking at the website and saw, from a construction perspective, some of the big projects you guys have done: the Hudson Yards, the Panama Canal expansion, the 9/11 Museum. But you talked about one of the biggest focuses that you have is in the construction industry. In terms of what type of data you're capturing from all of these thousands of edge devices, give us a little bit of insight into how much data you're capturing per day, and how it gets from the edge, presumably back to your core data center, for editing. >> Sure, and it's not just construction, we're also in travel, hospitality, tourism, security, architectural engineering, basically any industry that needs high-resolution visualization of their projects or their performance or their, you know, product flow. So high-resolution documentation is basically our business. There are billions of files in the Isilon system right now. We are ingesting millions of images a month. We are also creating very high resolution panoramic imagery, where we're taking hundreds, and sometimes multiple hundreds, of very high resolution images and stitching these together to make panoramas that are up to 30 gigapixels, typically around 1 to 2 gigapixels. But that composite imagery represents millions of images per month coming into the storage system and then being stitched together into those composites. >> So millions of images coming in every month. You mentioned Isilon; talk to me a little bit about, before you were working with Dell EMC and PowerScale, how were you managing this massive volume of data? >> Sure. We've used a number of other enterprise storage systems, and really, nothing was as easy to manage as Isilon is. There were a lot of problems with overhead, the amount of time necessary from a systems administrator resource standpoint to manage all of that. And it's interesting with the amount of data that we handle, this being billions of relatively small files, you know, half a megabyte to a couple of megabytes each. It's an interesting data profile, which Isilon really is well suited for. >> So if we think about some of the massive changes that we've all been through in 2020, what are some of the changes that EarthCam has seen with respect to the needs of organizations? Or, you mentioned other industries, like travel and hospitality; since none of us can get to these great travel destinations, have you seen a big drive up in the demand and the need to process more data faster? >> Yeah, that's an interesting point with the pandemic. Obviously we had to pivot and move a lot of people to working from home, which we were able to do pretty quickly. But there's also an interesting opportunity that arose from this, where so many of our customers and other people also had to do the same, and there was an increased demand for our technology so people can remotely collaborate. They can work at a distance, they can stay at home and see what's going on at these project sites. So we really saw kind of an uptick in the need for our products and services. And we've also created some, basically, virtual travel applications. We have an application on the Amazon Fire TV, which is the number one app in the travel platform, so people can kind of virtually travel when they can't really get out there. So we've been kind of giving back to people that are having some issues with being able to travel around. We've done the fireworks on the Washington Mall and around the Statue of Liberty for July 4th, and this year we'll be webcasting New Year's in Times Square for our 25th year, actually. So again, helping people travel virtually and maintain connectivity with each other and with their projects, >> which is so essential during these times, where for the last six or seven months everyone has been trying to get a sense of community, and most of us just have the Internet. So I also heard you guys are available on Apple TV; someone will have to fire that up later and maybe virtually travel. But tell me a little bit about how working in conjunction with Dell Technologies and PowerScale has enabled you to manage this massive volume change you've experienced this year. Because, as you said, it's also about facilitating collaboration, which is largely online these days. >> Yeah, I mean, the great thing about working with Dell has been just our confidence in this infrastructure. Like I said, with the other systems we worked with in the past, we always found ourselves kind of second-guessing. Obviously resolutions are increasing, camera performance is increasing, streaming video, everything is constantly getting bigger and better, faster, more, and we're always innovating. We found ourselves on previous storage platforms having to really kind of go back and second-guess where we were at with it. With this Dell infrastructure, it's been fantastic; we don't really have to think about that as much. We just continue innovating, and everything scales as we need it to. It's much easier to work with. >> So you've got PowerScale at your core data center in New Jersey. Tell me a little bit about how data gets from these tens of thousands of devices at the edge back to your editors for editing, and how PowerScale facilitates faster editing, for example. >> Basically, you can imagine every one of these cameras, and it's not just cameras, we have mobile applications, we have fixed-position and robotic cameras, and there's all these different data acquisition systems we're integrating with, weather sensors and different types of telemetry. All of that data is coming back to us over the Internet, so these are all endpoints in our network. So that's constantly being ingested into our network and saved to Isilon. The big thing that's really been a timesaver working with the video editors is, instead of having to take that content and move it into an editing environment, where we have a whole team of award-winning video editors creating these time lapses, we don't need to keep moving that around. We're working natively on Isilon clusters. They're doing their editing, their subsequent edits, any time we have to update or change these movies as a project evolves; that all happens right there in that live environment, and the retention is there if we have to go back later on. All of our customers' data is really kept within that one area. It's consolidated, it's secure. >> I was looking at the Dell Tech website; there's a case study that you guys did, EarthCam with Dell Tech, saying that the video processing time has been reduced 20%. So that's a pretty significant improvement. I can imagine, with the volumes changing so much now, not only is that huge for your business, but for the demands that your customers have as well, depending on where those demands are coming from. >> Absolutely, and just being able to do that a lot faster and be more nimble allows us to scale. Again, speaking of this pandemic, we've actually added personnel, we've been hiring people. A lot of those people are working remotely, as we've stated before, and with the increase in business we have to continue to keep building on that, and this storage environment has been great. >> Tell me about what you guys really think about with respect to PowerScale in terms of data management, not storage management, and what that difference means to your business. >> Well, again, number one was really eliminating the amount of resources, the amount of time we have to spend managing it. We've almost eliminated any downtime of any kind. We have greater storage density, and we're able to have better visualization of how our data is being used and how it's being accessed, so as these things evolve, we really have good visibility into how the storage system is being used, in both our production and also our backup environments. It's really, really easy for us to make our business decisions as we innovate and change processes, having that continual visibility and really knowing where we stand. >> And you mentioned hiring folks during the pandemic, which is fantastic, but also being able to do things in a much more streamlined way with respect to managing all of this data. But I am curious, in terms of innovation and new product development, what have you been able to achieve, because you've got more resources, presumably, to focus on being more innovative rather than managing storage? >> Well again, we're always really pushing the envelope of what the technology can do. As I mentioned before, we're getting things into, you know, 20 and 30 gigapixels; you know, people are talking about megapixel images and we're stitching hundreds of these together. We're just really changing the way imagery is used, both in the time lapse and also just in the archival process. A lot of these things we've done with the interior: you know, we have this virtual reality product where you can walk through and see in the 360 bubble. We're taking that imagery and we're combining it with these BIM models, so we're actually taking the 3D models of the construction site and combining them with the imagery. And we can start doing things to visualize progress and different things that are happening on the site, look for clashes or things that aren't built like they're supposed to be built, things that maybe aren't done on the proper schedule or things that are maybe ahead of schedule, doing a lot of things to save people time and money on these construction sites. We've also introduced AI and machine learning applications directly into the workflow in this storage environment. So we're detecting equipment and people and activities on the site, where a lot of that would have been difficult with our previous infrastructure; it really is seamless working with Isilon now. >> I imagine, by being able to infuse AI and machine learning, you're able to get insight faster, to be able to either respond faster to those construction customers, for example, or alert them if perhaps something isn't going according to plan. >> A lot of it's about schedule. It's about saving money, about saving time, and again, with not as many people traveling to the sites, they really just have constant visualization of what's going on day to day. We're detecting things like different types of construction equipment and things that are happening on the site. We're partnering with people that are doing safety analytics and things of that nature. So these are all things that are very important to construction sites. >> What are some of the things, as we are rounding out the calendar year 2020, that you're excited about going forward in 2021? That EarthCam is going to be able to get into and to deliver? >> It's just more and more people really, finally, seeing the value. I mean, I've been doing this for 20 years, and it's amazing how we're constantly seeing new applications and more people understanding how valuable these visual tools are. That's just a fantastic thing for us, because we're really trying to create better lives through visual information. We're really helping people with things they can do with this imagery. That's what we're all about, and that's really exciting to us: in a very challenging environment right now, people are recognizing the need for this technology and really starting to put it on a lot more projects. >> Well, you can kind of consider it an essential service, whether it's a construction company that needs to manage and oversee their projects, making sure they're on budget and on schedule, as you said, or maybe even just the essentialness of helping folks from any country in the world connect with a favorite travel location. From an emotional perspective, I think the essentialness of what you guys are delivering is probably even more impactful now, don't you think? >> Absolutely, and again, it's about connecting people when they're at home. Recently we webcast the president's speech from the Flight 93 9/11 observance at the memorial, where only the immediate families were allowed to travel; we webcast that so people could see it around the world. We have documented, again, some of the biggest construction projects out there; the new Raiders stadium was one of the recent ones. It's delivering this kind of flagship content. The Wall Street Journal has used some of our content recently to really show the things that have happened during the pandemic in Times Square. We have these cameras around the world. So again, it's really bringing awareness, letting people virtually travel and share and really remain connected during this challenging time, and we're seeing a real increase in demand and in the traffic in those areas as well. >> I can imagine some of these things that you're doing and achieving now are going to become permanent, not necessarily artifacts of COVID-19, as you now have the opportunity to reach so many more people, and probably the opportunity to help industries that might not have seen the value of this type of video to reach consumers they could never reach before. >> Yeah, I think the whole nature of business and communication and travel and everything is really going to be changed from this point forward. People are looking at things very, very differently and seeing that the technology really can help with so many different areas, so it's just going to be a different kind of landscape out there, we feel, and that's really continuing to be seen in the uptick in our business and in how many people are adopting this technology. We're developing a lot more partnerships with other companies, we're expanding into new industries, and again, we're confident that the current platform is going to keep up with us and help us really scale and evolve as these needs grow. >> It sounds to me like you have the foundation with Dell Technologies, with PowerScale, to be able to facilitate the massive growth that you're seeing, and to scale in the future; you've got that foundation, you're ready to go. >> Yeah, we've been using the system for five years already. We've already added capacity, we can add capacity on the fly, and we really haven't hit any limits on what we can do. It's almost infinitely scalable and highly redundant, which gives everyone a real sense of security on our side. And, you know, we can just keep innovating, which is what we do, without hitting any technological limits with our partnership. >> Excellent. Well, Bill, I'm going to let you get back to innovating for EarthCam. It's been a pleasure talking to you. Thank you so much for your time today. >> Thank you so much. It's been a pleasure. >> For Bill Sharp and Lisa Martin, you're watching theCUBE's digital coverage of Dell Technologies World 2020. Thanks for watching.
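The data profile Bill describes, billions of small files at roughly half a megabyte to a couple of megabytes each and millions of images a month, implies ingest rates that are easy to estimate. The monthly image count below is an assumption chosen only to show the arithmetic, not a figure from EarthCam.

```python
# Back-of-envelope ingest estimate from the stated file-size range.
images_per_month = 3_000_000            # assumed; "millions" per the interview
avg_size_mb = (0.5 + 2.0) / 2           # midpoint of 0.5-2 MB per file

monthly_tb = images_per_month * avg_size_mb / 1_000_000
daily_gb = images_per_month * avg_size_mb / 30 / 1_000
print(f"~{monthly_tb:.2f} TB/month, ~{daily_gb:.0f} GB/day")
```

A few terabytes a month of half-megabyte files is exactly the many-small-files workload the interview says scale-out NAS is suited for: the challenge is file count and metadata, not raw throughput.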

Published Date : Oct 22 2020



Bill Sharp V1


 

>> Announcer: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World, digital experience. Brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the digital coverage. I'm Lisa Martin, and I'm excited to be talking with one of Dell Technologies' customers, EarthCam. Joining me is Bill Sharp, the senior VP of product development and strategy from EarthCam. Bill, welcome to theCUBE. >> Thank you so much. >> So talk to me a little bit about what EarthCam does. This is very interesting webcam technology. You guys have tens of thousands of cameras and sensors all over the globe. Give our audience an understanding of what you guys are all about. >> Sure thing. The world's leading provider of webcam technologies, you mentioned content and services, we're leaders in live streaming, time-lapse imaging, primary focus in the vertical construction. So with a lot of these, the most ambitious, largest construction projects around the world that you see these amazing time-lapse movies, we're capturing all of that imagery basically around the clock, these cameras are sending all of that image content to us and we're generating these time-lapse movies from it. >> You guys are headquartered in New Jersey. I was commenting before we went live about your great background. So you're actually getting to be onsite today? >> Yes, yes. We're live from our headquarters in Upper Saddle River, New Jersey. >> Excellent, so in terms of the types of information that you're capturing, so I was looking at the website, and see from a construction perspective, some of the big projects you guys have done, the Hudson Yards, the Panama Canal expansion, the 9/11 Museum. But you talked about one of the biggest focuses that you have is in the construction industry.
In terms of what type of data you're capturing from all of these thousands of edge devices, give us a little bit of an insight into how much data you're capturing per day, how it gets from the edge, presumably, back to your core data center for editing. >> Sure, and it's not just construction. We're also in travel, hospitality, tourism, security, architecture, engineering, basically any industry that need high resolution visualization of their projects or their performance or their product flow. So it's high resolution documentation is basically our business. There are billions of files in the Isilon system right now. We are ingesting millions of images a month. We are also creating very high resolution panoramic imagery where we're taking hundreds and sometimes multiple hundreds of images, very high resolution images and stitching these together to make panoramas that are up to 30 gigapixel sometimes. Typically around one to two gigapixel but that composite imagery represents millions of images per month coming into the storage system and then being stitched together to those composites. >> So millions of images coming in every month, you mentioned Isilon. Talk to me a little bit about before you were working with Dell EMC and PowerScale, how were you managing this massive volume of data? >> Sure, we've used a number of other enterprise storage systems. It was really nothing was as easy to manage as Isilon really is. There was a lot of problems with overhead, the amount of time necessary from a systems administrator resource standpoint, to manage that. And it's interesting with the amount of data that we handle, being billions of relatively small files. They're, you know, a half a megabyte to a couple of megabytes each. It's an interesting data profile which Isilon really is well suited for. 
>> So if we think about some of the massive changes that we've all been through in the last, in 2020, what are some of the changes that EarthCam has seen with respect to the needs for organizations, or you mentioned other industries like travel, hospitality, since none of us can get to these great travel destinations, have you seen a big drive up in the demand and the need to process more data faster? >> Yeah, that's an interesting point with the pandemic. I mean, obviously we had to pivot and move a lot of people to working from home, which we were able to do pretty quickly, but there's also an interesting opportunity that arose from this where so many of our customers and other people also have to do the same. And there is an increased demand for our technology. So people can remotely collaborate. They can work at a distance, they can stay at home and see what's going on in these project sites. So we really saw kind of an uptick in the need for our products and services. And we've also created some basically virtual travel applications. We have an application on the Amazon Fire TV which is the number one app in the travel platform, and people can kind of virtually travel when they can't really get out there. So it's, we've been doing kind of giving back to people that are having some issues with being able to travel around. We've done the fireworks at the Washington Mall around the Statue of Liberty for July 4th. And this year we'll be webcasting New Year's in Times Square for our 25th year, actually. So again, helping people travel virtually and maintain connectivity with each other, and with their projects.
But tell me a little bit about how working in conjunction with Dell Technologies and PowerScale. How has that enabled you to manage this massive volume change that you've experienced this year? Because as you said, it's also about facilitating collaboration which is largely online these days. >> Yeah, and I mean, the great things of working with Dell has been just our confidence in this infrastructure. Like I said, the other systems we've worked with in the past we've always found ourselves kind of second guessing. We're constantly innovating. Obviously resolutions are increasing. The camera performance is increasing, streaming video is, everything is constantly getting bigger and better, faster, more, and we're always innovating. We found ourselves on previous storage platforms having to really kind of go back and look at them, second guess where we're at with it. With the Dell infrastructure it's been fantastic. We don't really have to think about that as much. We just continue innovating, everything scales as we need it to do. It's much easier to work with. >> So you've got PowerScale at your core data center in New Jersey. Tell me a little bit about how data gets from these tens of thousands of devices at the edge, back to your editors for editing, and how PowerScale facilitates faster editing, for example. >> Well, basically you can imagine every one of these cameras, and it's not just cameras. It's also, you know, we have 360 virtual reality kind of bubble cameras. We have mobile applications, we have fixed position and robotic cameras. There's all these different data acquisition systems we're integrating with weather sensors and different types of telemetry. All of that data is coming back to us over the internet. So these are all endpoints in our network. So that's constantly being ingested into our network and saved to Isilon. 
The big thing that's really been a time saver working with the video editors is instead of having to take that content, move it into an editing environment where we have a whole team of award-winning video editors creating these time lapses. We don't need to keep moving that around. We're working natively on Isilon clusters. They're doing their editing there, and subsequent edits. Anytime we have to update or change these movies as a project evolves, that's all, can happen right there on that live environment. And the retention is there. If we have to go back later on, all of our customers' data is really kept within that one area, it's consolidated and it's secure. >> I was looking at the Dell Tech website, and there's a case study that you guys did, EarthCam did with Dell Tech saying that the video processing time has been reduced 20%. So that's a pretty significant increase. I can imagine with the volumes changing so much now, not only is huge to your business but to the demands that your customers have as well, depending on where those demands are coming from. >> Absolutely. And just being able to do that a lot faster and be more nimble allows us to scale. We've added actually, again, speaking of during this pandemic, we've actually added personnel, we've been hiring people. A lot of those people are working remotely as we've stated before. And it's just with the increase in business, we have to continue to keep building on that, and this storage environment's been great. >> Tell me about what you guys really kind of think about with respect to PowerScale in terms of data management, not storage management, and what that difference means to your business. >> Well, again, I mean, number one was really eliminating the amount of resources. The amount of time we have to spend managing it. We've almost eliminated any downtime of any kind. We have greater storage density, we're able to have better visualization on how our data is being used, how it's being accessed. 
So as these things are evolving, we really have good visibility on how the storage system is being used in both our production and also in our backup environments. It's really, really easy for us to make our business decisions as we innovate and change processes, having that continual visibility and really knowing where we stand. >> And you mentioned hiring folks during the pandemic, which is fantastic, but also being able to do things in a much more streamlined way with respect to managing all of this data. But I am curious in terms of innovation and new product development, what have you been able to achieve? Because you've got more resources presumably to focus on being more innovative rather than managing storage. >> Well, again, it's, we're always really pushing the envelope of what the technology can do. As I mentioned before, we're getting things into, you know, 20 and 30 gigapixels, people are talking about megapixel images, we're stitching hundreds of these together. We're just really changing the way imagery is used both in the time lapse and also just in archival process. A lot of these things we've done with the interior, we have this virtual reality product where you can walk through and see in a 360 bubble, we're taking that imagery and we're combining it with these BIM models. So we're actually taking the 3D models of the construction site and combining it with the imagery. And we can start doing things to visualize progress, and different things that are happening on the site, look for clashes or things that aren't built like they're supposed to be built, things that maybe aren't done on the proper schedule or things that are maybe ahead of schedule, doing a lot of things to save people time and money on these construction sites. We've also introduced AI and machine learning applications directly into the workflow in the storage environment. 
So we're detecting equipment and people and activities in the site where a lot of that would have been difficult with our previous infrastructure. It really is seamless and working with Isilon now. >> I imagine by being able to infuse AI and machine learning, you're able to get insights faster, to be able to either respond faster to those construction customers, for example, or alert them if perhaps something isn't going according to plan. >> Yeah, a lot of it's about schedule, it's about saving money, about saving time. And again, with not as many people traveling to these sites, they really just have to have constant visualization of what's going on day to day. We're detecting things like different types of construction equipment and things that are happening on the site. We're partnering with people that are doing safety analytics and things of that nature. So these are all things that are very important to construction sites. >> What are some of the things as we are rounding out the calendar year 2020, what are some of the things that you're excited about going forward in 2021, that EarthCam is going to be able to get into and to deliver? >> Just more and more people really finally seeing the value. I mean I've been doing this for 20 years and it's just, it's amazing how we're constantly seeing new applications and more people understanding how valuable these visual tools are. That's just a fantastic thing for us because we're really trying to create better lives through visual information. We're really helping people with the things they can do with this imagery. That's what we're all about. And that's really exciting to us in a very challenging environment right now is that people are recognizing the need for this technology and really starting to put it on a lot more projects. 
>> Well, you can kind of consider it an essential service whether or not it's a construction company that needs to manage and oversee their projects, making sure they're on budget, on schedule, as you said, or maybe even just the essentialness of helping folks from any country in the world connect with a favorite travel location, or (indistinct) to help from an emotional perspective. I think the essentialness of what you guys are delivering is probably even more impactful now, don't you think? >> Absolutely. And again about connecting people when they're at home, and recently we webcast the president's speech from the Flight 93 9/11 observation from the memorial, there was something where only the immediate families were allowed to travel there. We webcast that so people could see that around the world. We've documented, again, some of the biggest construction projects out there, the new Raiders stadium was one of the recent ones, just delivering this kind of flagship content. Wall Street Journal has used some of our content recently to really show the things that have happened during the pandemic in Times Square. We have these cameras around the world. So again, it's really bringing awareness. So letting people virtually travel and share and really remain connected during this challenging time. And again, we're seeing a real increased demand in the traffic in those areas as well. >> I can imagine some of these things that you're doing that you're achieving now are going to become permanent not necessarily artifacts of COVID-19, as you now have the opportunity to reach so many more people and probably the opportunity to help industries that might not have seen the value of this type of video to be able to reach consumers that they probably could never reach before. >> Yeah, I think the whole nature of business and communication and travel and everything is really going to be changed from this point forward. 
It's really, people are looking at things very, very differently. And again, seeing that the technology really can help with so many different areas that it's just, it's going to be a different kind of landscape out there we feel. And that's really continuing to be seen as on the uptick in our business and how many people are adopting this technology. We're developing a lot more partnerships with other companies, we're expanding into new industries. And again, you know, we're confident that the current platform is going to keep up with us and help us really scale and evolve as these needs are growing. >> It sounds to me like you have the foundation with Dell Technologies, with PowerScale, to be able to facilitate the massive growth that you were saying and the scale in the future, you've got that foundation, you're ready to go. >> Yeah, we've been using the system for five years already. We've already added capacity. We can add capacity on the fly, really haven't hit any limits in what we can do. It's almost infinitely scalable, highly redundant. It gives everyone a real sense of security on our side. And you know, we can just keep innovating, which is what we do, without hitting any technological limits with our partnership. >> Excellent, well, Bill, I'm going to let you get back to innovating for EarthCam. It's been a pleasure talking to you. Thank you so much for your time today. >> Thank you so much. It's been a pleasure. >> For Bill Sharp, I'm Lisa Martin, you're watching theCUBE's digital coverage of Dell Technologies World 2020. Thanks for watching. (calm music)

Published Date : Oct 6 2020


Jordan Martin & Eyvonne Sharp, Network Collective | Cisco Live 2018


 

>> Live from Orlando, Florida, it's theCUBE! Covering Cisco Live 2018. Brought to you by Cisco, NetApp, and theCUBE's ecosystem partners. (bubbly music) >> Hello everyone and welcome back to the live coverage, here with theCUBE, here in Orlando, Florida, for Cisco Live 2018. I'm John Furrier, my co-host Stu Miniman, for the next three days of wall-to-wall live coverage. We have the co-founders of the Network Collective here Eyvonne Sharp and Jordan Martin, thanks for joining us today, Network Collective. Sounds great, sounds like it's a collection of networks, so what's goin' on, what do you guys do? First let's talk about what you guys do, obviously you guys do a lot of podcasting, a lot of diggin' into the tech, what is Network Collective? >> Network Collective is a video podcast that Jordan and I started. We really felt like there was a need to build community around network engineers, and that really a lot of network engineers are very isolated in their job, there's only a couple people where they work, they know what they know, and they don't have a lot of peers. And so we see Network Collective as a way to bring network engineers together to learn about their craft, and also share with one another in a community that's more than a once-a-year conference like Cisco Live. >> That's awesome, I love the video podcasting, more than ever now the need for kind of peer review, conversations around learning because the world's shifting. In the keynote today the CEO of Cisco talked about the old way, and the new networks that are coming. We've been talking about no perimeters for years, but now security threats are real, gotta keep that strain solid, keep managing that, but also bring in a new kind of a cloud-hybrid, multi-cloud world, requires real skill adoption, new things. What're you guys seeing, what's your thoughts, and what's some of the things that you guys are exploring on your video podcast around these trends? >> You wanna take that Jordan? >> Sure. 
So, I think the rate of change of networking is faster than it's been in a very long time, so we've, we've had to, we've kinda not had a whole lotta churn in the things we've had to learn, I mean it's been complex and difficult, and there's been challenges in getting up to speed, but, with the transition to more developer-focused, and developer-centric model of deploying equipment, it is--and the integration of cloud into what is essentially our infrastructure. It's changing so much, that it's good to get together and have those conversations because it's very difficult to navigate this by yourself, it's, you know, it's a lot to learn. >> You know, I wanna push back little bit on that 'cause, you know, I've been in networking my whole career. When I used to, I used to speak at Interop, and I'd put down, you know, here's the rate of change, and here are the decades. And it's like, you know, okay 10 gig, here's where the standards are, here's where the first pieces are, it's gonna take years for us to deploy this. I don't disagree that change is happening overall faster, but how are people keeping up with it? Are the enterprises that the networking people work for allowing them to roll out some of these changes a little faster? So, give us a little insight as to what you're hearing from the community? >> I think, I mean technology, I mean, we've got Moore's law, right? I mean technology has always been changing rapidly, I think the thing that is different is the way network engineers need to interact with their environment. Five to 10 years ago, you could still operate in an environment where you still did a lot of static routing, for example. 
Now, with the cloud, with workloads moving around, there is no way to run a multi-cloud enterprise network without some serious dynamic routing chops, whether that's BGP, or EIGRP, or OSPF, or all of the above, and a lot of network engineers are still catching up with some of those technologies, they're used to being able to do things the way that they've always done them. And I think there needs to be a mind shift where we start thinking about things dynamically, in that, you know, an IP address may not live in the same geography, it may move from on-premises to the cloud to another cloud, and we have to be able to build networks that are resilient enough, and flexible enough, to be able to support that kind of mobility. >> Yeah, I love that Eyvonne. Right, you talked about the multi-cloud world. Jordan, a follow up question I have for you, how does the networking person look at things when there's a lot of the networking that are really outside of their control when you talk about really, the cloud world today. >> Sure, and before we jump there, I wanna say, the change that we're talking about though, is a bit different than what's happened. So, what we've seen traditionally would be speeds and feeds, but what's changing is the way we operate networks, and that hasn't changed a lot. Now, as for, you know, how do you view it when you don't-- well that's a challenge that everyone is facing. We see networking getting further and closer to the host. 
And, when we see networking inside of VMware, I mean this has been something that's been around for, you know, a while now, but, we're just getting comfortable with the idea of hypervisor, and now we've got, you know, we've got containers, and we've got networking in third party services that we don't necessarily have access to, we don't have full control over, and it's a completely different nomenclature we have to relearn all the terms because of course, no one reused the stuff we were familiar with, because this all started from a developer mindset. It all makes sense where it came from, but now we're catching up. And so it's, the challenge is not only understanding what needs to be done in all these different environments, but also understanding, just the terminology, and what is means. What is a VPC? Well VPC means something completely different to a networker that has never touched Amazon, than it does to somebody who has worked at Amazon completely there's overlapping terms and confusion around that and it's just a matter of, I think you need some broader coordination. There's been discussion about something like a full stack engineer, I think that's a pretty rare thing, I don't know how, how likely it is that you're gonna be expert level in all different disciplines, but you do need, you do need cross-team collaboration more than you have traditionally. We've had these silos, those no longer work in a multi-cloud world, it just doesn't, just doesn't work anymore. 
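The mobility Eyvonne describes, a prefix that lives on-premises one day and in a cloud VPC the next, is ultimately a routing-table problem: entries have to be advertised and withdrawn at runtime rather than configured statically. A toy longest-prefix-match table sketches the mechanic. This is illustrative only; real networks learn these entries via BGP or an IGP, and the next-hop names here are made up:

```python
import ipaddress
from typing import Optional

class RouteTable:
    """Toy longest-prefix-match table. Real tables are populated
    dynamically by protocols like BGP/OSPF, not by hand."""
    def __init__(self):
        self.routes = {}  # ip_network -> next-hop label

    def advertise(self, prefix: str, next_hop: str):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def withdraw(self, prefix: str):
        self.routes.pop(ipaddress.ip_network(prefix), None)

    def lookup(self, addr: str) -> Optional[str]:
        ip = ipaddress.ip_address(addr)
        matches = [p for p in self.routes if ip in p]
        if not matches:
            return None
        # Most specific (longest) prefix wins.
        return self.routes[max(matches, key=lambda p: p.prefixlen)]

rt = RouteTable()
rt.advertise("10.0.0.0/8", "datacenter-core")    # broad on-prem aggregate
rt.advertise("10.42.7.0/24", "aws-vpc-peering")  # workload moved into a VPC
print(rt.lookup("10.42.7.15"))  # → aws-vpc-peering (longest prefix wins)
rt.withdraw("10.42.7.0/24")     # workload moves back; the route is withdrawn
print(rt.lookup("10.42.7.15"))  # → datacenter-core
```

The static-routing mindset Eyvonne contrasts this with would hard-code the /24 and break the moment the workload moved; the dynamic model converges on its own when routes are advertised or withdrawn.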
>> One of the things that came out with the keynote was, the networks next act was the main theme, as they talked about this new way, I mean, they use secure, intelligent platform, you know, for digital business, you know, level one marketing there, more complex than a few years ago and then the onslaught of new things coming, AI, augmented reality, machine learning, and I'd put blockchain in there, I thought they would put blockchain in the keynote to hype it up a bit, but, then they introduced the multi-cloud concept at that point. So in the keynote, multi-cloud didn't come up until the next act came up, so obviously that is a key part of what we're seeing, we saw Google Cloud's CEO Diane Greene come on. How are network engineers looking at the multi-cloud? 'Cause, I mean, how are they, toe in the water, are they puttin' the toe in the water? (chuckles) What is multi-cloud to them? Because, I mean, we talk about Kubernetes all the time, from an app standpoint, but, networks have been locked down for many, many years, you talked about some of the chops they need, what are those next chops for a network engineer when it comes to taking the road to multi-cloud? >> Sure, I mean I think if you're going to do any kind of multi-cloud interconnect, you've gotta know BGP. But at the same time, you need to understand some of those fundamental concepts that, the reason developers are pushing to the cloud, is not cost, although I've heard that a lot, that you know this cloud thing can't be cheaper, but it's really about enabling the business to move faster, and so we need to start thinking that way as network engineers more too.
We have I think historically, our mentality, we've even trained our network engineers to go slow, to be very deliberate, to plan out your changes, to have these really complex change windows, and we need to start thinking differently, we need to think about how to make modular changes, and to be able to allow our workloads to move and shift in ways that don't provide a lot of risk, and I think that's a new way of thinking for networking engineers. >> Yeah. >> Well, we're sitting here in the DevNet zone, and that was one of the highlights in the keynote, talking that there are over 500 thousand developers now registered on this platform that they've built here. Bring us inside a little bit, you know, is it, what was it, John, DevNet sec? There's all of these acronyms as to, you know, how developers-- >> Yeah, NetApp was their big thing. >> how the network and the operations go together. What are you seeing, what's working, what's some of the challenges? >> I think this is a shift of necessity. As we see more problems solved in the network, we're adding complexity at that layer that hasn't been seen before, before it was routing, we just had to get traffic from one place to another, then we added security, so okay we have security, but we, we create these choke points in security where we can send all the traffic through this place and just like, we can use filtering, or some sort of identification there. Well then we start moving to cloud and we talk about dynamic workloads, and we talk about things that could just shift anywhere in the world, well now our choke point is gone, and so now we have to manage all the pieces, all the solutions, all the things we're putting into the network, but we've gotta manage it in a distributed way. 
And so that's where I think the automation's, why it's such a big push right now is because, we have to do it that way, there's no way to manually put these features in the network and be able to manage them at any type of scale without automating that process, and that's why, I think, we see the growth of DevNet, I mean, if you've been here the past few years, it's gone from a little thing to a much, much bigger thing, there's a lot of people looking at automation specifically, that 500 thousand number is, rather large. Really impressive that there's that many people looking at networks from a programmatic way. But in the meantime, I think that there's also, a bit of a divide here, 'cause I think that there's, a lot of people are looking this way, but I think there's, we talk about this on the show pretty often, there's really two types of networks. There's the networks at companies where, it really is, they see their network as a competitive advantage, and those places are definitely looking at automation, and they're looking at multi-cloud. But we also see another trend in networking, and that is to, I want some simple, push button, just put it out, get packets A to B, and I don't wanna mess with it, I don't want expensive engineers on staff, I don't want-- So I feel like the industry's almost coming to a divide. That we're gonna have two different types of networks, we're gonna have the network for the place that the just want packets going A to B, and they really don't want much, and the other side of that divide is gonna be very complex networks that have to be managed with automation. >> Talk about that other divide, it's between, I mean, I love that conversation, because, that almost kinda comes into like the notion of networks as a service. 
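The automation Jordan is pointing at usually starts with generating device configuration from a single source of intent instead of typing it per box. A minimal sketch of the idea follows; the device name, port layout, and IOS-style syntax are hypothetical, chosen only to show intent rendered into config:

```python
# Intent: what we want, expressed once as data. Names and VLANs are
# illustrative, not from any real deployment.
INTENT = {
    "edge-1": [
        {"interface": "GigabitEthernet0/1", "vlan": 110, "desc": "rack1-uplink"},
        {"interface": "GigabitEthernet0/2", "vlan": 120, "desc": "rack2-uplink"},
    ],
}

def render_config(device: str) -> str:
    """Render one device's interface config from the intent data."""
    lines = []
    for port in INTENT[device]:
        lines += [
            f"interface {port['interface']}",
            f" description {port['desc']}",
            f" switchport access vlan {port['vlan']}",
        ]
    return "\n".join(lines)

print(render_config("edge-1"))
```

Because every device's config is derived from the same data, a distributed feature change becomes one edit to the intent plus a regenerate-and-push, which is what makes the choke-point-free networks Jordan describes manageable at scale.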
Because if you wanna have less expensive people there, but yet have the reliability, how do those companies grow and maintain the robust resiliency of these networks, and have the high performance, take advantage of the goodness, well what does it matter? I mean, how are they, how is the demographic of the network evolving, 'cause, either they're stunted for growth, or they have an enabler. How do you view that, how do you take that apart? >> I think we have to, we have to look at our business needs, and evaluate the technologies that we use appropriate to that. There are times for complexity, I think we've pushed, as Jordan very eloquently described, a lot of complexity down into the network, and we're working, I think, now, as the entire industry to maybe back some of that out. But one of the things that I hear a lot when we talk about automation and things like DevNet and developers is, I believe a lot of network engineers are afraid their jobs are going away, but if you look at what's going on, we have more connected devices than we've ever had before, and that's not gonna stop, and all of those connected devices need networks. And so really what's happened is we've reached a complexity inflection point, which means we have to have better tools, and I think that's really what we're talking about is, is how do we, instead of doing everything manually, how do we look at the network as a system, and manage it as a system, with tools to manage it that way. >> Your point about that jobs going away, I love that comment because, that's a sunk fallacy because, there's so much other stuff happening, talk about security, so the basic question, I mean first of all, guys your job's not goin' away! (laughs) Check! It's only a, well, kind of, you don't stay current, so it's all the learning issues, the progression for learning. 
But really it's the role of the network engineers and the people running the networks, I mean, I remember back in, the old way, the network guys were the top dogs, they were kickin' butt, takin' names, they ran the show, a lot was riding on the network. But as we go into this new dynamic environment, what are the roles of the network? Is it security? I mean, what are some of the things that people are pivoting to, or laddering up to from a roles standpoint that you see, in terms of a progression of new discovery, new skills. Is there a path, have you seen any patterns, for the growth of the person? >> I really think network engineers need to at least understand what the cloud is and why it exists. And they need to understand more about the applications and what they mean to the business. I think we have created a divide sometimes where, you know, my job is just to get packets from point A to B, and I don't really need to understand what we do as an organization, and I think that those days are going to be behind us, we need to understand, you know, what applications are critical, why do we need to build the systems the way that we need to build them and use that information from the business? So I think for network engineers, I think cloud security, understanding applications, and learning the business and being able to talk that language is what's gonna be most valuable to them in their career in the future. >> Yeah, we've heard the term many times, I'm a plumber! Well, I mean, implying that moving packets from A to B. It gets interesting with containers. Policy-based stuff has been known concept in networking, QOS, these are things that are well known, but when we start lookin' at the trends up the stack, we're seeing that kinda thing goin' on, service meshes for instance, they talk about services from a policy standpoint, up the stack. 
That's always been the challenge for the Ciscos over the past 20 years is, how to move up the stack, should they move up the stack, but I think now seems to be a good time. Your reaction guys, to that notion of moving up the stack while maintaining the purity and the goodness of good networking. (laughing) >> I think that's the big challenge right now, right? The more we mesh it all together and we don't, we don't really define the layers that we've traditionally used, the more challenging it is to have experts in that domain, because the domain just grows so incredibly large. And so there's gotta be a balance here, and I think we're trying to find that, I don't know that we've hit that yet, you know where, where we understand where networking fits into all these pieces, how far into the host, or how far into the application does networking go, we've seen certain applications not using the host TCP/IP stack, right, just to find some sort of performance benefit and it, to me that seems like we're pushing really far into this idea of, you know, well if we don't have standards and define places where these things exist, it's gonna be very much the wild, wild west for a while, until we figure out where everything's going to be. And so I think it just presents challenges and opportunities I don't know that we have the answer about how far it goes yet. >> Well let me ask you a question, a good point by the way, we agree, it's evolving, it's a moving train as they say. But as, people that might be watching that might be a Cisco customer or someone deploying a lot of Cisco networks and products in his portfolio, what's your advice to them, what're you hearing that's a good first three steps to take today? Obviously the show's goin' on here, multi-cloud is in center of the focus, this new network age is here for the CEO. What are some things that people can do now that are safe and good first steps to continue on the journey to whatever this evolves into. 
>> Well I think as you're building your network you need to think about modularization, you need to think about how to build it in small, manageable pieces, and, even if you're not ready to take the automation step today, you need to think about what that's gonna look like in the future, so, if you really want to automate your network you have to have consistency, consistent policy, consistent configuration across your environment, and it's never too late to start that, or too early to start that, right? And so you can think about, if I wanted to take these 10 sites and I wanted to manage them as one, how would I build it? And you can use that kind of mental framework to help guide the decisions you make, even if you're not ready to jump into a full scale automation from soup to nuts. And also I think, it's important to start playing around with automation technology, there are all kinds of tools to do that, and you can start in an area that's either dev, or QA, that's not gonna be production impacting, but you really need to wrap your hands around some of the tools that exist to automate, and start playing with those. >> Stay where you're comfortable, get in, learn, get hands on. Jordan, your thoughts? >> Yeah, so, I was just over here like nodding my head furiously, 'cause everything she said, I 100% agree with. >> Ditto. (laughing) >> Yes, ditto, exactly. The only thing I would add is that we think about automation a lot in the method of config push. Right, the idea of configuring a device in an automated way, but that's not the only avenue for automation. Start by pulling information from your devices, it is a really, really low risk way to start looking at your network programmatically, is to be able to go out to all of your switches, all of your routers, all of your networking devices and pull the same information and correlate that data and get yourself some information with a broader view. 
Does nothing to affect the change or state of your network, but you are now starting to look at your network that way. And I will reiterate Eyvonne's point, you cannot automate a network if it's not repeatable. If every design, every topology, every location is a special snowflake, you will never be able to automate anything because you're gonna have a hundred unique automation scripts to run a hundred unique sites. >> You'd be chasin' your tail big time. >> You'd be chasin' your tail, and so it is critical, if you're not in that state now, what you need to do is start looking at how to modularize, and make repeatable config blocks in your network. >> Well guys, thanks for comin' on, Eyvonne, Jordan, thanks for comin' on, appreciate you taking the time. Final question for ya, I know it's day one, we got two more days of live coverage here, but, if you can kinda project, and in your mind's eye see the development of the show, what's bubbling up as the most important story that's gonna come out of Cisco Live if you had to look at some early indications from the keynotes and some of the conversations in the hallway, what do you think is the biggest story this year for Cisco Live? >> I think for me personally, I wanna understand what Cisco's cloud strategy looks like, to know where they're going with the cloud and how they're going to help stitch together all the different services that we have. The clouds are becoming their own monolith, they each do things their own way, and still the network is what is stitching all those services together to provide access. And so I think it's important to understand that strategy, and where Cisco's goin'. >> Jordan, your thoughts? >> My, what I'm really looking for from the show this year is how Cisco is gonna make orchestration approachable. We've seen this process of automation where only the hardcore programmers could do it, then we got some tools. 
And these tools, as we watch as more of Cisco's product platforms start to integrate with each other, I think the key piece for enterprise shops that don't have that type of resource on staff is what tools are they gonna give them to make this orchestration, between the WAN and the enterprise campus, and into the cloud and in the data center, how do we tie all that together and make that like a nice, seamless way to operate your network? >> Hey, what a great opportunity to have another podcast called under the hood, see what's goin' on, lot of chops needed, thanks for comin' on, give a quick plug for the address for your podcast, where do we find it, what's the site, Network Collective, obviously you guys are doing great things. Share the coordinates. >> Sure, you can find us at thenetworkcollective.com we usually use the hashtag: #NetworkCollective I'm on Twitter @SharpNetwork. Jordan, you wanna tell people where to find you? >> Sure, @bcjordo on Twitter, and obviously if you wanna interact with Network Collective, @NetCollectivePC on Twitter as well. >> Alright, thanks so much for the commentary, great to have a little shared, little podcast here, live on theCUBE, here in Orlando, I'm John Furrier, with Stu Miniman for our coverage at Cisco Live 2018, stay with us for more, we've got two more days of this, got day one just gettin' started, be right back after this short break. (bubbly music)
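Eyvonne and Jordan's closing advice, pull the same facts from every device before you ever push config, and keep configuration in repeatable blocks, can be sketched in a few lines. This is a minimal illustration, not their tooling: the inventory and the `fetch_facts()` stub are hypothetical stand-ins for an SSH or REST collection step against real devices.

```python
# Read-only network automation sketch: pull the same facts from every
# device, normalize them, and correlate -- no config is ever pushed.
# INVENTORY and fetch_facts() are hypothetical stand-ins; a real script
# would gather this data over SSH or a device REST API.

INVENTORY = ["edge-sw-01", "edge-sw-02", "core-rt-01"]

def fetch_facts(device):
    """Stand-in for an SSH/API call that returns per-device facts."""
    canned = {
        "edge-sw-01": {"os_version": "16.9.4", "uptime_days": 120},
        "edge-sw-02": {"os_version": "16.9.4", "uptime_days": 3},
        "core-rt-01": {"os_version": "16.6.1", "uptime_days": 410},
    }
    return canned[device]

def correlate(inventory):
    """Collect identical facts from every device and flag version drift."""
    facts = {dev: fetch_facts(dev) for dev in inventory}
    versions = {f["os_version"] for f in facts.values()}
    drift = len(versions) > 1  # True when devices disagree on OS version
    return facts, drift

if __name__ == "__main__":
    facts, drift = correlate(INVENTORY)
    for dev, f in sorted(facts.items()):
        print(f"{dev}: os={f['os_version']} uptime={f['uptime_days']}d")
    print("version drift detected:", drift)
```

Because the script only reads, it carries none of the change-window risk discussed above; it simply gives you the broader, correlated view of the network that makes later config automation repeatable.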

Published Date : Jun 11 2018



Ian Massingham, MongoDB and Robbie Belson, Verizon | MongoDB World 2022


 

>> Welcome back to NYC, theCUBE's coverage of MongoDB World 2022, a few thousand people here, at least, bigger than many people perhaps expected, and a lot of buzz going on, and we're gonna talk devs. I'm really excited to welcome back Robbie Belson, who's the developer relations lead at Verizon, and Ian Massingham, who's the vice president of developer relations at MongoDB. Gents, good to see you. >> Great to be here. Thanks for having us. >> So Robbie, we just met a few weeks ago at the, the Red Hat Summit in Boston and I was blown away by what Verizon is doing in, in developer land. And of course, Ian, you know, Mongo, its raison d'etre is, is developers, start there? Why is Mongo so developer friendly from your perspective? >> Well, it's been the ethos of MongoDB since day one. You know, back when we launched the first version of MongoDB back in 2009, we've always been about making developers' lives easier. And then in 2016, we announced and released MongoDB Atlas, which is our cloud managed service for MongoDB, you know, starting with a small number of regions built on top of AWS, and about 2,500 adoption events per week for MongoDB Atlas after the first year. Today, MongoDB Atlas provides a managed service for MongoDB developers around the world. We're present in almost a hundred cloud regions across AWS, GCP and Azure. And that adoption number is now running at about 25,000 developers a week. So, you know, the proof is really in the metrics. MongoDB is an incredibly popular platform for developers that wanna build data-centric applications. You just can't argue with the metrics, really. >> You know, Robbie, sometimes there are analysts who come up with these theories, and one of the theories I've been spouting for a long time is that developers are gonna win the edge. And now to, to see you at Verizon building out this developer community was really exciting to me. So explain how you got started on this journey.
As you think about Verizon 5g edge or mobile edge computing portfolio, we knew from the start that developers would play a central role and not only consuming the service, but shaping the roadmap for what it means to build a 5g future. And so we started this journey back in late 20, 19 and fast forward to about a year ago with Mongo, we realized, well, wait a minute, you look at the core service offerings available at the edge. We didn't know really what to do with data. We wanted to figure it out. We wanted the vote of confidence from developers. So there I was in an apartment in Colorado racing, your open source Mongo against that in the region edge versus region, what would you see? And we saw tremendous performance improvements. It was so much faster. It's more than 40% faster for thousands and thousands of rights. And we said, well, wait a minute. There's something here. So what often starts is an organic developer, led intuition or hypothesis can really expand to a much broader go to market motion that really brings in the enterprise. And that's been our strategy from day one. Well, >>It's interesting. You talk about the performance. I, I just got off of a session talking about benchmarks in the financial services industry, you know, amazing numbers. And that's one of the hallmarks of, of Mongo is it can play in a lot of different places. So you guys both have developer relations in your title. Is that how you met some formal developer relations? >>We were a >>Program. >>Yeah, I would say that Verizon is one of the few customers that we also collaborate with on a developer relations effort. You know, it's in our mutual best interest to try to drive MongoDB consumption amongst developers using Verizon's 5g edge network and their platform. So of course we work together to help, to increase awareness of MongoDB amongst mobile developers that want to use that kind of technology. >>But so what's your story on this? 
>>I mean, as I, as I mentioned, everything starts with an organic developer discovery. It all started. I just cold messaged a developer advocate on Twitter and here we are at MongoDB world. It's amazing how things turn out. But one of the things that's really resonated with me as I was speaking with one of, one of your leads within your organization, they were mentioning that as Mongo DVIA developed over the years, the mantra really became, we wanna make software development easy. Yep. And that really stuck with me because from a network perspective, we wanna make networking easy. Developers are not gonna care about the internals of 5g network. In fact, they want us to abstract away those complexities so that they can focus on building their apps. So what better co-innovation opportunity than taking MongoDB, making software easy, and we make the network easy. >>So how do you think about the edge? How does you know variety? I mean, to me, you know, there's a lot of edge use cases, you know, think about the home Depot or lows. Okay, great. I can put like a little mini data center in there. That's cool. That's that's edge. Like, but when I think of Verizon, I mean, you got cell towers, you've got the far edge. How do you think about edge Robbie? >>Well, the edge is a, I believe a very ambiguous term by design. The edge is the device, the mobile device, an IOT device, right? It could be the radio towers that you mentioned. It could be in the Metro edge. The CDN, no one edge is better than the other. They're all just serving different use cases. So when we talk about the edge, we're focused on the mobile edge, which we believe is most conducive to B2B applications, a fleet of IOT devices that you can control a manufacturing plant, a fleet of ground and aerial robotics. 
And in doing so you can create a powerful compute mesh where you could have a private network and private mobile edge computing by way of, say, an AWS Outpost, and then public mobile edge computing by way of AWS Wavelength. And why keep them separate? You could have a single compute mesh even with MongoDB. And this is something that we've been exploring. You can extend Atlas, take a cluster, leave it in the region, and then use Realm, the mobile portfolio, and spread it all across the edge. So you're creating that unified compute and data mesh together. >> So you're describing what we've been expecting is a new architecture emerging, and that's gonna probably bring new economics and new use cases, right? Where are we today in that, first of all, is that a reasonable premise, that this is a sort of a new architecture that's being built out, and where are we in that build out? How, how do you think about the, the future of that? >> Absolutely. It's definitely early days. I think we're still trying to figure it out, but the architecture is definitely changing, the idea to take a mobile database that was initially built and envisioned for the device and only for the device and say, well, wait a minute, why can't it live at the edge? And ultimately become multi-tenant, given the data volume that may be produced in each of those edge zones, a hypothesis that was validated by developers that we continue to build out, but we recognize that we can't, we can't stay static. We gotta keep evolving. So one of our newest ideas as we think about, well, wait a minute, how can Mongo play in the 5G future? We started to get really clever with our 5G network APIs. 
And I, I think we talked about this briefly last time, 5G programmability and network APIs have been talked about for a while, but developers haven't had a chance to really use them, and our edge discovery service, answering the question in this case of which database is the closest database, doesn't have to be invoked by the device anymore. You can take a thin client model and invoke it from the cloud using Atlas functions. So we're constantly permuting across the entire portfolio, edge or otherwise, for what it means to build at the edge. We've seen such tremendous results. >> So how does Mongo think about the edge and, and, and playing, you know, we've been wondering, okay, which database is actually gonna be positioned best for the edge? >> Well, I think if you've got an ultra low latency access network, using data technology that adds latency is probably not a great idea. So MongoDB, since the very formative years of the company and product, has been built with performance and scalability in mind, including things like in-memory storage for the storage engine that we run as well. So really trying to match the performance characteristics of the data infrastructure with the evolution in the mobile network, I think, is really fundamentally important. And that first principles build of MongoDB with performance and scalability in mind is actually really important here. >> So was that a lighter weight instance of, of Mongo, or not necessarily? >> No, not necessarily. No, no, not necessarily. We do have edge caching with Realm, the mobile database Robbie's already mentioned, but the core database is designed from day one with those performance and scalability characteristics in mind. >> I've been playing around with this. This is kind of a, I get a lot of heat for this term, but super cloud. So super cloud, you might have data on prem. You might have data in various clouds. You're gonna have data out at the edge. 
And, and you've got an abstraction that allows a developer to, to, to tap services without necessarily, if, if he or she wants to go deep into the stack, great, but then there's a higher level of services that they can actually build for their customers. So is that a technical reality from a developer standpoint, in your view? >> We support that with the MongoDB multi-cloud deployment model. So you can place MongoDB Atlas nodes in any one of the three hyperscalers that we mentioned, AWS, GCP or Azure, and you can distribute your data across nodes within a cluster that is spread across different cloud providers. So that kind of answers the question about how you do data placement inside the MongoDB clustered environment that you run across the different providers. And then for the abstraction layer, when you say that, I hear, you know, drivers, ODMs, the other intermediary software components that we provide to make developers more productive in manipulating data in MongoDB. This is one of the most interesting things about the technology. We're not forcing developers to learn a different dialect or language in order to interact with MongoDB. We meet them where they are by providing idiomatic interfaces to MongoDB in JavaScript, in C sharp, in Python, in Rust, in fact in 12 different programming languages that we support as a first party, plus additional community-contributed programming languages that the community have created drivers and ODMs for. So there's really, that model that you've described in hypothesis exists in reality, using those different clouds. >> It's not just a series of siloed instances in, >> In different, it's the, it's the fabric, essentially. Yeah. >> What, what does the Verizon developer look like? Where does that individual come from? We talked about this a little bit a few weeks ago, but I wonder if you could describe it. 
My view is that the Verizon or just mobile edge ecosystem in general for developers are present at this very conference. They're everywhere. They're building apps. And as Ian mentioned, those idiomatic interfaces, we need to take our network APIs, take the infrastructure that's being exposed and make sure that it's leveraging languages, frameworks, automation, tools, the likes of Terraform and beyond. We wanna meet developers where they are and build tools that are easy for them to use. And so you had talked about the super cloud. I often call it the cloud continuum. So we, we took it P abstraction by abstraction. We started with, will it work in one edge? Will it work in multiple edges, public and private? Will it work in all of the edges for a given region, public or private, will it work in multiple regions? Could it work in multi clouds? We've taken it piece by piece by piece and in doing so abstracting way, the complexity of the network, meaning developers, where they are providing those idiomatic interfaces to interact with our API. So think the edge discovery, but not in a silo within Atlas functions. So the way that we're able to converge portfolios, using tools that dev developers already use know and love just makes it that much easier. Do, >>Do you feel like I like the cloud continuum cause that's really what it is. The super cloud does the security model, how does the security model evolve with that? >>At least in the context of the mobile edge, the attack surface is a lot smaller because it's only for mobile traffic not to say that there couldn't be various configuration and human error that could be entertained by a given application experience, but it is a much more secure and also reliable environment from a failure domain perspective, there's more edge zones. So it's less conducive to a regionwide failure because there's so many more availability zones. And that goes hand in hand with security. Mm. 
>> Thoughts on security from your perspective, I mean, you added, you've made some announcements this week, the, the, the encryption component that you guys announced. >> Yeah. We, we issued a press release this morning about a capability called queryable encryption, which actually, as we record this, Mark Porter, our CTO, is talking about in his keynote, and this is really the next generation of security for data stored within databases. So the trade off with field level encryption within databases has always been very hard, very, very rigid. Either you have keys stored within your database, which means that your data is decrypted while it's resident in memory on your database engine. This, of course, allows you to perform query operations on that data. Or you have keys that are managed and stored in the client, which means the data is permanently obscured from the engine. And therefore you can't offload query capabilities to your data platform. You've gotta do everything in the client. So if you want 10 records, but you've got a million encrypted records, you have to pull a million encrypted records to the client, decrypt them all, and you see a performance hit in there, a big performance hit. What we've got with queryable encryption, which we announced today, is the ability to keep data encrypted in memory in the engine, in the database, in the data platform, issue queries from the client, but use a technology called structured encryption to allow the database engine to make decisions, operate queries, and find data without ever being able to see it, without it ever being decrypted in the memory of the engine. So it's groundbreaking technology based on research in the field of structured encryption, with the first commercial database provider to bring this to market. >> So how does the mobile edge developer think about that? I mean, you hear a lot about shifting left and not bolting on security. I mean, is this, is this an example of that? 
It certainly could be, but I think the mobile edge developer is still stuck with how does this stuff even work? And I think we need to, we need to be mindful of that as we build out learning journeys. So one of my favorite moments with Mongo was an immersion day we had hosted earlier last year where, from an enterprise perspective, we're focused on B2Bs, but there's nothing stopping us from building a B2C app based on the theme of the Winter Olympics. At the time, you could take a picture of Shaun White or of Nathan Chen and see that it was in fact that athlete, and then overlaid on that web app was the number of medals they accrued, with a little trumpeter congratulating you for selecting that athlete. So I think it's important to build trust and drive education with developers with a more simple experience, and then rapidly evolve, overlaying the features that Ian just mentioned over time. >> I think one of the keys with cryptography is back to the familiar messaging for the cloud, offloading heavy lifting. You actually need to make it difficult to impossible for developers to get this wrong, and you wanna make it as easy as possible for developers to deal with cryptography. And that of course is what we're trying to do with our driver technology combined with structured encryption, with queryable encryption. >> But Robbie, your point is lots of opportunity for education. I mean, I have to say the developers that I work with, it's, I'm, I'm in awe of how they solve problems and I, and the way they solve problems, if they don't know the answer, they figure out how to go get it. So how, how are your two communities and other communities, you know, how are they coming together to, to solve such problems and share whether it's best practices or how do I do this? >> Well, I'm not gonna lie, in-person events are a bunch of fun. 
And one of the easiest domain knowledge exchange opportunities, when you're all in person, you can ideate, you can whiteboard, you can brainstorm. And often those conversations are what lead to that infrastructure module that an immersion day features. And it's just amazing what in-person events can do, but community groups of interest, whether it's a Twitch stream, whether it's a particular code sample, we rely heavily on digital means today to upskill the developer community, but also build on, by, by means of a simple pull request, introduce new features that maybe you weren't even thinking of before. >> Yeah. You know, that's a really important point because when you meet people face to face, you build a connection. And so if you ask a question, you're more likely perhaps to get an answer, or if one doesn't exist in a, in a search, you know, you, oh, hey, we met at the, at the conference, let's collaborate on this, guys. Congratulations on, on this brave new world. You're in a really interesting spot. You know, developers, developers, developers, as Steve Ballmer famously screamed. And I was glad to see Dave was not screaming and jumping up and down on the stage like that, but, but the message still resonates. So thank you, definitely appreciate it. All right, keep it right there. This is Dave Vellante for theCUBE's coverage of MongoDB World 2022 from New York City. We'll be right back.
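The queryable-encryption tradeoff Ian lays out above can be illustrated with a deliberately simplified toy: the client derives a keyed token for a searchable field, and the server answers equality queries by matching tokens it cannot invert. This is only a sketch of the idea; MongoDB's actual design uses structured encryption with much stronger guarantees, whereas a deterministic token like this one leaks equality patterns to the server.

```python
# Toy illustration (NOT MongoDB's actual queryable encryption) of a server
# answering an equality query over data it cannot read: the client derives
# a deterministic keyed token, and the server matches tokens without ever
# holding the key or the plaintext.
import hashlib
import hmac

KEY = b"client-side-secret"  # never leaves the client in this model

def token(value: str) -> str:
    """Client-side: deterministic keyed token for an equality-searchable field."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# The "server" stores only tokens, never plaintext.
server_store = [
    {"id": 1, "ssn_token": token("123-45-6789")},
    {"id": 2, "ssn_token": token("987-65-4321")},
]

def server_find(store, field, tok):
    """Server-side: match on tokens alone -- no key, no plaintext, no decryption."""
    return [doc for doc in store if doc[field] == tok]

if __name__ == "__main__":
    hits = server_find(server_store, "ssn_token", token("123-45-6789"))
    print([d["id"] for d in hits])
```

The server never holds `KEY` or the plaintext values, yet the equality query still executes server-side, which is the offloading-without-visibility property the transcript describes, without the client pulling and decrypting a million records.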

Published Date : Jun 7 2022


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Steve | PERSON | 0.99+
Verizon | ORGANIZATION | 0.99+
Robbie Bellson | PERSON | 0.99+
Ian Massingham | PERSON | 0.99+
Ian | PERSON | 0.99+
10 records | QUANTITY | 0.99+
Robbie | PERSON | 0.99+
Robbie Belson | PERSON | 0.99+
Colorado | LOCATION | 0.99+
2009 | DATE | 0.99+
Dave | PERSON | 0.99+
2016 | DATE | 0.99+
Mark Porter | PERSON | 0.99+
thousands | QUANTITY | 0.99+
Mongo | ORGANIZATION | 0.99+
Boston | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
MongoDB | ORGANIZATION | 0.99+
Sean White | PERSON | 0.99+
Nathan Chen | PERSON | 0.99+
Olympics | EVENT | 0.99+
Python | TITLE | 0.99+
MongoDB | TITLE | 0.99+
today | DATE | 0.99+
NYC | LOCATION | 0.99+
late 20 | DATE | 0.99+
more than 40% | QUANTITY | 0.99+
two communities | QUANTITY | 0.99+
Ravi | PERSON | 0.98+
MongoDB Atlas | TITLE | 0.98+
Mongo DB | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
JavaScript | TITLE | 0.98+
this morning | DATE | 0.98+
one edge | QUANTITY | 0.97+
12 different pro programming languages | QUANTITY | 0.97+
New York city | LOCATION | 0.97+
first version | QUANTITY | 0.97+
this week | DATE | 0.97+
both | QUANTITY | 0.97+
Azure | TITLE | 0.96+
Twitter | ORGANIZATION | 0.95+
Atlas | TITLE | 0.95+
C sharp | TITLE | 0.95+
a million encrypted records | QUANTITY | 0.95+
about 25,000 developers a week | QUANTITY | 0.93+
Twitch | ORGANIZATION | 0.93+
first year | QUANTITY | 0.93+
19 | DATE | 0.89+

Ed Walsh, ChaosSearch | CUBE Conversation May 2021


 

>>So-called big data promised to usher in a new era of innovation, where companies competed on the basis of insights and agile decision making. There's little question that social media giants, search leaders and e-commerce companies benefited. They had the engineering shops and the execution capabilities to take troves of data and turn them into piles of money. But many organizations were not as successful. They invested heavily in data architectures, tooling and hyper-specialized experts to build out their data pipelines. Yet they still struggle today to truly realize the promise: the data in their lakes is plentiful, but actionable insights aren't so much. ChaosSearch is a cloud-based startup that wants to change this dynamic with a new approach designed to simplify and accelerate time to insights and dramatically lower cost, and with us to discuss his company and its vision for the future is CUBE alum Ed Walsh. Ed, great to see you. Thanks for coming back in theCUBE. >>I always love to be here. Thank you very much. It's always a warm welcome. Thank you. >>All right, so give us the update. You guys have had some big funding rounds, you're making real progress on the tech, taking it to market. What's new with ChaosSearch? >>Sure. Actually, a lot of good, exciting things have happened. In fact, just this month we announced some pretty exciting things. We unveiled what we consider the industry's first multi-model data lake platform, where we allow you to take your data in S3, and if you want to show the image you can, but basically we allow you to put your data in S3, and then what we do is we activate that data: we do a full index of the data and make it available through open APIs. And the key thing about that is it allows your end users to use the tools they're using today. So simply put your data in your cloud object storage, think Amazon S3 and Glacier, think of all the different data.
It's in its natural state, and then we do the hard work. And the key thing is you get one unified data lake, but with multi-model access, so we expose APIs like the Elasticsearch API. So you can do things like search, or use Kibana to do log analytics, but you can also do things like SQL: use Tableau or Looker, or bring relational concepts into Kibana, things like joins, in the data backend. And it also allows you to do machine learning, which is coming early next year. But what you get, because of that data lake philosophy, is that we're not making you do transformations or all the data movement. People typically land data in S3, and we're on the shoulders of giants with S3. There's not a better, more cost-effective platform, more resilient; there's not a better queuing system out there, and it's got a cost curve that you can't beat. So people store a lot of data in S3. But basically what you have to do today is ETL it out to other locations. What we do is allow you to literally keep it in place. We index in place: we write our hot index, a full read-write index, back to your bucket, allow you to go after that, and publish open APIs. But what we avoid is the ETL process. So what our index does is look at the data and do full schema discovery and normalization; we're able to give sample sets. And then the refinery allows you to do advanced transformations using code: think about using SQL or using regex to change that data, pull the data apart, hide things, but use role-based access to give that to the end user. And it's in a format that their tools understand: Kibana will use the Elasticsearch API or Elasticsearch calls, but SQL can also go directly after the data. By doing that, you get a data lake, but you haven't had to take the three weeks to three months to transform your data. Everyone else makes you. And you talk about the failure: the idea of data lakes was to put your data there, in a very scalable, resilient environment, and not do transformation.
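The multi-model idea Ed describes, one copy of the data behind both a search-style call and a SQL interface, can be sketched in a few lines. This is a toy illustration, not ChaosSearch's actual API; the class, fields, and method names are all invented:

```python
import sqlite3

# Toy sketch of multi-model access: one set of records, never copied or
# transformed by the user, queried through both a search-style call and SQL.
class TinyMultiModelIndex:
    def __init__(self, records):
        self.records = records                  # the single source of truth
        self.db = sqlite3.connect(":memory:")   # relational view over the same records
        self.db.execute("CREATE TABLE logs (level TEXT, msg TEXT)")
        self.db.executemany("INSERT INTO logs VALUES (?, ?)",
                            [(r["level"], r["msg"]) for r in records])

    def search(self, term):
        # Search-model access, roughly what a Kibana-style tool would issue.
        return [r for r in self.records if term in r["msg"]]

    def sql(self, query):
        # Relational-model access, roughly what Tableau or Looker would issue.
        return self.db.execute(query).fetchall()

idx = TinyMultiModelIndex([
    {"level": "ERROR", "msg": "disk full on node-3"},
    {"level": "INFO",  "msg": "checkpoint complete"},
    {"level": "ERROR", "msg": "timeout contacting node-7"},
])
hits = idx.search("node")
counts = idx.sql("SELECT level, COUNT(*) FROM logs GROUP BY level ORDER BY level")
```

The point of the pattern is that neither consumer required an ETL step: both interfaces are views over the same underlying records.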
It was too hard to structure it for databases and data warehouses, so the pitch was: put it there and we'll show you how to get value out. That promise was largely undelivered. But we're that last mile. We do exactly that: just put it in S3 and we activate it, with APIs so that the tools your analysts use today, or want to use in the future, just work. That is what's so powerful. So basically we're on the shoulders of giants with S3: put it there and we light it up, and that's really the last mile. But it's this multi-model access, and it's also this lack of transformation. All the transformation is done virtually and is available immediately. You're not doing extended ETL projects with big teams moving around a lot of data in the enterprise. In fact, most of the time they land it in S3, they move it somewhere, and then they move it again. What we're saying is: just leave it in place, we'll index it and make it available. >>So that's interesting. The reason they wanted to move it is that S3 was the original object storage cloud. It was a cheap bucket. Okay. But it's become much more than that. When you talk to customers, it's like, hey, I have all this data in S3, I want to do something with it. I want to apply machine intelligence, I want to search it, I want to do all these things, but you're right, I have to move it oftentimes to do that. So that's a huge value. Now, are you available in the AWS marketplace yet? >>You know, in fact, that was the other announcement to talk about. Our solution is now available in the AWS marketplace, which is great for clients because they can burn down their credits with Amazon. >>Yeah, that's super great news there. Now let's talk a little bit more about data lakes. You know, the old tongue-in-cheek joke was that data lakes become data swamps. It's, you know, schema on read, right? Oh great, I can put everything into the lake, and then it's like, okay, what?
Um, so maybe double-click on that a little bit and provide a little more detail on your vision there and your philosophy.
And what we find for like log analytics is it's a slightly different use case for large analytics or value prop than Be I or what we're doing with private companies but the logs were saving clients 50 to 80% on the hard dollars a day in the month. They're going from very limited data sets to unlimited data sets. Whatever they want to keep an S. Three and glacier. But also they're getting away from the brittle data layer which is the loosen environment which any of the data layers hold you back because it takes time to put it there. But more importantly It becomes brittle at scale where you don't have any of that scale issue when using S. three. Is your dad like. So what what >>are the big use cases Ed you mentioned log analytics? Maybe you can talk about that. And are there any others that are sort of forming in the marketplace? Any patterns that you see >>Because of the multi model we can do a lot of different use cases but we always work with clients on high R. O. I use cases why the Big Bang theory of Due dad like and put everything in it. It's just proven not to work right? So what we're focusing first use cases, log analytics, why as by way with everything had a tipping point, right? People were buying model, save money here, invested here. It went quickly to no, no we're going cloud native and we have to and then on top of it it was how do we efficiently innovate? So they got the tipping point happens, everyone's going cloud native. Once you go cloud native, the amount of machine generated data that you have that comes from the environment dramatically. It just explodes. You're not managing hundreds or thousands or maybe 10,000 endpoints, you're dealing with millions or billions and also you need this insight to get inside out. So logs become one of the things you can't keep up with it. 
I think I mentioned we went to a group of end users, it was only 60 enterprise clients, but we asked them, what's your capture rate on logs, and what do you want it to be? Actually, 78% said, listen, we want to capture 80 to 100% of our logs. That would be the ideal: not everything, but we need most of it. And then we asked the same group, what are you doing? Well, 82% had less than 50%. They just can't keep up with it, and everything, including Elastic and Splunk, makes them work harder in the process to narrow it down and keep less and less data. Why? Because those platforms can't handle the scale. We just say land it there, don't transform it, and we'll make it all available to you. So for log analytics, especially with cloud native, you need this type of technology, and you need to stop; it feels so good when you stop hitting your head against the wall, right? This ETL process at this type of scale just doesn't work. So that's exactly what we're delivering. The second use case is using the Elasticsearch API but also using SQL to go after the same data representation, and when we come out with machine learning, you can also do anomaly detection on the same data representation. So for log analytics use cases, think DevOps setups; it's a huge value prop. Now, the same platform, because it has SQL exposed, can do what we use the term "agile BI" for. People are using, think about Looker, Tableau, Power BI, Metabase, all these toolsets that people want to use for their business, and they're coming back to the centralized team every single week asking for new datasets. And each has to be set up as a data set: they have to do an ETL process to give access to that data. Whereas because of the way we just land it in the bucket, if you have access to the data, with role-based access, I can literally get you access with your toolset, let's say Tableau or Looker.
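The virtual-view pattern Ed is describing, one read-only data set with per-team transformed views instead of a fresh ETL pipeline per request, looks roughly like this in plain SQL. This is a generic sketch using SQLite, not ChaosSearch's refinery; table, view, and column names are invented:

```python
import sqlite3

# Generic sketch of virtual views: one raw table, per-team views that
# transform or mask data without copying it anywhere.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_events (ts TEXT, user_email TEXT, amount INTEGER)")
db.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", [
    ("2021-05-01", "alice@example.com", 20),
    ("2021-05-01", "bob@example.com",    5),
    ("2021-05-02", "alice@example.com", 42),
])

# Finance team's view: aggregates only, no PII.
db.execute("""CREATE VIEW finance_daily AS
              SELECT ts, SUM(amount) AS revenue FROM raw_events GROUP BY ts""")

# Support team's view: sees users, but with the email domain masked.
db.execute("""CREATE VIEW support_events AS
              SELECT ts, substr(user_email, 1, instr(user_email, '@') - 1) AS user
              FROM raw_events""")

daily = db.execute("SELECT * FROM finance_daily ORDER BY ts").fetchall()
users = db.execute("SELECT DISTINCT user FROM support_events ORDER BY user").fetchall()
```

Granting a new team access is a `CREATE VIEW` statement rather than a weeks-long pipeline, which is the "five minutes, not three months" claim in miniature.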
These different data sets are literally available in five minutes, and you're off and running; and if you want a new dataset, they give you another virtual view and you're off and running, but with full governance. So in BI you either had self-service or centralized. Self-service is kind of out of control, but we can move fast; the centralized team, it takes me months, but at least I'm in control. We allow you to do both: fully governed, but self-service. >>Right, I've got to have Looker, I've got to have Excel. All right, and that's the trade-off on each of the pieces of the triangle, right? >>And they make it easy: we'll just put in a data source and you're done. But the problem is you have to ETL the data source, and that's what takes the three weeks to three months in an enterprise, and we do it virtually in five minutes. So now the third use case, think of it as a combination of the two. You love the beers-and-diapers stories. Think about the early days of Teradata, where they looked at the sell-out data for a business, a large relational environment, crunched all those numbers, and figured out by location and placement of products that they sell more of certain things together, and they came up with an analogy everyone talked about: beers and diapers. If you put them together, you sell more. Why? Because in the afternoon, anyone that has kids picks up diapers and might want to grab a beer for home with the kids. But that analogy is 30 years old. Now, what's the shelf space for, approximately, any company? It's the website, and the data coming from there is actually the app logs. And you're not capturing them, because you can't in these environments, or you're capturing the data, but everyone's telling you you've got to do an ETL process and keep less data.
You've got to be selective, very specific, because it's going to kill your budget. You can't do that with Elastic or Splunk: you've got to keep less data, and you don't even know what the questions are going to be. With us, bring all the app logs: just land them in S3 or Glacier, which is really shoulders of giants, right? There's not a better platform for cost, security, resilience or throughput; think about what you can stream into it, it's the best queuing platform I've ever seen in the industry. Just land it there, and it's also very cost effective; we also compress the data. So by doing that, you now match that up with an actually relatively small amount of relational data, and now you have that beers-and-diapers kind of insight. But instead it's: these users are using that use case, and our top users always start with this one, then they use that feature and that feature. Hey, the new pricing we just did is affecting these clients and those clients by doing this. We get that. But you need that data, and people aren't able to capture it with the current platforms. A data lake, as long as you can make it available hot, is a way to do it, and that's what we're doing. But we're unique in that. Other people are making you ETL it and put it in a 1970s and 1980s data format called a schema, and we avoided that, because we basically make S3 a hot analytic environment. >>So okay, I want to land on that for a second, because I think sometimes people get confused. I know I do sometimes with ChaosSearch; sometimes I don't know where to put you. I'm like, okay, observability, that seems to be a hot space; of course log analytics is part of that; BI, agile BI you called it. But there are players like Elasticsearch, there's Starburst, there's Datadog, Databricks, Dremio, Snowflake. I mean, where do you fit, what's the category, and how do you differentiate from players like that? >>Yeah.
So we went about it fundamentally differently than everyone else. Six years ago, Tom Hazel and his band of merry men and women designed it from scratch: they purpose-built, to make S3 a hot analytic environment with open APIs. By doing that, they kind of changed the game, so we deliver upon the true promise: just put it there and I'll give you access to it. No one else does that. Everyone else makes you move the data and put it in a schema of some format to get to it. So if you look at Elasticsearch, why are we going after that? It just happens to be easy: logs are overwhelming you. Once you go cloud native, you can't afford to put it all in Lucene, the ELK stack; the L is for Lucene, its inverted index. Start small, great. But once you grow, it's now not one server: five servers, 15 servers, and if you lose a server, you're down for three days because you have to rebuild the whole thing. It becomes brittle at scale, and expensive. So you trade off: I'm going to keep less, either in retention or in data. So, with Elastic, we have no Elastic under the covers, but we allow you to index the data in S3 and access it directly through a Kibana interface or an OpenSearch interface, the API. >>It's just APIs. >>It's open APIs. And by doing that, you've avoided a whole bunch of time, cost and complexity, your team's time to do it, but also the time to results, the delays and the cost of doing all that. It's crazy. We're saving 50 to 80% in hard dollars while giving you unlimited retention, where you were dramatically limited before us. And even as a managed service, you have to manage that; it's kind of clunky. Not when it starts small: when it starts small, it's great. Once at scale, it's a terrible environment to manage. That's why you end up with not one Elasticsearch cluster but dozens. I just talked to someone yesterday who had 125 Elasticsearch clusters because of the scale.
So anyway, that's where we stand with Elastic: if you're using Elastic at scale and you're having problems with the trade-offs of cost, time and scale, we become a natural fit, and you don't change what your end users do. >>So the thing is, people hearing this will go, wow, that sounds so simple. Why doesn't everybody do this? The reason is it's not easy. You said Tom and his merry band; this is really hardcore tech, and it's not trivial what you've built. Let's talk about your secret sauce. >>Yeah, so it is patented technology. If you look at our component architecture, a large part, 90% of the value-add, is actually S3; I've got to give S3 full kudos. They built a platform, and we're on the shoulders of giants there. But what we did is purpose-build to make object storage a hot analytic database. So we have an index, like a database, and you bring a refinery to the data to do all the advanced types of transformation, but all done virtually, because we're not changing the source of record, we're changing the virtual views. And then a fabric allows you to manage it and be fully elastic. So when we have big queries, because we have multiple clients with multiple use cases, each with multiple petabytes, we're spinning up 1,800 different nodes for a particular environment. And even with all that, we're saving them 58%. But it's really the patented technology that does this, and it took us six years, by the way; that's what it takes to come up with this. When I came upon it, I knew the founder, I've known Tom Hazel for a while, and his first thing was, he figured out the math, and the math worked out. It's deep tech, it's hard tech. But the key thing about it is we've been in market now for two years, with multiple use cases in production at scale. Now what we do is roadmap: we're adding APIs, so now we have Elasticsearch as a natural proof point.
Now, adding SQL allows you to open up new markets for the person dealing with BI. You know, we believe we deliver on the true promise of data lakes, and the promise of data lakes was: put it there, don't focus on transformation, it's just too hard, and I'll get insights out. That's exactly what we do, but we're the only ones that do it; everyone else makes you ETL it places. And that's the innovation of the index and the refinery: it allows indexing in place and gives virtual views in place, at scale. And then the open APIs, to be honest, I think that's a game changer. Give me an open API and let me go after it; I don't know what tool I'm going to use next week. Every time we go into an account, they're not a Looker shop or a Tableau shop or a QuickSight shop, they're all of them, and they're just trying to keep up with the business. And then the ability to have role-based access, where I can actually say, hey, give them their own bucket, give them their own refinery; as long as they have access to the data, they can do their own manipulation. It ends up being >>just, >>that's the true promise of data lakes. Once we come out with machine learning next year, you're going to rip through the same data, and the way we structure the data in matrices is a natural fit for things like TensorFlow and PyTorch. But that's going to be next year, just because it's a different persona. The underlying architecture has been built; what we're doing is taking it one use case at a time. So we work with our clients and say, it's not a big bang: let's nail a use case that works well, great ROI, great business value for a particular business unit, and let's move to the next. And if you think about what Gartner talks about, about what really got successful in data warehouses in the past, that's exactly it: it wasn't the big bang, it was let's go nail it for particular users.
And that's what we're doing now. Because it's multi-model, there are a bunch of different use cases, but even then, we're focusing on these core things that are really hard to do with relational-only environments. >>Yeah, I can see why you're stoked, because, you know, you and I have talked about the API economy forever, and you've been in the storage world so long, you know what a nightmare it is to move data. We've got to jump, but I want to ask you, I want to be clear on this: you are cloud native. I talked to Frank Slootman maybe a year ago, and I asked him about on-prem, and he's like, no, we're never doing the halfway house, we are cloud all the way. I think you have a similar answer. What's your plan on hybrid? >>Okay, there's nothing about the technology that we couldn't take on-prem, but we are 100% cloud native, only in the public cloud. We believe that's the trend line, everyone agrees with us, and we're sticking there; that's where the opportunity is. And if you want to run analytics, there's nothing better than the public cloud, like Amazon. We love S3, and what better place to put this than next to S3, and we just let you light it up. And then, I guess if I'm going to add the commercial: buy it through the Amazon marketplace; we love that business model with Amazon. >>Great. Ed, thanks so much for coming back in theCUBE and participating in the startup showcase. Love having you, and best of luck. Really exciting. >>Hey, thanks again, appreciate it. >>All right, thank you for watching, everybody. This is Dave Vellante for theCUBE. Keep it right there.

Published Date : May 14 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Volonte | PERSON | 0.99+
Ed Walsh | PERSON | 0.99+
15 servers | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
58% | QUANTITY | 0.99+
three months | QUANTITY | 0.99+
three weeks | QUANTITY | 0.99+
May 2021 | DATE | 0.99+
two years | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
Five servers | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
1970s | DATE | 0.99+
amazon | ORGANIZATION | 0.99+
1980s | DATE | 0.99+
yesterday | DATE | 0.99+
five minutes | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
millions | QUANTITY | 0.99+
S three | TITLE | 0.99+
three days | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
six years | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
one server | QUANTITY | 0.99+
Ed | PERSON | 0.99+
Tom hazel | PERSON | 0.99+
two | QUANTITY | 0.99+
three weeks | QUANTITY | 0.99+
78 | QUANTITY | 0.99+
S. three | LOCATION | 0.99+
third | QUANTITY | 0.99+
next year | DATE | 0.99+
less than 50% | QUANTITY | 0.99+
tom | PERSON | 0.99+
billions | QUANTITY | 0.99+
three | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
next week | DATE | 0.99+
dozens | QUANTITY | 0.99+
50-80 | QUANTITY | 0.98+
Six years ago | DATE | 0.98+
125 elasticsearch clusters | QUANTITY | 0.98+
both | QUANTITY | 0.98+
a year ago | DATE | 0.98+
early next year | DATE | 0.97+
Tableau Sharp | ORGANIZATION | 0.97+
Alex | PERSON | 0.97+
today | DATE | 0.97+
first | QUANTITY | 0.97+
first thing | QUANTITY | 0.96+
30 years ago | DATE | 0.96+
each | QUANTITY | 0.96+
one person | QUANTITY | 0.96+
S. Tree | TITLE | 0.96+
10,000 endpoints | QUANTITY | 0.96+
second use | QUANTITY | 0.95+
82 | QUANTITY | 0.95+
one thing | QUANTITY | 0.94+
Tableau | TITLE | 0.94+
60 enterprise clients | QUANTITY | 0.93+
one | QUANTITY | 0.93+
eight | QUANTITY | 0.93+
1800 different nodes | QUANTITY | 0.91+
excel | TITLE | 0.9+
80 200 of our logs | QUANTITY | 0.89+
this month | DATE | 0.89+
S. Three | TITLE | 0.88+
agile | TITLE | 0.88+
ChaosSearch | ORGANIZATION | 0.86+
S. Three | TITLE | 0.86+
Dream EOS Snowflake | TITLE | 0.85+
cabana | LOCATION | 0.85+
100 cloud | QUANTITY | 0.83+
a day | QUANTITY | 0.81+

Amanda Silver, Microsoft & Scott Johnston, Docker | DockerCon Live 2020


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of Dockercon Live 2020, brought to you by Docker and its ecosystem partners. >> Everyone welcome back to Dockercon 2020, #Docker20. This is theCUBE and Docker's coverage of Dockercon 20. I'm John Furrier in the Palo Alto studios with our quarantine crew, we got a great interview segment here and big news around developer workflow, code to cloud. We've got Amanda Silver, Corporate Vice President, product for developer tools at Microsoft and Scott Johnston, the CEO of Docker. Scott had a great keynote talking about this relationship; news has hit about the extension of the Microsoft partnership. So congratulations, Amanda, welcome to theCUBE. >> Thanks for having me. >> Amanda, tell us about what your role is at Microsoft. You guys are well known in the developer community. You had a developer ecosystem even when I was in college, going way back. Very modern now, the cloud is the key, code to cloud, that's the theme. Tell us about your role at Microsoft. >> Yeah, so I basically run the product, Product Design and User Research team that works on our developer tools at Microsoft. And so that includes the Visual Studio product as well as Visual Studio Code that's become pretty popular in the last few years, but it also includes things like the dotNET runtime and the TypeScript programming language, as well as all of our Azure tooling. >> What are your thoughts on the relationship with Docker? Obviously the news is an extension of an existing relationship, Microsoft's got a lot of tools, you got a lot of things you guys are doing, bringing the cloud to every business. Tell us about your thoughts on this relationship with Docker? >> Yeah well, we're very excited about the partnership for sure. Our goal is really to make sure that Azure is a fantastic place where all developers can kind of bring their code and they feel welcome. They feel natural.
We really see a unique opportunity to make the experience really great for the Docker community by creating a more integrated and seamless experience across Docker Desktop, Windows and Visual Studio, and we really appreciate how Docker has kind of supported our Windows ecosystem to run in Docker as well. >> Scott, this relationship and extension with Microsoft is really, I think, impressive and also notable because Microsoft's got so many tools out there and they've been so successful with Azure. You guys have been so successful with your developer community, but this is also reflective of the new Docker. Can you share your thoughts on how this partnership with Microsoft, extending the way it is, with the growth of the cloud, is a reflection of the new Docker? >> Yeah, absolutely John, it's a great question. One of the things that we've really been focused on since November is fully embracing the ecosystem and all the partnerships and all the possibilities of that ecosystem, and part of that is just the reality that we're a smaller company now and we can't do it all, nor should we do it all. Part of it's the reality that developers love choice and no one's going to change their minds on choice, and third is just acknowledging that there's so much creativity and so much energy outside the four walls of Docker that we'd be silly not to take advantage of that and welcome it and embrace it and provide that as a phenomenal experience for our developers. So this is a great example of that. The Snyk partnership we announced last week is a great example of that, and you're going to see many more partnerships like this going forward that are reflective of exactly this point. >> You've been a visionary on the product side; we've interviewed you before. Also, deploying is more important than ever, that whole workflow, simplifying it, not letting it get complex. People want choice: building code, managing code, deploying code. This has been a big focus of yours.
Can you just share your thoughts on where Microsoft comes in? Because they've got stuff too, you've got stuff, it all works together. What's your thoughts? >> Right, so it needs to work together because developers want to focus on their app. They don't want to focus on duct taping and stringing together different siloed tools. So you can see in the demo, and you'll see in demonstrations later throughout the conference, just the seamless experience that a developer gets in the Docker command line interoperating with Visual Studio Code, with the Docker command line, and then deploying to Azure. And what's wonderful about the partnership is that both parties put real engineering effort and design effort into making it a great experience. So a lot of the complexities around configuration, around default settings, around security, user management, all of that is abstracted out and taken away from the developers so they can focus on applications and getting those applications deployed to the cloud as quickly as possible. Getting their apps from code to cloud is the watchword, or the call to action, for this partnership, and we think we've really hit it out of the park with the integration that you saw. >> Great validation in a critical part of the workflow you guys have been part of. Amanda, we're living in a time where we're doing these remote interviews. The COVID crisis has shown the productivity gains of working at home and sheltering in place, but it also has highlighted the focus of developers, many of whom have also worked at home. They've been kind of used to this; you see the rigs. I saw at Microsoft Build some amazing rigs from the studio, these guys streaming their code demos. This is a Cambrian explosion of new kinds of productivity, and yet the world's getting more complex at scale. This is what cloud does. What's your thoughts on this? 'Cause the tooling, there's more tools than ever, right? >> Yeah. >> I still got to deploy code.
It's got to be more agile, it's got to be faster, it's got to be at scale. This is what you guys believe in. What's your thinking on all these tooling and abstraction layers? At the end of the day, developers still got to do their job. >> Yeah, well, absolutely. And now even more than ever. I think we've certainly seen over the past few months a more rapid acceleration of the digital transformation that has really been happening in the past few years. Paper processes are now becoming digital processes; all of a sudden, everybody needs to work and learn from home, and so there's just this rapid acceleration to kind of move everything to support our new remote-first lifestyle. But even more so, we now have remote development teams actually working from home as well, in a variety of different kinds of environments, whether they're using their own personal machine to connect to their infrastructure or they're using a work-issued machine. It's more important than ever that developers are productive, but that they are productive as a team. Software is a team sport; we all need to be able to work together and to be able to collaborate. And one of the most important aspects of agility for developers is consistency. And what Docker really enables with containerization is to make the infrastructure consistent and repeatable, so that as developers are moving through the lifecycle, from developing on their local desktop, to a test environment, to staging and to production, it's really infrastructure for developers as well as operations. And so that infrastructure is completely customizable for what the developer's operating system of choice is, what their app stack is, all of those dependencies kind of running together. And so that's what really enables developers to be really agile and have a really fast iteration cycle, but also to have that consistency across all of their development team.
And we now need to think about things like, how are we actually going to bring on interns for the summer and make sure that they can actually set up their developer boxes in a consistent way that we can actually support them, and things like Docker really help with that. >> Azure Container Instances and the Visual Studio cloud tooling that you guys have have had great success. There's a mix-and-match formula here and, at the end of the day, developers want to ship the code. What's the message that you guys are sending here with this? Because I think productivity is one, simplification is the other, but developers are on the front lines and they're shipping in real time. This is a big part of the value proposition that you guys are bringing to the table. >> Yeah, the core message is that any developer and their code is welcome (laughs) and that we really want to support them, empower them and increase their velocity and the impact that they can have. And so, having things like the fact that the Docker CLI is natively integrated into the Azure experience is a really important aspect of making sure that developers are feeling welcome and feeling comfortable. And now the Docker CLI tools that are part of Docker Desktop have access to native commands that work well with Azure Container Instances. Azure Container Instances, if anybody is unfamiliar with that, is the simplest and fastest way to kind of set up containers in Azure, and so we believe that developers have really been looking for a really simple way to kind of get containers on Azure, and now we have that really consistent experience across our services and our tools. Visual Studio Code and Visual Studio extensions make full use of Docker Desktop and the Docker CLI so that they can get that combination of the productivity and the power that they're looking for. And in fact, we've integrated these as a design point since very early on in our partnership, and we've been partnering with Docker for quite a while.
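The native Docker CLI-to-Azure integration Amanda describes can be sketched as a short CLI session. This is a sketch only: it assumes the Docker Desktop ACI integration from this era plus an Azure subscription, and the context and resource-group names are hypothetical.

```shell
# Authenticate the Docker CLI against Azure (opens a browser login)
docker login azure

# Create a Docker context backed by Azure Container Instances
# (context and resource-group names are made up for illustration)
docker context create aci myacicontext --resource-group myResourceGroup

# Point the CLI at that context; familiar commands now run in Azure
docker context use myacicontext
docker run -d -p 80:80 nginx
```

Switching back to local development would just be `docker context use default`.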
>> Amanda, I want to ask you about the tool chain. We've heard about workflows, making it simpler. Bottom line, from a developer standpoint, what's the bottom line for me? What does this mean to me, the everyday developer out there? >> I really think it means your productivity on your terms. And so, Microsoft has been a developer company since the very beginning, with Bill Gates and GW-BASIC. And it's actually similar for Docker. They really have a developer-first point of view, which certainly speaks to my heart, and so one of the things that we're really trying to do with Docker is to make sure that we can create a workflow that's super productive at every stage of the developer experience, no matter which stack they're actually targeting. Whether they're targeting Node or Python, or dotNET and C#, or Java, we really want to make sure that we have a super simple experience, that you can actually initiate all of these commands, create Docker container images and use Docker Compose files, and then just kind of do that consistently as you're deploying it all the way up into your infrastructure in Azure. And the other thing that we really want to make sure of is that even post-deployment, you can actually inspect and diagnose these containers and images without having to leave the tool. So we also think about the process of writing the code, but also the process of kind of managing the code and remediating issues that might come up in production. And so we really want you to be able to look at containers that are deployed into Azure and make sure that they're running and healthy, and that if something's wrong, you can actually open up a shell and be in an interactive mode and be able to look at the logs from those containers and even inspect them to see environment variables or other details. >> Yeah, that's awesome. Writing code, managing code, and then you got to deploy, right?
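The post-deployment inspection loop Amanda describes above, checking health, reading logs, opening a shell, examining environment variables, maps onto standard Docker CLI commands. A minimal sketch, assuming a running container named `web` (the name is hypothetical):

```shell
docker ps                   # confirm the container is up and healthy
docker logs --tail 50 web   # read recent log output from the container
docker exec -it web sh      # open an interactive shell inside it
docker inspect web          # dump environment variables and other details
```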
So what I've been loving about the past generation of Agile is deployment's been fast; we deploy all the time. Scott, this brings up the ease of use, but you want to actually leverage automation. This is the trend that you want to get into. You want to make it easy to write code and manage code, but during the deployment phase, that's a big innovation. That's the last point, making that better and stronger. What's your thoughts on simplifying that? >> Well, as a big part of this partnership, John, that Docker and Microsoft embarked on, as you saw from the demo in the keynote, all within the Docker command line, the developer's able to, in two simple commands, deploy an app, defined in Compose, from their desktop to Azure. And there's a whole slew of automation and pre-configured smart defaults, or sane defaults, that have gone on behind the scenes, and it took a lot of hardcore engineering work on the part of Docker and Microsoft together to simplify that and make that easy. And that goes exactly to your point, which is, the simpler you can make it, the more you can abstract away the underlying plumbing and infrastructure, the faster devs can get their application from code to cloud. >> Scott, you've been a product person, now you're the CEO, but you have a product background, and you've been involved with the relationship with Microsoft for a long time. What's the state of the market right now? Microsoft has evolved; just look at the corporate performance; the shift to the cloud has been phenomenal. Now developers are getting more empowered; there's more demand and more pressure on developers to do more and more creativity. So you've seen this relationship evolve; what does it mean? >> Yeah, it's honestly a wonderful question, John, and I want to thank Amanda and the entire Microsoft team for being long-standing partners with us on this journey.
>> So it might not be known to everyone at today's event, but Microsoft came to the very first Dockercon event way back in June 2014, and I had the privilege of greeting them and welcoming them, and they were full on, ready to see what all the excitement about Docker was about, and really embraced it. And you mentioned kind of openness and Microsoft's growth over time in that dimension, and we think Docker, together with Microsoft, have really shown what an open developer community can do. That started back in 2014, and then we embarked on an open source collaboration around the Docker command line and the Docker engine, bringing that Docker engine from Linux and now moving it to Windows applications. And so all of a sudden, the promise of write once and use the same primitives, the same formats, the same command lines as you can with Linux onto Windows applications, we brought that promise to the market. And it's been an ongoing journey together with Microsoft: open standards based, developer-facing friendliness, ease of use, fast time to deploy. And this partnership that we announced yesterday and highlighted at the keynote is just another example of that ongoing relationship, laser-focused on developer productivity and helping teams build great apps. >> Why do you like Azure in the cloud for Docker? Can you share why? >> Well, as Amanda has been sharing, it's super focused on the needs of developers, to help them continue to stay focused on their apps and not have their cognitive load burdened by other aspects of getting their apps to the cloud, and Azure does a phenomenal job of simplifying and providing sane defaults out of the box. And as we've been talking about, it's also very open to partner integrations like the one we announced yesterday, that make it just easy for development teams to choose their tools and build their apps and deploy them onto Azure as quickly as possible.
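The "two simple commands" from the keynote demo can be sketched roughly as follows. Again a sketch, not the exact demo: it assumes Docker's ACI integration, a `docker-compose.yml` in the working directory, and an already-created ACI-backed context (the context name is hypothetical).

```shell
# Target the Azure-backed context (assumed to already exist)
docker context use myacicontext

# Deploy the app defined in ./docker-compose.yml to Azure Container Instances
docker compose up
```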
So it's a phenomenal platform for developers, and we're very excited and proud to partner with Microsoft on it. >> Amanda, on your side, I see Docker's got millions of developers; you guys have got millions of developers, even more. How do you see the developers on Microsoft's side engaging with Docker Desktop and Docker Hub? Where does it all fit? >> I mentioned earlier how I see Docker context really improving the way that individuals and teams work with their environments, in making sure that they're consistent, but I think this really comes together as we work with Docker Desktop and Docker Hub. When developers sign in to Docker Hub from Docker Desktop, everything kind of lights up, and so they can see all of the images in their repositories and they can also see the cloud environments that they're running them in. And so, once you sign into the Hub, you can see all the contexts that map to the logical environments they have access to, like dev and QA and maybe staging. And another use case that's really important is that we can access the same integration environment. So, I can have microservices that I've been working on, but I can also see microservices that my teammates have been working on, and the logs from the services that they've been working on, which I think is really great and certainly helps with team productivity. The other thing, too, is that this also really helps with hybrid cloud deployments, where you might have some on-premises hosted containers and you might have some that are hosted in a public cloud. And so you can see all of those things through your Docker Hub. >> Well, I got to say, I love the code to cloud tagline; I think that's very relevant and catchy. And I guess, to me, what I'm seeing, and I'd love to get your thoughts on this, Amanda, is you oversee a key part of Microsoft's business that's important for developers; just the vibe, and people are amped up right now.
I know people are tense, there's anxiety with the COVID-19 crisis, but I think people are generally agreeing that this is going to be a massive inflection point, with more headroom needed for developers to accelerate their value on the front lines. What's your personal take on this? You've seen these waves before, but now in this time, what are you most excited about? What are you optimistic about? What's your view on the opportunities? Can you share your thoughts? Because people are going to get back to work. They're working now remotely, but if we go back to a hybrid world, they're going to be jamming on projects. >> Yeah, for sure, but people are jamming on projects right now, and I think that in a lot of ways, developers are first responders in that they are... Developers are always trying to support somebody else. We're trying to support somebody else's workflow, and so we have examples of people who are creating new remote systems to be able to schedule meetings in hospitals for the doctors who are actually the first responders taking care of patients, but at the end of the day, it's the developer who's actually creating that solution. And so we're being called to duty right now, and so we need to make sure that we're actually there to support the needs of our users and that we're basically cranking on code as fast as we can. And to be able to do that, we have to make sure that every developer is empowered and they can move quickly, but also that they can collaborate really quickly. And so I think that Docker Hub and Docker kind of help you ensure that you have that consistency, but you also have that connection to the infrastructure that's hosted by your organization. >> I think you nailed it; that's amazing insight. The current situation in the community matters, because there's a lot of frontline work being done, to your point, but then we've got to rebuild, and the modernization is happening as well coming out of this, so there's going to be that.
And there's a lot of camaraderie going on, and massive community involvement; I'm seeing more of it. The empathy, but also now there's going to be the building, the creation, the new creation. So, Scott, this is going to call for more simplicity and to abstract away the complexities. This is the core issue. >> Well, that's exactly right. It is time to build, and we're going to build our way out of this, and it is the community that's responding. And so in some sense, Microsoft and Docker are there to support that community energy and give them the tools to go and identify and have an impact as quickly as possible. I referenced in the keynote the completely bottoms-up, organic adoption of Docker Desktop and Docker Hub in racing to provide solutions against the COVID-19 virus. It's a war against this pandemic that is heavily dependent on applications and data, and there are over 200 community projects on Docker Hub today with tools and containers and data analysis, all in service to the COVID-19 battle that's being fought. And then as you said, John, as we get through to the other side, there are entire industries that are completely rethinking their approach, that were largely offline before but now see the imperative and the importance of going online. And that tectonic shift, nearly overnight, of offline-to-online behavior in commerce and social and going down the list, requires new application development. And what I'm very pleased about with this partnership is that together, we're giving developers the tools to really take advantage of that opportunity and go and build our way out of it. >> Well, Scott, congratulations on a great extended partnership with Microsoft and the Docker brand. I'm a big fan from day one. I know you guys have pivoted onto a new trajectory, which is phenomenal: very community oriented, very open source, very open. So congratulations on that. Amanda, thanks for spending the time to come on. I'll give you the final word.
Take a minute to talk about what's new at Microsoft for the folks that know Microsoft, that know they've had a developer mindset from day one. Cloud is exploding, code to cloud. What's the update? What's the new narrative? What should people know about Microsoft and its developer community? Can you share some data for the folks that aren't in the community and might want to join, or the folks in the community who want to get an update? >> Yeah, it's a great question. Right now, I think we are all really focused on making sure that we can empower developers throughout the world, and that includes both those who are building solutions for their organizations today, but also, I think we're going to end up with a ton of new developers over this next period who are really entering the workforce and learning to create digital solutions. Overall, there's a massive developer shortage across the world. There's so much opportunity for developers to kind of address a lot of the needs that we're seeing out of organizations, again, across the world. And so I think it's just a really exciting time to be a developer, and my only hope is that, basically, we're building tools that actually enable them to solve the problem. >> Awesome insight, and thank you so much for your time. Code to cloud; developers are cranking away, they're the first responders, going to take care of business and then continue to build out the modern applications. And when you have a crisis like this, people cut right through the noise and get right to the tools that matter. So thanks for sharing the Microsoft-Docker partnership and the things that you guys are working on together. Thanks for your time. >> Thank you. >> Thank you. >> Okay, this is theCUBE's coverage. We are at Dockercon 2020 Digital. This is theCUBE Virtual. I'm John Furrier, bringing you all the action and more coverage. Stay with us for more Dockercon Virtual after this short break. (gentle music)

Published Date: May 29, 2020

Amanda Silver, Microsoft & Scott Johnston, Docker | DockerCon Live 2020


 

>>From around the globe. It's the view with digital coverage of Docker con live 2020 brought to you by Docker and its ecosystem partners. >>LeBron. Welcome back to DockerCon 2020 hashtag Docker 20 this is the cube and Dockers coverage of Docker con 20 I'm Sean for you and the Palo Alto studios with our quarantine crew. We've got a great interview segment here in big news around developer workflow code to cloud. We've got Amanda silver corporate vice president, product for developer tools at Microsoft and Scott Johnson, the CEO of Docker. Scott had a great keynote talking about this relationship news has hit about the extension of the Microsoft partnership. So congratulations Amanda. Welcome to the cube. >>Thanks for having me. >>Amanda, tell us a bit about what your role is at Microsoft. You guys are well known in the developer community to develop an ecosystem when even when I was in college going way back, very modern. Now cloud is, is the key code to cloud. That's the theme. Tell us about your role at Microsoft. >>Yeah. So I basically run the product, uh, product design and user research team that works on our developer tools that Microsoft and so that includes the visual studio product as well as visual studio code. Um, that's become pretty popular in the last few years, but it also includes things like the.net runtime and the TypeScript programming language as well as all of our Azure tooling. >>What's your thoughts on the relationship with Docker? I'll show you the news extension of an existing relationship. Microsoft's got a lot of tools. You've got a lot of things you guys are doing, bringing the cloud to every business. Tell us about your thoughts on this relationship with Donker. >>Yeah, well we're very excited about the partnership for sure. Um, you know, our goal is really to make sure that Azure is a fantastic place where all developers can kind of bring their code and they feel welcome. They feel natural. 
Uh, we really see a unique opportunity to make the experience really great for Docker, for the Docker community by creating more integrated and seamless experience across Docker, desktop windows and visual studio. And we really appreciate how, how Docker is kind of, you know, supported our windows ecosystem to run in Docker as well. >>Scott, this relationship and an extension with Microsoft is really, uh, I think impressive and also notable because Microsoft's got so many, so many tools out there and they have so successful with Azure. You guys have been so successful with your developer community, but this also is reflective of the new Docker. Uh, could you share your thoughts on how this partnership with Microsoft extending the way it is with the growth of the cloud is a reflection of the new Docker? >>Yeah, absolutely. John's great question. One of the things that we've really been focused on since November is fully embracing the ecosystem and all the partnerships and all the possibilities of that ecosystem. And part of that is just reality. That we're a smaller company now and we can't do it all, nor should we do it all. Part of us. The reality that developers love voice and no one's gonna change their minds on choice. And third is just acknowledging that there's so much creativity and so much energy. The four walls of Docker that we'd be building, not the big advantage of that and welcome it and embrace it and provide that as a phenomenal experience part of Alfred's. So this is a great example of that. The sneak partnership we announced last week is a grant to have that and you're going to see many more of uh, partnerships like this going forward that are reflective of exactly this point. >>You've been a visionary on the product side of the interviewed before. Also deploying is more important than ever. That whole workflow, simplifying, it's not getting complex. People want choice, building code, managing code, deploying code. 
This has been a big focus of yours. Can you just share your thoughts on where Microsoft comes in because they got stuff too. You've got stuff, it all works together. What's your thoughts? >>Right? So it needs to work together, right? Because developers want to focus on their app. They don't want to focus on duct taping and springing together different siloed pools, right? So you can see in the demo and you'll see in, uh, demonstrations later throughout the conference. Just the seamless experience that a developer gets in the document man line inter-operating with visual studio code with the Docker command line and then deploying to Azure and what's what's wonderful about the partnership is that both parties put real engineering effort and design effort into making it a great experience. So a lot of the complexities around the figuration around default settings around uh, security, user management, all of that is abstracted out and taken away from the developer so they can focus on applications and getting those applications deployed to the proudest quickly as possible. Getting their app from code to cloud is the wok word or the or the call to action for this partnership. And we think we really hit it out of the park with the integration that you saw, >>Great validation and a critical part of the workflow. You guys have been part of Amanda, we're living in a time we're doing these remote interviews. The coven crisis has shown the productivity gains of working at home and working in sheltering in place, but also as highlighted, the focus of developers mainly who have also worked at home. They've kind of used to this. Do you see the rigs? I saw her at Microsoft build some amazing rigs from the studio. So these guys streaming their code demos. This is, um, a Cambrin explosion of new kinds of productivity. And yet the world's getting more complex at scale. This is what cloud does. What's your thoughts on this? Cause the tooling is more tools than ever, right? 
So I still gotta deploy code. It's gotta be more agile. It's gotta be faster. It's gotta be at scale. This is what you guys believe in. What's your thinking on all these tooling and abstraction layers and the end of the day, don't you still got to do their job? >>Yeah, well, absolutely. And now, even more than ever. I mean, I think we've, we've certainly seen over the past few months, uh, uh, a more rapid acceleration of digital transformation. And it's really happened in the past few years. Uh, you know, paper processes are now becoming digit digital processes. All of a sudden, you know, everybody needs to work and learn from home. And so there's just this rapid acceleration to kind of move everything to support our new remote lifestyle. Um, but even more so, you know, we now have remote development teams actually working from home as well in a variety of different kinds of, uh, environments. Whether they're using their own personal machine to connect to their infrastructure or they're using a work issued machine. You know, it's more important than ever that developers are productive, but they are productive as a team. Right? Software is a team sport. >>We all need to be able to work together and to be able to collaborate. And one of the most important aspects of agility for developers is consistency. And, uh, what Docker really enables is, uh, with, with containerization is to make the infrastructure consistent and repeatable so that as developers are moving through the life cycle from their local, local dev desktop and developing on their local desktop to a test environment and to staging and to production, it's really, it's infrastructure of or, or developers as well as operations. And so it's that, that infrastructure that's completely customizable for what the developer's operating system of choices, what their app stack is, all of those dependencies kind of running together. 
And so that's what really enables developers to be really agile and have a really, really fast iteration cycle but also to have that consistency across all of their development team. And you know, we, we now need to think about things like how are we actually going to bring on interns for the summer, uh, and make sure that they can actually set up their developer boxes in a consistent way that we can actually support them. And things like Docker really helped with that >>As your container instances and a visual studio cloud that you guys have has had great success. Um, there's a mix and match formula here. At the end of the day, developers want to ship the code. What's the message that you guys are sending here with this? Because I think productivity is one, simplification is the other, but as developers on the front lines and they're shipping in real time, this is a big part of the value proposition that you guys are bringing to the table. >>Yeah, I mean the, the core message is that any developer and their code is welcome, uh, and that we really want to support them and power them and increase their velocity and the impact that they can have. Um, and so, you know, having things like the fact that the Docker CLI is natively integrated into the Azure experience, uh, is a really important aspect of making sure that developers are feeling welcome and feeling comfortable. Um, and now that the Docker CLI tools are, that are part of Docker desktop, have access to native commands that work well with Azure container instances. Uh, Azure container instances, if anybody's on familiar with that, uh, is the simplest and fastest way to kind of set up containers and Azure. And, and so we believe that developers have really been looking for a really simple way to kind of get containers on Azure. 
And now we that really consistent experience across our service services and our tools and visual studio code and visual studio extensions make full use of Docker desktop and the Docker CLI so that they can get that combination of the productivity and the power that they're looking for. And in fact, we've, we've integrated these as a design point since very early on in our partnership when we've been partnering with, with Docker for quite a while. >>Amanda, I want to ask you about the, the, the, the tool chain. We've heard about workflows, making it simpler, bottom line, from a developer standpoint, what's the bottom line for me? What does this mean to me? Uh, every day developer out there? >>Um, I, I mean, I really think it means you know, your productivity on your terms. Um, and so, you know, Microsoft has been a developer company since the very, very beginning with, you know, bill Gates and, and, uh, GW basic. Um, and it's actually similar for Docker, right? They really have a developer first point of view, uh, which certainly speaks to my heart. And so one of the things that we're really trying to do with, with Docker is to make sure that we can create a workflow that's super productive at every stage of the developer experience, no matter which stack they're actually targeting, whether there's targeting node or Python or.net and C-sharp or Java. Uh, we really want to make sure that we have a super simple experience that you can actually initiate all of these commands, create, you know, Docker container images and use the compose Docker compose files. >>Um, and then, you know, just kind of do that consistently as you're deploying it all the way up into your infrastructure in Azure. And the other thing that we really want to make sure is that that even post deployment, you can actually inspect and diagnose these containers and images without having to leave the tool. 
So we also think about the process of writing the code, but also the process of managing the code and remediating issues that might come up in production. And so we really want you to be able to look at containers that are deployed into Azure and make sure that they're running and healthy, and if something's wrong, that you can actually open up a shell in an interactive mode and be able to look at the logs from those containers and even inspect things like environment variables or other details. >> Yeah, that's awesome. You know, writing code, managing code, and then you've got to deploy, right? What I've been loving about this past generation of agile is that deployment's been fast, deploying all the time. Scott, this brings up the ease of use, but you want to actually leverage automation. This is the trend you want to get in: you want to make it easy to write code and manage code, but the deployment phase, that's a big innovation, that's the last point. Making that better and stronger, what's your thoughts on simplifying that? >> So that was a big part of this partnership, John, that Docker and Microsoft embarked on, and as you saw from the demo in the keynote, all within the command line, the developer is able to, in two simple commands, deploy an app defined in Compose from the desktop to Azure. And there's a whole slew of automation and pre-configured smart defaults, or sane defaults, that have gone on behind the scenes, and that took a lot of hardcore engineering work on the part of Docker and Microsoft together to simplify that and make it easy. And that goes exactly to your point: the simpler you can make it, the more you can abstract away the underlying plumbing and infrastructure, the faster devs can get their application from code to cloud.
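As a concrete sketch of the two-command flow described here, this is roughly how the 2020 Docker Desktop ACI integration worked from the command line. The context name is invented, and the commands are echoed as a dry run, since actually executing them requires Docker Desktop plus Azure credentials:

```shell
# Dry-run sketch of the Docker-to-Azure flow (context name "myacicontext" is hypothetical).
# On a machine with Docker Desktop and an Azure login, drop the run() wrapper and
# invoke the commands directly.
run() { echo "+ $*"; }

run docker login azure                      # authenticate against Azure
run docker context create aci myacicontext  # bind a Docker context to Azure Container Instances
run docker context use myacicontext         # subsequent commands now target that environment
run docker compose up                       # deploy the Compose-defined app straight to ACI
run docker ps                               # lists containers running in ACI, not locally
```

The same `docker context` mechanism is what maps CLI commands to logical environments like dev, QA, or staging, as discussed later in the conversation.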
>> Scott, you've been a product person and a CEO, so you have a product background, and you've been involved with the relationship with Microsoft for a long time. What's the state of the market right now? I mean, obviously Microsoft has evolved; look at just the corporate performance. The shift to the cloud has been phenomenal. Now developers are getting more empowered, and there's more demand and more pressure on developers to do more and more, more creativity. So you've seen this relationship evolve. What does it mean? >> Yeah, it's honestly a wonderful question, John, and I want to thank Amanda and the entire Microsoft team for being long-standing partners with us on this journey. It might not be known to everyone at today's event, but Microsoft came to the very first DockerCon event way back in June 2014, and I had the privilege of greeting them and welcoming them, and they were full-on ready to see what all the excitement about Docker was about and really embrace it. And you mentioned openness and Microsoft's growth over time in that dimension, and we think Docker together with Microsoft have really shown what an open developer community can do. That started back in 2014, and then we embarked on an open source collaboration around the Docker command line and the Docker engine, bringing that Docker engine from Linux over to Windows applications. And so all of a sudden, the promise of write once, using the same primitives, the same formats, the same command lines as you can with Linux, was brought to Windows applications. We brought that promise to the market, and it's been an ongoing journey together with Microsoft: open, standards-based, developer-facing friendliness, ease of use, fast time to deploy.
And this partnership that we announced yesterday and highlighted at the keynote is just another example of that ongoing relationship, laser-focused on developer productivity and helping teams build great apps. >> Why do you like Azure in the cloud for Docker? Can you share why? >> Well, as Amanda has been sharing, it's super focused on the needs of developers, helping them stay focused on their apps and not have their cognitive load burdened by other aspects of getting their apps to the cloud. Azure does a phenomenal job of simplifying and providing sane defaults out of the box, and as we've been talking about, it's also very open to partnerships like the one we announced yesterday and highlighted, to make it just easy for development teams to choose their tools, build their apps, and deploy them onto Azure as easily as possible. So it's a phenomenal platform for developers, and we're very excited and proud to partner with Microsoft on it. >> Amanda, on your side, Docker's got millions of developers, and you guys have got millions of developers, even more. How do you see developers on the Microsoft side engaging with Docker Desktop and Docker Hub? Where does it all fit? >> I think it's a great question. I mentioned earlier how I see Docker contexts really improving the way that individuals and teams work with their environments and making sure that they're consistent, but I think this really comes together as we work with Docker Desktop and Docker Hub. When developers sign into Docker Hub from Docker Desktop, everything kind of lights up, and so they can see all of the images in their repositories, and they can also see the cloud environments they're running them in.
And so, you know, once you sign into the hub, you can see all the contexts that map to the logical environments they have access to, like dev and QA and maybe staging. And another use case that's really important is that we can access the same integration environment. So I could have microservices that I've been working on, but I can also see the microservices my teammates have been working on, and the logs from the services they've been working on, which I think is really, really great and certainly helps with team productivity. The other thing is that this also really helps with hybrid cloud deployments, right? Where you might have some on-premises hosted containers and some that are hosted in a public cloud, and you can see all of those things through your Docker Hub. >> Well, I've got to say, I love the code to cloud tagline; I think that's very relevant and catchy. And I guess to me what I'm seeing, and I'd love to get your thoughts, Amanda, as you oversee a key part of Microsoft's business that's important for developers: the vibe is that people are amped up right now. I know there's tension and anxiety with the COVID-19 crisis, but I think people generally agree that this is going to be a massive inflection point, with more headroom needed for developers to accelerate their value on the front lines. What's your personal take on this? You've seen these waves before, but now, in this time, what are you most excited about? What are you optimistic about? What's your view on the opportunities? Can you share your thoughts? Because people are going to get back to work, or they're working now remotely, but when we go back to a hybrid world, they're going to be jamming on projects. >> Yeah, for sure. But I mean, people are jamming on projects right now.
And I think that, in a lot of ways, developers are our first responders, in that developers are always trying to support somebody else, right? We're trying to support somebody else's workflow. So we have examples of people who are creating new remote systems to be able to schedule meetings in hospitals for the doctors who are actually the first responders taking care of patients. But at the end of the day, it's the developer who's actually creating that solution, right? And so we're being called to duty right now, and we need to make sure that we're actually there to support the needs of our users and that we're basically cranking on code as fast as we can. And to be able to do that, we have to make sure that every developer is empowered and can move quickly, but also that they can collaborate freely. And so I think that Docker Hub and Docker Desktop kind of help you ensure that you have that consistency, but also that connection to the infrastructure that's hosted by your organization. >> I think you nailed it, amazing insight. And I think the current situation in the community matters, because there's a lot of frontline work being done, to your point, but then we've got to rebuild, and the modernization is happening as well coming out of this. So there's going to be that, and there's a lot of camaraderie going on and massive community involvement. I'm seeing more of the empathy, but also now there's going to be the building, the creation, the new creation. So Scott, this is going to call for more simplicity and abstracting away the complexities. This is the core issue. >> Well, that's exactly right, and it is time to build, right? We're going to build our way out of this, and it is the community that's responding.
And so in some sense, Microsoft and Docker are there to support that community energy and give them the tools to go and identify and have an impact as quickly as possible. We referenced in the keynote the completely bottoms-up, organic adoption of Docker Desktop and Docker Hub in racing to provide solutions against the COVID-19 virus. It's a war against this pandemic that is heavily dependent on applications and data, and there are over 200 community projects on Docker Hub today where you've got tools and containers and data analysis, all in service of the COVID-19 battle that's being fought. And then as you said, John, as we get through this to the other side, there are entire industries completely rethinking their approach, industries that were largely offline before and that now see the imperative and the importance of going online. That tectonic shift, nearly overnight, of offline to online behavior in commerce and social and on down the list requires new application development. And what I'm very pleased about with this partnership is that together we're giving developers the tools to really take advantage of that opportunity and go and build our way out of it. >> Well, Scott, congratulations on a great extended partnership with Microsoft, and on the Docker brand. You know, I'm a big fan from day one. I know you guys have pivoted onto a new trajectory which is very community-oriented, very open source, very open, so congratulations on that. Amanda, thanks for spending the time to come on. I'll give you the final word. Take a minute to talk about what's new at Microsoft. The folks that know Microsoft know it's had a developer mindset from day one; cloud is exploding, code to cloud. What's the update? What's the new narrative? What should people know about Microsoft and the developer community?
Can you share some data for the folks that aren't in the community, or who might want to join, and for folks in the community who want an update? >> Yeah, it's a great question. I mean, right now I think we are all really focused on making sure that we can empower developers throughout the world, and that includes both those who are building solutions for their organizations today, but also, I think we're going to end up with a ton of new developers over this next period who are really entering the workforce and learning to create digital solutions overall. There's a massive developer shortage across the world, and there's so much opportunity for developers to address a lot of the needs that we're seeing from organizations, again, across the world. So I think it's just a really exciting time to be a developer, and my only hope is that, basically, we're building tools that actually enable them to solve problems. >> Awesome insight, and thank you so much for your time. Code to cloud: developers are cranking away, the first responders are going to take care of business and then continue to build out the modern applications. And when you have a crisis like this, people cut right through the noise and get right to the tools that matter. So thanks for sharing the Microsoft-Docker partnership and the things that you guys are working on together. Thanks for your time. Okay, this is theCUBE's coverage of DockerCon 2020 digital, theCUBE Virtual. I'm John Furrier, bringing you all the action. More coverage, stay with us for more DockerCon virtual after this short break.

Published Date : May 21 2020


Rich Gaston, Micro Focus | Virtual Vertica BDC 2020


 

(upbeat music) >> Announcer: It's theCUBE, covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Welcome back to the Vertica Virtual Big Data Conference, BDC 2020. You know, it was supposed to be a physical event in Boston at the Encore. Vertica pivoted to a digital event, and we're pleased that theCUBE could participate, because we've participated in every BDC since the inception. Rich Gaston joins us this year; he's the global solutions architect for security, risk, and governance at Micro Focus. Rich, thanks for coming on, good to see you. >> Hey, thank you very much for having me. >> So you've got a chewy title, man. You've got a lot of stuff, a lot of hairy things in there. But maybe you can talk about your role as an architect in those spaces. >> Sure, absolutely. We handle a lot of different requests from the Global 2000 type of organization that will try to move various business processes, various application systems and databases, into new realms. Whether they're looking at opening up new business opportunities, whether they're looking at sharing data with partners securely, they might be migrating to cloud applications and doing a migration into a hybrid IT architecture. So we will take those large organizations and their existing installed base of technical platforms, data, and users, and try to chart a course to the future using Micro Focus technologies, but also partnering with other third parties out there in the ecosystem. So we have large, solid relationships with the big cloud vendors, and also with a lot of the big database vendors. Vertica's our in-house solution for big data and analytics, and we are one of the first integrated data security solutions with Vertica. We've had great success out in the customer base with Vertica, as organizations have tried to add another layer of security around their data.
So what we will try to emphasize is an enterprise-wide data security approach, where you're taking a look at data as it flows throughout the enterprise, from its inception, where it's created and ingested, all the way through the utilization of that data, and then on to the other uses, where we might be doing shared analytics with third parties. How do we do that in a secure way that maintains regulatory compliance, and that also keeps our company safe against data breach? >> A lot has changed since the early days of big data, certainly since the inception of Vertica. You know, it used to be big data, and everyone was rushing to figure it out. You had a lot of skunkworks going on, and it was just like, figure out data. And then as organizations began to figure it out, they realized, wow, who's governing this stuff? A lot of shadow IT was going on, and then the CIO was called in to sort of rein that back in. As well, you know, with all kinds of whatever, fake news, the hacking of elections, and so forth, the sense of heightened security has gone up dramatically. So I wonder if you can talk about the changes that have occurred in the last several years, and how you guys are responding. >> You know, it's a great question, and it's been an amazing journey, because I was walking down the street here in my hometown of San Francisco at Christmastime years ago, and I got a call from my bank, and they said, we want to inform you your card has been breached by a hack at Target Corporation, and they got your card, and they also got your PIN. And so you're going to need to get a new card; we're going to cancel this. Do you need some cash? I said, yeah, it's Christmastime, so I need to do some shopping. And so they worked with me to make sure that I could get that cash, and then get the new card and the new PIN. And being a professional on the inside of the industry, I really questioned, how did they get the PIN? Tell me more about this.
And they said, well, we don't know the details, but you know, I'm sure you'll find out. And in fact, we did find out a lot about that breach and what it did to Target: a $250 million immediate impact, CIO gone, CEO gone. This was a big one in the industry, and it really woke a lot of people up to the different types of threats on the data that we're facing with our largest organizations. Not just financial data; medical data, personal data of all kinds. Flash forward to the Cambridge Analytica scandal, where Facebook is handing off data in a partnership agreement with someone they think they can trust, and then that data is misused. And who's going to end up paying the cost of that? Well, it's going to be Facebook, to the tune of about five billion on that, plus some other fines that will come along, and other costs they're facing. So what we've seen over the course of the past several years has been an evolution from data breach making the headlines, to our customers coming to us and saying, help us neutralize the threat of this breach. Help us mitigate this risk and manage this risk. What do we need to be doing? What are the best practices in the industry? Clearly what we're doing on the perimeter security, the application security, and the platform security is not enough. We continue to have breaches, and we are the experts at that answer. The follow-on fascinating piece has been the regulators jumping in. First in Europe, but now we see California enacting a law just this year that came into place and is very stringent, with a lot of deep, far-reaching protections around the personal data of consumers. Look at jurisdictions like Australia, where fiduciary responsibility now goes to the board of directors. That's getting attention. For a regulated entity in Australia, if you're on the board of directors, you'd better have a plan for data security.
And if there is a breach, you need to follow protocols, or you personally will be liable. And that is a sea change that we're seeing out in the industry. So we're getting a lot of attention on both how we neutralize the risk of breach, and also how we can use software tools to maintain and support regulatory compliance efforts as we work with, say, the largest money center bank out of New York. I've watched their audit year after year, and it's gotten more and more stringent, more and more specific: tell me more about this aspect of data security, tell me more about encryption, tell me more about key management. The auditors are getting better. And we're supporting our customers in that journey to provide better security for the data, and a better operational environment for them to be able to roll new services out with confidence that they're not going to get breached. With that confidence, they're not going to have a regulatory compliance fine or a nightmare in the press. And these are the major drivers that help us and Vertica sell together into large organizations, to say, let's add some defense in depth to your data. And that's really a key concept in the security field, this concept of defense in depth. We apply it to the data itself by changing the actual data element: Rich Gaston, I will change that name into ciphertext, and that then yields a whole bunch of benefits throughout the organization as we deal with the lifecycle of that data. >> Okay, so a couple of things I want to mention there. First of all, this is totally a board-level topic; every board of directors should really have cyber and security as part of its agenda, and it does, for the reasons that you mentioned. The other is, GDPR got it all started. I guess it was May 2018 when the penalties went into effect, and that just created a whole domino effect. You mentioned California enacting its own laws, which in some cases are even more stringent.
And you're seeing this all over the world. So I think one of the questions I have is, how do you approach all this variability? It seems to me you can't just take a narrow approach; you have to have an end-to-end perspective on governance and risk and security and the like. So are you able to do that? And if so, how so? >> Absolutely. I think one of the key areas in big data in particular has been the concern that we have a schema, we have database tables, we have columns, and we have data, but we're not exactly sure what's in there. We have application developers that have been given sandbox space in our clusters, and what are they putting in there? So can we discover that data? We have the tools within Micro Focus to discover sensitive data within your data stores, but we can also protect that data, and then we'll track it. And what we really find is that when you protect, let's say, five billion rows of a customer database, we can now know what is being done with that data on a very fine-grained and granular basis, to say that this business process has a justified need to see the data in the clear; we're going to give them that authorization, and they can decrypt the data. SecureData, my product, knows about that and tracks it, and can report on it to say, at this date and time, Rich Gaston did the following thing to be able to pull data in the clear. And that can then be used to support regulatory compliance responses, and then audit, to say who really has access to this, and what really is that data? Then in GDPR, we're getting down into much more fine-grained decisions around who can get access to the data and who cannot. And organizations are scrambling.
One of the funny conversations I had a couple of years ago, as GDPR came into place, was that a couple of customers seemed to be taking a sort of brute-force approach of, we're going to move our analytics and all of our data to European data centers, because we believe that if we do this in the U.S., we're going to violate their law, but if we do it all in Europe, we'll be okay. And that simply was a short-term way of thinking about it. You really can't be moving your data around the globe to try to satisfy a particular jurisdiction. You have to apply the controls and the policies and put the software layers in place to make sure that anywhere someone wants to get that data, we have the ability to look at that transaction and say it is or is not authorized, and that we have a rock-solid way of approaching that for audit, for compliance, and for risk management. And once you do that, you really open up the organization to go back and use those tools the way they were meant to be used. We can use Vertica for AI, we can use Vertica for machine learning, and for all kinds of really cool use cases being done with IoT, and other kinds of cases we're seeing that require data being managed at scale, but with security. And that's the challenge, I think, in the current era: how do we do this in an elegant way? How do we do it in a way that's future-proof when CCPA comes in? How can I lay this on as another layer of audit responsibility and control around my data, so that I can satisfy those regulators as well as the folks over in Europe and Singapore and China and Turkey and Australia? It goes on and on; each jurisdiction out there is now requiring audit. And like I mentioned, the audits are getting tougher. And if you read the news, the GDPR example I think is classic. They told us in 2016, it's coming. They told us in 2018, it's here.
They're telling us in 2020, we're serious about this; here are the fines, and you'd better be aware that we're coming to audit you. And when we audit you, we're going to be asking some tough questions. If you can't answer those in a timely manner, then you're going to be facing some serious consequences, and I think that's what's getting attention. >> Yeah, so the whole big data thing started with Hadoop, and Hadoop is open, it's distributed, and it just created a real governance challenge. I want to talk about your solutions in this space. Can you tell us more about Micro Focus Voltage? I want to understand what it is, then get into sort of how it works, and then I really want to understand how it's applied to Vertica. >> Yeah, absolutely, that's a great question. First of all, we were the originators of format-preserving encryption. We developed some of the core basic research out of Stanford University that then became the company Voltage, and that built a brand name that we still apply even though we're part of Micro Focus. So the lineage goes back to Dr. Boneh down at Stanford, one of my buddies there, and he's still at it, doing amazing work in cryptography and moving the industry, and the science of cryptography, forward. It's a very deep science, and we all want to have it peer-reviewed, we all want it to be attacked, we all want it proved secure; we're not selling something to a major money center bank that is potentially risky because it's obscure and private. So we have an open standard. For six years, we worked with the Department of Commerce to get our standard approved by NIST, the National Institute of Standards and Technology. They initially said, well, AES-256 is going to be fine. And we said, well, it's fine for certain use cases, but for your database, you don't want to change your schema, and you don't want this increase in storage costs. What we want is format-preserving encryption.
And what that does is turn my name, Rich, into a four-letter ciphertext. It can be reversed. The mathematics of that are fascinating, really deep and amazing, but we make it very simple for the end customer because we produce APIs. These application programming interfaces can be accessed by applications in C or Java, C#, other languages, but they can also be accessed in a microservice manner via REST and web service APIs. And that's the core of our technical platform. We have an appliance-based approach: we take a secure data appliance, we'll put it on-prem, we'll make 50 of them if you're a big company like Verizon and you need them co-located around the globe, no problem; we can scale to the largest enterprise needs. But our typical customer will install several appliances and get going with a couple of environments, like QA and prod, to be able to start getting encryption going inside their organization. Once the appliances are set up and installed, it takes just a couple of days of work for a typical technical staff to get done. Then you're up and running and able to plug in the clients. Now, what are the clients? Vertica's a huge one. Vertica's one of our most powerful client endpoints, because you're able to take that API and put it inside Vertica, and it's all open on the internet; you can go look at Vertica.com/secure data and get all of our documentation on it. You understand how to use it very quickly. The APIs are super simple; they require three parameter inputs. It's a really basic approach to being able to protect and access data. And then it gets very deep from there, because you have data like credit card numbers, very different from a street address, and we want to take a different approach to that. We have data like birthdate, and we want to be able to do analytics on dates. We have deep approaches to managing analytics on protected data, like dates, without having to put it in the clear.
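A toy sketch of what a three-parameter, format-preserving protect/access API can look like. To be clear, this is not FF1 and not Voltage's actual API: the function names and parameters are invented, and the keyed-shift cipher is trivially breakable, used only to show what "the ciphertext keeps the format" means in practice.

```python
# Toy format-preserving "encryption": a reversible keyed shift over lowercase
# letters, so a 4-letter name maps to another 4-letter lowercase string.
# NOT secure, NOT FF1 -- illustration of the API shape only.
import hmac, hashlib, string

ALPHABET = string.ascii_lowercase

def _keystream(key: bytes, tweak: bytes, n: int):
    # Derive n pseudo-random shift values from the key via HMAC-SHA256.
    digest = hmac.new(key, tweak, hashlib.sha256).digest()
    return [digest[i % len(digest)] for i in range(n)]

def protect(data: str, fmt: str, key: bytes) -> str:
    """Encrypt `data` so the ciphertext keeps the same length and alphabet."""
    assert fmt == "alpha" and all(c in ALPHABET for c in data)
    shifts = _keystream(key, b"demo-tweak", len(data))
    return "".join(ALPHABET[(ALPHABET.index(c) + s) % 26]
                   for c, s in zip(data, shifts))

def access(data: str, fmt: str, key: bytes) -> str:
    """Reverse of protect()."""
    shifts = _keystream(key, b"demo-tweak", len(data))
    return "".join(ALPHABET[(ALPHABET.index(c) - s) % 26]
                   for c, s in zip(data, shifts))

ct = protect("rich", "alpha", b"demo-key")
print(len(ct), ct.isalpha())             # 4 True -- format preserved
print(access(ct, "alpha", b"demo-key"))  # rich
```

Because the ciphertext has the same shape as the plaintext, a database column's schema and storage footprint are unchanged, which is the property the real FF1 standard provides with actual cryptographic strength.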
So we've maintained a lead in the industry in terms of being an innovator of the FF1 standard; what we call FF1 is format-preserving encryption. We license that to others in the industry, per our NIST agreement, so we're the owner and operator of it, and others use our technology. And we're the original founders of that, and so we continue to lead the industry by adding additional capabilities on top of FF1 that really differentiate us from our competitors. Then you look at our API presence. We can definitely run in Hadoop, but we also run in open systems. We run on mainframe, we run on mobile. So anywhere in the enterprise or in the cloud, anywhere you want to be able to put secure data and be able to access the protected data, we're going to be there and be able to support you there. >> Okay, so let's say, I've talked to a lot of customers this week, and let's say I'm running in Eon mode, and I've got some workload running in AWS and some on-prem. I'm going to take an appliance, or multiple appliances, and put it on-prem, but that will also secure my cloud workloads as part of a sort of shared responsibility model, for example? Or how does that work? >> No, that's absolutely correct. We're really flexible in that we can run on-prem or in the cloud as far as our crypto engine goes. The key management is really hard stuff, cryptography is really hard stuff, and we take care of all of that; we've baked it all in, and we can run it for you as a service, either in the cloud or on-prem on small VMs, so it's a really lightweight footprint for running the infrastructure. When I look at an organization like you just described, it's a classic example of where we fit, because we will be able to protect that data. Let's say you're ingesting it from a third party or from an operational system; you have a website that collects customer data. Someone has now registered as a new customer, and they're going to do e-commerce with you.
We'll take that data, and we'll protect it right at the point of capture. We can then flow it through the organization and decrypt it at will on any platform you have that you need us to be able to operate on. So let's say you want to take that customer data from the operational transaction system, throw it into Eon, throw it into the cloud, and do analytics there on that data, and we may need some decryption. We can place secure data wherever you want to be able to service that use case. In most cases, what you're doing is a simple, tiny little atomic fetch across a protected tunnel, your typical TLS tunnel. And once that key is cached within our client, we maintain all that technology for you. You don't have to know about key management or caching; we're good at that, that's our job. And then you'll be able to make those API calls to access or protect the data, and apply the authorization and authentication controls that you need to be able to service your security requirements. So you might have third parties having access to your Vertica clusters. That is a special need, and we have that ability to say employees can get X and the third party can get Y, and that's a really interesting use case we're seeing for shared analytics on the internet now. >> Yeah for sure, so you can set the policy how you want. You know, I have to ask you: in a perfect world, I would encrypt everything. But part of the reason why people don't is because of performance concerns. Can you talk about, and you touched upon it I think with your sort of atomic fetch, but can you talk about, and I know it's Vertica, it's a Ferrari, etc., but anything that slows it down is going to be a concern. Are customers concerned about that? What are the performance implications of running encryption on Vertica? >> Great question there as well, and what we see is that we want to be able to apply scale where it's needed.
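The employees-get-X, third-parties-get-Y idea above can be sketched as a small policy layer in front of decryption. Everything here is hypothetical: the role names, the masking rule, and the `decrypt` callable stand in for whatever the real product enforces.

```python
# Hypothetical policy table: employees may see full plaintext,
# third parties only get a masked view. Illustrative names only.
POLICY = {
    "employee": "full",
    "third_party": "masked",
}


def access_field(value_ct: str, role: str, decrypt) -> str:
    """Decrypt a protected field, then apply the caller's policy level."""
    level = POLICY.get(role)
    if level is None:
        # Unknown roles get nothing at all.
        raise PermissionError(f"role {role!r} has no access")
    plaintext = decrypt(value_ct)
    if level == "masked":
        # Third parties see only the last four characters.
        return "*" * (len(plaintext) - 4) + plaintext[-4:]
    return plaintext
```

The point of the sketch is the layering: the same shared analytics cluster serves both audiences, and the authorization decision is made at the access call, not by maintaining two copies of the data.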
And so if you look at the ingest platforms that we find, Vertica is commonly connected up to something like Kafka. Maybe StreamSets, maybe NiFi; there are a variety of different technologies that can route that data and pipe it into Vertica at scale. Secure data is architected to go along with that architecture at the node, or at the executor, or at the lowest-level operator level. And what I mean by that is that we don't have a bottleneck where everything has to go through one process or one box or one channel to be able to operate. We don't put an interceptor in between your data coming and going. That's not our approach, because those approaches are fragile and they're slow. So we typically want to focus on integrating our APIs natively within those pipeline processes that come into Vertica. Within the Vertica ingestion process itself, you can simply apply our protection when you do the COPY command in Vertica. So it's a really basic, simple use case that everybody is typically familiar with in Vertica land: be able to copy the data and put it into Vertica, and you simply say "protect" as part of the load. So my first name is coming in as part of this ingestion; I'll simply put the protect keyword in the syntax, right in the SQL. It's nothing other than just an extension of SQL. Very, very simple for the developer; easy to read, easy to write. And then you're going to provide the parameters that you need to say, oh, the name is protected with this kind of a format, to differentiate between a credit card number and an alphanumeric string, for example. So once you do that, you then have the ability to decrypt. Now, on decrypt, let's look at a couple of different use cases. First, within Vertica, we might be doing SELECT statements within Vertica; we might be doing all kinds of jobs within Vertica that just operate at the SQL layer.
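A hedged sketch of what protect-on-ingest looks like as a pipeline stage, applied to each record before the rows are copied into the analytics store. The `protect` callable and the field names here are placeholders for illustration, not the vendor's actual syntax.

```python
def protect_records(records, protect, fields=("first_name",)):
    """Apply field-level protection inside the ingest pipeline,
    leaving all other fields untouched, before the bulk load."""
    for rec in records:
        out = dict(rec)  # don't mutate the caller's records
        for f in fields:
            out[f] = protect(out[f])
        yield out
```

Because the transformation runs per record inside the pipeline (Kafka consumer, StreamSets/NiFi processor, or the load statement itself), it scales with the pipeline rather than funneling everything through one interceptor.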
Again, just insert the word "access" into the Vertica SELECT string and provide us with the data that you want to access; that's our word for decryption, that's our lingo. And we will then, at the Vertica level, harness the power of its CPU, its RAM, its horsepower at the node to be able to operate on that operation, the decryption request, if you will. So that gives us the speed and the ability to scale out. So if you start with two nodes of Vertica, we're going to operate at X hundreds of thousands of transactions a second, depending on what you're doing. Long strings are a little bit more intensive in terms of performance, but short strings, like a Social Security number, are our sweet spot. So we operate at very, very high speed on that, and you won't notice the overhead with Vertica, per se, at the node level. When you scale Vertica up and you have 50 nodes and large clusters of Vertica resources, then we scale with you, and we're not a bottleneck at any particular point. Everybody's operating independently, but they're all copies of each other, all doing the same operation: fetch a key, do the work, go to sleep. >> Yeah, you know, a lot of the customers have said to us this week that one of the reasons why they like Vertica is it's very mature, it's been around, it's got a lot of functionality, and of course, look, security, I understand it's kind of table stakes, but it can also be a differentiator. You know, the big enterprises that you sell to are asking for security assessments, SOC 2 reports, penetration testing, and I think I'm hearing, with the partnership here, you're sort of passing those with flying colors. Are you able to make security a differentiator, or is it just that everybody's got to have good security? What are your thoughts on that? >> Well, there's good security, and then there's great security.
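The "fetch a key, do the work, go to sleep" pattern described above amounts to a per-node key cache: one small fetch over the protected tunnel, then local reuse until a time-to-live expires. A minimal sketch, assuming a `fetch_key` callable standing in for the call to the appliance:

```python
import time


class KeyCache:
    """Per-node key cache: fetch once, reuse until the TTL expires."""

    def __init__(self, fetch_key, ttl_seconds=300):
        self._fetch_key = fetch_key  # e.g. the atomic fetch to the appliance
        self._ttl = ttl_seconds
        self._key = None
        self._expires = 0.0
        self.fetches = 0             # exposed so cache behavior is observable

    def get(self):
        now = time.monotonic()
        if self._key is None or now >= self._expires:
            self._key = self._fetch_key()  # one small fetch over TLS
            self._expires = now + self._ttl
            self.fetches += 1
        return self._key
```

Each node runs its own independent copy of this cache, which is why adding nodes adds throughput instead of adding load on a central chokepoint.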
And what I found with one of my money center bank customers, based here in San Francisco, was the concern around insider access when they had a large data store. The concern was that a DBA, a database administrator who has privilege to everything, could potentially exfiltrate data out of the organization and, in one fell swoop, create havoc for them, because of the amount of data present in that data store and the sensitivity of that data. So when you put Voltage encryption on top of Vertica, what you're doing is putting a layer in place that would prevent that kind of a breach. So you're looking at insider threats, you're looking at external threats, and you're looking at being able to pass your audit with flying colors. The audits are getting tougher. And when they say, tell me about your encryption, tell me about your authentication scheme, show me the access control list that says this person can or cannot get access to something, they're asking tougher questions. That's where secure data can come in and give you that quick answer: it's encrypted at rest, it's encrypted and protected while it's in use, and we can show you exactly who's had access to that data, because it's tracked via a different layer, a different appliance. And I would even draw the analogy: many of our customers use a device called a hardware security module, an HSM. Now, these are fairly expensive devices that were invented for military applications and adopted by banks, and now they're really spreading out, and people say, do I need an HSM? Well, with secure data, we certainly protect your crypto very, very well. We have very, very solid engineering; I'll stand on that any day of the week. But your auditor is going to want to ask a checkbox question: do you have an HSM, yes or no? Because the auditor understands it's another layer of protection.
And it provides another tamper-evident layer of protection around your key management and your crypto. And we, as professionals in the industry, nod and say, that is worth it. That's an expensive option that you're going to add on, but your auditor is going to want it. If you're in financial services and you're dealing with PCI data, you're going to enjoy the checkbox that says, yes, I have HSMs, and not get into some arcane conversation around, well, no, but it's good enough. That's kind of the argument and conversation we get into when folks want to say, Vertica has great security, Vertica's fantastic on security; why would I want secure data as well? It's another layer of protection, and it's defense in depth for your data. When you believe in that, when you take security really seriously, and you're really paranoid, like a person like myself, then you're going to invest in those kinds of solutions that get you best-in-class results. >> So I'm hearing a data-centric approach to security. Security experts will tell you, you've got to layer it. I often say, we live in a new world. We used to just build a moat around the queen, but the queen, she's leaving her castle in this world of distributed data. Rich, you're an incredibly knowledgeable guest, and we really appreciate you being on the front lines and sharing with us your knowledge about this important topic. So thanks for coming on theCUBE. >> Hey, thank you very much. >> You're welcome, and thanks for watching, everybody. This is Dave Vellante for theCUBE. We're covering wall-to-wall coverage of the Virtual Vertica BDC, the Big Data Conference, remotely, digitally. Thanks for watching. Keep it right there; we'll be right back after this short break. (intense music)

Published Date : Mar 31 2020


Mike Haag, Red Canary | Splunk .conf19


 

>>Live from Las Vegas. That's the Q covering splunk.com 19 brought to you by Splunk. >>Hey, welcome back. Every once the Q's live coverage here in Las Vegas for Splunk's dot com 2019 it's Splunk's 10th year having the events, the cubes coverage seven years, the cube independent media company breaking down, extracting the signal from the noise dot on the top people, top experts, tell them the stories that matter. We're here with Mike EG, director of applied research for coming red Canary. Mike, thanks for coming on. I appreciate it. Thank you. So red Canary is a company doing here. What's the focus? What does it company do? Take a minute to explain red County area and why you're here at.com. Sure, thank you. So we are a managed endpoint detection and response organization. We partner with organizations of all sizes to help them eradicate evil, for instance. So we help them with monitoring their environment. We investigate, respond and act on threats or so on the notes here, you guys have a topic session finding titled finding evil is never an accident, how to hunt in bots. >>So using bots, hunting down evil, you guys are out there doing this as a business. What does it mean? What does he, what if, first of all, what is evil and how do you hunt it down? Take us through that Sarah. So the talk is based around the boss of the SOC data set that was released by Splunk. They have version two, version one and version three will be coming out soon and they just released version four here. And so the talks all focused on how to find evil within bots. The three are actually V forum, sorry, the one that just came out. And so what we do as an organization is we help businesses get through their data, kind of like your guys' mission as well. Like get through them all the haystack, find the bad things and present that to our customers in a really fast way. >>So that's kind of where we are today. Archives to find the good content. Great experts like yourself tell about your role. 
You're like a researcher, but it's not like you're sitting back there applied research we applied means it's not like just making it up, you know the next moonshot, you guys are applied specifically to hunting down evil. That's your role. What does that entail? You guys have to sit back, zoom back, look at the data that the Splunk's providing some benefits with their, they're exposing their data. What does it mean to hunt down? What's, what's the requirements? How do you set that up? What are you looking at you going through day? Those are the dashboards. What are the what? What, what do you deal with and your job? >> Yeah, so like a day to day or like kind of what our team does is we focus on like what's going on previously, what are we seeing in the wild? >>Like what campaigns are happening and then my role within my team is focused on what's coming. So what are, what are red team's working on? What are pen testers looking into? Take that information, begin testing and begin building proof of concepts. Put that back into our products so that whether it's two weeks, six months, two years, we have coverage for it, no matter what. So a of us, a lot of our time is generating proof of concepts on what may be coming. So there's a lot of very unique things that may be in the wild today. And then there's some things that we may never see that are just very novel and kind of once, once, once a time kind of thing. Right? >> So you know, we love talking about data that we've been covering data since 2010 the thing that's interesting and I want to get your thoughts on this because you know, eval has arbitrage built into it. >>They know where to hide. And so the question is, is that what are you looking at matters, right? So the so, so, so there's a lot of exposure. But the question I have for you is, what is the problem that you're solving? Why do you guys exist? Was it because evil was better to adversaries? Were better at hiding? 
Is it automation can solve patterns they haven't seen yet? Because if you automate something you haven't seen yet, so is it new things? So why, what's the problem statement that you guys are attacking? Yeah. So hit it. It's a lot. There's a lot, there's a lot to inbox. Um, so like in particular in this instance, seeing something that happened yesterday and then what's happening today is actors are working to break process lineage within what's happening on the employee. Because actors know that everything's happening on an employment. >>Yes, there's traffic coming in, but there's execution going on in a single place on that box. So their whole tactic now is to try to break that lineage. So it's not Microsoft word spawning something. It's now Microsoft word opens and as spawns over there off another process, right? So we're here to monitor those types of behaviors. And that's pretty much like the core of red Canary. We've always focused on the end points. We only do emblem implant based products. We don't like monitor networks. We don't monitor firewalls or anything like that. We're very focused, uh, hyper focus on employee behaviors. And so, and that, that's the cool part about our job is we get to see all the really new things that are happening. And if you look at it, these breaches in the past, it's happening on the endpoint and that's probably where we are. >>And actually day the Canary in the coal mines all expression, everyone knows that or if older might know that. But you know, identifying and being that early warning detection system really kind of was the whole purpose of the Canary in the coal mine, red Canary red teams. I'm kind of putting it together. What are some of the things that you've seen that, that as an example of why you exist? Because it, is it new things, is it that, you know, Hey, our known thing or balls, what are some of the examples that you can point to that, that point of why you guys exist? Yeah, sure. 
Um, a good example is kind of like the looking forward stuff where red team's going, where actor's going. So a lot of them are moving to C sharp and.net Tradecraft, which is very native to the operating system. >>And windows. Um, so if they're doing that, they're moving away from what they're always, what they've been used to the last few years, which is PowerShell. So our sales kind of dead then now we're going to C sharp and.net. So a lot of our focus today is how can we better detect those? And vendors are moving that way too. They're, they're starting to see that they have to evolve their products to the next level order to detect these behaviors. Cause I mean that's, that's the whole reason why a lot of these EDR vendors are here. Right? And, and it's all data like you said. And so feeding it into a Sam or with a Splunk in particular, you're able to correlate those behaviors and look at very specific things and find it real well know. One of the things that a lot of security practitioners and experts and advisors have been looking at over years is data. >>So it's not, it's no secret data and critical. But one of the things that's interesting is that data availability has always been an issue. Sharing data. And then the message here@splunk.com for the 19 is interesting. You've got data diversity now exposure to the fabric search concept there they got accelerated and realtime times too. We've always had that. But as it kind of comes together, they're looking to get more diverse aperture to data. Yup. Is that still an ongoing challenge and what are, cause if you have a blind spot, you only, this is where the potential danger. How do you guys talk about that? What's the narrative around diverse data sets? How to deal with them effectively and then if blind spots exist, what do they look like or how do you figure that out? Yeah, we, so I, I've been with red Canary for over three years, about three years now. 
>>And one of the things I started at was a technical account manager incident handler. And so I helped a lot of our customers go from, we bought you red Canary to monitor points, but what should we do next? And so we, our incident handling team will come in and assist a customer with, you guys should start going down this road. Like, how are you bringing everything together? How are you analyzing your data down to just operationalizing like some use cases and playbooks within their data. Like you got EDR. Now let's look at your firewalls. How, how rich of that data can be helped enrich what the EDR information like here's the IP address and carbon black response. Where's it going this way on your firewall or your appliance is going out and you know, and things like that. So we have a whole team dedicated to it and that's like the focus of the. >>We took a poll in our, we have a, you know, this acumen operate for 10 years. It's our seventh year squad, Dave and I took a poll of our cube community, um, but 5,000 alumni and we asked them about cloud security, which vendors are the best and Splunk is clearly number one in third party data management. I got him out, he's got a category but cloud security. How should the cloud vendors provide security, Google, AWS and Azure. But outside of the core cloud providers, Splunk's number one, clearly across the board. How is Splunk doing in your mind? How do you guys work with Splunk? What's the dynamic? What's your relationship with Splunk and where Splunk position in your mind? Because as cloud becomes more prevalent with cloud native, born in the cloud and with hybrid there's a unification, not just with data. They have infrastructure operations. >>Yup. So Splunk role and then their future prospects share. Um, so red Canary uses Splunk too. So we, we process I think like 30 terabytes plus of data a day coming to our engine that we built. And that's the kind of like proprietary piece of red Canary. 
30 terabytes of data flows through. We use a DSL, a language that sits on top of it, that queries it looking for those behaviors. We send those tip-offs, as we call them, to Splunk, and we actually track a lot of the efficiencies of our detectors that way. So we look at how the detectors are doing: are they triggering, are they false positives, how many false positives over time? And then also how much time our analysts are spending on those detectors. You know, they get a detection or an event and they review that event, and they're spending 20, 30 minutes on it, and, well, what's wrong with it?
Those things are all out there. We've helped other people with their apps and, but yeah, it's, it's a little mix of everything. And I think the big core thing that we're all looking to today is like how can we use more of the machine learning toolkit with Splunk, um, for our customers and for us internally. >>Like how can we predict things better with it? So there's, there's a lot of little bit of focus of that same thing. In your opinion, B2B out in the field, you mean the front lines, now you're in research, you got that holistic view, you're looking down at the, on the field, the battlefield, if you will, the adversaries will evil out there. What do you look for? I mean, what's the, what's the triggering event for you? How do you know when you need to jump in and get full ready, alert and really kind of sound off that, you know, that Canary alarm saying, Hey, you know, let's take action here or let's kind of like look at that and take us through some of those priorities. What's the, some of the workflow you go through? Yeah, so um, we'll end up either sending a detection to a customer and either they'll trigger like, Hey, can you give us more context around this event that happened? >>Or it will be, we had a pen test, red team, bad thing happen. Can someone else investigate further? And so I'll come in might from my perspective, I'll come in kind of like a, almost like a tier three in a way. We'll come in, we'll do the additional research beyond what our detectors already caught looking for. Many things, you know, did, was there something we missed that we can do better at detecting next time? Is there any new behaviors involved with something drop that you know, that the actor had left within the environment that may have gone by antivirus prevention controls, anything like that. Um, and then also just understanding their trade craft. Right? 
So we track a lot of teams and disturbed behaviors and we're able to kind of explore and you know, build those you gotta you gotta be on everything. Basically you gotta survey the entire landscape. >>Yep. You come in post event. Yeah. Do the collateral damage analysis and the dead map. That's a really cool thing about like the Splunk boss's a sock data set. Right. And that's where my talks a lot about is it's a very like, basic talk, but it focuses on how to go from beginning to end investigating this big incident that happened. You know, cause when you get an a detection from like in organization you might just find that it was delivered to a word doc, a couple of things executed. But was there something else that happened? Right? And there's like your Canadian Nicole mind piece, right. You know, finding other things that occurred within the organization and helping ideally your data essentially is the foundation for essentially preventative side. So it's, yes, it's kind of a closed loop kind of life cycle of yep. Leverage operating leverage data standpoint. >>Yeah, it's a solid point. We, I coined the term like three years ago called driving, driving prevention with detection. So take all your detection logic and understanding and things you see with products, even EDR Avi, and use that to drive your prevention. So it's just a way that if you're just alerting on everything, take that data and put it into your preventative preventative controls. So Michael got asked you, how is cloud, how is cloud changing the security formulas? Because obviously scale and data are big themes we hear all the time. I mean has been around is not a new thing. But the constant theme that I see in all my cube interviews we've done over the years and this year is the Nord scale comes up, is unprecedented scale, both in data volume, surface area needs for things like red Canary teams to be in there. What do you see with the impact the cloud is it really should change the game in any way? 
>>He has it's speed as new cloud. It's the speed of new cloud technology that seems to constantly be coming out. Like one day it's Docker, next day it's Coobernetti's and then there's going to be something tomorrow. Right? Like it just constantly changes. So how can vendors keep up with logging, making sure it's the right type of logging and being able to write detection on it or even detect anything out of it. Right. One, the diversity too is a great point. I want to know. Firstly, blogs are great. Yeah, you got tracing. So you have, so there's now different signaling. Yeah. So this app now a new thing that you got to stay on top. Oh, totally. Like look at any, any MSSP, they have thousands of data sources coming in. And now I want you to monitor my Coubernetties cluster that scales horizontally from 100 to 5,000 all day, every day like Netflix or something. >>Right? And I want you to find the bad things in that. It's a lot going on. And this is where machine learning and automation come into play because the observability you need the machine learning. They've got to categorize this. Okay. Again, humans do all this. No, yeah, it takes a machine. I'm using machines with human intelligence in a way, right? So have a human driving the machine to pull out those indicators, those notables. Michael, thanks for coming on. Great insight. Great signal from the noise. You're still distracting there. Great stuff. Final question for that to end the segment. In your opinion, what's the top story in the security industry that needs to be continually told and covered and reported on? >> Ooh, that's, that's a good one. Um, you hear any threats, platform development, new stacks developing. Is there like a one area that you think deep that's the high order bit in terms of like impact? Yeah. I think focus on, I'm going to say point cause that's where everything's executing and everything's happening. 
Um, and that's the biggest thing that it's only gonna get more challenging with IOT edge and industrial IOT. Yes. The edge is the end point. End points are changing. The definition is changing at exact right stuff coming on from red Canary here in the queue, the Canary in the coal mine. That's the cube. Brand-new. The signal here from.com 19. I'm John furrier back with more after this short break.

Published Date : Oct 22 2019


William Toll, Acronis | Acronis Global Cyber Summit 2019


 

>> From Miami Beach, Florida, it's theCUBE, covering the Acronis Global Cyber Summit 2019. Brought to you by Acronis.
>> Hello, everyone. Welcome to theCUBE's coverage here at the Fontainebleau in Miami Beach for the Acronis Global Cyber Summit 2019, two days of coverage. We're here getting all the action: what's going on in cyber, and the tools and platforms that are developing a new model of cybersecurity. Acronis is a leader, fast growing, rapidly growing here in the United States and globally. We're here with William Toll, head of product marketing at Acronis. Thanks for coming on. I appreciate it.
>> Thanks, John. I'm excited to be here.
>> So we were briefed on kind of the news, but you guys had more news here. First, great keynotes, and then a special guest from Shark Tank as well. That's a great, great event. But you had some news slip by me. You guys were holding it back.
>> So we've opened our API, and that's enabling a whole ecosystem to build on top of our cyber protection solutions.
>> You guys have a platform, an infrastructure platform, and a suite of services, from backup all the way through protection, all that good stuff, as well as partners. That's not just channel action; the platform and the ecosystem have been rapidly growing. That's 19-plus years.
>> And now, with the opening of our APIs, we're opening the possibility for even more innovation, from third parties, from ISVs, from managed service providers, from developers that want to build on our platform and deliver their solutions to our ecosystem.
>> You guys are a very technical company, and I'm very impressed with the people. In cyber you've got to have the chops; you can't fake it in cyber. You guys do a great job and have a track record. You've got the APIs and also SDKs at a variety of different layers. So the APIs are going to bring out more goodness for developers. I heard a rumor. Is it true that you guys are launching a developer network?
>> That's right.
>> So the Acronis Developer Network actually launches today here at the show, and we're inviting developers. It's official. They can go to developer.acronis.com, and when they go in there they will find a whole platform where they can gain access to forums, documentation, and blogs, and all of our software development kits, as well as a sandbox, so developers can get access to the platform and start developing within minutes.
>> So what's the attraction for ISVs and developers? I mean, you guys are, again, technical. What is your pitch to developers? Why would they be attracted to your APIs and developer resources?
>> Sure, it's simple: our ecosystem. We have over 50,000 IT channel partners, and they're active in small businesses. Over 500,000 business customers and five million end customers all benefit from the solutions that they bring to our Cyber Cloud Solutions portal.
>> What types of solutions are available in the platform today?
>> So there are solutions that integrate PSA tools (professional services automation), RMM tools, tools for managing cloud, tools for managing SaaS applications. For example, one of our partners manages Office 365 accounts. If you put yourself in the shoes of a system administrator who's managing multiple SaaS applications, now they can all be managed in the Acronis platform, leveraging our user experience and SDK, so that administrator has a seamless experience managing everything, with the same group policies across all of it.
>> That's impressive. You've had success with the channel in general, but what about ISVs and managed service providers, MSPs? What's the dynamic between ISVs and MSPs? Can you unpack that?
>> Sure. So a lot of MSPs depend on certain solutions. One of our partners is ConnectWise. ConnectWise is here exhibiting, one of the sponsors at this show, and they're a leader in providing management solutions for MSPs to manage all of their customers, right?
And then all the endpoints.
>> So if I participate in the developer network, is that where I get access to these APIs?
>> So you visit developer.acronis.com. You come in, you gain access to all the APIs and documentation. We have libraries supporting six languages, including C#, Python, and Java. Come in, gain access to that documentation, and start building. There's a sandbox where they can test their code. There are SDKs, and there are pre-built examples, documentation, and guides on how to use them.
>> So for the end customers, your customers or your channel customers' customers, do they get the benefits of the ISV stuff in there? In other words, does the developer network have a marketplace where ISVs push their solutions?
>> Also launching today, we have the Acronis Cyber Cloud Solutions portal, and inside there are already 30 integrations that we worked over the years to build, using that same set of APIs and SDKs.
>> Okay, so just to get this hard news straight: opening up the APIs, the Acronis Developer Network launched today, and the Cloud Solutions Portal.
>> That's right, the Cyber Cloud Solutions Portal. Inside there is documentation on all the different solutions that are available today.
>> What's been the feedback so far?
>> It's been great. You know, if we think about all the solutions that we've already integrated, we have hundreds of managed service providers using just one solution that we've already integrated.
>> William, we were talking before we came on camera about the old days in this business. For a long time on theCUBE we've been documenting the IT transformation with cloud over 10 years; I've been in this for 30 years. Waves have come and gone, and when we talk to CISOs all the time now, the number one constant pattern that emerges is that they don't want another tool. They want consolidation. Don't get me wrong, the exact tool can fit where it works.
But they're looking for a cohesive platform, one that's horizontally scaled, that enables them to take advantage of a suite of services, or deploy a few. This is a trend. Do you agree with that, with what I'm saying?
>> I totally agree with that, right? It makes it much easier to deal with provisioning, user management, and billing, right? Think about a managed service provider and all of their customers; they need that. One tool makes their lives so much easier.
>> And, of course, an event would not be the same if we didn't have some sort of machine learning involved. How much of a focus has machine learning been for you guys, and what have been some of the innovations that have come from it?
>> Artificial intelligence is critical today, right? It's how we're able to offer some really top-rated ransomware protection and anti-malware protection. We could not do that without artificial intelligence.
>> Final question for you. What's the top story of the show this week, if you have to boil it down to the high-order bit for the folks that couldn't make it and are watching the show? What's the top story they should pay attention to?
>> The top story is that Acronis is leading the effort in cyber protection, and it's a revolution, right? We're taking data protection and cyber security to create cyber protection, bringing that all together. It really democratizes a lot of enterprise IT and makes it accessible to a wider market.
>> You know, we've always said on theCUBE, and you can go back and look at the tapes, that it's a data problem. That's right: data needed protection, cyber protection, and you're working on it.
>> At Acronis, everything we do is about data. We protect data from loss, we protect data from theft, and we protect data from manipulation. It's so critical.
>> How many customers do you guys have? I saw some stats out there: founded in 2003 in Singapore, a second headquarters in Switzerland in 2008, a global company, 1,400 employees, 32 offices. A nice origination story.
They're not a Johnny-come-lately; they've been around for a while. What are the numbers?
>> So, five million end customers, 500,000 business customers, 50,000 channel partners.
>> Congratulations.
>> Thanks. Thanks for having us here in Miami Beach.
>> Not a bad venue, as I said on Twitter just a minute ago. Thanks for coming on. All right, this is theCUBE's coverage here in Miami Beach at the Fontainebleau for the Acronis Global Cyber Summit, with Acronis. I'm John Furrier; back with more coverage after this short break.
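To make the developer-portal discussion above concrete, here is a minimal sketch of what a first call against such a REST API could look like from one of the six supported languages. The host name, endpoint path, and token are hypothetical placeholders, not the documented Acronis API; only the general pattern (token-authenticated JSON requests against a published base URL) is taken from the conversation.

```python
import json
import urllib.request

# Hypothetical base URL -- stands in for whatever host the developer
# portal documents; it is NOT the actual Acronis endpoint.
BASE_URL = "https://api.example.com/v1"

def build_request(token: str, resource: str) -> urllib.request.Request:
    """Build an authenticated GET request for a JSON resource."""
    req = urllib.request.Request(f"{BASE_URL}/{resource}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

def parse_response(body: bytes) -> dict:
    """Decode a JSON response body into a Python dict."""
    return json.loads(body.decode("utf-8"))

if __name__ == "__main__":
    req = build_request("demo-token", "integrations")
    print(req.get_full_url())  # https://api.example.com/v1/integrations
```

In practice the sandbox described in the interview is where a request like this would be exercised against test data before an integration is published to the solutions portal.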

Published Date : Oct 14 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
William Toll | PERSON | 0.99+
Singapore | LOCATION | 0.99+
2003 | DATE | 0.99+
Miami Beach | LOCATION | 0.99+
32 offices | QUANTITY | 0.99+
William | PERSON | 0.99+
five million | QUANTITY | 0.99+
2000 | DATE | 0.99+
FBI | ORGANIZATION | 0.99+
John Kerry | PERSON | 0.99+
30 integrations | QUANTITY | 0.99+
Miami Beach, Florida | LOCATION | 0.99+
United States | LOCATION | 0.99+
Cronus | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
1400 employees | QUANTITY | 0.99+
Cyber Global Cyber Security Summit | EVENT | 0.99+
Today | DATE | 0.99+
First | QUANTITY | 0.99+
one | QUANTITY | 0.99+
over 50,000 | QUANTITY | 0.99+
Cronus Developer Network | ORGANIZATION | 0.99+
2 days | QUANTITY | 0.99+
30 years | QUANTITY | 0.99+
java | TITLE | 0.99+
Jules | PERSON | 0.99+
hundreds | QUANTITY | 0.99+
six languages | QUANTITY | 0.99+
Acronis | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.98+
today | DATE | 0.98+
one solution | QUANTITY | 0.98+
Cronus Global Cyber Summit 2019 | EVENT | 0.98+
Cloud Solutions Portal | TITLE | 0.98+
19 plus years | QUANTITY | 0.98+
Cronus Global Cyber Summit 2019 | EVENT | 0.97+
Johnny | PERSON | 0.97+
Over 500,000 business customers | QUANTITY | 0.96+
500,000 business customers | QUANTITY | 0.95+
one tool | QUANTITY | 0.95+
Blue Hotel | LOCATION | 0.95+
Eyes V | ORGANIZATION | 0.95+
one sponsors | QUANTITY | 0.95+
50,000 channel partners | QUANTITY | 0.92+
Acronis Global Cyber Summit 2019 | EVENT | 0.9+
3 65 accounts | QUANTITY | 0.9+
Connectwise Connectwise | ORGANIZATION | 0.9+
Cloud Solutions Portal | TITLE | 0.89+
C sharp Python | TITLE | 0.88+
a minute ago | DATE | 0.88+
Second headquarters | QUANTITY | 0.87+
Cronus dot com | ORGANIZATION | 0.85+
Cronus | PERSON | 0.83+
Cube | ORGANIZATION | 0.82+
Whistle | ORGANIZATION | 0.76+
Channel General | ORGANIZATION | 0.76+
Twitter | ORGANIZATION | 0.74+
P I. | TITLE | 0.72+
Cyber Cloud Solutions | TITLE | 0.69+
Cronus | TITLE | 0.65+
Cyber | ORGANIZATION | 0.62+
t | QUANTITY | 0.52+
Shark tank | ORGANIZATION | 0.44+
Blue | ORGANIZATION | 0.44+
SAS | TITLE | 0.43+

Kenny Oxler, American Cancer Society | Boomi World 2019


 

>> Live from Washington, D.C., it's theCUBE, covering Boomi World 19. Brought to you by Boomi.
>> Welcome to theCUBE. I'm Lisa Martin at Boomi World 2019 in Washington, D.C. We've been here all day and had some great conversations. One of my favorite things about Boomi is how impactful they are for their customers. And I'm very pleased to welcome the CIO of the American Cancer Society, Kenny Oxler. Kenny, welcome to theCUBE.
>> Thank you. Happy to be here.
>> I really enjoyed your keynote this morning on stage with Chris McNab. You know, the American Cancer Society is one of those organizations that I think impacts every single person on this planet in some way or another; we've all been touched by cancer. So it's so interesting to look at how technology is fueling the American Cancer Society. You're the CIO; talk to us a little bit about what you guys are doing with Boomi, and how Boomi is really helping you integrate all these different systems, so that an agency as old and historic as ACS is really transforming into a modern, cloud-driven organization.
>> Yeah, I think all organizations now are becoming IT organizations; it's at their heart. And it's important for us at the American Cancer Society to interact with our constituents, our volunteers, our patients, our staff, right, in a digital way. So it is critically important that we are right there with everybody else, interacting with them. Whether they're on the go doing it on their mobile phone or, you know, at the doctor's office talking with their doctor about treatment options, we're there to help get them what they need, and the information, for their best chance to beat the disease.
>> So talk to me first about the business transformation that the American Cancer Society went through before your time there. At first it was: we have all these different organizations, different leadership, different IT infrastructure, a different financial operations model. Talk to us about how it first transformed from a business-process perspective and then started looking at digital transformation.
>> So some of it happened at the same time. The organization made the decision back in about 2012 to consolidate the other organizations. We kind of ran regionally at the time, with each region independent; there were 13 different regions that ran independently with their own IT systems, with some shared technologies across the organization. Starting in about 2012 we decided that, no, we wanted to centralize our model and come together. We thought it was a more efficient manner and allowed us, in essence, to do more for our mission, which is the ultimate goal. So there was a lot of consolidation around people and organization. Some of the processes, I will say, got consolidated; some are still going through some of that transformation. So after we brought the organizations together, and some of the people together, we looked at: where are we with our technology, and how do we move forward into the 21st century and do that effectively? And so at the time we did an analysis of our current state. As I mentioned in the keynote, we had a lot of technologies that were just older, that had kind of run their course toward end of life, or had just become, you know, after over a decade of changes, monstrous, behemoth systems. We were really struggling to keep up, both in terms of change and enhancement and in delivering those capabilities back to our constituents. So we decided that, no, it's time for us to move to a new technology modernization effort, and we really wanted to be on a cloud-first strategy. So we were looking at our cloud vendors and everything else, and among the big selections, we chose Salesforce as our CRM platform and NetSuite as our financial ERP platform, so that we could consolidate all of those. And then, as a part of that, we were looking at all of the leftover processes that weren't standardized, that we were still doing differently, that we could simplify, taking stuff from 21 steps down to six steps if we could, you know, et cetera, and bringing that along with the transformation, just to create more efficiencies for us and, at the end of the day, drive a better end-user experience for your volunteer, your staff, your patient, et cetera.
>> It's a tremendous amount of data just in a CRM like Salesforce and in Oracle NetSuite. What was the thought, and the opportunity, in actually putting in an integration platform to enable that data to be shared between the applications, and to enable, whether it's providers or, as you said, volunteers (and we'll talk about that), and second, to have an experience that allows them to get whatever it is they're looking for? Talk to us about integration and that centralized-hub aspect.
>> Yeah. I mean, with any business, data is key. And historically our data was spread out across multiple systems that didn't always sync up. So, you know, you'd pull a report out of one system and it would say something different than when you looked at another system. So one of the key foundational tenets of the transformation was that we wanted our data to be in sync. We wanted to be able to see the same things no matter where you were looking, so that we were all looking at the same information, basically a single source of truth. And Boomi was a critical component of that, right? With their integration platform, they were going to be our integration hub that keeps everything in sync. So we knew we had, well, 120 applications that ultimately were a part of it; there were probably 20 major ones that had most of our data in them, and Boomi is integrating all of those. So when information is coming across, whether it's coming in from a donation made, or an event participant, or a patient referral form, all of that data comes in through Boomi, and it's propagated and orchestrated across the systems as it needs to be, to make sure that it has all of the right information in it, that the data is as clean as we can make it, and that it's all in sync at the end of the day.
>> That's critical. Having the data is great, but if you actually can't utilize and extract value from it, it's, I don't want to say worthless, but that's clearly where the value is, you know.
>> It's a lot harder to make good business decisions without good data.
>> Right. And when we're talking about something like patients dealing with very, very scary situations, being able to match, whether it's matching a volunteer or a mentor with a patient who is going through something similar, that could be game-changing in people's lives. Talk to me about this Service Match application that you guys have built with Boomi. I think it's such a great service that you guys are delivering. Tell us about that and what it's enabling.
>> So Service Match is an application that is part of our Road To Recovery program, where we provide rides for cancer patients to and from cancer treatment. Often, when you're getting chemotherapy, driving after chemotherapy is not an option, and a lot of patients have trouble with caregivers and family always being able to help them. So the American Cancer Society provides this program to provide those rides free of charge for cancer patients, and the Service Match application is about connecting those patients to volunteers for the rides. So if a patient calls in, they say, "I need a ride, this is what time I'm going," et cetera; they can do that online now as well. And we can connect them with a volunteer: the request goes out to our volunteer community, and somebody can say, "I can do that, I can help this person out." It connects them up so that they can get to their treatments on time.
>> That's so fantastic, and such an impact that you guys can make. Is it something where you're integrating in the background with, like, a rideshare service, or is this just folks saying, "Hey, I've got a car that seats five, I want to help"? Is it available to anybody?
>> It is available to anybody. Anybody can volunteer, and most of the rides are handled by volunteers. If we cannot find a volunteer, we have a lot of great partners that work with the American Cancer Society that can provide those rideshare opportunities, so we'll make it happen and get the patient to their treatment.
>> Talk to me about the ability to do that. That's one great application of what you guys are doing with Boomi. What was actually building that application like? How long did it take to be able to say, hey, we had this idea, we can connect these systems, we can facilitate something that's critical in the care of the patients? What was that build and implementation like? Because we've talked a lot about time to value today. So talk to me about it through that lens.
>> So for us, we started out all on spreadsheets, right, and paper. And it was about a 12-month process to actually build some of the Service Match application itself. The Boomi implementation came in as part of our transformation, to make sure that all of the systems were integrated with it. So as people are requesting rides, whether that's through the call center or through the website, that information is there so we can help patients with it. If they need to change the schedule or do something different, those changes all take place and everybody has the latest information. It also enables us, as changes are happening or even as the rides are taking place, to send notifications back and forth so that everybody is up to date on all of the activity that's taking place.
>> And to date, you guys have helped, with Service Match alone, nearly 30,000 patients.
>> Yeah, we service, I think, 30,000 patients a year on the platform, and over 500,000 rides have been delivered since its inception.
>> And when was that inception?
>> I'd have to look at the date. I don't know.
>> A couple of years ago? In the last...
>> It's probably been over a decade now.
>> Okay, that's awesome. So another thing I'm curious about: for volunteers who want to raise funds to support the American Cancer Society, is integration an essential component? You're smiling, so I think I know the answer. Talk to us about how Boomi is helping ACS to deliver, you know, a more seamless, better fundraising experience for anybody that wants to go out and do that.
>> Yeah. So we have a lot of donation-processing systems that we leverage at the American Cancer Society, because part of what we want to do is make it easy for people to raise money and raise it in their way, right? So we have multiple systems, both for all the events that we do, whether it's Relay For Life or Making Strides Against Breast Cancer, which are two of our major event platforms, but we also have raise-your-way platforms. So if you want to do it yourself, and you want to host a wine fundraiser with your friends and raise some money, we can absolutely help you do that as well. And what we do is take all that information from those events and bring it into the system, so that we know what happened, when, and who you were, so we can properly thank you. You can also get your tax credits and all of the other things that go along with it.
>> That's awesome. So I want to ask you, from a CIO's perspective, about Boomi being a single-instance, multi-tenant cloud application delivered as a service to you. Your previous role before you came to the American Cancer Society was in insurance. Talk to me about that as a differentiator. As ACS continues to scale and offer more programs, with more data to integrate, is Boomi's architecture, in your perspective, something that gives ACS a real leg up to be able to do more and more?
>> Absolutely. I think Boomi's low-code development strategy is a differentiator for anybody that's using the platform. We have been able to deliver more integrations in a shorter amount of time with our transformation than I've done in the past with other integration platforms, or just developing it, I'll say, the old-fashioned way with Java or C#. So I think, as an integration platform, it's a real game changer in terms of what enterprises can do, delivering faster and with more stability and performance than in the past.
>> Which is critical for many businesses, obviously yours included. Also, taking a look back at your previous role in a different industry, how is the role of the CIO changing, in your perspective, as things are moving to the cloud? There's the explosion of the edge, and this consumerized influence, because as consumers we have access to everything, and we want to be able to transact anything, whether it's signing up to be a volunteer or an actual patient needing access to records or a ride. How is that consumerization changing the role of the CIO and opening up more opportunities?
>> Yeah, that's a big question.
>> Sorry.
>> It's okay. Um, yeah, I think the role of CIO is changing significantly, in that they are required to be as much of a business leader as any of the other C-suite executives. And it is just as critical for them to understand the business and where it's going, to be a part of the strategy, and to help drive it from that perspective. The consumerization component is actually, in some ways, I think, making the CIO and IT job a little bit harder. There's a lot that goes into making sure that what we're doing is secure and performs well, and with the overall consumerization of technology it all looks so easy that it's sometimes easy to underestimate the complex nature of what we're doing, and the level of security that needs to be applied to make sure that we're protecting our constituents and making sure that their data is safe and secure.
>> How does Boomi help facilitate that? We talk about security all the time, in any industry. How is what you're doing with Boomi giving you maybe that peace of mind, or the confidence, that as data and applications migrate, what's being moved around is in a secure, safe environment?
>> Yeah, I think Boomi does several things. First off, they've got a lot of security certifications as a part of their program, and they make it relatively easy to leverage those. They allow us to deploy the Atoms where we need to, whether that's on-prem or in our own tenants, behind our firewalls. All of that allows us to deploy in whatever method we feel is most secure based on the data that we're trying to move.
>> Excellent. Well, Kenny, it's been a pleasure having you on theCUBE. Just really quickly, where can we go if we want to become a volunteer to help patients?
>> cancer.org.
>> cancer.org. Awesome. Kenny, it's been a pleasure. Thank you so much, and congratulations on the massive impact that ACS is making, not just with Boomi, but in the lives of many, many people. We appreciate your time.
>> We're very excited and happy we can help.
>> All right. I'm Lisa Martin. You're watching theCUBE from Boomi World 2019. Thanks for watching.
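The hub-and-spoke pattern Oxler describes, one router propagating each incoming record (a donation, an event participant, a patient referral) to every downstream system so they stay in sync, can be sketched in a few lines. This is an illustrative toy, not Boomi's actual product or ACS's implementation; the system names and record fields are invented.

```python
from typing import Callable, Dict, List

class IntegrationHub:
    """Toy hub that fans incoming records out to registered systems."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], None]] = {}

    def register(self, system: str, handler: Callable[[dict], None]) -> None:
        """Register a downstream system's sync handler."""
        self._handlers[system] = handler

    def publish(self, record: dict) -> List[str]:
        """Propagate one record to every system; return who received it."""
        delivered = []
        for system, handler in self._handlers.items():
            handler(record)        # a real hub would also map fields, clean, retry
            delivered.append(system)
        return delivered

if __name__ == "__main__":
    crm, erp = [], []
    hub = IntegrationHub()
    hub.register("crm", crm.append)   # stands in for a Salesforce-like system
    hub.register("erp", erp.append)   # stands in for a NetSuite-like system
    donation = {"type": "donation", "amount": 50}
    print(hub.publish(donation))      # ['crm', 'erp']
    assert crm == erp == [donation]   # both systems now hold the same record
```

The single-source-of-truth property in the interview falls out of this shape: every record enters through one place, so no downstream system can drift ahead of the others.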

Published Date : Oct 2 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Chris McNab | PERSON | 0.99+
American Cancer Society | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
Kenny Oxler | PERSON | 0.99+
American Cancer Society | ORGANIZATION | 0.99+
100 | QUANTITY | 0.99+
two | QUANTITY | 0.99+
30,000 patients | QUANTITY | 0.99+
six steps | QUANTITY | 0.99+
21st century | DATE | 0.99+
20 applications | QUANTITY | 0.99+
Kenny | PERSON | 0.99+
Washington, D. C. | LOCATION | 0.99+
Washington, D. C. | LOCATION | 0.99+
21 steps | QUANTITY | 0.99+
today | DATE | 0.99+
Matt | PERSON | 0.99+
both | QUANTITY | 0.99+
Java | TITLE | 0.99+
Four volunteers | QUANTITY | 0.99+
One | QUANTITY | 0.99+
second | QUANTITY | 0.99+
over 500,000 rides | QUANTITY | 0.98+
first | QUANTITY | 0.98+
one system | QUANTITY | 0.98+
one | QUANTITY | 0.98+
First | QUANTITY | 0.98+
13 different regions | QUANTITY | 0.98+
Bhumi | PERSON | 0.98+
Bhumi | ORGANIZATION | 0.98+
five | QUANTITY | 0.97+
Salesforce | ORGANIZATION | 0.97+
single | QUANTITY | 0.97+
Bumi | ORGANIZATION | 0.97+
each | QUANTITY | 0.97+
first strategy | QUANTITY | 0.97+
Nearly 30,000 patients | QUANTITY | 0.97+
C sharp | TITLE | 0.95+
single source | QUANTITY | 0.94+
20 major | QUANTITY | 0.93+
2019 | DATE | 0.92+
2012 | DATE | 0.9+
12 months | QUANTITY | 0.88+
Kenny Ocular Kenny | PERSON | 0.86+
couple years ago | DATE | 0.85+
a year | QUANTITY | 0.8+
over a decade | QUANTITY | 0.79+
this morning | DATE | 0.78+
every single person | QUANTITY | 0.78+
Cube | COMMERCIAL_ITEM | 0.78+
Boonmee | ORGANIZATION | 0.77+
God | PERSON | 0.76+
Net Suite | TITLE | 0.73+
C S | ORGANIZATION | 0.71+
Cube from | TITLE | 0.71+
Boomi World | TITLE | 0.7+
Bumi World 2019 | EVENT | 0.61+
Bhumi World | TITLE | 0.61+
Prem | ORGANIZATION | 0.6+
C. S | TITLE | 0.57+
racle Net Sweet | ORGANIZATION | 0.57+
major event platforms | QUANTITY | 0.57+
cancer dot org | OTHER | 0.56+
about | QUANTITY | 0.55+
Louis | PERSON | 0.49+
Bumi World 19 | TITLE | 0.47+
org | OTHER | 0.4+

2018-01-26 Wikibon Action Item with Peter Burris


 

>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (light instrumental music) No one can argue that big data and related technologies have had significant impact on how businesses run, especially digital businesses. The evidence is everywhere. Just watch Amazon as it works its way through any number of different markets. It's highly dependent upon what you can get out of big data technologies to do a better job of anticipating customer needs, predicting best actions, making recommendations, et cetera. On the other hand, nobody can argue that the overall concept of big data has had significant issues from a standpoint of everybody being able to get similar types of value. It just hasn't happened. There have been a lot of failures. So today, from our Palo Alto studios, I've asked David Floyer, who's with me here, Jim Kobielus and Ralph Finos and George Gilbert are on the line, and what we're going to talk about is effectively where we are with big data pipelines from a maturity standpoint, to better increase the likelihood that all businesses are capable of getting value out of this. Jim, why don't you take us through it. What's the core issue as we think about the maturing of machine analytics, big data pipelines? >> Yeah, the core issue is the maturation of the machine learning pipeline. How mature is it? The way Wikibon looks at the maturation of the machine learning pipeline, independent of the platforms that are used to implement that pipeline, comes down to three issues. One, to what extent has it been standardized? Is there a standard conception of the various tasks, phases, functions, and their sequence? Number two, to what extent has this pipeline, at various points or end to end, been automated to enable end-to-end consistency? And number three, to what extent has this pipeline been accelerated, not just through automation but through reuse and collaboration, and by handling things like governance in a repeatable way? 
Those are core issues in terms of the ML pipeline. But in the broader sense, the ML pipeline is only one work stream in the broader application development pipeline that includes code development and testing. So dev ops is really the broader phenomenon here; the ML pipeline is one segment of the dev ops pipeline. >> So we need to start thinking about how we can envision the ML pipeline creating assets that businesses can use in a lot of different ways. Those assets are, specifically, models: machine learning models that can be used in higher-value analytic systems. This pressure has been in place for quite a while. But David Floyer, there's a reason why right now this has become important. Why don't you give us a quick overview of, kind of, where does this go? Why now? >> Why now? Why now is because automation is in full swing, and you've just seen Amazon having the ability now to automate warehouses, and they've just announced the ability to automate stores, brick and mortar stores. You go in. You pick something up. You walk out. And that's all you have to do. No lines at checkout, no people at the checkout, a completely automated store. So that business model of automation of business processes is, to me, what all this has to lead up to. We have to take the existing automation that we have, which is the systems of record and other automation that we've had for many years, and then we have to take the new capabilities of AI and other areas of automation, and apply those to that existing automation and start on this journey. It's a 10 year journey or more to automating as many of those business processes as possible. Something like 80% or 90% of them are there and can be automated. It's an exciting future, but what we have to focus on is being able to do it now, and start doing it now. >> So that requires that we really do take an asset-oriented approach to all of this. 
At the end of the day, it's impossible to imagine business taking on increasing complexity within the technology infrastructure if it hasn't taken care of business in very core ways, not the least of which is: do we, as a business, have a consistent approach to thinking about how we build these models? So Jim, you've noted that there are kind of three overarching considerations. Help us go into it a little bit. Where are the problems that businesses are facing? Where are they seeing the lack of standardization creating the greatest issues? >> Yeah, well, first of all, the whole notion of a machine learning pipeline has a long vintage. It actually descends from the notion of a data mining pipeline, and the data mining industry, years ago, consolidated around a consensus model called CRISP-DM. I won't bore you with the details there. Taking it forward to an analytical pipeline or a machine learning pipeline, the critical issue we see now is the type of asset that's being built and productionized: a machine learning model, which is a statistical model that is increasingly built on artificial neural networks, you know, to drive things like deep learning. Some of the critical things up front, the preparation of all the data in terms of ingest and transformation and cleansing, that's an old set of problems, well established, and there's a lot of tools on the market that do that really well. That's all critical for data preparation prior to the modeling process truly beginning. >> So is that breaking down, Jim? Is that the part that's breaking down? Is that the upfront understanding of the processes, or is it somewhere else in the pipeline process that is-- >> Yeah, it's in the middle, Peter. The modeling itself for machine learning is where, you know, there's a number of things that have to happen for these models to be highly predictive. 
A, you have to do something called feature engineering, and that's really fundamentally looking for the predictors in large data sets that you can build into models. And you can use various forms. So feature engineering is a highly manual process that to some extent is increasingly being automated, but a lot of the really leading-edge technology is in the research institutes of the world. That's a huge issue: how to automate more of the upfront feature engineering. That feeds into the second core issue, which is that there are 10 zillion ways to skin the statistical model cat, the algorithms. You know, from the older models, the support vector machine, to the newer artificial neural networks, convolutional networks, and so on. So a core issue is, okay, you have a feature set through feature engineering; which of the 10 zillion algorithms should you use to actually build the model based on that feature set? There are tools on the market that can accelerate some of the selection and testing of those alternate ways of building out those models. But once again, that traditionally manual process of selecting the algorithms and building the models still needs a lot of manual care and feeding to really be done right. It's human judgment. You really need high-powered data scientists. And then three, once you have the models built, training them. Training is critical, with actual data, to determine whether the models actually are predictive, or do face recognition or whatever it is, with a high degree of accuracy. Training itself is a very complicated pipeline in its own right. It takes a lot of time. It takes a lot of resources, a lot of storage; you've got, you know, your data lake and so forth. The whole issue of standardizing on training of machine learning models is a black art on its own. And I'm just scratching the surface of these issues that are outstanding in terms of actually getting greater automation into a highly manual, highly expert-driven process. Go ahead, David. 
>> Jim, can I just break in? You've mentioned three things. They're very much in the AI portion of this discussion. The endpoint has to be something which allows automation of the business process, and fundamentally, it's real time automation. I think you would agree with that. So the outcome of that model then has to be a piece of code that is going to be part of the overall automation system in the enterprise and has to fit in, and if it's going to be real time, it's got to be really fast as well. >> In other words, if the asset that's created by this pipeline is going to be used in some other set of activities? >> Correct, so it needs to be tested in that set of activities as part of the normal cycle. So what is the automation? What is that process to get that code into a place where it can actually be useful to the enterprise and save money? >> Yeah, David, it's called dev ops, and really dev ops means a number of different things, including especially a source code control repository. You know, in the broader scheme of things, that repository for your code, for dev ops, for continuous release cycles, needs to be expanded in scope to include machine learning models, deep learning, whatever it is you're building based on the data. What I'm getting at is a deepening repository of what I call logic that is driving your applications. It's code: it's Java, C++, C#, or whatever. It's statistical and predictive models. It's orchestration models you're using for BPM and so forth. It's maybe graph models. It's a deep and thickening layer of logic that needs to be pushed into your downstream applications to drive these levels of automation. >> Peter: So Jim? >> It has to be governed and consolidated. >> So Jim? The bottom line is we need maturity in the pipeline associated with machine learning and big data so that we can increase maturity in how we apply those assets elsewhere in the organization? Have I got that right? >> Right. 
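The stages Jim walks through above, feature engineering, algorithm selection, and training, can be sketched end to end in a few lines. This is a minimal illustration of the idea, not anything the panel prescribes: it assumes scikit-learn, uses its bundled breast-cancer dataset, and picks two candidate algorithms arbitrarily.

```python
# Sketch of the three pipeline stages discussed: feature selection,
# comparison of candidate algorithms, and training via cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Two of the "10 zillion" candidate algorithms, chosen for illustration.
candidates = {
    "svm": SVC(),
    "logreg": LogisticRegression(max_iter=5000),
}

scores = {}
for name, estimator in candidates.items():
    # Feature engineering step: keep the 10 most predictive columns,
    # then fit the candidate algorithm on those features only.
    pipe = Pipeline([
        ("select", SelectKBest(f_classif, k=10)),
        ("model", estimator),
    ])
    # Training and evaluation: 5-fold cross-validated accuracy.
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()

best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Tools that automate this search (AutoML frameworks) essentially run a much larger version of this loop over features, algorithms, and hyperparameters.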
>> George, what is that going to look like? >> Well, I want to build on what Jim was talking about earlier, and my way of looking at this, at the pipeline, is actually to break it out into four different ones. And actually, Jim, as he's pointed out, there are potentially more than four. But the first is the design time for the applications, these new modern, operational, analytic applications, and I'll tie that back to the systems of record, in effect. The second is the run time pipeline for these new operational, analytic applications, and those applications really have a separate pipeline for design time and run time of the machine learning models. And the reason I keep them separate is they are on a separate development and deployment and administration scaffolding from the operational applications. And the way it works with the systems of record, which of course, we're not going to be tearing out for decades, is they might call out to one of these new applications, feed in some predictors, or have some calculated, and then they get a prediction or a prescription back for the system of record. I think the parts-- >> So George, what has to happen is we have to be able to ensure that the development activities that actually build the applications the business finds valuable, and the processes by which we report into the business some of the outcomes of these things, and the pipelines associated with building these models, which are the artifacts and the assets created by the pipelines, all have to come together. Are we talking about a single machine learning or big data pipeline? George, you mentioned four. Are we going to see pipelines for machine learning and pipelines for deep learning and pipelines for other types of AI? Are we going to see a portfolio of pipelines? What do you guys think? >> I think so, but here's the thing. 
I think there's going to be a consolidated data lake from which all of these pipelines draw the data that are used for modeling and downstream deployment. But if you look at training of models, you know, deep learning models, which, as their name indicates, are deep and hierarchical. They're used for things like image recognition and so forth. The data there is video and speech and so forth. And there's different kinds of algorithms that they use to build, and there's different types of training that needs to happen for deep learning versus, like, other machine learning models versus whatever else-- >> So Jim, let me stop you because-- >> There are different processes. >> Jim, let me stop you. So I want to get to the meat of this, guys. Tell me what a user needs to do from a design standpoint to inform their choice of pipeline building, and then secondarily, what kind of tools they're going to need. Does it start with the idea that there's different algorithms? There's different assets that are being created at the model level? Is it really going to feed that? And that's going to lead to a choice of tools? Is it the application requirements? How mature, how standardized, can we really put in place conventions for doing this now so it becomes a strategic business capability? >> I think there has to be a recognition: there's different use cases downstream. 'Cause these are different types of applications entirely, built from AI in the broadest sense. And they require different data, different algorithms. But you look at the use cases. So in other words, the use cases, like chatbots. That's a use case now for AI. That's a very different use case from, say, a self-driving vehicle. So those need entirely different pipelines in every capacity to be able to build out and deploy and manage those disparate applications. >> Let me make sure I got this, Jim. 
What you're saying is that the process of creating a machine learning asset, a model, is going to be different at the pipeline level. It's not going to be different at the data level. It's going to be different at the pipeline level. George, does that make sense? Is that right? Do you see it that way, too, as we talk to folks? >> I do see what Jim is saying, in the sense that if you're using sort of operational tooling or guardrails to maintain the fidelity of your model that's being called by an existing system of record, that's very different tooling from what's going to be managing your IoT models, which have to get distributed and which may have sort of a central canonical version and then an edge-specific instance. In other words, I do think we're going to see different tooling because we're going to see different types of applications being fed and maintained by these models. >> Organizationally, we might have a common framework or approach, but the different use cases will drive different technology selections, and those pipelines themselves will be regarded as assets that generate machine learning and other types of assets that then get applied inside these automation applications. Have I got that right, guys? >> Yes. >> Yes. A quick example to illustrate exactly what we're referring to here. So IoT, George brought up IoT analytics with AI built into its edge applications. We're going to see a bifurcation between IoT analytic applications where the training of the models is done in a centralized way, because you've got huge amounts of data that are needed to train these very complex models that are running in the cloud but driving all these edge nodes and gateways and so forth, but then you're going to have another pipeline for edge-based training of models for things like autonomous operation, where more of the actual training will happen at the edges, at the perimeter. 
It'll be different types of training using different types of data with different types of time lags and so forth built in. But there will be distinct pipelines that need to be managed in a broader architecture. >> So issues like the ownership of the data, the intellectual property control of the data, the location of the data, the degree to which regulatory compliance is associated with it, how it gets tested: all those types of issues are going to have an impact on the nature of the pipelines that we build here. >> Yes. >> So look, one of the biggest challenges that every IT organization has, in fact every business has, is the challenge that if you have this much going on, the slowest part of it slows everything else down. So there's always an impedance mismatch organizationally. Are we going to see a forcing of data science and application development routines, practices, and conventions to start to come together, because the app development world, which is being asked to go faster and faster and faster, is at some point going to say, I can't wait for these guys to do their sandbox stuff? What do you think, guys? Are we going to see that? David, I'll look at you first, and Jim, I'll go to you. >> Sure, I think that the central point of control for this is going to have to be the business case for developing this automation, and therefore, from that, what's required in that system of record. >> Peter: Where the money is. >> Where the money is. What is required to make that automation happen, and therefore, from that, what are you going to pick as your ways of doing that? And I think that at the moment, it seems to me as an outsider, it's much more driven by the data scientists rather than the people in the business line, and eventually the application developers themselves. I think that shift has to happen. 
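Jim's bifurcation point, central batch training versus training that happens out at the edge, can be made concrete with a toy sketch: an edge node refines a local model with online gradient steps as observations stream in, rather than shipping raw data to a central cluster. This is pure Python under invented assumptions; the stream and the one-feature linear model are made up for illustration (the true relation in the simulated stream is y = 2x).

```python
def sgd_step(w, b, x, y, lr):
    """One online gradient update of a one-feature linear model on squared error."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

# Model state held locally on the (hypothetical) edge node.
w, b = 0.0, 0.0

# Simulated sensor stream arriving at the edge; true relationship is y = 2x.
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 500
for x, y in stream:
    w, b = sgd_step(w, b, x, y, lr=0.05)

print(round(w, 3), round(b, 3))
```

The centralized pipeline, by contrast, would collect the whole stream first and fit in one batch; the trade-off is freshness and bandwidth at the edge versus capacity and governance in the center.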
>> Well, yeah, well, one of our predictions has been that the tools are improving and that that's going to allow for a separation, increased specialization of the data science world, and we'll see the difference between people who are really doing data science and people who are doing support work. And I think what we're saying here is those people who do support work are going to end up moving closer to the application development world. Jim, I think that's basically some research that you've done as well. Have I got that right? Okay, so let me wrap up our Action Item here. David Floyer, do you have a quick observation, a quick Action Item for this segment? >> For this segment? The Action Item to me is putting together a business case for automation, the fundamental reduction of costs and improvement of business model, and that to me, is what starts this off. How are you going to save money? Where is it most important? Where in your business model is it most important? And what we've done is some very recent research is put out a starting point for this discussion, a business model of a 10 billion dollar company, and we're predicting that it saves 14 billion dollars. >> Let's come to that. The Action Item is basically, start getting serious about this stuff because based on business cases, yeah. All right, so let me summarize very quickly. For Jim Kobielus and George Gilbert and Ralph Finos, who seem to have disappeared off our screens and David Floyer, our Action Item is this. That the leaders in the industry, in the digital world, are starting to apply things like machine learning, deep learning, and other AI forms very aggressively to compete, and that's going to force everybody to get better at this. The challenge, of course, is if you're forcing, or if you're spending most of your time on the underlying technology, you're not spending most of your time figuring out how to actually deliver the business results. 
Our expectation is that over the course of the next year, one of the things that is going to happen within organizations will be a significant drive to improve the degree to which machine learning pipelines become more standardized, reflecting good data science practices within the business, which themselves will change based on the nature of the business, regulatory businesses versus non-regulatory businesses, for example; having those activities be reflected in the tooling choices; having those tooling choices then be reflected in the types of models you want to build; and having those models, those machine learning models, ultimately reflect the needs of the business case. This is going to be a domain that requires a lot of thought in a lot of IT organizations, with a lot of invention yet to be done here. But it's going to, we believe, drive a degree of specialization within the data science world as the tools improve, and a realignment of crucial value-creating activities within the business, so that what is truly data science remains data science, and what is more support, what's more related to building and operating these pipelines, becomes more associated with dev ops and application development overall. All right, so for the Wikibon team, Jim Kobielus, Ralph Finos, George Gilbert, and here in the studio with me, David Floyer, this has been Wikibon's Action Item. We look forward to seeing you again. (light instrumental music)
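One concrete reading of the "deepening repository of logic" idea from the discussion, where models are governed alongside code, is that a trained model becomes a versioned, checksummed artifact that a downstream application verifies and loads for real-time scoring. This is a stdlib-only sketch under invented conventions, not any particular registry's format; production systems would use a dedicated model registry.

```python
# Sketch: treat a trained model as a governed artifact -- serialized with a
# version and content hash, verified before use, then scored against inline.
import hashlib
import json

def save_model(params: dict, version: str) -> dict:
    """Package model parameters with version metadata and a checksum."""
    payload = json.dumps(params, sort_keys=True)
    return {
        "version": version,
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "params": params,
    }

def load_model(artifact: dict) -> dict:
    """Verify the checksum before trusting the artifact in production."""
    payload = json.dumps(artifact["params"], sort_keys=True)
    if hashlib.sha256(payload.encode()).hexdigest() != artifact["sha256"]:
        raise ValueError("model artifact corrupted")
    return artifact["params"]

def score(params: dict, features: dict) -> float:
    """Real-time scoring: a simple linear model, fast enough to sit
    inline in an automated business process."""
    return params["bias"] + sum(
        params["weights"][k] * v for k, v in features.items()
    )

# Invented example model: bias plus two feature weights.
artifact = save_model({"bias": 0.1, "weights": {"age": 0.02, "spend": 0.5}}, "v1")
params = load_model(artifact)
print(score(params, {"age": 30, "spend": 2.0}))  # 0.1 + 0.6 + 1.0 = 1.7
```

The point of the checksum and version fields is exactly the governance concern raised in the discussion: the code path that scores in production can prove which model it is running.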

Published Date : Jan 26 2018


Rob Thomas, IBM | BigDataNYC 2016


 

>> Narrator: Live from New York, it's the Cube. Covering Big Data New York City 2016. Brought to you by headline sponsors: Cisco, IBM, Nvidia, and our ecosystem sponsors. Now, here are your hosts, Dave Vellante and Jeff Frick. >> Welcome back to New York City, everybody. This is the Cube, the worldwide leader in live tech coverage. Rob Thomas is here, he's the GM of products for IBM Analytics. Rob, always good to see you, man. >> Yeah, Dave, great to see you. Jeff, great to see you as well. >> You too, Rob. World traveller. >> Been all over the place, but good to be here, back in New York, close to home for one day. (laughs) >> Yeah, at least a day. So the whole community is abuzz with this article that hit. You wrote it last week. It hit NewCo Shift, I guess just today or yesterday: The End of Tech Companies. >> Rob: Yes. >> All right, and you've got some really interesting charts in there, you've got some ugly charts. You've got HDP, you've got, let's see... >> Rob: You've got Imperva. >> Teradata, Imperva. >> Rob: Yes. >> Not looking pretty. We talked about this last year, just about a year ago. We said, the nose of the plane is up. >> Yep. >> Dave: But the planes are losing altitude. >> Yep. >> Dave: And when the funding dries up, look out. Interesting, some companies still are getting funding, so this makes rip currents. But in general, it's not pretty for pure-play Hadoop companies. >> Right. >> Dave: Something that you guys predicted, a long time ago, I guess. >> So I think there's a macro trend here, and this is really, I did a couple months of research, and this is what went into that end of tech companies post. And it's interesting, so you look at it in the stock market today: the five highest valued companies are all tech companies, what we would call. And that's not a coincidence. 
The reality is, I think we're getting past the phase of there being tech companies, and tech is becoming the default, and either you're going to be a tech company, or you're going to be extinct. I think that's the MO that every company has to operate with, whether you're a retailer, or in healthcare, or insurance, in banking, it doesn't matter. If you don't become a tech company, you're not going to be a company. That's what I was getting at. And so some of the pressures I was highlighting was, I think what's played out in enterprise software is what will start to play out in other traditional industries over the next five years. >> Well, you know, it's interesting, we talk about these things years and years and years in advance and people just kind of ignore it. Like Benioff even said, more SaaS companies are going to come out of non-tech companies than tech companies, OK. We've been talking for years about how the practitioners of big data are actually going to make more money than the big data vendors. Peter Goldmacher was actually the first, that was one of his predictions that hit true. Many of them didn't. (laughs) You know, Peter's a good friend-- >> Rob: Peter's a good friend of mine as well, so I always like pointing out what he says that's wrong. >> But, but-- >> Thinking of you, Peter. >> But we sort of ignored that, and now it's all coming to fruition, right? >> Right. >> Your article talks about, and it's a long read, but it's not too long to read, so please read it. But it talks about how basically every industry is, of course, getting disrupted, we know that, but every company is a tech company. >> Right. >> Or else. >> Right. And, you know, what I was, so John Battelle called me last week, he said hey, I want to run this, he said, because I think it's going to hit a nerve with people, and we were talking about why is that? Is it because of the election season, or whatever. People are concerned about the macro view of what's happening in the economy. 
And I think this kind of strikes at the nerve that says, one is you have to make this transition, and then I go into the article with some specific things that I think every company has to be doing to make this transition. It starts with, you've got to rethink your capital structure, because the investments you made, the distribution model that you had that got you here, is not going to be sufficient for the future. You have to rethink the tools that you're utilizing and the workforce, because you're going to have to adopt a new way to work. And that starts at the top, by the way. And so I go through a couple different suggestions of what I think companies should look at to make this transition, and I guess what scares me is, I visit companies all over the world, and I see very few companies making these kinds of moves. 'Cause it's a major shake-up to culture, it's a major shake-up to how they run their business, and, you know, I use the Warren Buffett quote, "When the tide goes out, you can see who's been swimming naked." The tide may go out pretty soon here, you know, it'll be in the next five years, and I think you're going to see a lot of companies that thought they could never be threatened by tech, if you will, go the wrong way because they're not making those moves now. >> Well, let's stay on cognitive, now that we're on this subject, because you know, you're having a pretty frank conversation here. A lot of times when you talk to people inside of IBM about cognitive and the impact it's going to have, they don't want to talk about that. But it's real. Machines have always replaced humans, and now we're seeing that replacement of cognitive functions, so that doesn't mean value can't get created. In fact, way more value is going to be created than we can even imagine, but you have to change the way in which you do things in order to take advantage of that. >> Right, right. 
One thing I say in the article is I think we're on the cusp of the great reskilling, which is, you take all the traditional IT jobs, I think over the next decade half those jobs probably go away, but they're replaced by a new set of capabilities around data science and machine learning and advanced analytics, things that are leveraging cognitive capabilities, but doing it with human focus as well. And so, you're going to see a big shift in skills. This is why we're partnering with companies like Galvanize, I saw Jim Deters when I was walking in. Galvanize is at the forefront of helping companies do that reskilling. We want to help them do that reskilling as well, and we're going to provide them a platform that automates the process of doing a lot of these analytics. That's what the new Project DataWorks, the new Watson project, is all about: how we begin to automate what have traditionally been very cumbersome and difficult problems to solve in an organization. But we're helping clients that haven't done that reskilling yet, we're helping them go ahead and get an advantage through technology. >> Rob, I want to follow up too on that concept on the capital markets and how this stuff is measured, because as you pointed out in your article, valuations of the top companies are huge. That's not a multiple of data right now. We haven't really figured that out, and it's something that we're looking at on the Wikibon team: how do you value the data, which used to be a liability 'cause you had to put it on machines and pay for it? Now it's really the driver, there's some multiple of data value that's driving those top-line valuations that you point out in that article. 
>> You know it's interesting, and nobody has really figured that out, 'cause you don't see it showing up, at least I don't think, in any stock prices, maybe CoStar would be one example where it probably has, they've got a lot of data around commercial real estate, that one sticks out to me, but I think about in the current era that we're in there's three ways to drive competitive advantage: one is economies of scale, low-cost manufacturing; another is through network effects, you know, a number of social media companies have done that well; but third is, machine learning on a large corpus of data is a competitive advantage. If you have the right data assets and you can get better answers, your models will get smarter over time, how's anybody going to catch up with you? They're not going to. So I think we're probably not too far from what you say, Jeff, which is companies starting to be looked at as a value of their data assets, and maybe data should be on the balance sheet. >> Well that's what I'm saying, eventually does it move to the balance sheet as something that you need to account for? Because clearly there's something in the Apple number, in the Alphabet number, in the Microsoft number, that's more than regular. >> Exactly, it's not just about, it's not just about the distribution model, you know, large companies for a long time, certainly in tech, we had a huge advantage because of distribution, our ability to get to other countries face to face, but as the world has moved to the Internet and digital sales and try/buy, it's changed that. Distribution can still be an advantage, but is no longer the advantage, and so companies are trying to figure out what are the next set of assets? It used to be my distribution model, now maybe it's my data, or perhaps it's the insight that I develop from the data. That's really changed. >> Then, in the early days of the sort of big data meme taking off, people would ask, OK, how can I monetize the data? 
As opposed to what I think they're really asking is, how could I use data to support making money? >> Rob: Right. Right. >> And that's something a lot of people I don't think really understood, and it's starting to come into focus now. And then, once you figure that out, you can figure out what data sources, and how to get quality in that data and enrich that data and trust that data, right? Is that sort of a logical sequence that companies are now going through? >> It's an interesting observation, because you think about it, the companies that were early on in purely monetizing data, companies like Dun & Bradstreet come to mind, Nielsen come to mind, they're not the super-fast-growing companies today. So it's kind of like, there was an era where data monetization was a viable strategy, and there's still some of that now, but now it's more about, how do you turn your data assets into a new business model? There was actually a great, new Clay Christensen article, it was published I think last week, talking about companies need to develop new business models. We're at the time, everybody's kind of developed in, we sell hardware, we sell software, we sell services, or whatever we sell, and his point was now is the time to develop a new business model, and those will, now my view, those will largely be formed on the basis of data, so not necessarily just monetizing the data, to your point, Dave, but on the basis of that data. >> I love the music industry, because they're always kind of out at the front of this evolving business model for digital assets in this new world, and it keeps jumping, right? It jumped, it was free, then people went ahead and bought stuff on iTunes, now Spotify has flipped it over to a subscription model, and the innovation of change in the business model, not necessarily the products that much, it's very different. The other thing that's interesting is just that digital assets don't have scarcity, right? >> Rob: Right. 
>> There's scarcity around the data, but not around the assets, per se. So it's a very different way of thinking about distribution and kind of holding back, how do you integrate with other people's data? It's not, not the same. >> So think about, that's an interesting example, because think about the music, there's a great documentary on Netflix about Tower Records, and how Tower Records went through the big spike and now is kind of, obviously no longer really around. Same thing goes for the Blockbusters of the world. So they got disrupted by digital, because their advantage was a distribution channel that was in the physical world, and that's kind of my assertion in that post about the end of tech companies is that every company is facing that. They may not know it yet, but if you're in agriculture, and your traditional dealer network is how you got to market, whether you know it or not, that is about to be disrupted. I don't know exactly what form that will take, but it's going to be different. And so I think every company to your point on, you know, you look at the music industry, kind of use it as a map, that's an interesting way to look at a lot of industries in terms of what could play out in the next five years. >> It's interesting that you say though in all your travels that people aren't, I would think they would be clamoring, oh my gosh, I know it's coming, what do I do, 'cause I know it's coming from an angle that I'm not aware of as opposed to, like you say a lot of people don't see it coming. You know, it's not my industry. Not going to happen to me. >> You know it's funny, I think, I hear two, one perception I hear is, well, we're not a tech company so we don't have to worry about that, which is totally flawed. Two is, I hear companies that, I'd say they use the right platitudes: "We need to be digital." OK, that's great to say, but are you actually changing your business model to get there? Maybe not. 
So I think people are starting to wake up to this, but it's still very much in its infancy, and some people are going to be left behind. >> So the tooling and the new way to work are sort of intuitive. What about capital structure? What's the implication to capital structures, how do you see that changing? >> So it's a few things. One is, you have to relook at where you're investing capital today. The majority of companies are still investing in what got them to where they are versus where they need to be. So you need to make a very conscious shift, and I use the old McKinsey model of horizon one, two and three, but I insert the idea that there should be a horizon zero, where you really think about what are you going to start to just outsource, or just altogether stop doing, because you have to aggressively shift your investments to horizon two, horizon three; you've really got to start making bets on the future. So that one is basically a capital shift. Two is, to attract this new workforce. When I talked about the great reskilling, people want to come to work for different reasons now. They want to come to work, you know, to work in the right kind of office in the right location, and that's going to require investment. They want a new comp structure; they're no longer just excited by a high base salary, they want participation in upside. Even if you're a mature company that's been around for 50 years, are you providing your employees meaningful upside in terms of bonus or stock? Most companies say, you know, we've always reserved that stuff for executives. There are too many other companies that are providing that as an alternative today, so you have to rethink your capital structure in that way. So it's how you spend your money, but also, as you look at the balance sheet, how you actually are, I'd say, spreading money around the company, and I think that changes as well.
>> So how does this all translate into how IBM behaves, from a product standpoint? >> We have changed a lot of things in IBM. Obviously we've made a huge move towards what we think is the future, around artificial intelligence and machine learning with everything that we've done around the Watson platform. We've made huge capital investments in our cloud capability all over the world, because that is an arms race right now. We've made a huge change in how we're hiring, we're rebuilding offices, so we put an office in Cambridge, downtown Boston. Put an office here in New York downtown. We're opening the office in San Francisco very soon. >> Jeff: The Sparks Center downtown. >> Yeah. So we've kind of come to urban areas to attract this new type of skill 'cause it's really important to us. So we've done it in a lot of different ways. >> Excellent. And then tonight we're going to hear more about that, right? >> Rob: Yes. >> You guys have a big announcement tonight? >> Rob: Big announcement tonight. >> Ritica was on, she showed us a little bit about what's coming, but what can you tell us about what we can expect tonight? >> Our focus is on building the first enterprise platform for data, which is steeped in artificial intelligence. First time you've seen anything like it. You think about it, the platform business model has taken off in some sectors. You can see it in social media, Facebook is very much a platform. You can see it in entertainment, Netflix is very much a platform. There hasn't really been a platform for enterprise data and IP. That's what we're going to be delivering as part of this new Watson project, which is Dataworks, and we think it'll be very interesting. Got a great ecosystem of partners that will be with us at the event tonight, that're bringing their IP and their data to be part of the platform. It will be a unique experience. 
>> What do you, I know you can't talk specifics on M&A, but just in general, in concept, in terms of all the funding, we talked last year at this event how the whole space was sort of overfunded, overcrowded, you know, and something's got to give. Do you feel like there's been, given the money that went in, is there enough innovation coming out of the Hadoop big data ecosystem? Or is a lot of that money just going to go poof? >> Well, you know, we're in an interesting time in capital markets, right? When you loan money and get back less than you loan, because interest rates are negative, it's almost, there's no bad place to put money. (laughing) Like you can't do worse than that. But I think, you know the Hadoop ecosystem, I think it's played out about like we envisioned, which is it's becoming cheap storage. And I do see a lot of innovation happening around that, that's why we put so much into Spark. We're now the number one contributor around machine learning in the Spark project, which we're really proud of. >> Number one. >> Yes, in terms of contributions over the last year. Which has been tremendous. And in terms of companies in the ecos-- look, there's been a lot of money raised, which means people have runway. I think what you'll see is a lot of people that try stuff, it doesn't work out, they'll try something else. Look, there's still a lot of great innovation happening, and as much as it's the easiest time to start a company in terms of the cost of starting a company, I think it's probably one of the hardest times in terms of getting time and attention and scale, and so you've got to be patient and give these bets some time to play out. >> So you're still sanguine on the future of big data? Good. When Rob turns negative, then I'm concerned. >> It's definitely, we know the endpoint is going to be massive data environments in the cloud, instrumented, with automated analytics and machine learning. 
That's the future, Watson's got a great headstart, so we're proud of that. >> Well, you've made bets there. You've also, I mean, IBM, obviously great services company, for years services led. You're beginning to automate a lot of those services, package a lot of those services into industry-specific software and other SaaS products. Is that the future for IBM? >> It is. I mean, I think you need it two ways. One is, you need domain solutions, verticalized, that are solving a specific problem. But underneath that you need a general-purpose platform, which is what we're really focused on around Dataworks, is providing that. But when it comes to engaging a user, if you're not engaging what I would call a horizontal user, a data scientist or a data engineer or developer, then you're engaging a line-of-business person who's going to want something in their lingua franca, whether that's wealth management and banking, or payer underwriting or claims processing in healthcare, they're going to want it in that language. That's why we've had the solutions focus that we have. >> And they're going to want that data science expertise to be operationalized into the products. >> Rob: Yes. >> It was interesting, we had Jim on and Galvanize and what they're doing. Sharp partnership, Rob, you guys have, I think made the right bets here, and instead of chasing a lot of the shiny new toys, you've sort of thought ahead, so congratulations on that. >> Well, thanks, it's still early days, we're still playing out all the bets, but yeah, we've had a good run here, and look forward to the next phase here with Dataworks. >> Alright, Rob Thomas, thanks very much for coming on the Cube. >> Thanks guys, nice to see you. >> Jeff: Appreciate your time today, Rob. >> Alright, keep it right there, everybody. We'll be back with our next guest right after this. This is the Cube, we're live from New York City, right back. (electronic music)

Published Date : Sep 28 2016



Brian Biles, Datrium & Benjamin Craig, Northrim Bank - #VMworld - #theCUBE


 

>> Live from the Mandalay Bay Convention Center in Las Vegas, it's theCUBE, covering VMworld 2016, brought to you by VMware and its ecosystem sponsors. Now here's your host, Stu Miniman. >> Hi, welcome back to theCUBE. Stu Miniman here with my co-host for this segment, Mark Farley, and we're at VMworld 2016 here in Las Vegas. It's been five years since we've been in Vegas, and a lot has changed in five years. This morning's keynote was talking about five years from now, when they expect a crossover where public cloud becomes the majority. From our research, we think that flash capacities really are outstripping traditional hard disk drives within five years from now. So, the two guests I have for this program: Brian Biles is the CEO of Datrium. It's been a year since we had you on, when you came out of stealth, and we're really excited because you brought your customer along. We love having customers on, down from Alaska, within sight of Russia, maybe. And let me introduce Ben Craig, who's the CIO of Northrim Bank. Thank you so much for coming. All right, so we want to talk a lot to you, but real quick, Brian, why don't you give us an update on the company? What's happened in the last year, and where are you with the product and customer deployments? >> Sure. Last year when we talked, Datrium was just coming out of stealth mode, so we were introducing the notion of what we're doing. Starting in about mid Q1 of this year, we started shipping and deploying. Thankfully, one of our first customers was Ben. And our model of convergence is different from anything else that you'll see at VMworld, so I think hearing Ben tell about his experience and deployment philosophy, and what changed for him, is probably the best way to understand what we do. >> All right, that's a great lead-in. Let's start with the basics: can you tell us a little bit about Northrim Bank?
How many locations do you have? What's your role there, and how long have you been there? Kind of a quick synopsis. >> Sure. We're a growing bank, one of three publicly traded, publicly held companies in the state of Alaska. We recently acquired Residential Mortgage after acquiring Alaska Pacific Bank, and so we have locations all the way from Fairbanks, Alaska, where it gets down to negative 50, negative 60 degrees Fahrenheit, down to Bellevue, Washington. And to be perfectly candid, what's helped propel some of that growth has been our virtual infrastructure and our virtual desktop infrastructure, which is predicated on us being able to grow our storage, which kind of ties directly into what we've got going on with Datrium. >> That's great. Can you talk to what you were using before, and what led you to Datrium? Going with a startup is, you know, a little risky, right? I thought CIOs don't buy into risk. >> Well, as a very conservative bank that serves a commercial market, risk is not something that we buy into a lot. But it's also what propels some of our best customers to grow with us, and in this case we had a lot of faith in the people that joined the company. From an early start I personally knew a lot of the team, from sales, from engineering, from leadership, and that got us interested. Once we got the hook, we learned about the technology and found out that it was really, I dare say, the unicorn of storage that we'd been looking for. And the reason is because we came from array-based systems, and we had the same evolution that a lot of customers did. We started out with a nice, cozy EqualLogic system. We evolved into a Nimble solution, the hybrid era, if you will, of arrays. And we found that as we grew, we ran into scalability problems. As soon as we started tackling VDI, we found that we immediately needed to segregate our workloads.
Obviously, that's because servers and production VDI have completely different read/write profiles. As we started looking at some of the limitations as we grew our VDI infrastructure, we had to consider upgrading all our processors, all of our solid-state drives, all of the things that helped make that hybrid array support our VDI infrastructure, and it's costly. And so we did that once, and then we grew again, because VDI was so darn popular within our organization. At that time we kind of caught wind of what was going on with Datrium, and it totally turned the paradigm on its head for what we were looking for. >> How did it? You brought that up: how did the Datrium solution impact that read/write balance? What was it about the Datrium solution that solved that read/write balance for you?
So instead of being bound by the controller doing all the heavy lifting, you now have it being done by a few extra processors, a few extra big of memory out on their servers. That puts the data as close as humanly possible, which is what hyper converging. But it also has this very durable back end that ensures that your rights are protected. So instead of having to span my storage across all of my hosts, I still have all the best parts of a durable sand on all the best parts of high performance. By bringing that that data closer to where the host. So that's why Atrium enabled us to be able to grow our VD I infrastructure literally overnight. Whenever we ran out of performance, we just pop in another drive and go and the performances is insane. We just finished writing a 72 page white paper for VM, where we did our own benchmarking. Um, using my OMETER sprayers could be using our secondary data center Resource is because they were, frankly, somewhat stagnant, and we knew that we'd be able to get with most level test impossible. And we found that we were getting insane amounts of performance, insane amounts of compression. And by that I can quantify we're getting 132,000 I ops at a little bit over a gig a sec running with two 0.94 milliseconds of late and see that's huge. And one of the things that we always used to compare when it came to performance was I ops and throughput. Whenever we talk to any storage vendor, they're always comparing. But we never talked about lately because Leighton See was really network bound and their storage bender could do anything about that. But by bringing the the brain's closer to the hosts, it solves that problem. And so now our latent C that was like a 25 minutes seconds using a completely unused, nimble storage sand was 2.94 milliseconds. What that translated into was about re X performance increase. So when we went from equal logic to nimble, we saw a multiplier. There we went from nimble toed D atrium. 
We saw a 3x multiplier, and that translated directly into being able to send our night processors home earlier, which means less FTE time, larger maintenance windows, and faster performance for all of our branches. It went on for a little bit there, but that's what Datrium's done for us. >> Right. And just to amplify that, part of the approach Datrium is taking is to assume that host memory of some kind, flash for now, is going to become so big and so cheap that reads will just never leave the host at some point, and we're trying to make that point today. So we've increased our host density, for example, since last year, to 16 terabytes of flash per host, raw; with inline dedupe and compression that could be 50 to 100 terabytes. So we have customers doing fairly big data warehouse operations where the reads never leave the host. It's all host-flash latency, and they can go from an eight-hour job to a one-hour job. In our model, we sell a system that includes a protected repository where the writes go. That's on a 10-gig network. You buy hosts that have flash that you provision from your server vendor; we don't charge extra for the software that we load on the host that does all the heavy lifting. It does the RAID, compression, dedupe, cloning, what have you. It does all the local caching. So we encourage people to put as much flash in as many hosts as possible against that repository, and we make it financially attractive to do that. >> So how is the storage provisioned? Is it LUNs? How? >> So, it all shows up, and this is one of the other big parts that is awesome for us, it shows up as one gigantic NFS datastore. Now, it doesn't actually use NFS, it just presents that way to VMware. But previously we had about 34 different volumes.
And like everybody else on the planet who thin-provisions, we had to leave a buffer zone, because we'd have developers that would put a VMware snapshot on something, patch it, then forget about it: it fills up the volume, brings the volume offline, panic ensues. So you can imagine that 30 to 40% of buffer space times each one of those different volumes. Now we have one gigantic volume, and each VM has its performance and all of its protection managed individually at the VM level. And that's huge, because no longer do you have to set protection and performance at the volume level; you can set it right on the VM. >> So you don't even see storage. >> You don't ever have to log into the appliance at all. >> So, serverless storage. Storageless, rather, is what we're having. >> It's all through the plugin. >> And because all the writes go off-host, the writes don't interrupt each other, and the hosts don't interrupt each other. We actually go to a lot of lengths to make sure that happens, so there's isolation host to host. That means if you want to provision a particular host for a particular set of demands, you can. You could have VDI next door to a data warehouse, and the level of intensity doesn't matter to each other. So it's very specifically enforceable, by host configuration or by managing the VM itself, just as you would do with VMware. >> It gets a lot more flexibility than we would typically get with a hyperconverged solution, which has very static growth and performance requirements. >> So when you talk about hyperconvergence, the number one, number two, and number three things that we usually talk about are simplicity. So, you're a pretty technical guy, you obviously understand this well. Can you speak to, beyond the EqualLogic and Nimble and how you scaled those, what the day-to-day experience is like? How's the ongoing operation — how much do you have to test and tweak and adjust things, and how much does it just work?
>> Well, this is one of the reasons that we went with Datrium as well. When it comes down to it, with a hyperconverged solution you're spanning all of your storage across your hosts, right? We're trying to make use of those resources, but we just recently had one of our servers down for a little over 10 days because it had a problem with its BIOS; troubleshooting it, it just didn't want to stay up. If we were in a full hyperconverged infrastructure and that server was part of the cluster, that means our data would have had to be migrated off of that host as well, which is kind of a big deal. I love the idea of having a rock-solid, purpose-built, highly available device that makes sure my writes are there for me, but allows me to have the elastic configuration that I need on my hosts, to be able to grow them as I see fit, and also to be able to work directly with my vendors to get the price points that I need for each of my resources. So for our Oracle servers, Exchange servers, and SQL servers, we can put in some NVMe drives and it'll scream like a scalded dog; and for all of our file and print servers and IT monitoring servers, we can go with some Samsung 850 EVO drives, pop them in a couple of empty bays, and we're still able to crank out the number of IOPS that we need for each of those, at a very low cost point but with a maximum amount of protection on that data. So that was a big selling point. >> You're using both NVMe and block? >> We're actually going through a server refresh right now; it's all part of the white paper that we just published. We decided to go with internal NVMe drives to start, with two two-terabyte internal PCIe cards, and then we have 2.5-inch NVMe-ready bays on the front. But we also plumbed it to be able to use solid-state drives, so that we have the flexibility in the future to use those servers as we see fit.
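The division of labor Ben keeps coming back to — reads served from whatever flash you put in each host, every write landing synchronously on the shared durable appliance — can be sketched in miniature. This is our own illustration of the idea, not Datrium's actual API; `HostCache` and `DurableLog` are invented names:

```python
# Toy model of the host-flash / durable-appliance split described above.
# HostCache and DurableLog are illustrative names, not a real product API.

class DurableLog:
    """Stands in for the shared network appliance that holds every write."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data      # the durable copy of each write

    def read(self, addr):
        return self.blocks[addr]


class HostCache:
    """Per-host flash: absorbs reads locally, forwards writes to the log."""
    def __init__(self, log):
        self.log = log
        self.flash = {}               # host-local flash cache
        self.hits = 0
        self.misses = 0

    def write(self, addr, data):
        self.log.write(addr, data)    # durability first: write-through
        self.flash[addr] = data       # then populate the local flash

    def read(self, addr):
        if addr in self.flash:        # served from host flash: no network hop
            self.hits += 1
            return self.flash[addr]
        self.misses += 1              # cold read falls back to the appliance
        data = self.log.read(addr)
        self.flash[addr] = data
        return data


log = DurableLog()
host = HostCache(log)
host.write(0, b"vdi-image")
assert host.read(0) == b"vdi-image"       # warm read never leaves the host
host2 = HostCache(log)                    # a second host sharing the log
assert host2.read(0) == b"vdi-image"      # cold read from the appliance, then cached
```

The failure story Ben tells falls out of this shape: losing a host loses only its cache, while the durable copy stays on the appliance, so a 10-day server outage doesn't force a storage migration.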
So again, a very elastic architecture that allows us to be in control of what performance is assigned to each individual host. >> So what apps beyond VDI do you expect to use this for? Are you already deploying it further? >> VDI is our biggest consumer of resources; our users have come to expect that instant access to all of their applications. Eventually we have the ability to move the entire data center onto the Datrium, and so one of the things that we're currently completing this year is the rollout of VDI to the remaining 40% of our branches; 60% of them are already running VDI. And then after that, we're probably going to end up taking our core servers and migrating them over, and kind of through attrition, using some of our older array-based technology for test and dev. >> All right, so I can't let you go without asking you a bit about your relationship with VMware. Is VMware meeting your needs? Is there anything from VMware, or the storage ecosystem around them, that would kind of make your job easier? >> Yes. If they got rid of the vSphere Web Client, that would be great. I am not a fan of the vSphere Web Client at all, and I wish they'd bring back the C# client; I like to get that on the record, because I try to every single chance I get. No, the truth is, the integration between Datrium and VMware is super tight. It's something I don't have to think about; it makes it easy for me to do my job, and at the end of the day, that's what we're looking for. I think the biggest focus for a lot of the constituents — I'm the Anchorage VMware User Group leader, and said group is looking for stability in product releases, and for making sure that there's more attention given to QA on some of the recent updates that they have to the hypervisor. >> Brian, I'll give you the final word: takeaways that you want people to know about your company and your customers, coming out of VMworld. >> We're thrilled to be here for the second year, thrilled to be here with Ben. It's a great, exciting period for us; as a vendor, we're just moving into sort of nationwide deployment. So check us out here at the show, and if you're not here, check us out on the Web. There's a lot of exciting things happening in convergence in general, and Datrium's leading the way in a couple of interesting ways. >> All right, Brian and Ben, thank you so much for joining us. You know, I don't think we've done a Cube segment in Alaska yet, so maybe we'll have to talk to you off camera about that. >> Recommended. >> All right. We'll be back with lots more coverage here from VMworld 2016. Thanks for watching theCUBE. >> You're good at this. >> Oh, you're good.

Published Date : Aug 30 2016


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mark Farley | PERSON | 0.99+
Brian Vials | PERSON | 0.99+
Ryan | PERSON | 0.99+
Alaska | LOCATION | 0.99+
Vienna | LOCATION | 0.99+
30 | QUANTITY | 0.99+
Vegas | LOCATION | 0.99+
Ben Craig | PERSON | 0.99+
one hour | QUANTITY | 0.99+
Ben | PERSON | 0.99+
Brian | PERSON | 0.99+
Russia | LOCATION | 0.99+
Last year | DATE | 0.99+
132,000 | QUANTITY | 0.99+
60% | QUANTITY | 0.99+
eight hour | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
40% | QUANTITY | 0.99+
last year | DATE | 0.99+
Philip | PERSON | 0.99+
2.94 milliseconds | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
Bryant | PERSON | 0.99+
Day Tree | ORGANIZATION | 0.99+
72 page | QUANTITY | 0.99+
16 terabytes | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
Brian Biles | PERSON | 0.99+
2.5 inch | QUANTITY | 0.99+
25 minutes seconds | QUANTITY | 0.99+
Northern Bank | ORGANIZATION | 0.99+
GM | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
Emmy | PERSON | 0.98+
one | QUANTITY | 0.98+
100 terabytes | QUANTITY | 0.98+
Mandalay Bay Convention Center | LOCATION | 0.98+
Cee Io | ORGANIZATION | 0.98+
second year | QUANTITY | 0.98+
Pacific Bank | ORGANIZATION | 0.98+
Elsa | PERSON | 0.98+
each host | QUANTITY | 0.98+
Atrium | ORGANIZATION | 0.98+
10 big network | QUANTITY | 0.98+
two | QUANTITY | 0.98+
Leighton See | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
both | QUANTITY | 0.97+
Northrim Bank | ORGANIZATION | 0.97+
Oracle | ORGANIZATION | 0.97+
first customers | QUANTITY | 0.96+
this year | DATE | 0.96+
0.94 milliseconds | QUANTITY | 0.96+
60 below Fahrenheit | QUANTITY | 0.96+
One | QUANTITY | 0.96+
Bellevue, Washington | LOCATION | 0.96+
over 10 days | QUANTITY | 0.96+
each VM | QUANTITY | 0.96+
each | QUANTITY | 0.96+
today | DATE | 0.95+
C Sharp | ORGANIZATION | 0.95+
2016 | DATE | 0.95+
five years back | DATE | 0.95+
a year | QUANTITY | 0.94+
GM Ware House Veum | ORGANIZATION | 0.93+
World 2016 | EVENT | 0.92+
about 34 different volumes | QUANTITY | 0.91+
two terabyte | QUANTITY | 0.91+
three publicly traded publicly held companies | QUANTITY | 0.9+
three | QUANTITY | 0.88+
mid Q. One | DATE | 0.88+
Datrium | ORGANIZATION | 0.88+
each individual host | QUANTITY | 0.87+
Minuteman | PERSON | 0.86+
50 negative | QUANTITY | 0.83+
V | TITLE | 0.83+
Flash Leighton | ORGANIZATION | 0.83+
#VMworld | ORGANIZATION | 0.82+
Fairbanks, Alaska | LOCATION | 0.82+
Benjamin Craig | PERSON | 0.8+
single chance | QUANTITY | 0.78+