Ed Walsh & Thomas Hazel | A New Database Architecture for Supercloud
(bright music) >> Hi, everybody, this is Dave Vellante, welcome back to Supercloud 2. Last August, at the first Supercloud event, we invited the broader community to help further define Supercloud, we assessed its viability, and identified the critical elements and deployment models of the concept. The objectives here at Supercloud 2 are, first of all, to continue to tighten and test the concept, the second is, we want to get real world input from practitioners on the problems that they're facing and the viability of Supercloud in terms of applying it to their business. So on the program, we got companies like Walmart, Saks, Western Union, Ionis Pharmaceuticals, NASDAQ, and others. And the third thing that we want to do is we want to drill into the intersection of cloud and data to project what the future looks like in the context of Supercloud. So in this segment, we want to explore the concept of data architectures and what's going to be required for Supercloud. And I'm pleased to welcome one of our Supercloud sponsors, ChaosSearch, Ed Walsh is the CEO of the company, with Thomas Hazel, who's the Founder, CTO, and Chief Scientist. Guys, good to see you again, thanks for coming into our Marlborough studio. >> Always great. >> Great to be here. >> Okay, so there's a little debate, I'm going to put you right in the spot. (Ed chuckling) A little debate going on in the community started by Bob Muglia, a former CEO of Snowflake, and he was at Microsoft for a long time, and he looked at the Supercloud definition, said, "I think you need to tighten it up a little bit." So, here's what he came up with. He said, "A Supercloud is a platform that provides a programmatically consistent set of services hosted on heterogeneous cloud providers." So he's calling it a platform, not an architecture, which was kind of interesting. And so presumably the platform owner is going to be responsible for the architecture, but Dr.
Nelu Mihai, who's a computer scientist behind the Cloud of Clouds Project, he chimed in and responded with the following. He said, "Cloud is a programming paradigm supporting the entire lifecycle of applications with data and logic natively distributed. Supercloud is an open architecture that integrates heterogeneous clouds in an agnostic manner." So, Ed, words matter. Is this an architecture or is it a platform? >> Put us on the spot. So, I'm sure you have concepts, I would say it's an architectural or design principle. Listen, I look at Supercloud as a mega trend, just like cloud, just like data analytics. And some companies are using the principle, design principles, to literally get dramatically ahead of everyone else. I mean, things you couldn't possibly do if you didn't use cloud principles, right? So I think it's a Supercloud effect, you're able to do things you're not able to. So I think it's more a design principle, but if you do it right, you get dramatic effect as far as customer value. >> So the conversation that we were having with Muglia, and Tristan Handy of dbt Labs, was, I'll set it up as the following, and, Thomas, would love to get your thoughts, if you have a CRM, think about applications today, it's all about forms and codifying business processes, you type a bunch of stuff into Salesforce, and all the salespeople do it, and this machine generates a forecast. What if you have this new type of data app that pulls data from the transaction system, the e-commerce, the supply chain, the partner ecosystem, et cetera, and then, without humans, actually comes up with a plan. That's their vision. And Muglia was saying, in order to do that, you need to rethink data architectures and database architectures specifically, you need to get down to the level of how the data is stored on the disc. What are your thoughts on that? Well, first of all, I'm going to cop out, I think it's actually both. 
I do think it's a design principle, I think it's not open technology, but open APIs, open access, and you can build a platform on that design principle architecture. Now, I'm a database person, I love solving the database problems. >> I've been waiting for you to launch into this. >> Yeah, so I mean, you know, Snowflake is a database, right? It's a distributed database. And we wanted to crack those codes, because, multi-region, multi-cloud, customers wanted access to their data, and their data is in a variety of forms, all these services that you've talked about. And so what I saw as a core principle was cloud object storage, everyone streams their data to cloud object storage. From there we said, well, how about we rethink database architecture, rethink file format, so that we can take each one of these services and bring them together, whether distributively or centrally, such that customers can access and get answers, whether it's operational data, whether it's business data, AKA search, or SQL, complex distributed joins. But we had to rethink the architecture. I like to say we're not a first generation, or a second, we're a third generation distributed database on pure, pure cloud storage, no caching, no SSDs. Why? Because all that availability, the cost of time, is a struggle, and cloud object storage, we think, is the answer. >> So when you're saying no caching, so when I think about how companies are solving some, you know, pretty hairy problems, take MySQL HeatWave, everybody thought Oracle was going to just forget about MySQL, well, they come out with HeatWave. And the way they solve problems, and you see their benchmarks against Amazon, "Oh, we crush everybody," is they put it all in memory. So you said no caching? You're not getting performance through caching? How is that true, and how are you getting performance? >> Well, so five, six years ago, right?
When you realize that cloud object storage is going to be everywhere, and it's going to be a core foundational, if you will, fabric, what would you do? Well, a lot of times the second generation say, "We'll take it out of cloud storage, put in SSDs or something, and put into cache." And that adds a lot of time, adds a lot of costs. But I said, what if, what if we could actually make the first read hot, the first read distributed joins and searching? And so what we set out to do was say, we can't cache, because that adds time, that adds cost. We have to make cloud object storage high performance, like it feels like a caching SSD. That's where our patents are, that's where our technology is, and we've spent many years working towards this. So, to me, if you can crack that code, a lot of these issues we're talking about, multi-region, multicloud, different services, everybody wants to send their data to the data lake, but then they move it out, we said, "Keep it right there." >> You nailed it, the data gravity. So, Bob's right, the data's coming in, and you need to get the data from everywhere, but you need an environment that you can deal with all that different schema, all the different types of technology, but also at scale. Bob's right, you cannot use memory or SSDs to cache that, that doesn't scale, it doesn't scale cost effectively. But if you could, and what you did, is you made object storage, S3 first, but object storage, the only persistence by doing that. And then we get performance, we should talk about it, it's literally, you know, hundreds of terabytes of queries, and it's done in seconds, it's done without memory caching. We have concepts of caching, but the only caching, the only persistence, is actually when we're doing caching, we're just keeping another side track of things on S3 itself.
So we're using, actually, the object storage to be a database, which is kind of where Bob was saying, we agree, but that's where you started, people thought you were crazy. >> And maybe make it live. Don't think of it as archival or temporary space, make it live, real time streaming, operational data. What we do is make it smart, we see the data coming in, we uniquely index it such that you can get your use cases, that are search, observability, security, or backend operational. But we don't have to have this, I dunno, static, fixed, siloed type of architecture technologies that were traditionally built prior to Supercloud thinking. >> And you don't have to move everything, essentially, you can do it wherever the data lands, whatever cloud across the globe, you're able to bring it together, you get the cost effectiveness, because the only persistence is the cheapest storage persistence layer you can buy. But the key thing is you cracked the code. >> We had to crack the code, right? That was the key thing. >> That's where the patents are. >> And then once you do that, then everything else gets easier to scale, your architecture, across regions, across cloud. >> Now, it's a general purpose database, as Bob was saying, but we use that database to solve a particular issue, which is around operational data, right? So, we agree with Bob. >> Interesting. So this brings me to this concept of data mesh, Zhamak Dehghani is one of our speakers, you know, we talk about data fabric, which is a NetApp, originally NetApp concept, Gartner's kind of co-opted it. But so, the basic concept is, data lives everywhere, whether it's an S3 bucket, or a SQL database, or a data lake, it's just a node on the data mesh. So in your view, how does this fit in with Supercloud? Ed, you've said that you've built, essentially, an enabler for that, for the data mesh, I think you're an enabler for the Supercloud-like principles. This is a big, chewy opportunity, and it requires, you know, a team approach.
There's got to be an ecosystem, there's not going to be one Supercloud to rule them all, so where does the ecosystem fit into the discussion, and where do you fit into the ecosystem? >> Right, so we agree completely, there's not one Supercloud in effect, but we use Supercloud principles to build our platform, and then, you know, the ecosystem's going to be built on leveraging what everyone else's secret powers are, right? So our power, our superpower, based upon what we built is, we deal with, if you're having any scale, or cost effective scale issues, with data, machine generated data, like business observability or security data, we are your force multiplier, we will take that in singularly, just let it, simply put it in your object storage wherever it sits, and we give you uniform access to that using open API access, SQL, or you know, the Elasticsearch API. So, that's what we do, that's our superpower. So I'll play it into data mesh, that's a perfect fit, we are a node on a data mesh, but I'll also play it into the Supercloud, about how we see the ecosystem kind of playing, and we talked about it just in the last couple days, how we see this possibly playing out. Short term, our superpowers, we deal with this data that's coming at these environments, people, customers, building out observability or security environments, or vendors that are selling their own Supercloud, I do observability, the Datadogs of the world, dot dot dot, the Splunks of the world, dot dot dot, and security. So what we do is we fit in naturally. What we do is a cost effective scale, just land it anywhere in the world, we deal with ingest, and it's cost effective, an order of magnitude, or two or three orders of magnitude, more cost effective. Allows them, their customers are asking them to do the impossible, "Give me fast monitoring alerting. I want it snappy, but I want it to keep two years of data, (laughs) and I want it cost effective." It doesn't work.
They're good at the fast monitoring alerting, we're good at the long-term retention. And yet there's some gray area between those two, but one to one is actually cheaper, so we would partner. So the first ecosystem plays, who wants to have the ability to, really, all the data's in those same environments, the security observability players, they can literally, just through API, drag our data into their pane of glass. We can make it seamless for customers. Right now, we make it helpful to customers. You're Datadog, we make a button, easy go from Datadog to us for logs, save you money. Same thing with Grafana. But you can also look at ecosystem, those same vendors, it used to be a year ago it was, you know, it's all about how you can grow, like it's growth at all costs, now it's about COGS. So literally we can go into an environment, you supply what your customer wants, but we can help with COGS. And one-on-one, a partnership is better than you trying to build on your own. >> Thomas, you were saying you make the first read fast, so you think about Snowflake. Everybody wants to talk about Snowflake and Databricks. So, Snowflake, great, but you got to get the data in there. All right, so that's, can you help with that problem? >> I mean we want simple in, right? And if you have to have structure in, you're not simple. So the idea that you have a simple in, data lake, schema-on-read type philosophy, but schema-on-write type performance. And so what I wanted to do, what we have done, is have that simple lake, and stream that data real time, and those access points of search or SQL, to go after whatever business case you need, security observability, warehouse integration. But the key thing is, how do I make that click, click, click answer, and do it quickly? And so what we want to do is, that first read has to be fast. Why? 'Cause otherwise you're going to do all this siloing, layers, complexity. If your first read's not fast, you're at a disadvantage, particularly in cost.
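Thomas's "schema-on-read philosophy with schema-on-write performance" idea can be sketched in miniature. The toy below is a hedged illustration of the general schema-on-read pattern, not ChaosSearch's actual indexing technology: raw, messy JSON lines land untouched, and fields are only projected into a schema at query time, tolerating records whose shape varies line to line.

```python
import json

# Raw events land as-is: no table definition, no ETL, fields vary by line.
raw_lines = [
    '{"ts": 1, "user": "ana", "status": 200}',
    '{"ts": 2, "user": "ben", "status": 500, "err": "timeout"}',
    '{"ts": 3, "status": 404}',  # no "user" field at all
]

def read_with_schema(lines, fields):
    """Schema-on-read: apply the schema only when querying, tolerating
    records that are missing some of the requested fields."""
    for line in lines:
        record = json.loads(line)
        yield tuple(record.get(f) for f in fields)

# Two different "lenses" over the same untouched raw data:
errors = [r for r in read_with_schema(raw_lines, ["ts", "err"]) if r[1]]
users = [r[0] for r in read_with_schema(raw_lines, ["user"]) if r[0]]
print(errors)  # [(2, 'timeout')]
print(users)   # ['ana', 'ben']
```

The "schema-on-write performance" half is the hard part the interview is about: a naive loop like this rescans every raw record on every query, which is exactly why most systems re-copy data into a warehouse first.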
And nobody says I want less data, but everyone has to, whether they say we're going to shorten the window, we're going to use AI to choose, but in a security moment, when you don't have that answer, you're in trouble. And that's why we are this service, this Supercloud service, if you will, providing access, well-known search, well-known SQL type access, that if you just have one access point, you're at a disadvantage. >> We actually talked about Snowflake and BigQuery, and a different platform, Databricks. That's kind of where we see the phase two of ecosystem. One is easy, the low-hanging fruit is observability and security firms. But the next one is, what we do, our superpower is dealing with this messy data whose schema is changing like night and day. Pipelines are tough, and it's changing all the time, but you want these things fast, and it's big data around the world. That's the next point, just use us alongside, or inside, one of their platforms, and now we get the best of both worlds. Our superpower is keeping this messy data as a streaming thing, okay, not a batch thing, and allowing you to do that. So, that's the second one. And then to be honest, the third one, which plays into Supercloud, it also plays perfectly in the data mesh, is if you really go to the ultimate thing, what we have done is made object storage, S3, GCS, and blob storage, we made it a database. Put, get, complex query with big joins. You know, so back to your original thing, and Muglia teed it up perfectly, we've done that. Now imagine if that's an ecosystem, who would want that? If it's, again, uniformly available across all the regions, across all the clouds, and it's right next to where you are building a service, or a client's trying, that's where the ecosystem, I think people are going to use Superclouds for their superpowers. We're really good at this, allows that short term. I think the Snowflakes and the Databricks are the medium term, you know?
And then I think it eventually gets to, hey, listen, if you can make object storage fast, you can just go after it with simple SQL queries, or Elastic. Who would want that? I think that's where people are going to leverage it. It's not going to be one Supercloud, and we leverage the Superclouds. >> Our viewpoint is smart object storage can be programmable, and so we agree with Bob, but we're not saying do it here, do it here. This core, fundamental layer across regions, across clouds, that everyone has? Simple in. Right now, it's hard to get data in for access for analysis. So we said, simply, we'll automate the entire process, give you API access across regions, across clouds. And again, how do you do a distributed join that's fast? How do you do a distributed join that doesn't cost you an arm and a leg? And how do you do it at scale? And that's where we've been focused. >> So prior, the cloud object store was a niche. >> Yeah. >> S3 obviously changed that. How standard is, essentially, object store across the different cloud platforms? Is that a problem for you? Is that an easy thing to solve? >> Well, let's talk about it. I mean, yeah, we've abstracted it, but fundamentally, cloud object storage is put, get, and list. That's why it's so scalable, 'cause it doesn't have all these other components. That complexity is where we have moved up, and provide direct analytical API access. So because of its simplicity, and costs, and security, and reliability, it can scale naturally. I mean, really, distributed object storage is easy, it's put-get anywhere, now what we've done is we put a layer of intelligence, you know, call it smart object storage, where access is simple. So whether it's multi-region, do a query across, or multicloud, do a query across, or hunting, searching. >> We've had clients doing Amazon and Google, we have some Azure, but we see Amazon and Google more, and it's a consistent service across all of them.
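The put, get, and list primitives Thomas names are simple enough to sketch. This toy in-memory "bucket" is an illustration only, not a real client (against real S3 one would use something like boto3's put_object, get_object, and list_objects_v2); it shows why such a minimal interface is easy to scale and to offer consistently across clouds.

```python
# A toy in-memory stand-in for cloud object storage, illustrating the
# three primitives the interview names: put, get, and list.
# (Illustrative only: real object stores add durability, auth, regions, etc.)

class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes; a flat keyspace, no real folders

    def put(self, key: str, data: bytes) -> None:
        # Put: write a whole blob under a key (objects are replaced, not edited).
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        # Get: fetch the whole blob back by key.
        return self._objects[key]

    def list(self, prefix: str = "") -> list:
        # List: enumerate keys under a prefix (how "folders" are simulated).
        return sorted(k for k in self._objects if k.startswith(prefix))


store = ToyObjectStore()
store.put("logs/2023/01/app.json", b'{"level": "info"}')
store.put("logs/2023/02/app.json", b'{"level": "warn"}')
print(store.list("logs/2023/"))          # both keys, in sorted order
print(store.get("logs/2023/01/app.json"))
```

Everything beyond these three calls (indexing, joins, search) is the "layer of intelligence" the speakers describe building on top.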
Just literally put your data in the bucket of choice, or folder of choice, click a couple buttons, literally click that to say "that's hot," and after that, it's hot, you can see it. But we're not moving data, the data gravity issue, that's the other thing. It's already natively flowing to these pools of object storage across different regions and clouds. We don't move it, we index it right there, we're spinning up stateless compute, back to the Supercloud concept. But now that allows us to do all these other things, right? >> And it's no longer just cheap and deep object storage. Right? >> Yeah, we make it the same, like you have an analytic platform regardless of where you're at, you don't have to worry about that. Yeah, we deal with that, we deal with stateless compute coming up -- >> And make it programmable. Be able to say, "I want this bucket to provide these answers." Right, that's really the hope, the vision. And the complexity to build the entire stack, and then connect them together, we said, the fabric is cloud storage, we just provide the intelligence on top. >> Let's bring it back to the customers, and one of the things we're exploring in Supercloud 2 is, you know, is Supercloud a solution looking for a problem? Is multicloud really a problem? I mean, you hear, you know, a lot of the vendor marketing says, "Oh, it's a disaster, because it's all different across the clouds." And I talked to a lot of customers even as part of Supercloud 2, they're like, "Well, I solved that problem by just going mono cloud." Well, but then you're not able to take advantage of a lot of the capabilities and the primitives that, you know, like Google's data, or you like Microsoft's simplicity, their RPA, whatever it is. So what are customers telling you, what are their near term problems that they're trying to solve today, and how are they thinking about the future? >> Listen, it's a real problem. I think it started, I think this is a mega trend, just like cloud.
Just, cloud data, and I always add, analytics, are the mega trends. If you're looking at those, if you're not considering using the Supercloud principles, in other words, leveraging what I have, abstracting it out, and getting the most out of that, and then building value on top, I think you're not going to be able to keep up. In fact, there's no way you're going to keep up with this data volume. It's a geometric challenge, and you're trying to do linear things. So clients aren't necessarily asking, hey, for Supercloud, but they're really saying, I need to have a better mechanism to simplify this and get value across it, and how do you abstract that out to do that? And that's where they're obviously, our conversations are more amazed what we're able to do, and what they're able to do with our platform, because if you think of what we've done, the S3, or GCS, or object storage, is they can't imagine the ingest, they can't imagine how easy, time to glass, one minute, no matter where it lands in the world, querying this in seconds for hundreds of terabytes. People are amazed, but that's kind of, so they're not asking for that, but they are amazed. And then when you start talking on it, if you're an enterprise person, you're building a big cloud data platform, or doing data or analytics, if you're not trying to leverage the public clouds, and somehow leverage all of them, and then build on top, then I think you're missing it. So they might not be asking for it, but they're doing it. >> And they're looking for a lens, you mentioned all these different services, how do I bring those together quickly? You know, our viewpoint, our service, is I have all these streams of data, create a lens where they want to go after it via search, go after it via SQL, bring them together instantly, no ETLing out, no define this table, put into this database. We said, let's have a service that creates a lens across all these streams, and then make those connections.
I want to take my CRM with my Google AdWords, and maybe my Salesforce, how do I do analysis? Maybe I want to hunt first, maybe I want to join, maybe I want to add another stream to it. And so our viewpoint is, it's so natural to get into these lake platforms and then provide lenses to get that access. >> And they don't want it separate, they don't want something different here, and different there. They want it basically -- >> So this is our industry, right? If something new comes out, remember virtualization came out, "Oh my God, this is so great, it's going to solve all these problems." And all of a sudden it just got to be this big, more complex thing. Same thing with cloud, you know? It started out with S3, and then EC2, and now hundreds and hundreds of different services. So, it's a complex matter for a lot of people, and this creates problems for customers, especially when you got divisions that are using different clouds, and you're saying that the solution, or a solution for part of the problem, is to really allow the data to stay in place on S3, use that standard, super simple, but then give it what, Ed, you've called superpower a couple of times, to make it fast, make it inexpensive, and allow you to do that across clouds. >> Yeah, yeah. >> I'll give you guys the last word on that. >> No, listen, I think, we think Supercloud allows you to do a lot more. And for us, data, everyone says more data, more problems, more budget issues, everyone knows more data is better, and we show you how to do it cost effectively at scale. And we couldn't have done it without the design principles of leveraging the Supercloud to get capabilities, and because we just use the object storage, we're able to get these capabilities of ingest, scale, cost effectiveness, and then we built on top of this.
In the end, a database is a data platform that allows you to go after everything distributed, and to get one platform for analytics, no matter where it lands, that's where we think the Supercloud concepts are perfect, that's where our clients are seeing it, and we're kind of excited about it. >> Yeah a third generation database, Supercloud database, however we want to phrase it, and make it simple, but provide the value, and make it instant. >> Guys, thanks so much for coming into the studio today, I really thank you for your support of theCUBE, and theCUBE community, it allows us to provide events like this and free content. I really appreciate it. >> Oh, thank you. >> Thank you. >> All right, this is Dave Vellante for John Furrier in theCUBE community, thanks for being with us today. You're watching Supercloud 2, keep it right there for more thought provoking discussions around the future of cloud and data. (bright music)
Angie Perez Thomas | Special Program Series: Women of the Cloud
(upbeat music) >> Hey everyone, welcome to theCUBE's special program series Women of the Cloud, brought to you by AWS. I'm your host, Lisa Martin. Very pleased to welcome Angie Perez Thomas, the area sales leader from AWS, as my next guest. Angie, welcome to theCUBE. It's great to have you here. >> I'm super excited. Thank you so much, Lisa. >> Of course. Talk to me a little bit about you, a little bit about your role in sales at AWS. >> Yeah, absolutely. So I'm a ten-year Amazonian, so I've been with AWS for about 10 years here. And as you mentioned, I'm the area sales leader, and so my team supports new enterprise customers and executives who are just starting their journey into the cloud. >> Talk a little bit about some of your career path. Did you have a linear path? You said ten-year Amazonian, linear path, maybe more zig-zaggy. I'd love to get some of your recommendations for those who may be early in their tech careers looking to grow their careers. What are some of the experiences that you've had that you think have shaped your career? >> Yeah, absolutely. So, you know, mine have, I've gone back and forth through different roles, both in leadership and as an IC, and I'd probably say I've got three recommendations for those looking to grow their career in technology. So the first one is prioritize your time to actually think about what career experiences you want in the fullness of your career. And so this actually may look like sitting down, reserving time to actually deep think about what are those experiences you're looking to gain, but also doing research on other careers of those who may inspire you and kind of collecting those ideas. My second recommendation is around documenting, writing down those career aspirations and actually memorializing them within a document. So I've applied Amazon's working backwards methodology myself, applied that to my career, writing my own career press release. And so it's dated in 2029.
It's got a headline, and you know, it's a physical document of my own career aspirations. And third, I recommend sharing this documentation with others. You know, I really enjoy receiving and reading what others are wanting to do with their career aspirations and helping provide feedback and guidance. And so what we find is people genuinely want to help others. >> I agree. I love your recommendations for really being mindful, being thoughtful about what it is that you want to do, doing that research, and then actually documenting it. I think it's so wonderful that you're taking Amazon's working backward approach, from the press release, going, this is where I want to be in five years or in 10 years. And then putting that on paper. I still connect a lot with putting things down on paper that you want to accomplish; there's something about writing it down that actually helps you bring it to fruition. And then your point is great about sharing it with others, that can be mentors, that can be sponsors. I'm sure you've had some great mentors and sponsors along your career path that have probably helped you be pretty successful. >> Yeah, absolutely. It's been really an effective tool for communicating with those who have helped me navigate as well.
So, although not enforceable, due to the Fair Housing Act of 1968, racial covenants are still present in millions of home titles across the United States today. And so partnering with AWS and using our cloud technology, you know, our teams together were able to build an application where homeowners are able to look up their titles, you know, analyze them for discriminatory language, and be able to submit them for modification. And so this, you know, today it can be done manually, but partnering with AWS, our teams were able to address modifying titles and deeds at scale. And so it's truly incredible what cloud computing has enabled just all of us to accomplish together. And so I kind of think of it like this: the catalyst for change is our customers, and AWS and our partners are the how to accelerate that change. So it's really this partnership. >> I love that. Accelerating change is so important across so many aspects of life, but the example that you gave is so, it's such an interesting use case. I wouldn't think that there is discriminatory language in deeds for houses, but the fact that it's probably a pervasive problem globally, and the ability to help organizations to be able to change that for the better with cloud, with automation at scale, is huge. I can imagine that's a use case that can be replicated surely across the states and more. >> Yeah, it's definitely gained interest with different real estate firms across the United States. So we're really excited to be partnering and having impact on this change.
I'd love to get your opinion on what you think some of the present-day challenges are with respect to diversity in tech, and maybe some of the things you think can be changed for the better. >> Yeah, so there's been a huge focus on hiring for diverse talent in the tech industry for a number of years. Where I think we as an industry have an opportunity to improve is in investing in and developing this diverse talent, and really thinking about how we're building up the skillsets to build today's and tomorrow's leaders. When I think about this, it requires senior leaders to be really intentional about building a diverse ecosystem of talent and investing in it. And let me clarify a little bit: when I talk about investing in diverse talent, this extends beyond just mentoring. This includes sponsoring, coaching, really providing opportunities where this talent has the ability to have a seat at the table, getting into the room where it all happens. By doing so, we're helping this talent build their skillsets: to learn what questions are being asked within the room, how others are communicating with each other, so that they not only have a seat at the table but can be really leading with that seat at the table. And I would say, last, we in the industry tend to focus on developing only those within our own companies. Where I see a need is to really challenge the industry to reach outside our own companies for diverse talent, developing that whole ecosystem. It's not just thinking about the roles that are open today, but really building the skillsets for the roles and senior-level positions that are going to be open tomorrow, and making sure we're developing this talent to raise their hand and be the leading candidate for those opportunities.
>> I love a couple of things you said. With all the women in this program that I've spoken to, a common theme in terms of diversity is that it's really about senior leaders making investments. And another thing that you said that's spot on is doing it with intention. There's so much to be gained by being intentional about diversity, about diversity of thought. To your point about going outside, it sounds to me like: let's go outside of our comfort zones to bring in different thoughts, different perspectives, and be able to grow them in their careers, because of course technologies and products and solutions can only get better the more diversity of thought we have. >> Yeah, absolutely. It's really about being intentional. We as senior leaders have a lot on our plates, and so yes, this is an additional thing to be thinking about, but it really has impact in driving the right change, both for our customers and for the industry as well. It's an investment that's worth making. >> And speaking of that investment worth making, I liked how you said, let's have some forethought about what roles are going to be there in the future. How are some of today's roles going to evolve? How do you see your role evolving in the next few years? How do you see cloud evolving, and what excites you about that? >> Yeah, well, cloud has really been helping our customers move faster and adapt to an ever-changing landscape. Over the last couple of years that's been very real for all of us to see. And so my role has moved from being an advisor to a CIO to actually being an advisor to both the CEO and the board of directors. And when they come speak to us, cloud is not just about cost savings; it truly is about helping a CEO deliver on their business outcomes. So I'll give an example.
We're working with a growing community bank whose executive team has embarked on a transformation to become a digital-first bank. When we think about the economic factors they're working with, two come to mind. First, their move towards online banking has accelerated with the pandemic, and they really have to create that customer experience. When you think about local banks, you think about community, where everybody knows your name at the brick-and-mortar branch down the road. Well, they have to bridge that community and trust into the digital world. And second, they needed to improve operational efficiencies, so they have to think strategically about what investments they're going to make to balance inflation while driving growth. And so where I've been finding both myself and my teams is having a seat at the table with these executives, helping them make these strategic business decisions. We know we're successful when our customers are able to deliver on those business outcomes; they meet those objectives, they exceed those objectives. And we know we've truly exceeded customer expectations when our partnership actually shows up in their next earnings call. You know, it's really special. >> Oh, I bet it is. I mean, being able to be that influential in terms of an organization's success. I love how you talked about a career evolution, that your career has evolved to where now you're really with the board of directors, having a seat at the table there. My last question for you is on that front, Angie: what are some of the changes in the tech workforce that you've seen over the last few years, and what are some of the things that you're excited about down the road? >> Yeah, so a couple of areas where I've really seen change and evolution: one has been at the leadership level. We need to lead with empathy and really think about inclusion as a cornerstone skillset.
So for our customers, our partners, our employees, we've really moved into this hybrid environment. Both leader and team norms were challenged to change; we have to adapt. So really having inclusion as that foundational skillset is a requirement for both today's and tomorrow's leaders. What I'm really excited about is on the innovation front. Anyone can innovate now; you don't need to be part of the R&D division of a company. We're seeing that cloud is providing tools all the way down to the elementary-student level. So when you think about that, just imagine the imagination of our youth brought to life with cloud technology. I mean, the future really is bright. >> It is. That horizon is endless. And I'm going to take some of your advice, Angie. I loved that you talked from your own perspective and gave your recommendations for the audience: write that down, write your own press release in terms of what you want to see down the road. I'm going to take your advice, I'm going to do that. Thank you so much for joining me on the program. You've been so inspiring, your career path has been impressive, and what you're seeing in terms of innovation and cloud coming next is incredibly exciting. Thank you so much for your time, Angie. >> Thank you, Lisa. >> For Angie Perez Thomas, I'm Lisa Martin. You're watching theCUBE's special program series Women of the Cloud, brought to you by AWS. We'll see you soon. (upbeat music)
Thomas Been, DataStax | AWS re:Invent 2022
(intro music) >> Good afternoon guys and gals. Welcome back to The Strip, Las Vegas. It's "theCUBE" live, day four of our coverage of "AWS re:Invent". Lisa Martin, Dave Vellante. Dave, we've had some awesome conversations the last four days. I can't believe how many people are still here. The AWS ecosystem seems stronger than ever. >> Yeah, last year we really noted the ecosystem, you know, coming out of the isolation economy, 'cause everybody had this pent-up demand to get together, and even last year we were like, "Wow." This year's like 10x wow. >> It really is 10x wow, it feels that way. We're going to have a 10x wow conversation next. We're bringing back DataStax to "theCUBE". Please welcome Thomas Been, its CMO. Thomas, welcome to "theCUBE". >> Thanks, thanks a lot, thanks for having me. >> Great to have you. Talk to us about what's going on at DataStax, it's been a little while since we talked to you guys. >> Indeed. So DataStax, we are the realtime data company, and we've always been involved in technologies such as "Apache Cassandra". We were actually created to support and take this great technology to the market. And now we're taking it, combining it with other technologies such as "Apache Pulsar" for streaming, to provide a realtime data cloud, which helps our users, our customers, build applications faster and helps them scale without limits. So it's all about mobilizing all of the information that is going to drive the application and create the awesome experience, when you have a customer waiting behind their mobile phone, when you need a decision to take place immediately. That's the kind of data that we provide in the cloud, on any cloud, but especially with AWS, providing the performance that technologies like "Apache Cassandra" are known for, but also with market-leading unit economics. So really empowering customers to operate at speed and scale. >> Speaking of customers, nobody wants less data slower.
And one of the things I think we learned during the pandemic was that access to realtime data isn't a nice-to-have anymore for any business. It is table stakes, it's competitive advantage. There's somebody right behind in the rear view mirror ready to take over. How has the business model of DataStax evolved in the last couple of years, with the fact that realtime data is so critical? >> Realtime data has been around for some time, but it used to be really niche. You needed a lot of people, a lot of funding, actually, to implement these applications. So we've adapted to really democratize it, made it super easy to access, not only to start developing but also to scale. This is why we've taken these great technologies and made them serverless and cloud native, so that developers could really start easily and scale, so that beyond projects, products could be taken to the market. And in terms of customers, the pattern is we've seen enterprise customers, you were talking about the pandemic, The Home Depot as an example, was able to deliver curbside pickup in 30 days because they were already using DataStax and could adapt their business model with a realtime application: you were just driving by and you would get the delivery of exactly what you ordered, without having to go into the store. So they shifted their whole business model. But we also see a really strong trend around customer experiences, and increasingly a lot of tech companies coming to us, because scale means success to them, and building on our stack to build their applications. >> So Lisa, it's interesting. DataStax and "theCUBE" were started the same year, 2010, and that was the beginning of the ascendancy of the big data era. But of course back then there was, I mean, very little cloud. I mean, most of it was on-prem.
And so DataStax had, obviously, you mentioned a number of things that you had to do to become cloud friendly. >> Thomas: Yes. >> You know, a lot of companies didn't make it through. You guys just raised a bunch of dough as well last summer. And so that's been quite a transformation, both architecturally and, you know, bringing the customers through. I presume part of that was because you had such a great open source community, but also you have a unique value proposition. Maybe you could sort of describe that a little. >> Absolutely. I'll start with the open source community, where we see a lot of traction at the moment. We were always very involved with "Apache Cassandra". But what we're seeing right now with "Apache Cassandra" is a lot of traction, gaining momentum. The open source community just won an award, did an AMA, had a vote from readers about the top open source projects, and "Apache Cassandra" and "Apache Pulsar" are part of the top three, which is great. We also run, in collaboration with the Apache Project, a series of events around the globe called "Cassandra Days" where we had tremendous attendance. For some of them, we had to change venue twice because there were more people coming. A lot of students, and a lot of the big users of Cassandra like Apple and Netflix, spoke at these events. So we see this momentum actually picking up, and that's why we're also super excited that the Linux Foundation is running the Cassandra Summit in March in San Jose. We're super happy to bring that event back with the rest of the community, and we have big announcements to come. "Apache Cassandra" will see its next version with major advances such as the support of ACID transactions, which is going to make it even more suitable to more use cases. So we're bringing that scale to more applications. So a lot of momentum in terms of the open source projects. And to your point about the value proposition, we take this great momentum, to which we contribute a lot. It's not only about taking, it's about giving as well. >> Dave: Big committers, I mean... >> Exactly, big contributors. And we also have a lot of expertise; we worked with all of the members of the community, many of them being our customers. So going to the cloud, indeed there was architectural work: making Cassandra cloud native, putting it on Kubernetes, having the right APIs for developers to easily develop on top of it. But also becoming a cloud company: building customer success, our own platform engineering. It's interesting, because we actually became like our partners in the community. We now operate Cassandra in the cloud so that all of our customers can benefit from all the power of Cassandra, but really efficiently, super rapidly, and also with the leading unit economics, as I mentioned. >> How will the ACID compliance affect your, you know, new markets, new use cases, expand your TAM, can you explain that? >> I think it will; more applications will be able to tap into the power of "NoSQL". Today we see a lot on the customer experience side: IoT, gaming platforms, a lot of SaaS companies. But now, with the ability to have transactions at the database level, we can, beyond providing information, go even deeper into the logic of the application. So it makes Cassandra, and therefore Astra, which is our cloud service, an even more suitable database; we can address even more, in terms of the transactions that the application itself will support. >> What are some of the business benefits that Cassandra delivers to customers in terms of business outcomes, helping businesses really transform? >> So Cassandra brings scale, when you have millions of customers, when you have millions of data points to go through to serve each of the customers.
One of my favorite examples is Priceline, who runs entirely on our cloud service. You may see one offer, but it's actually everything they know about you and everything they have to offer, matched while you are refreshing your page. This is the kind of power that Cassandra provides. But the thing to say about "Apache Cassandra" is that it also used to be a database that was a bit hard to manage and hard to develop with. This is why, as part of the cloud, we wanted to change these aspects: provide developers the API they like and need and what the application needs, making it super simple to operate and super affordable, also cost effective to run. So the value, to your point, is time to market. You go faster, and when you choose the right database you don't have to worry that you're going to have to change horses in the middle of the river, like six months down the line. And you know you have the guarantee that you're going to get the performance and also the best TCO, which matters a lot. I think your previous guest was addressing it. That's also important, especially in the current context. >> As a managed service, you're saying, that's the enabler there, right? >> Thomas: Exactly. >> Dave: That is the model today. I mean, you have to really provide that for customers. They don't want to mess with, you know, all the plumbing, right? I mean... >> Absolutely, I don't think people want to manage databases anymore; we do that very well. We take SLAs and such, and even at the developer level, what they want is an API so they get all the power. All of this is powered by Cassandra, but now they get it as a service, and it's as simple as using an API. >> How about the ecosystem? You mentioned the show in San Jose in March, and the Linux Foundation is hosting that, is that correct? >> Yes, absolutely. >> And what is it, Cassandra? >> Cassandra Summit. >> Dave: Cassandra Summit. >> Yep.
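As an aside, the conditional, transactional writes discussed a moment ago can be illustrated with a toy sketch. This is a single-process, in-memory stand-in written for this transcript, not DataStax's or Cassandra's implementation; a real cluster coordinates the same semantics across replicas. The table name, key, and values are invented for the example.

```python
# Toy, in-memory sketch of a conditional ("insert if not exists") write,
# the kind of database-level transaction semantics discussed above.
# Single-node dict only -- purely illustrative, not a real distributed store.

class ToyTable:
    def __init__(self):
        self._rows = {}

    def upsert(self, key, value):
        # A plain write: last writer wins, no guarantee about prior state.
        self._rows[key] = value

    def insert_if_not_exists(self, key, value):
        # A conditional write: applied only if no row exists for `key`.
        # Returns (applied, existing_value), loosely modeled on a CQL
        # "[applied]" result row.
        if key in self._rows:
            return (False, self._rows[key])
        self._rows[key] = value
        return (True, None)

table = ToyTable()
applied_first, _ = table.insert_if_not_exists("order-42", "reserved")
applied_second, existing = table.insert_if_not_exists("order-42", "reserved-again")
# The first insert applies; the second is rejected and reports what exists.
print(applied_first, applied_second, existing)
```

The point of the contrast with `upsert` is that a conditional write lets application logic live at the database level (reserve exactly once, never double-book), which is what makes transactional support suit more use cases.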
>> What's the ecosystem like today in Cassandra, can you just sort of describe that? >> Around Cassandra, you actually have the big hyperscalers. You also have a few other companies that are supporting Cassandra-like technologies. And what's interesting, and that's been something we've worked on but also the Apache Project has worked on, is a lot of the adjacent technologies: the data pipelines, all of the DevOps solutions, to make sure that you can actually make Cassandra part of the way you build these products and applications. So the ecosystem keeps on growing, and the Cassandra community keeps on opening up the database so that it's really easy to have it connect to the rest of the environment. And we benefit from all of this in our Astra cloud service. >> So things like machine learning and governance tools, that's what you would expect in the ecosystem forming around it, right? So we'll see that in March. >> Machine learning especially is a very interesting use case. We see more and more of it. We recently did a nice video with one of our customers called Unifour, who does exactly this using our Astra cloud service. What they provide is they analyze videos of sales calls and they help the sellers, telling them, "Okay, here's what happened, here was the customer sentiment," because they have proof that the better the sentiment is, the shorter the sales cycle is going to be. So they teach the sellers how to say the right things, how to control the call. This is machine learning applied to video. Cassandra provides, I think, 200 data points per second that feed this machine learning. And we see more and more of these realtime use cases. It happens on the fly, when you are on your phone, when there's maybe a fraud to detect and prevent.
So there is going to be more and more of it, and we see more and more of these integrations at the open source level, with technologies like the "Apache Feast" project, but also with the partners that we're working with, integrating Cassandra and our cloud service. >> Where are customer conversations these days, given that every company has to be a data company? They have to be able to democratize data, allow access to it deep into the organization, not just IT or the data organization anymore. Are you finding that the conversations are rising up the stack? Is this a C-suite priority? Is this a board-level conversation? >> That's an excellent question. We actually ran a survey this summer called "The State of the Database" where we asked these tech leaders, okay, what's top of mind for you? And realtime actually was really one of the top priorities. And the ones who call themselves digital leaders explained that for 71% of them, they could directly correlate the use of realtime data, the quality of their experience, or their decision making with revenue. And that's really where the discussion is. And I think it's something we can relate to as users. I mean, if the Starbucks app takes seconds to respond, there will be a riot over there. So that's something we can feel. But now it's tangible in business terms, and when they take a look at their data strategy, are we equipped? Very often they will see, yeah, we have pockets of realtime data, but we're not really able to leverage it. >> Lisa: Yeah. >> For ML use cases, et cetera. So that's a big trend that we're seeing on one end. On the other end, what we're seeing, and it's one of the things we discussed a lot at the event, is that, yeah, cost is important. Growth at all costs does not exist anymore.
So we see a lot of push on moving workloads to the cloud to make them scale, but at the best cost. And we also see some organizations saying, okay, let's not let a good crisis go to waste, and let's accelerate our innovation, but not at all costs. So we also see a lot of new projects being pushed, but reasonably: starting small and growing, and all of this fueled by realtime data, so interesting. >> The other big topic amongst the customer community is security. >> Yep. >> I presume it's coming up a lot. What's the conversation like with DataStax? >> That's a topic we've been working on intensely since the creation of Astra, less than two years ago. And we keep on reinforcing, as any cloud provider does, not only our own capabilities, making sure that customers can manage their own keys, et cetera, but also integrating with the rest of the ecosystem. A lot of our customers are running on AWS: how do we integrate with PrivateLink and such? We fit exactly into their security environment on AWS, and they use exactly the same management tools. Because this is also what used to cost a lot in cloud services: how much you have to do to wire them up and manage them. And there are indeed compliance and governance challenges. So making sure that it's fully connected, and that they have full transparency on what's happening, is a big part of the evolution. Security is always something you're working on, but it's a major topic for us. >> Yep, we talk about that at pretty much every event. Security, which we could dive into, but we're out of time. Last question for you. >> Thomas: Yes. >> We were talking before we went live, we're both big Formula One fans. Say DataStax has the opportunity to sponsor a team, and you get the whole sidepod to put a phrase about DataStax on the sidepod of this F1 car. (laughter) Like a billboard, what does it say?
>> Billboard, because an F1 car goes pretty fast, it will be hard to read, but: "Twice the performance at half the cost, try Astra, our cloud service." >> Drop the mic. Awesome, Thomas, thanks so much for joining us. >> Thanks for having me. >> Pleasure having you guys on the program. For our guest Thomas Been and Dave Vellante, I'm Lisa Martin, and you're watching "theCUBE" live from day four of our coverage. "theCUBE", the leader in live tech coverage. (outro music)
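The realtime pattern that ran through this conversation, events streaming in and feeding a score on the fly, can be sketched in a few lines. This is a generic illustration written for this transcript, not Unifour's or DataStax's actual pipeline: the window size and the averaging "model" are invented stand-ins for whatever a real feature store and ML model would do.

```python
# Minimal sketch of realtime scoring: ingest a stream of events, keep a
# rolling window of recent values, and recompute an aggregate on every
# arrival. Window size and the averaging rule are illustrative only.
from collections import deque

class RollingSentiment:
    def __init__(self, window=3):
        self.window = deque(maxlen=window)  # keeps only the newest scores

    def ingest(self, score):
        # Each new event updates the window and immediately re-scores.
        self.window.append(score)
        return self.score()

    def score(self):
        # Stand-in "model": mean sentiment over the current window.
        return sum(self.window) / len(self.window)

feed = RollingSentiment(window=3)
for s in [0.2, 0.4, 0.9, 0.9]:
    latest = feed.ingest(s)
# After four events with a window of three, only the last three count.
print(round(latest, 2))
```

The design point is that the aggregate is updated per event rather than by batch jobs, which is what "realtime" means in practice: the score is already current when the next request arrives.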
Thomas Cornely, Indu Keri & Eric Lockard | Accelerate Hybrid Cloud with Nutanix & Microsoft
>> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me are Eric Lockard, who's the Corporate Vice President of Microsoft Azure Specialized; Thomas Cornely, the Senior Vice President of Products at Nutanix; and Indu Keri, who's the Senior Vice President of Engineering, NCI and NC2, at Nutanix. Gentlemen, welcome to theCUBE. Thanks for coming on. >> Great to be here. >> Thanks for having us. >> Eric, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, why not just put everything in the public cloud? >> Yeah, well, I mean, the public cloud has a bunch of inherent advantages, right? It has effectively infinite capacity, the ability to innovate without a lot of upfront costs, regions all over the world. So there is a trend towards public cloud, but not everything can go to the cloud, especially right away. There are lots of reasons customers want to have assets on premises: data gravity, sovereignty, and so on. And so really hybrid is the way to achieve the best of both worlds, to leverage the assets and investments that customers have on premises, but also take advantage of the cloud for bursting or regionality or expansion, especially coming out of the pandemic. We saw a lot of this from work from home and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, it makes sense. Thomas, if you could talk a little bit, I don't want to inundate people with the acronyms, but the Nutanix Cloud Clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so, you know, Cloud Clusters on Azure, which we actually call NC2 to make it simple, so NC2 on Azure is really our solution for hybrid cloud, right?
When you think about hybrid cloud, it's highly desirable; customers want it. They know this is the right way to do it for them, given that they want to have workloads on premises, at the edge, and in public clouds. But it's complicated, it's hard to do, right? The first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with, different teams, different technologies, different areas of expertise, and you're dealing with different portals; networking gets complicated, security gets complicated. So you've heard me say this already: hybrid can be complex. And what we've done with NC2 on Azure is make that simple, right? We give teams a solution that allows you to take any application running on premises and move it as-is to any Azure region where NC2 is available. Once it's running there, you keep the same operating model, right? And that's actually super valuable: to go and do this in a simple fashion, do it faster, and basically do hybrid in a more cost-effective fashion for all your applications. That's really what's special about NC2 on Azure today. >> So Thomas, just a quick follow-up on that. If I understand you correctly, it's an identical experience. Did I get that right? >> This is the key for us, right? When you think about what you're running on premises, you are used to a way of doing things: how you run your applications, how you operate them, how you protect them. And what we do here is extend the Nutanix operating model to workloads running in Azure, using the same core stack that you're running on premises, right?
So once you have a cluster deployed with NC2 on Azure, it's going to look like the same cluster that you might be running at the edge or in your own data center, using the same tools, using the same admin constructs to protect the workloads, make them highly available, do disaster recovery, or secure them. All of that stays the same. But now you are in Azure, and this is what we've spent a lot of time working with the Microsoft teams on: you actually now have access to all of those suites of Azure services from those workloads. So now you get the best of both worlds, you know; we bridge them together, and you get seamless access to those services, between what you get from Nutanix and what you get from Azure. >> Yeah. And as you alluded to, this has traditionally been non-trivial, and people have been looking forward to it for quite some time. So Indu, I want to understand, from an engineering perspective, your team had to work with the Microsoft team, and I'm sure this is not just press releases or a PowerPoint; you had to do some engineering work. So what specific engineering work did you guys do, and what's unique about this relative to other solutions in the marketplace? >> So let me start with what's unique about this, and I think Thomas and Eric both did a really good job of describing it. The best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. One way to think about this: moving to the public cloud is sort of like remodeling your house. When you start remodeling your house, you find that you start with something, and before you know it, you're trying to remodel the entire house. And that's a little bit like what the journey to the public cloud starts to look like when you start to refactor applications, because most of the applications out there today weren't designed for the public cloud to begin with.
NC2 allows you to flip that on its head and say, take your application as is and then lift and shift it to the public cloud, at which point you start the refactor journey. >>And one of the things that we have done really well with NC2 on Azure is that NC2 is not something that sits by Azure's side. It's fully integrated into the Azure fabric, especially the software defined network, the SDN piece. What that means is that, you know, you don't have to worry about connecting your NC2 cluster to Azure through some sort of a network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster. And that makes your refactoring journey so much easier. Your management plane looks the same, your high performance nodes, the NVMe nodes, they look the same. And really, I mean, other than the fact that you're doing something in the public cloud, all the Nutanix goodness that you're used to, you continue to receive that. There is a lot of secret sauce that we have had to develop as part of this journey. >>But if we had to pick one that really stands out, it is how do we take the complexity, the network complexity of a public cloud, in this case Azure, and make it as familiar to Nutanix's customers as the VPC construct, the virtual private cloud construct, that allows them to really think of their on-prem networking and the public cloud networking in very similar terms. There's a lot more that's gone on behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say when I grew up that, you know, if you really want to grow up, you have to do two things. You have to, like, build a house and you have to marry your kid off to someone. And I would add a third: do co-development with a public cloud provider as a partner. This has been just an absolutely amazing journey with Eric and the Microsoft team, and we're very grateful for their >>Support. I, I need NC2 for my house.
I live in a house that was built in 1687, and we connect old to new, and it's, it is a bolt on, but, but, but, and so, but the secret sauce, I mean there's, there's a lot there, but is it a PaaS layer? You didn't just wrap it in a container and shove it into the public cloud, you've done more than that, I'm inferring. >>You know, it's actually an infrastructure layer offering on top of which you can obviously run various types of platform services. So for example, down the road, if you have a containerized application, you'll actually be able to take it from on-prem and run it on NC2. But the NC2 offer itself is an infrastructure level offering. And the trick is that the storage that you're used to, the high performance storage that, you know, defined Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to, like micro-segmentation for security purposes, all of them are available to you on NC2 in Azure the same way that you're used to on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console, also remains the same. That makes your security model easier, that makes your management challenge easier, that makes it much easier for an application person or the IT office to be able to report back to the board that they have started to execute on the cloud mandate, and they've done that much faster than they would've been able to otherwise. >>Great. Thank you for helping us understand the plumbing. So now Thomas, maybe we can get to, like, the customers. What, what are you seeing, what are the use cases that are, that are gonna emerge for this solution? >>Yeah, I mean we've, you know, we've had a solution for a while, and you know, this is now new on Azure, which is gonna extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth.
But the key use cases for us, the first one, you know, we talked about, is migration. You know, we see customers on that cloud journey. They're looking to go and move applications wholesale from on premises to public cloud. You know, we make this very easy because in the end they take the same constructs that are around the application, and we make them available now in the Azure region. You can do this for any application. There's no change to the application, no networking change. The same IP will work the same whether you're running on premises or in Azure. >>The app stays exactly the same, managed the same way, protected the same way. So that's a big one. And you know, the typical drivers: maybe I wanna go do something different, or I wanna go and shut down a location on premises, and I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration, and doing that in a simple fashion, in a very fast manner, is, is a key use case. Another one, and this is classic for leveraging public cloud versus doing it on premises, is disaster recovery, and something that we refer to as elastic disaster recovery: being able to go and actually configure a secondary site to protect your on premises workloads, but with that site sitting in Azure as a small site, just enough to hold the data that you're replicating, and then use the fact that you can now get access to resources on demand in Azure to scale out the environment, fail over workloads, run them with performance, potentially fail them back to on premises, and then shrink back the environment in Azure to, again, optimize cost and take advantage of the elasticity that you get from public cloud models.
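The elastic disaster recovery pattern described here, a minimal "pilot light" site in the cloud that scales out only when a failover actually happens and shrinks back afterwards, can be sketched as follows. This is a hypothetical illustration of the lifecycle, not a real Nutanix or Azure API; the class and method names are made up for the example.

```python
# Hypothetical sketch of elastic DR: keep a minimal pilot-light site
# in the cloud to receive replicated data, scale it out on demand at
# failover time, then shrink back once workloads fail back on-prem.
# Names and node counts are illustrative assumptions, not a real API.

class ElasticDrSite:
    def __init__(self, pilot_nodes: int = 2):
        # Steady state: just enough nodes to hold the replicated data.
        self.nodes = pilot_nodes
        self.active_workloads: list[str] = []

    def fail_over(self, workloads: list[str], nodes_per_workload: int = 1) -> int:
        # On failover, grow the cluster on demand so recovered
        # workloads run with full performance.
        self.nodes += nodes_per_workload * len(workloads)
        self.active_workloads.extend(workloads)
        return self.nodes

    def fail_back(self, pilot_nodes: int = 2) -> int:
        # After failing workloads back to on-premises, shrink the
        # cloud site again to optimize cost.
        self.active_workloads.clear()
        self.nodes = pilot_nodes
        return self.nodes

site = ElasticDrSite(pilot_nodes=2)
site.fail_over(["erp", "crm", "mail"], nodes_per_workload=2)  # grows to 8 nodes
site.fail_back()                                              # shrinks back to 2
```

The point of the sketch is the cost model: the cloud side is paid for at pilot size most of the time, and only billed at full size during an actual disaster.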
>>And then the last one, building on top of that, is just the fact that you can now get bursting use cases. Maybe I'm running a large environment, typically desktop, you know, VDI environments that we see running on premises, and I have, you know, a seasonal requirement to go and actually enable more workers to go get access to the same solution. You could do this by sizing for the large burst capacity on premises, wasting resources during the rest of the year. What we see customers do is optimize what they're running on premises and get access to resources on demand in Azure, and basically move the workload and now basically get combined desktops running on premises and desktops running on NC2 on Azure: same desktop images, same management, same services, and do that as a burst use case during, say, you're a retailer that has to go and take care of your holiday season. You know, great use case that we see over and over again for our customers, right? And pretty much complementing the notion of, look, I wanna go to desktop as a service, but right now I don't want to refactor the entire application stack. I just wanna be able to get access to resources on demand in the right place at the right time. >>Makes sense. I mean, this is really all about supporting customers' digital transformations. We all talk about how that was accelerated during the pandemic, and, but the cloud is a fundamental component of the digital transformation. And Eric, you, you guys have obviously made a commitment between Microsoft and, and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers, you know, measure that? What does success look like? What's the ultimate vision here? >>Well, the ultimate vision is really twofold.
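The seasonal VDI burst case above comes down to simple capacity arithmetic: size on premises for the steady state and push only the overflow to the cloud, instead of sizing on premises for the annual peak. A small illustrative sketch, with made-up desktop counts:

```python
# Illustrative arithmetic for the seasonal burst case: on-prem hosts
# the steady-state desktops, and only demand above that capacity is
# burst to cloud resources on demand. The numbers are assumptions.

def split_desktops(demand: int, on_prem_capacity: int) -> tuple[int, int]:
    """Return (desktops on premises, desktops burst to the cloud)."""
    on_prem = min(demand, on_prem_capacity)
    burst = max(0, demand - on_prem_capacity)
    return on_prem, burst

# Steady state: everything fits on premises.
assert split_desktops(800, 1000) == (800, 0)
# Holiday season: the overflow runs in the cloud, with the same
# desktop images and management plane as described above.
assert split_desktops(2500, 1000) == (1000, 1500)
```

Sizing on premises for the 2,500-desktop peak would leave 1,500 desktops' worth of hardware idle the rest of the year; bursting pays for that capacity only while the peak lasts.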
I think the one is, you know, first, really to ease a customer's journey to the cloud, to allow them to take advantage of all the benefits of the cloud, but to do so without having to rewrite their applications or retrain their, their administrators, or, or to obviate the investment that they already have in platforms like, like Nutanix. And so the, the work that the companies have done together here, you know, first and foremost is really to allow folks to come to the cloud in the way that they want to come to the cloud, and take really the best of both worlds, right? Leverage, leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and, and capabilities of, of Azure, you know. Second, it is really to extend some of the cloud capabilities down onto the on-premise infrastructure. And so with investments that we've done together, with Azure Arc for example, we're really extending the Azure control plane down onto on-premise Nutanix clusters and bringing the capabilities that that provides to the Nutanix customer, as well as various Azure services like our data services and Azure SQL Server. So it's really kind of coming at the problem from, from two directions. One is from kind of traditional on-prem up into the cloud, and then the second is kind of from the cloud, leveraging the investment customers have in, in on-premise HCI. >>Got it. Thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with, with Thomas, and then Indu, and then Eric, you can bring us home. >>Sure. So the key takeaway is, you know, Nutanix Cloud Clusters on Azure is now GA, you know. This is something that we've had tremendous demand for from our customers, both from the Microsoft side and the Nutanix side, going, going back years, literally, right?
People have been wanting to go and see this. This is now live, GA, open for business, and you know, we're ready to go and engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >>Great. Indu? >>Dave, in a prior life, about seven or eight, eight years ago, I was a part of a team that took a popular tax preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. And if we had had NC2 then, it would've saved us half the money, but more importantly, we would've gotten there in one third the time. And that's really the value of this. >>Okay. Eric, bring us home please. >>Yeah, I'll just point out, like, this is not something that's just a bolt on or something we, we, we started yesterday. This is something the teams, both companies, have been working on together for, for years, really. And it's, it's a way of, of deeply integrating Nutanix into the Azure Cloud, with the ultimate goal of, of, again, providing cloud capabilities to the Nutanix customer in a way that they can, you know, take advantage of the cloud and then complement those applications over time with additional Azure services, like storage, for example. So it really is a great on-ramp to the cloud for, for customers who have significant investments in, in Nutanix clusters on premise. >>Love the co-engineering and the ability to take advantage of those cloud native tools and capabilities, real customer value. Thanks, gentlemen. Really appreciate your time. >>Thank >>You. Thank you. Thank you. >>Okay, keep it right there. You're watching Accelerate Hybrid Cloud, that journey with Nutanix and Microsoft technology, on theCUBE, your leader in enterprise and emerging tech coverage. >>Organizations are increasingly moving towards a hybrid cloud model that contains a mix of on-premises, public, and private clouds.
A recent study confirms 83% of businesses agree that hybrid multi-cloud is the ideal operating model. Despite its many benefits, deploying a hybrid cloud can be challenging: complex, slow, and expensive, requiring different skills and tool sets and separate, siloed management interfaces. In fact, 87% of surveyed enterprises believe that multi-cloud success will require simplified management of mixed infrastructures. >>With Nutanix and Microsoft, your hybrid cloud gets the best of both worlds: the predictable costs, performance, control, and data sovereignty of a private cloud, and the scalability, cloud services, ease of use, and fractional economics of the public cloud. Whatever your use case, Nutanix Cloud Clusters simplifies IT operations, is faster and lowers risk for migration projects, lowers cloud TCO, provides investment optimization, and offers effortless, limitless scale and flexibility. Choose NC2 to accelerate your business in the cloud and achieve true hybrid cloud success. Take a free self-guided 30 minute test drive of the solution's provisioning steps and use cases at nutanix.com/azure td. >>Okay, so we're just wrapping up Accelerate Hybrid Cloud with Nutanix and Microsoft, made possible by Nutanix, where we just heard how Nutanix is partnering with cloud and software leader Microsoft to enable customers to execute on a true hybrid cloud vision with actionable solutions. We pushed and got the answer that with NC2 on Azure, you get the same stack, the same performance, the same networking, the same automation, the same workflows across on-prem and Azure estates, realizing the goal of simplifying and extending on-prem workloads to any Azure region, to move apps without complicated refactoring, and to be able to tap the full complement of native services that are available on Azure.
Remember, all these videos are available on demand at thecube.net, and you can check out siliconangle.com for all the news related to this announcement and all things enterprise tech. Please go to nutanix.com, of course, for information about this announcement and the partnership, but there's also a ton of resources there to better understand the Nutanix product portfolio. There are white papers, videos, and other valuable content, so check that out. This is Dave Vellante for Lisa Martin with theCUBE, your leader in enterprise and emerging tech coverage. Thanks for watching the program, and we'll see you next time.
Thomas Cornely Indu Keri Eric Lockard Nutanix Signal
>>Okay, we're back with the hybrid Cloud power panel. I'm Dave Ante and with me our Eric Lockhart, who's the corporate vice president of Microsoft Azure, Specialized Thomas Corny, the senior vice president of products at Nutanix, and Indu Care, who's the Senior Vice President of engineering, NCI and nnc two at Nutanix. Gentlemen, welcome to the cube. Thanks for coming on. >>It's to >>Be here. Have us, >>Eric, let's, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, I wanna just ev put everything in the public cloud. >>Yeah, well, I mean, the public cloud has a bunch of inherent advantages, right? I mean, it's, it has effectively infinite capacity, the ability to, you know, innovate without a lot of upfront costs, you know, regions all over the world. So there is a, a trend towards public cloud, but you know, not everything can go to the cloud, especially right away. There's lots of reasons. Customers want to have assets on premise, you know, data gravity, sovereignty and so on. And so really hybrid is the way to achieve the best of both worlds, really to kind of leverage the assets and investments that customers have on premise, but also take advantage of, of the cloud for bursting or regionality or expansion, especially coming outta the pandemic. We saw a lot of this from work from home and, and video conferencing and so on, driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >>Yeah, makes sense. I wanna, Thomas, if you could talk a little bit, I don't wanna inundate people with the acronyms, but, but the Nutanix Cloud clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >>That is, so, you know, cloud clusters on Azure, which we actually call NC two to make it simple. And so NC two on Azure is really our solutions for hybrid cloud, right? 
And you think about the hybrid cloud, highly desirable customers want it. They, they know this is the right way to do for them, given that they wanna have workloads on premises at the edge, any public clouds. But it's complicated. It's hard to do, right? And the first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with. You have different teams, different technologies, different areas of expertise and dealing with different portals. Networkings get complicated, security gets complicated. And so you heard me say this already, you know, hybrid can be complex. And so what we've done, we then c to Azure is we make that simple, right? We allow teams to go and basically have a solution that allows you to go and take any application running on premises and move it as is to any Azure region where ncq is available. Once it's running there, you keep the same operating model, right? And that's something actually super valuable to actually go and do this in a simple fashion, do it faster, and basically do, do hybrid in a more cost effective fashion, know for all your applications. And that's really what's really special about NC Azure today. >>So Thomas, just a quick follow up on that. So you're, you're, if I understand you correctly, it's an identical experience. Did I get that right? >>This is, this is the key for us, right? Is when you think you're sending on premises, you are used to way of doing things of how you run your applications, how you operate, how you protect them. And what we do here is we extend the Nutanix operating model two workloads running in Azure using the same core stack that you're running on premises, right? 
So once you have a cluster deploying C to an Azure, it's gonna look like the same cluster that you might be running at the edge or in your own data center, using the same tools, using, using the same admin constructs to go protect the workloads, make them highly available with disaster recovery or secure them. All of that becomes the same, but now you are in Azure, and this is what we've spent a lot of time working with Americanist teams on, is you actually have access now to all of those suites of Azure services in from those workloads. So now you get the best of both world, you know, and we bridge them together and you get seamless access of those services between what you get from Nutanix, what you get from Azure. >>Yeah. And as you alluded to, this is traditionally been non-trivial and people have been looking forward to this for, for quite some time. So Indu, I want to understand from an engineering perspective, your team had to work with the Microsoft team, and I'm sure there was this, this is not just a press releases or a PowerPoint, you had to do some some engineering work. So what specific engineering work did you guys do and what's unique about this relative to other solutions in the marketplace? >>So let me start with what's unique about this, and I think Thomas and Eric both did a really good job of describing that the best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. You know, one way to think about this is moving to the public cloud is sort of like remodeling your house. And when you start remodeling your house, you know, you find that you start with something and before you know it, you're trying to remodel the entire house. And that's a little bit like what journey to the public cloud sort of starts to look like when you start to refactor applications. Because it wasn't, most of the applications out there today weren't designed for the public cloud to begin with. 
NC two allows you to flip that on its head and say that take your application as is and then lift and shift it to the public cloud, at which point you start the refactor journey. >>And one of the things that you have done really well with the NC two on Azure is that NC two is not something that sits by Azure side. It's fully integrated into the Azure fabric, especially the software defined network and SDN piece. What that means is that, you know, you don't have to worry about connecting your NC two cluster to Azure to some sort of a net worth pipe. You have direct access to the Azure services from the same application that's now running on an C2 cluster. And that makes your refactoring journey so much easier. Your management claim looks the same, your high performance notes let the NVMe notes, they look the same. And really, I mean, other than the facts that you're doing something in the public cloud, all the Nutanix goodness that you're used to continue to receive that, there is a lot of secret sauce that we have had to develop as part of this journey. >>But if we had to pick one that really stands out, it is how do we take the complexity, the network complexity, offer public cloud, in this case Azure, and make it as familiar to Nutanix's customers as the VPC construc, the virtual private cloud construct that allows them to really think of their on-prem networking and the public cloud networking in very similar terms. There's a lot more that's gone on behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say when I drew up that, you know, if you really want to grow up, you have to do two things. You have to like build a house and you have to marry your kid off to someone. And I would say our dad a third do a code development with the public cloud provider of the partner. This has been just an absolute amazing journey with Eric and the Microsoft team, and you're very grateful for their support. >>I need NC two for my house. 
I live in a house that was built and it's 1687 and we connect old to new and it's, it is a bolt on, but, but, but, and so, but the secret sauce, I mean there's, there's a lot there, but is it a PAs layer? You didn't just wrap it in a container and shove it into the public cloud, You've done more than that. I'm inferring, >>You know, the, it's actually an infrastructure layer offering on top of fid. You can obviously run various types of platform services. So for example, down the road, if you have a containerized application, you'll actually be able to tat it from OnPrem and run it on C two. But the NC two offer itself, the NCAA often itself is an infrastructure level offering. And the trick is that the storage that you're used to the high performance storage that you know, define Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to light MI segmentation for security purposes, all of them are available to you on NC two in Azure, the same way that we're used to do on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console also remains the same. That makes your security model easier, that makes your management challenge easier, that makes it much easier for an accusation person or the IT office to be able to report back to the board that they have started to execute on the cloud mandate and they have done that much faster than they'll be able to otherwise. >>Great. Thank you for helping us understand the plumbing. So now Thomas, maybe we can get to like the customers. What, what are you seeing, what are the use cases that are, that are gonna emerge for this solution? >>Yeah, I mean we've, you know, we've had a solution for a while and you know, this is now new on Azure is gonna extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth. 
But the kind of key use cases for us, the first one you know, talks about it is a migration. You know, we see customers on the cloud journey, they're looking to go and move applications wholesale from on premises to public cloud. You know, we make this very easy because in the end they take the same culture that are around the application and make them, we make them available Now in the Azure region, you can do this for any applications. There's no change to the application, no networking change. The same IP will work the same whether you're running on premises or in Azure. >>The app stays exactly the same, manage the same way, protected the same way. So that's a big one. And you know, the type of drivers point to politically or maybe I wanna go do something different or I wanna go and shut down education on premises, I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration and doing that in a simple fashion, in a very fast manner is, is a key use case. Another one, and this is classic for leveraging public cloud force, which are doing on premises IT disaster recovery and something that we refer to as elastic disaster recovery, being able to go and actually configure a secondary site to protect your on premises workloads, but I that site sitting in Azure as a small site, just enough to hold the data that you're replicating and then use the fact that you cannot get access to resources on demand in Azure to scale out the environment, feed over workloads, run them with performance, potentially feed them back to on premises and then shrink back the environment in Azure to again, optimize cost and take advantage of elasticity that you get from public cloud models. 
Thomas Cornely, Induprakas Keri & Eric Lockard | Accelerate Hybrid Cloud with Nutanix & Microsoft
(gentle music) >> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me Eric Lockard, who is the Corporate Vice President of Microsoft Azure Specialized. Thomas Cornely is the Senior Vice President of Products at Nutanix, and Indu Keri, who's the Senior Vice President of Engineering, NCI and NC2 at Nutanix. Gentlemen, welcome to The Cube. Thanks for coming on. >> It's good to be here. >> Thanks for having us. >> Eric, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, I want to just put everything in the public cloud. >> Yeah, well, I mean the public cloud has a bunch of inherent advantages, right? I mean, it has effectively infinite capacity, the ability to, you know, innovate without a lot of upfront costs, you know, regions all over the world. So there is a trend towards public cloud, but you know, not everything can go to the cloud, especially right away. There's lots of reasons. Customers want to have assets on premise, you know, data gravity, sovereignty and so on. And so really hybrid is the way to achieve the best of both worlds, really to kind of leverage the assets and investments that customers have on premise but also take advantage of the cloud for bursting, regionality or expansion, especially coming out of the pandemic. We saw a lot of this from work from home and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, makes sense. Thomas, if you could talk a little bit, I don't want to inundate people with the acronyms, but the Nutanix Cloud Clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so, you know, Cloud Clusters on Azure, which we actually call NC2 to make it simple. And so NC2 on Azure is really our solution for hybrid cloud, right?
And you think about hybrid cloud, highly desirable, customers want it. They know this is the right way to do it for them, given that they want to have workloads on premises, at the edge, in public clouds, but it's complicated. It's hard to do, right? And the first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with. You have different teams, different technologies, different areas of expertise. And dealing with different portals, networking gets complicated, security gets complicated. And so you heard me say this already, you know, hybrid can be complex. And so what we've done with NC2 on Azure is we make that simple, right? We allow teams to go and basically have a solution that allows you to go and take any application running on premises and move it as-is to any Azure region where NC2 is available. Once it's running there, you keep the same operating model, right? And so that's actually super valuable, to actually go and do this in a simple fashion, do it faster, and basically do hybrid in a more (indistinct) fashion, you know, for all your applications. And that's what's really special about NC2 today.
All of that becomes the same. But now you are in Azure, and this is what we've spent a lot of time working with Eric and his teams on, is you actually have access now to all of those suites of Azure services (indistinct) from those workloads. So now you get the best of both worlds, you know, and we bridge them together and you get seamless access to those services, between what you get from Nutanix and what you get from Azure. >> Yeah. And as you alluded to, this has traditionally been non-trivial, and people have been looking forward to this for quite some time. So Indu, I want to understand from an engineering perspective, your team had to work with the Microsoft team, and I'm sure this was not just a press release or a PowerPoint, you had to do some engineering work. So what specific engineering work did you guys do, and what's unique about this relative to other solutions in the marketplace?
It's fully integrated into the Azure fabric, especially the software-defined networking, SDN piece. What that means is that, you know, you don't have to worry about connecting your NC2 cluster to Azure through some sort of a network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster. And that makes your refactor journey so much easier. Your management plane looks the same, your high-performance nodes, like the NVMe nodes, they look the same. And really, I mean, other than the fact that you're doing something in the public cloud, all the Nutanix goodness that you're used to, you continue to receive that. There is a lot of secret sauce that we have had to develop as part of this journey. But if we had to pick one that really stands out, it is how do we take the complexity, the network complexity of a public cloud, in this case Azure, and make it as familiar to Nutanix's customers as the VPC, the virtual private cloud (indistinct) that allows them to really think of their on-prem networking and the public cloud networking in very similar terms. There's a lot more that's done behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say when I grew up that, you know, if you really want to grow up, you have to do two things. You have to build a house and you have to marry your kid off to someone. And I would add a third: do a co-development with a public cloud provider as a partner. This has been just an absolutely amazing journey with Eric and the Microsoft team, and we're very grateful for their support. >> I need NC2 for my house. I live in a house that was built in 1687, and we connect all the new, and it is a bolt-on, but the secret sauce, I mean there's a lot there, but is it a (indistinct) layer? You didn't just wrap it in a container and shove it into the public cloud. You've done more than that, I'm inferring.
>> You know, it's actually an infrastructure layer offering on top of (indistinct). You can obviously run various types of platform services. So for example, down the road, if you have a containerized application you'll actually be able to take it from on-prem and run it on NC2. But the NC2 offering itself is an infrastructure-level offering. And the trick is that the storage that you're used to, the high-performance storage that, you know, defines Nutanix to begin with, the hypervisor that you're used to, the network constructs that you're used to, like micro-segmentation for security purposes, all of them are available to you on NC2 in Azure the same way that you're used to on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console, also remains the same. That makes your security model easier, that makes your management challenge easier, that makes it much easier for an application person or the IT office to be able to report back to the board that they have started to execute on the cloud mandate, and they've done that much faster than they would be able to otherwise. >> Great. Thank you for helping us understand the plumbing. So now Thomas, maybe we can get to, like, the customers. What are you seeing, what are the use cases that are going to emerge for this solution? >> Yeah, I mean, you know, we've had a solution for a while, and this is now new on Azure, which is going to extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth. But the kind of key use cases for us, the first one, you know, I talked about it, is a migration. You know, we see customers on that cloud journey. They're looking to go and move applications wholesale from on premises to public cloud.
You know, we make this very easy because in the end they take the same cluster that was around the application, and we make them available now in the Azure region. You can do this for any application. There's no change to the application, no networking change, the same IP constructs will work the same whether you're running on premises or in Azure. The app stays exactly the same, managed the same way, protected the same way. So that's a big one. And you know, the type of drivers for (indistinct), maybe I want to go do something different, or I want to go and shut down the location on premises and I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. So migration, and doing that in a simple fashion in a very fast manner, is a key use case. Another one, and this is classic for leveraging public cloud, is IT disaster recovery, and something that we refer to as elastic disaster recovery: being able to go and actually configure a secondary site to protect your on-premises workloads, but with that site sitting in Azure as a small site, just enough to hold the data that you're replicating, and then use the fact that you can now get access to resources on demand in Azure to scale out the environment, fail over workloads, run them with performance, potentially fail them back to on premises, and then shrink back the environment in Azure to again optimize cost and take advantage of the elasticity that you get from public cloud models. Then the last one, building on top of that, is just the fact that you can now get bursting use cases: maybe I'm running a large environment, typically desktop, you know, VDI environments that we see running on premises, and I have, you know, a seasonal requirement to go and actually enable more workers to go get access to the same solution.
You could do this by sizing for the large burst capacity on premises, wasting resources during the rest of the year. What we see customers do is optimize what they're running on premises and get access to resources on demand in Azure, and basically move the workloads and now basically get combined desktops: desktops running on premises, desktops running on NC2 on Azure, same desktop images, same management, same services, and do that as a burst use case during, say, you're a retailer that has to go and take care of your holiday season. You know, a great use case that we see over and over again for our customers, right? And pretty much complementing the notion of, look, I want to go to desktop as a service, but right now I don't want to refactor the entire application stack. I just want to be able to get access to resources on demand in the right place at the right time. >> Makes sense. I mean, this is really all about supporting customers' digital transformations. We all talk about how that was accelerated during the pandemic, and the cloud is a fundamental component of the digital transformation journey. You guys have obviously made a commitment between Microsoft and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers, you know, measure that? What does success look like? What's the ultimate vision here?
Leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and capabilities of Azure. You know, second is really to extend some of the cloud capabilities down onto the on-premise infrastructure. And so with investments that we've done together with Azure Arc, for example, we're really extending the Azure control plane down onto on-premise Nutanix clusters and bringing the capabilities that provides to the Nutanix customer, as well as various Azure services like our data services and Azure SQL Server. So it's really kind of coming at the problem from two directions. One is from kind of traditional on-premise up into the cloud, and then the second is kind of from the cloud, leveraging the investment customers have in on-premise HCI. >> Got it. Thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with Thomas and then Indu, and then Eric, you can bring us home. >> Sure. So the key takeaway is, you know, Nutanix Cloud Clusters on Azure is now GA. You know, this is something that we've had tremendous demand for from our customers, both from the Microsoft side and the Nutanix side, going back years literally, right? People have been wanting to go and see this, this is now live, GA, open for business, and you know, we're ready to go and engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >> Great, Indu. >> Dave, in a prior life about seven or eight years ago, I was part of a team that took a popular tax preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. And if we had had NC2 then, it would've saved us half the money, but more importantly, we would've gotten there in one third the time. And that's really the value of this. >> Okay. Eric, bring us home please.
>> Yeah, I'll just point out that this is not something that's just bolted on or something we started yesterday. This is something the teams at both companies have been working on together for years, really. And it's a way of deeply integrating Nutanix into the Azure Cloud, with the ultimate goal of, again, providing cloud capabilities to the Nutanix customer in a way that they can, you know, take advantage of the cloud and then complement those applications over time with additional Azure services like storage, for example. So it really is a great on-ramp to the cloud for customers who have significant investments in Nutanix clusters on premise. >> Love the co-engineering and the ability to take advantage of those cloud native tools and capabilities, real customer value. Thanks gentlemen. Really appreciate your time. >> Thank you. >> Thank you. >> Okay. Keep it right there. You're watching Accelerate Hybrid Cloud, that journey with Nutanix and Microsoft technology, on theCUBE, your leader in enterprise and emerging tech coverage. (gentle music)
Thomas Stocker, UiPath & Neeraj Mathur, VMware | UiPath FORWARD5
>> theCUBE presents UiPath Forward Five, brought to you by UiPath. >> Welcome back to UiPath Forward Five. You're watching theCUBE's wall-to-wall coverage. This is day one, Dave Vellante, with my co-host Dave Nicholson. We're taking RPA to intelligent automation. We're going from point tools to platforms. Neeraj Mathur is here. He's the Director of Intelligent Automation at VMware. Yes, VMware. We're not going to talk about vSphere or Aria, or maybe we are, (Neeraj chuckles) but he's joined by Thomas Stocker, who's a Principal Product Manager at UiPath. And we're going to talk about testing automation, automating the testing process. It's a new sort of big vector in the whole RPA automation space. Gentlemen, welcome to theCUBE. Good to see you. >> Neeraj: Thank you very much. >> Thomas: Thank you. >> So Neeraj, as we were saying, Dave and I, you know, really, like, VMware was half our lives for a long time, but we're going to flip it a little bit. >> Neeraj: Absolutely. >> And talk about sort of some of the inside baseball. Talk about your role and how you're applying automation at VMware. >> Absolutely. So as part of us really running the intelligent automation program at VMware, we have had a quite mature COE for the last, you know, four to five years, we've been doing this automation across the enterprise. So what we have really done is, you know, over 45 different business functions where we really automated quite a lot of different processes and tasks. So as part of my role, I'm really responsible for making sure that we are, you know, bringing in the best practices, making sure that we are ready to scale across the enterprise, but at the same time, how, you know, quickly we are able to deliver the value of this automation to our businesses as well. >> Thomas, as a product manager, you know the product and the market inside and out, you know the competition, you know the pricing, you know how customers are using it, you know all the features.
What's your area of - main area of focus? >> The main area of the UiPath Test Suite... >> For your role, I mean? >> For my role it is RPA testing, so meaning testing RPA workflows themselves. And the reason is RPA has matured over the last few years. We see that, and it has adopted a lot of best practices from the software development area. So what we see is RPA now becomes business critical. It's part of the main core business processes in corporations, and testing it just makes sense. You have to continuously monitor and continuously test your automation to make sure it does not break in production. >> Okay. And you have a specific product for this? Is it a feature or is it a module? >> So RPA testing, or the UiPath Test Suite, as the name suggests, it's a suite of products. It's actually part of the existing platform. So we use Orchestrator, which is the distribution engine. We use Studio, which is our IDE to create automation. And on top of that, we built a new component, which is called the UiPath Test Manager. And this is a kind of analytics and management platform where you have an oversight on what happened, what went wrong, and what is the reason for an automation to break. >> Okay. And so Neeraj, you're testing your robot code? >> Neeraj: Correct. >> Right. And you're looking for what? Governance, security, quality, efficiency, what are the things you're looking for? >> It's actually all of those, but our main goal to really start this was twofold, right? So we were really looking at how do we, you know, deliver at a speed and with a quality which we can really maintain and sustain for a longer period, right? So to improve our quality of delivery and the speed of delivery at which we can do it. So the way we look at testing automation is not just as an independent entity. We look at this as a pipeline of continuous improvement for us, right? So what the industry calls a CI/CD pipeline. So testing automation is one of the key components of that. But the way we were able to deliver on the speed is to really have that end-to-end automation done for us, from developers to production, using that pipeline, and our testing is one piece of that. And the way we were able to also improve on the quality of our delivery is to really have an automated way of doing the code reviews, an automated way of doing the testing using this platform as well, and then, you know, how you go through end to end for that purpose. >> Thomas, when I hear testing robots, (Thomas chuckles) I don't care if it's code or actual robots, it's terrifying. >> It is terrifying, yeah. >> It's terrifying. Okay, great. You have some test suite that says, look... >> Why is that terrifying? >> It's terrifying because you have to let it interact with actual live systems in some way. The only way to know if it's going to break something is either you let it loose, or you have some sort of sandbox where, I mean, what do you do? Are you taking clones of environments and running actual tests against them? >> Like testing disaster recovery in the old days. Imagine. >> So we are actually not running any testing in the live production environment, right? The way we built this is actually to do the testing in a separate test environment, using very specific test data from the business, which, you know, we call a golden copy of that test data, because we want to use that data for months and years to come. So we're not touching any production environment at all. >> Yeah. All right. 'Cause you can imagine >> Absolutely >> It's like, oh yeah, we've created a robot that changes baby diapers, let's go ahead and test it on these babies. [Collective Laughter] >> I don't think so. >> But does it matter if there's a delta between the test data and the production data? How big is that delta? How do you manage that?
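Thomas's point about continuously monitoring and testing an automation so it does not break in production is, mechanically, a pass/fail gate: run the workflow on known input in a test environment and verify its output before it is allowed to ship. A minimal, illustrative Python sketch of that shape; the `invoice_bot` workflow and the checks are hypothetical stand-ins, not UiPath APIs:

```python
# Illustrative only: a tiny "test gate" of the kind described above.
# The workflow and checks are hypothetical stand-ins, not UiPath code.
from typing import Callable

def run_test_gate(workflow: Callable[[dict], dict],
                  checks: list[Callable[[dict], bool]],
                  sample_input: dict) -> bool:
    """Run the workflow on a known input and apply every check to its output.
    Returns True only if all checks pass, i.e. the bot may ship."""
    output = workflow(sample_input)
    return all(check(output) for check in checks)

# A stand-in for an RPA workflow: reads an invoice record, computes a total.
def invoice_bot(record: dict) -> dict:
    return {"id": record["id"], "total": record["qty"] * record["unit_price"]}

checks = [
    lambda out: "total" in out,     # output shape is intact
    lambda out: out["total"] >= 0,  # no negative totals
]

ok = run_test_gate(invoice_bot, checks, {"id": 1, "qty": 3, "unit_price": 9.5})
print(ok)  # True
```

In a real Test Suite setup, the equivalent of `checks` would be test cases executed by a robot and reported to Test Manager; the point here is only the shape of the gate: known input, real run, explicit verification.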
What we have also seen is, is that our success rate of our transactions in production environment has gone to 96% success rate, which is, again there is a direct implication on business, on, on that point of view that, you know, there's no more manual exception or manual interaction is required for those failure scenarios. >> So 15% better speed at what? At, at implementing the bots? At actually writing code? Or... >> End to end, Yes. So from building the code to test that code able to approve that and then deploy that into the production environment after testing it this is really has improved by 15%. >> Okay. And, and what, what what business processes outside of sort of testing have you sort of attacked with the platform? Can you talk to that? >> The business processes outside of testing? >> Dave: Yeah. You mean the one which we are not testing ourself? >> Yeah, no. So just the UI path platform, is it exclusively for, for testing? >> This testing is exclusively for the UI path bots which we have built, right? So we have some 400 plus automations of UI bots. So it's meant exclusively >> But are you using UI path in any other ways? >> No, not at this time. >> Okay, okay. Interesting. So you started with testing? >> No, we started by building the bots. So we already had roughly 400 bots in production. When we came with the testing automation, that's when we started looking at it. >> Dave: Okay. And then now building that whole testing-- >> Dave: What are those other bots doing? Let me ask it that way. >> Oh, there's quite a lot. I mean, we have many bots. >> Dave: Paint a picture if you want. Yeah. In, in finance, in auto management, HR, legal, IT, there's a lot of automations which are there. As I'm saying, there's more than 400 automations out there. Yeah. So so it's across the, you know, enterprise on that. >> Thomas. So, and you know, both of you have a have a view on this, but Thomas's views probably wider across other, other instances. 
What are the most common things that are revealed in tests that indicate something needs to be fixed? Think of a test failure, an error. What are the most common things that happen? >> So when we started building our product, we conducted a survey among our customers. And without surprise, the main reason why automation breaks is change. >> David: Sure. >> And the problem here is, RPA is a controlled process, a controlled workflow, but it runs in an uncontrollable environment. So typically RPA is developed by a CoE. Those are business and automation experts, but they operate in an environment that's driven by new patches, new application changes rolled out by IT. And that's the main challenge here. You cannot control that. And so far, if you do not proactively test, what happens is you catch an issue in production when it already breaks, right? That's reactive; that leads to maintenance, to unplanned maintenance actually. And that was the goal right from the start for the Test Suite: to support our customers here and move over to proactive maintenance, meaning testing before, and finding those issues before they hit production. >> Yeah, yeah. So I'm still not clear on, so you just gave a perfect example, changes in the environment. >> Yeah. >> So those changes are happening in the production environment. >> Thomas: Yeah. >> The robot that was happily doing its automation stuff before? >> Thomas: Yeah. >> Everyone was happy with it. Change happens. Robot breaks. >> Thomas: Yeah. >> Okay. You're saying you test before changes are implemented? To see if those changes will break the robot? >> Thomas: Yeah. >> Okay. How do you expose those changes that are going to be in a production environment to the robot? Is that part of the test environment? Does that mean that you have to have, what, fully running instances of, like, an ERP system? >> Thomas: Yeah.
You know, a clone of an environment. How do you test that without having the live robot against the production environment? >> I think there's no big difference to standard software testing. The interesting thing is, the change actually happens earlier. You are affected by it on the production side, but the change happens on the IT side or on the DevOps side. So you typically will test in a test environment that's similar to your production environment, or probably in a pre-production environment. And the test itself is simply running the workflow that you want to test, but mocking away any dependencies you don't want to invoke. You don't want to send a letter to a customer in a test environment, right? And then you verify that the result is what you actually expect, right? And as soon as this is not the case, you will be notified, you will have a result, the fail result, and you can act before it breaks. So you can fix it, redeploy to production, and you should be good. >> But the main emphasis at VMware is testing your bots, correct? >> Neeraj: Testing your bots, yes. >> Can I apply this to testing other software code? >> Yeah, yeah. You can, technically, and Thomas can speak better than me on that, to any software for that matter, but we have really not explored that aspect of it. >> David: You guys have pretty good coders, good engineers at VMware, but no, seriously, Thomas, what's that market looking like? Is it taking off? Are you, or are customers, applying this capability to just more broadly testing software? >> Absolutely. So our goal was, we want to test RPA and the applications it relies on, so that includes RPA testing as well as application testing. The main difference is, typical functional application testing is black-box testing. So you don't know the inner implementation of that application. And it works out pretty well.
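The pattern Thomas describes (run the workflow in a test environment, mock away dependencies you don't want to invoke, then verify the result matches expectations) is ordinary test practice. A minimal sketch in Python; the invoice workflow and mailer below are hypothetical stand-ins for illustration, not UiPath Test Suite APIs:

```python
# Hypothetical workflow under test: processes an invoice record, then
# notifies the customer by mail. A stand-in for an RPA workflow.
def process_invoice(record, send_letter):
    total = record["quantity"] * record["unit_price"]
    result = {"customer": record["customer"], "total": total, "status": "processed"}
    # Side effect we must never trigger in a test environment:
    send_letter(result["customer"], f"Invoice total: {total}")
    return result

def send_letter(customer, body):
    raise RuntimeError("Real mail service - never call this from a test")

def test_process_invoice():
    sent = []
    # Mock away the dependency: record calls instead of mailing anyone.
    fake_mailer = lambda customer, body: sent.append((customer, body))
    result = process_invoice(
        {"customer": "ACME", "quantity": 3, "unit_price": 10.0},
        send_letter=fake_mailer,
    )
    # Verify the result is what we expect; a failure here flags the
    # change before it breaks the bot in production.
    assert result == {"customer": "ACME", "total": 30.0, "status": "processed"}
    assert sent == [("ACME", "Invoice total: 30.0")]

test_process_invoice()
print("workflow test passed")
```

If the test environment mirrors production and a pending change breaks the workflow, the assertion fails there first, which is the proactive-maintenance loop described above.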
The big opportunity that we have is not isolated testing, not isolated RPA, but the convergence of automation. So what we offer our customers is one automation platform. You create automation once, not redundantly in different departments; you create it once, probably for testing, and then you reuse it for RPA. So that suddenly helps your test engineers move from a pure cost center to a value center. >> How unique is this capability in the industry relative to your competition, and what capabilities do you have that are differentiators from the folks that we all know you're competing with? >> So the big advantage is the power of the entire platform that we have with UiPath. So we didn't start from scratch. We have that great automation layer. We have that great distribution layer. We have all the AI capabilities that so far were used for RPA. We can reuse them, repurpose them for testing. And that really differentiates us from the competition. >> Thomas, I detect a hint of an accent. Is it German, or... >> It's actually Austrian. >> Austrian. Well... >> You know, don't compare us with Germans. >> I understand. High German, is that what's spoken in Austria? >> Yes, it is. >> So... >> Point being? >> Point being, exactly, as I drift off, point being: generally, German is considered to be a very precise language with very specific words. It's very easy to be confused about the difference between two things: automation testing and automating testing. >> Thomas: Yes. >> Because in this case, what you are testing are automations. >> Thomas: Yes. >> That's what you're talking about. >> Thomas: Yes. >> You're not talking about the automation of testing. Correct? >> Well, we talk about... >> And that's got to be confusing when you go to translate that into >> Dave: But isn't it both? >> 50 other languages? >> Dave: It's both. >> Is it both?
>> Thomas: It actually is both. >> Okay. >> And there's something we are exploring right now, which is even the next step, the next layer, which is autonomous testing. So, so far you had an expert, an automation expert, creating the automation once, and it would be rerun over and over again. What we are now exploring, together with a university, is autonomous testing, meaning a bot explores your application under test and finds issues completely autonomously. >> Dave: So autonomous testing of automation? >> It's getting more and more complicated. >> It's getting clearer by the minute. >> Sorry for that. >> All right, Neeraj, last question: Where do you want to take this? What's your vision for VMware in the context of automation? >> Sure. So I think the first and foremost thing for us is to really make it more mainstream for our automation developers as well, right? What I mean by that is, there is a shift now in how we engage with our business users and SMEs. As I said previously, they used to test manually. Now the conversation changes to: hey, can you tell us what test cases you want, what you want us to test in an automated manner? Can you give us the test data for that, so that we can keep on testing in a continuous manner for the months and years to come? Right? The other part of the conversation that changes is: hey, it used to take eight weeks for us to build, but now it's going to take nine weeks, because we're going to spend an extra week just to automate the testing as well. But it's going to help you in the long run, and that's the conversation. So: really make it much more mainstream, and then decide, out of all these kinds of automations and bots which we are building, where to apply it. We are not looking to have test automation for every single bot we build. So we need a way to choose where the value is. Is it the quarter-end processing one?
Is it the most business-critical one, or is it the one where we are expecting frequent changes, right? That's where the value of the testing is. So really bring that in as a part of our whole process and then, you know... >> We're still fine on time, that's great. Guys, thanks so much. This has been a really interesting conversation. I've been waiting to talk to a real-life customer about testing and automation testing. Appreciate your time. >> Thank you very much. >> Thanks for everything. >> All right. Thank you for watching, keep it right there. Dave Nicholson and I will be back right after this short break. This is day one of theCUBE's coverage of UiPath Forward Five. Be right back after this short break.
Chris Thomas & Rob Krugman | AWS Summit New York 2022
(calm electronic music) >> Okay, welcome back everyone to theCUBE's coverage here live in New York City for AWS Summit 2022. I'm John Furrier, host of theCUBE, and we've got a great conversation here as the day winds down. First of all, 10,000-plus people, this is a big event, just New York City. So is it a sign of the times that some headwinds are happening? I don't think so, not in the cloud enterprise innovation game. Lot going on. This innovation conversation we're going to have now is about the confluence of cloud scale, integration, and data, and the future of how FinTech and other markets are going to change with technology. We got Chris Thomas, the CTO of Slalom, and Rob Krugman, chief digital officer at Broadridge. Gentlemen, thanks for coming on theCUBE. >> Thanks for having us. >> So we had a talk before we came on camera about your firm, what you guys do. Take a quick minute to just give the scope and size of your firm and what you guys work on. >> Yeah, so Broadridge is a global financial FinTech company. Part of our business is capital markets and wealth, and that's about a third of our business, about $7 trillion a day clearing through our platforms. And then the other side of our business is communications, where we help all different types of organizations communicate with their shareholders and their customers across a variety of different digital channels and capabilities. >> Yeah, and Slalom, give a quick one minute on Slalom. I know you guys, but for the folks that don't know you. >> Yeah, no problem. So Slalom is a modern consulting firm focused on strategy, technology, and business transformation. And me personally, I'm part of the element lab, which is focused on forward-thinking and disruptive technology in the next five to 10 years. >> Awesome, and that's the scope of this conversation. The next five to 10 years, you guys are working on a project together, you're kind of customer partners. You're building something.
What are you guys working on? I can't wait to jump into it, explain. >> Sure, so similar to Chris, at Broadridge we've created an innovation capability, an innovation incubation capability, and one of the first areas we're experimenting in is digital assets. So we're looking at a variety of different areas where we think the consolidation and network effects that we could bring can add a significant amount of value. And so the area we're working on is this concept of a wallet of wallets. How do we actually consolidate assets that are held across a variety of different wallets, maybe traditional locations- >> Digital wallets. >> Digital wallets, but maybe even traditional accounts, bring that together, and then give control back to the consumer over who they want to share that information with and how they want their transactions to be controlled. So people talk about Web 3 being the internet of value. I often think about it as the internet of control. How do you return control back to the individual, so that they can make decisions about how, and who, has access to their information and assets? >> It's interesting, I totally like the value angle, but your point is, what's the chicken and the egg here, the cart before the horse? You can look at it both ways and say, okay, control is going to drive the value. This is an interesting nuance, right? >> Yes, absolutely. >> So in this architectural world, they thought about the data plane and the control plane. Everyone's trying to go old-school, middleware thinking: let's own the data plane, we'll win everything. Not going to happen if it goes decentralized, right, Chris? >> Yeah, yeah. I mean, we're building a decentralized application, but it really is built on top of AWS. We have a serverless architecture that scales as our business scales, built on top of things like S3, Lambda, and DynamoDB, and of course using security services like Cognito and AWS API Gateway.
So we're really building an architecture of Web 3 on top of the Web 2 basics in the cloud. >> I mean, all evolutions are abstractions on top of each other, IP, DNS, keys, it goes the whole nine yards. In digital, at least, that's the way. Question about serverless real quick. I saw that Redshift just launched general availability of serverless? >> Yes. >> You're starting to see serverless now as part of almost all the services in AWS. Is that enabling that abstraction? Because most people don't see it that way. They go, oh, well, Amazon's not Web 3. They got databases, you could use that stuff. So how do you connect the dots and cross the bridge to the future with the idea that I might not think Web 2 or cloud is Web 3? >> I'll jump in quick. I mean, I think it's the decentralization. Serverless and decentralization, you could argue, are saying the same thing in different ways. One is thinking about it from a technology perspective. One is thinking about it from an ecosystem perspective and how things come together. You need serverless components that can talk to each other and communicate with each other to actually reach the promise of what Web 3 is supposed to be. >> So digital bits or digital assets, I call it digital bits, 'cause I think in zeros and ones. If you digitize everything, everything has value, or now control drives the value. I could be a soccer team: I have apparel, I have value in my logos, I have photos, I have CUBE videos. I mean, some say that this should be an NFT. Yeah, right, maybe, but digital assets have to be protected, and owned. So ownership drives it too, right? >> Absolutely. >> So how does that fit in, how do you explain that? 'Cause I'm trying to connect the dots here and tie it together. What do I get if I go down this road that you guys are building? >> So I think one of the challenges of digital assets right now is that it's a closed community.
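Chris's description of the stack (a decentralized application whose backend is plain Web 2 serverless plumbing: API Gateway in front, Cognito for identity, Lambda for compute, DynamoDB for state) can be pictured as an ordinary Lambda-style handler. The sketch below is an illustration of that shape, not Slalom's actual code; the event fields are modeled on the API Gateway proxy format, and the in-memory store stands in for DynamoDB so it runs without AWS credentials:

```python
import json

# Stand-in for a DynamoDB table, so the sketch runs locally.
FAKE_TABLE = {}

def handler(event, context=None):
    # API Gateway passes the request as a JSON event; Cognito has already
    # authenticated the caller and attached its claims to the request context.
    claims = event["requestContext"]["authorizer"]["claims"]
    user_id = claims["sub"]
    body = json.loads(event["body"])
    # Persist the user's wallet reference (a DynamoDB put_item in real life).
    FAKE_TABLE[user_id] = {"wallet_address": body["wallet_address"]}
    return {"statusCode": 200, "body": json.dumps({"stored_for": user_id})}

event = {
    "requestContext": {"authorizer": {"claims": {"sub": "user-123"}}},
    "body": json.dumps({"wallet_address": "0xabc"}),
}
response = handler(event)
print(response["statusCode"], FAKE_TABLE["user-123"]["wallet_address"])
```

The point of the sketch is the layering: nothing about the handler is Web 3 specific, which is exactly the "Web 3 on top of Web 2 basics" argument.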
And I think the people that play in it, they're really into it. And so you look at things like NFTs and some of the other activities that are happening, and there are certain naysayers that look at it and say, this stuff is not based upon value, it's a bunch of artwork, it can't be worth this. Well, how about we take a time-out there and actually look at the underlying technology that's supporting this, the blockchain, and the potential ramifications of that across the entire financial ecosystem, and frankly, all different types of ecosystems: having this immutable record where information gets stored and gets sent, and the ability to go back to it at all times. That's where the real power is. So I think we're starting to see... we've hit a bit of a hiccup, if you will, in the cryptocurrencies. They're going to continue to be there. They won't all be there; a lot of them will probably disappear, but there'll be a finite number. >> What percentage of stuff do you think is vapor BS? If you had to pick an order-of-magnitude number. >> (laughs) I would say at least 75% of it. (John laughs) >> I mean, there are quite a few projects that are failing right now, but it's interesting that in the crypto markets, they're failing gracefully. Because it's on the blockchain and it's all very transparent. Things are checked; you know immediately which companies are insolvent and which opportunities are still working. So it's very, very interesting, in my opinion. >> Well, and I think the ones that don't have valid premises are the ones that are failing. Like Terra and some of these other ones: if you actually really looked at it, the entire industry knew these things were no good. But then you look at stablecoins. And you look at what's going on with CBDCs. These are backed by real underlying assets that people can be comfortable with. And there's not a question of, is this going to happen?
The question is, how quickly is it going to happen, and how quickly are we going to be using digital currencies? >> It's interesting, we always talk about software: software as money, now money is software, and gold and oil are moving over to crypto. How do you guys see software? 'Cause Dave Vellante and I were just arguing on theCUBE, before you guys came on, that the software industry pretty much does not exist anymore, it's open source. So everything's open source as an industry, but the value is integration, innovation. So it's not just software, it's free; it's integration. So how do you guys see software driving crypto? Because it is software-defined money at the end of the day. It's a token. >> No, I think that's absolutely one of the strengths of the crypto markets and the Web 3 market: it's governed by software. And because of that, you can build a trust framework. Everybody knows it's on the public blockchain. Everybody's aware of the software that's driving the rules, the rules of engagement on this blockchain. And it creates that trust network that says, hey, I can transact with you even though I don't know anything about you, and I don't need a middleman to tell me I can trust you. Because the software drives that trust framework. >> Lot of disruption, lot of companies going out of business as middlemen in these markets. >> Listen, the intermediaries either have to disrupt themselves or they will be disrupted. I think that's what we're going to learn here. And it's going to start in financial services, but it's going to go to a lot of different places. I think the interesting thing that's happening now is, for the first time, you're starting to see the regulators get involved. Which is actually a really good thing for the market. Because, to Chris's point, transparency is here; how do you actually present that transparency and that trust back to consumers so they feel comfortable once that problem is solved?
And I think everyone in the industry welcomes it. All of a sudden you have this ecosystem that people can play in, they can build, and they can start to actually create real value. >> Every structural change that I've been involved in over my 30-plus-year career has been around inflection points. There was always some sort of underbelly. So I'm not going to judge crypto. It's been in the market for a while, but it's a good sign there's innovation happening. So now, as clarity comes into what's real, I think you guys are having a conversation that's refreshing, because you're saying, okay, cloud is real: Lambda, serverless, all these tools. So Web 3 is certainly real because it's a future architecture, but it's attracting the young, it's a cultural shift. And it's also cooler than boring Web 2 and cloud. So I think the cultural shift, the fact that it's got data involved, and some disruption around middlemen and intermediaries, makes it very attractive to tech geeks. I heard a stat from a friend in the Bay Area that 30% of Cal computer science students are dropping out and jumping into crypto. So it's attracting the technical nerds, the alpha geeks. It's a cultural revolution, and there's some cool stuff going on from a business model standpoint. >> There's one thing missing. The thing that's missing, it's what we're trying to work on, I think, is experience. I think if you're being honest about the entire marketplace, what you would agree is that this stuff is not easy to use today, and that's got to be satisfied. You need to do something so that if it's the 85-year-old grandma that wants to actually participate in these markets, not only can she feel comfortable, but she actually knows how to do it. You can't use these crazy tools with all these terms. And I think the industry, as it grows up, will satisfy a lot of those issues. >> And I think this is why I want to tie back and get your reaction to this.
I think that's why you guys talking about building on top of AWS is refreshing, 'cause it's not dogmatic. It's not, well, we can't use Amazon, it's not really Web 3. Well, a database can be used when you need it. You don't need to write everything through the blockchain. Databases are a very valuable capability, and you get serverless. So all these things now can work together. So what do you guys see for companies that want to be Web 3 for all the good reasons, and how do they leverage cloud specifically to get there? What are some things that you guys have learned that you can point to and share? You want to start? >> Well, I think not everything has to be open and public to everybody. You're going to want to have some things that are secret. You're going to want to encrypt some things. You're going to want to put some things within your own walls. And that's where AWS really excels. I think you can have the best of both worlds. So that's my perspective on it. >> The only thing I would add to it: so my view is, it's 2022. I was actually joking earlier, I think I was at the first re:Invent. And I remember walking in, and this was a new industry. >> It was tiny. >> This is foundational. We shouldn't be having that conversation anymore. Of course you should build this stuff on top of the cloud. Of course you should build it on top of AWS. It just makes sense. And instead of worrying about those challenges, what we should be worrying about is, how do we make these applications easier to use? How do we actually- >> Energy efficient. >> How do we enable the promise of what these things are going to bring, and actually make it real? Because think about traditional assets: there are projects going on globally that are looking at how you take equity securities and actually move them to the blockchain. When that stuff happens, boom.
>> And I like what you guys are doing. I saw the news out through this crypto winter; some major wallet exchanges that have been advertising are hurting. Take me through what you guys are thinking, what the vision is around the wallet of wallets. Is it to provide an experience for the user, or for the market and industry itself? What's the target, is it both? Share the design goals for the wallet of wallets. >> My favorite thing about innovation, and innovation labs, is that we can experiment. So I'll go in saying we don't know what the final answer is going to be, but this is the premise that we have. In this disparate, decentralized ecosystem, you need some mechanism to be able to control what's actually happening at the consumer level. So I think the key target is, how do you create an experience where the consumer feels like they're in control of that value? How do they actually control the underlying assets? And then, how does it actually get delivered to them? Is it something that comes from their bank, from their broker? Is it coming from an independent organization? How do they manage all of that information? And I think the last part of it is the assets. It's easy to think about cryptos and NFTs, but think about traditional assets, about identity information and healthcare records; all of that stuff is going to become part of this ecosystem. And imagine being able to go someplace and saying, oh, you need my information? Well, I'm going to give it to you off my phone, and I'm going to give it to you for the next 24 hours so you can use it, but after that you have no access to it. Or, you're my financial advisor, here's a view of what I actually have, my underlying assets, what do you recommend I do? So I think we're going to see an evolution in the market. >> Like a data clean room. >> Yeah, but one that you control. >> Yes! (laughs) >> Yes! >> I think about it very similarly as well.
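At its simplest, the "wallet of wallets" premise Rob describes (consolidate assets held across many wallets and accounts into one consumer-controlled view) reduces to an aggregation over per-wallet holdings. A hedged sketch of that core step; the wallet names and asset symbols are invented for illustration, and a real product would pull these balances from exchange APIs and on-chain lookups rather than hard-coded dicts:

```python
from collections import defaultdict

# Hypothetical per-wallet holdings, standing in for live balance lookups.
wallets = {
    "hardware_wallet": {"BTC": 0.5, "ETH": 2.0},
    "exchange_a":      {"ETH": 1.5, "USDC": 1000.0},
    "exchange_b":      {"BTC": 0.1, "USDC": 250.0},
}

def aggregate(wallets):
    """Consolidate holdings across all wallets into one total per asset."""
    totals = defaultdict(float)
    for holdings in wallets.values():
        for asset, amount in holdings.items():
            totals[asset] += amount
    return dict(totals)

print(aggregate(wallets))
```

The interesting product questions sit on top of this trivial core: who is allowed to see the consolidated view, for how long, and under whose control, which is the "internet of control" framing from earlier in the conversation.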
As my journey into the crypto market has gone down different pathways, different avenues, I've come to a place where I'm really managing eight different wallets, and it's difficult to figure out exactly where all my assets are. Having a tool like this that will allow me to visualize and aggregate those assets, and maybe even recombine them in unique ways, I think is hugely valuable. >> My biggest fear is losing my key. >> Well, and that's an experience problem that has to be solved. But let me give you my favorite use case in this space, 'cause NFTs, right? People are like, what do NFTs really mean? Title insurance, right? Anyone buy a house or refinance your mortgage? You go through this crazy process that costs seven or eight thousand dollars every single time you close on something, to get title insurance so they can validate it. What if that title was actually sitting on the chain? You've got an NFT that you put in your wallet, and when it comes time to sell your house or to refinance, everything's there. Okay, I'm the owner of the house. JP Morgan Chase has the actual mortgage. There's another lien, there's some taxes. >> It's like a link tree in the wallet. (laughs) >> Yeah, think about it, you've got a smart contract. Boom, closing happens immediately. >> I think that's one of the most important things. People look at NFTs and they think, oh, this is art. And that's sort of how it started, in the art and collectible space, but it's actually quickly moving towards utilities and tokenization and passes. And that's where I think the value is. >> And ownership and the token. >> Identity and ownership, especially. >> And the digital rights ownership and the economics behind it really have a lot of scale. I appreciate the FinTech angle you are coming from, because I can now see what's going on here with you. It's like, okay, we've got to start somewhere. Let's start with the experience.
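Rob's title-insurance example is, at its core, a record check: the owner, the mortgage holder, and any liens all live on one token, so a closing can settle the moment every encumbrance is satisfied. A toy Python sketch of that idea follows; all names are invented, and a real implementation would be a smart contract on chain rather than a local class:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TitleToken:
    """Toy record of a property title held as a token: owner plus encumbrances."""
    owner: str
    mortgage_holder: Optional[str] = None
    liens: List[str] = field(default_factory=list)

    def clear_to_close(self) -> bool:
        # A sale can settle immediately only when every encumbrance
        # recorded against the token has been satisfied.
        return self.mortgage_holder is None and not self.liens

    def settle(self) -> None:
        # Record payoff of the mortgage and release of all liens.
        self.mortgage_holder = None
        self.liens.clear()

title = TitleToken(owner="Alice", mortgage_holder="JP Morgan Chase",
                   liens=["unpaid property tax"])
print(title.clear_to_close())  # False: encumbrances outstanding
title.settle()
print(title.clear_to_close())  # True: clear to close
```

The economic argument in the transcript is that because this record is always current and always inspectable, the repeated seven-to-eight-thousand-dollar validation step disappears.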
The wallet's a tough nut to crack, 'cause that requires de facto participation in the industry as a de facto standard. So how are you guys doing there? Can you give an update? And then, how can people get involved, and what's the project called? >> Yeah, so we're still in the innovation, incubation stages. So we're not launching it yet. But what I will tell you is where a lot of our focus is: how do we make these transactional things easy to do? How do we make it easy to pull all your assets together? How do we make it easy to move things from one location to the other, in ways where you're not using a weird cryptographic numeric value for your wallet, but can actually use real nomenclature that you can remember and that's easy to understand. Our expectation is that sometime in the fall, we'll actually be in a position to launch this. What we're going to do over the summer is start allowing people to play with it, get their feedback, and iterate. >> So sandbox in when, November? >> I think launch in the fall, sometime in the fall. >> Oh, this fall. >> But over the summer, what we're expecting is some type of friends-and-family release, where we can start to see what people are doing, then fix the challenges, see if we're on the right track, and make the appropriate corrections. >> So right now you guys are just together on this? >> Yep. >> The opening up to friends and family or community is going to be controlled. >> It is, yeah. >> Yeah, as a group. I think one thing that's really important to highlight is that we're an innovation lab. We're working with Broadridge's innovation lab, and that partnership across innovation labs has allowed us to move very, very quickly to build this. Actually, if you think about it, we were talking about this not too long ago, and we're already close to having an internal launch. So it's very rapid development. We follow a lot of the- >> There's buy-in across the board.
>> Exactly, exactly, and we saw a lot of very- >> So who's going to run this? A DAO, or your companies, or is it going to be a separate company? >> So to be honest, we're not entirely sure yet what we'll actually do with it; it's a new product that we're going to be creating. Our thought is, within an innovation environment, there are three things you can do with something. You can make it a product within the existing infrastructure, you can create a new business unit, or you can spin it off as something new. I do think this becomes a product within the organization, based upon how aligned it is to what we do today, but we'll see. >> But you guys are financing it? >> Yes. >> As collective companies? >> Yeah, right. >> Got it, okay, cool. Well, let us know how we can help. If you guys want to do a remote in to theCUBE, I would love it; the mission you guys are on, I think this is the kind of work that every company should be doing in the new R&D. You've got to jump in the deep end and swim as fast as possible. But I think you can do it. I think that is refreshing, and that's smart. >> And you have to do it quickly, because this market, the one thing we would probably agree on is that it's moving faster than we can keep up with; every week there's something else that happens. >> Okay, so now, you guys were at Consensus down in Austin when the winter hit, and you've been in the business for a long time, you know the industry. You see where it's going. What was the big thing you guys learned? Any scar tissue from the early data coming in from the collaboration? Were there some aha moments, some oh-shoot moments? Oh, wow, I didn't think that was going to happen. Share some anecdotal stories from the experience. Good, bad, and if you want to be bold, say ugly, too. >> Well, the first thing I want to say about the timing: it is the crypto winter, but I actually think now's a really great time to build something, because everybody's continuing to build.
Folks are focused on the future, and that's what we are as well. In terms of some of the challenges, well, the Web 3 space is so new, and there's not a way to just go online and copy somebody else's work and rinse and repeat. We had to figure a lot of things out on our own. We had to try different technologies, see which worked better, and make sure that it was functioning the way we wanted it to function. Really, so it was not easy. >> They oversold that product out, that's good, like this team. >> But think about it, so the joke is that winter is when the real work happens. If you look at the companies that have not been affected by this, it's the infrastructure companies, and what it reminds me of, it's a little bit different, but 2001, we had the dot com bust. The entire industry blew up, but what came out of that? >> Everything that exists. >> Amazon, lots of companies grew up out of that environment. >> Everything that was promoted actually happened. >> Yes, but you know what didn't happen- >> Food delivery. >> But you know what's interesting that didn't happen- >> (laughs) Pet food, the soccer never happened. >> The whole Super Bowl, yes. (John laughs) In financial services we built on top of legacy. I think what Web 3 is doing is getting rid of that legacy infrastructure. And the banks are going to be involved, there's going to be new players and stuff. But what I'm seeing now is a doubling down of the infrastructure investment, of saying okay, how do we actually make this stuff real so we can actually show the promise? >> One of the things I just shared, Rob, you'd appreciate this, is that the digital advertising market's changing, because banner ads and the old techniques are based on Web 2 infrastructure, basically DNS as we know it. And token problems are everywhere. Sites and silos are built because LinkedIn doesn't share information, and the sites want first party data. It's a hoarding exercise, so those practices are going to get decimated.
So in comes token economics; that's going to get decimated. So you're already seeing the decline of media and advertising, cookies are going away. >> I think it's going to change, it's going to be a flip, because right now you're not in control. Other people are in control. And I think with tokenomics and some of the other things that are going to happen, it gives control back to the individual. Think about it: right now you get advertising, and you didn't say you wanted this advertising. Imagine the value of advertising when you say, you know what, I am interested in getting information about this particular type of product. The lead generation, the value of that advertising, is significantly higher. >> Organic notifications. >> Yeah. >> Well, gentlemen, I'd love to follow up with you. I'm definitely going to ping you. Now I'm going to put CUBE coin back on the table. For our audience, CUBE coin's coming. Really appreciate it, thanks for sharing your insights. Great conversation. >> Excellent, thank you for having us. >> Excellent, thank you so much. >> theCUBE's coverage here from New York City. I'm John Furrier, we'll be back with more live coverage to close out the day. Stay with us, we'll be right back. >> Excellent. (calm electronic music)
Thomas Bienkowski, Netscout | Netscout Advanced NDR Panel 7 22
>>EDR, NDR, what are the differences, which one's better? Are they better together? Today's security stack contains a lot of different tools and types of data, and unfortunately, as you know, this creates data silos, which leads to visibility gaps. EDR is endpoint detection and response. It's designed to monitor and mitigate endpoint attacks, which are typically focused on computers and servers. NDR, network detection and response, on the other hand, monitors network traffic to gain visibility into potential or active cyber threats, delivering real time visibility across the broader network. One of the biggest advantages that NDR has over EDR is that bad actors can hide or manipulate endpoint data pretty easily, because attackers and malware can avoid detection at the endpoint; network data, on the other hand, is much harder to manipulate. NDR, as you're gonna hear, is the only real source for reliable, accurate, and comprehensive data. >>
And I will talk about the role of NDR and how it overcomes the challenges of EDR as Tom's gonna discuss, as you'll hear EDR is absolutely needed, but as he will explain it, can't be solely relied upon for comprehensive cybersecurity. And then finally, we'll come back for a third and final segment to discuss why not all NDR is created equal. Tom's gonna unpack the features and the capabilities that are most important when choosing an NDR solution. Let's do this. Here comes our first segment. >>Hey, everyone kicking things off. This is segment one. I'm Lisa Martin with Tom Binowski, senior director of product marketing at nets scout. Welcome to the growing importance of advanced NDR. Tom, great to have you on the program, >>Glad to be here. >>So we're gonna be talking about the trends that are driving enterprise security teams to implement multiple cyber security solutions that really enable greater visibility and protection. And there are a number of factors that continue to expand the ECAC service for enterprise networks. I always like to think of them as kind of the spreading amorphously you shared had shared some stats with me previously, Tom, some cloud adoption stats for 2022 94% of all enterprises today use a cloud service and more than 60% of all corporate data is store in the cloud. So, Tom, what are some of the key trends that nets scout is seeing in the market with respect to this? >>Yeah, so just to continue that, you know, those stats that, that migration of workloads to the cloud is a major trend that we're seeing in that was exasperated by the pandemic, right along with working from home. Those two things are probably the most dramatic changes that we we see out there today. But along with that is also this growing sophistication of the network, you know, today, you know, your network environment, isn't a simple hub and spoke or something like that. 
It is a very sophisticated combination of, you know, high speed backbones, potentially up to a hundred gigabits combination with partner networks. You have, like we said, workloads up in, in private clouds, pub public clouds. So you have this hybrid cloud environment. So, and then you have applications that are multi-tiered, there are pieces and parts. And in all of that, some on your premise, some up in a private cloud, some on a public cloud, some actually pulling data off when you a customer network or potentially even a, a partner network. So really, really sophisticated environment today. And that's requiring this need for very comprehensive network visibility, not only for, for cybersecurity purposes, but also just to make sure that those applications and networks are performing as you have designed them. >>So when it comes to gaining visibility into cyber threats, I, you talked about the, the sophistication and it sounds like even the complexity of these networks, Gartner introduced the concept of the security operations, visibility triad, or the SOC visibility triad break that down for us. It consists of three main data sources, but to break those three main data sources down for us. >>Sure. So Gartner came out a few years ago where they were trying to, you know, summarize where do security operations team get visibility into threats and they put together a triad and the three sides of the trier consists of one, the SIM security information event manager, two, the endpoint or, or data that you get from EDR systems, endpoint detection, response systems. And the third side is the network or the data you get from network detection, response systems. And, you know, they didn't necessarily say one is better than the other. They're basically said that you need all three in order to have comprehensive visibility for cybersecurity purposes. >>So talk, so all, all three perspectives are needed. 
Talk about what each provides, what are the different perspectives on threat detection and remediation? >>Yeah. So let's start with the SIM, you know, that is a device that is gathering alerts or logs from all kinds of different devices all over your network. Be it routers servers, you know, firewalls IDs, or even from endpoint detection and network detection devices too. So it is, it is the aggregator or consumer of all those alerts. The SIM is trying to correlate those alerts across all those different data sources and, and trying to the best it can to bubble up potentially the highest priority alerts or drawing correlations and, and, and, and giving you some guidance on, Hey, here's something that we think is, is really of importance or high priority. Here's some information that we have across these disparate data sources. Now go investigate the disadvantage of the SIM is that's all it gives you is just these logs or, or, or information. It doesn't give you any further context. >>Like what happened, what is really happening at the end point? Can I get visibility into the, into the files that were potentially manipulated or the, the registry setting or what, what happened on the network? And I get visibility into the packet date or things like that. It that's, so that's where it ends. And, and that's where the, so there other two sides of the equation come in, the endpoint will give you that deeper visibility, endpoint detection response. It will look for known and or unknown threats, you know, at that endpoint, it'll give you all kinds of additional information that is occurring in endpoint, whether it be a registry setting in memory on the file, et cetera. But you know, one of, some of its disadvantages, it's really difficult because really difficult to deploy pervasive because it requires an agent and, you know, not all devices can accept an agent, but what it miss, what is lacking is the context on the network. 
>>So if I was an analyst and I started pursuing from my SIM, I went down to the end point and, and said, I wanna investigate this further. And I hit a, I hit a dead end from some sort, or I realize that the device that's potentially I should be alerted to, or should be concerned about is an IOT device that doesn't even have an agent on it. My next source of visibility is on the network and that's where NDR comes in. It, it sees what's traversing. The entire network provides you visibility into that from both a metadata and even a ultimately a packer perspective. And maybe, you know, could be deployed a little bit more strategically, but you know, it doesn't have the perspective of the endpoint. So you can see how each of these sort of compliments each other. And that's why, you know, Gartner said that, that you need 'em all, then they all play a role. They all have their pros and cons or advantage and disadvantages, but, you know, bringing them and using 'em together is, is the key. >>I wanna kinda dig into some of the, the EDR gaps and challenges, as you talked about as, as the things evolve and change the network, environment's becoming far more sophisticated and as well as threat actors are, and malware is. So can you crack that open more on some of the challenges that EDR is presenting? What are some of those gaps and how can organizations use other, other, other data sources to solve them? >>Yeah, sure. So, you know, again, just be clear that EDR is absolutely required, right? 
We, we need that, but as sort of these network environments get more complex, are you getting all kinds of new devices being put on the network that devices being brought into the network that may be, you didn't know of B Y O D devices you have, I T devices, you know, popping up potentially by the thousands in, in, in some cases when new applications or world that maybe can't accept an and endpoint detection or an EDR agent, you may have environments like ICS and skate environments that just, you can't put an endpoint agent there. However, those devices can be compromised, right? You have different environments up in the cloud or SaaS environments again, where you may not be able to deploy an endpoint agent and all that together leaves visibility gaps or gaps in, in, in the security operation triad. Right. And that is basically open door for exploitation >>Open door. Go ahead. Sorry. >>Yeah. And then, then you just have the malware and the, and the attackers getting more sophisticated. They, they have malware that can detect an EDR agent running or some anti malware agent running on device. And they'll simply avoid that and move on to the next one, or they know how to hide their tracks, you know, whether it be deleting files, registry, settings, things like that. You know, so it's, that's another challenge that, that, that just an agent faces. Another one is there are certain applications like my SQL that are, you know, have ministry administrative rights into certain parts of the windows operate system that EDR doesn't have visibility into another area that maybe EDR may not have visibility is, is, is in, you know, malware that tries to compromise, you know, hardware, especially like bios or something like that. So there's a number of challenges as sort of the whole network environment and sophistication of bad actors and malware increases. 
>>Ultimately, I think one of the things that, that we've learned, and, and we've heard from you in this segment, is that doing business in, in today's digital economy, demands, agility, table stakes, right? Absolutely essential corporate digital infrastructures have changed a lot in response to the dynamic environment, but its businesses are racing to the clouds. Dave Alane likes to call it the forced March to the cloud, expanding activities across this globally distributed digital ecosystem. They also sounds like need to reinvent cybersecurity to defend this continuously expanding threat surface. And for that comprehensive network, visibility is, as I think you were saying is really, really fundamental and more advanced network detection is, and responses required. Is that right? >>That's correct. You know, you know, we, we at ESCO, this is, this is where we come from. Our perspective is the network. It has been over for over 30 years. And, and we, as well as others believe that that network visibility, comprehensive network visibility is fundamental for cyber security as well as network performance and application analysis. So it, it, it's sort of a core competency or need for, for modern businesses today. >>Excellent. And hold that thought, Tom, cause in a moment, you and I are gonna be back to talk about the role of NDR and how it overcomes the challenges of EDR. You're watching the cube, the leader in enterprise tech coverage. Hey everyone, welcome back. This is segment two kicking things off I'm Lisa Martin with Tom Binkowski, senior director of product marketing at nets scout, Tom, great to have you back on the program. >>Good to be here. >>We're gonna be talking about the growing importance of advanced NDR in this series. In this segment specifically, Tom's gonna be talking about the role of NDR and how it overcomes the challenges of EDR. 
So Tom, one of the things that we talked about previously is one of the biggest advantages that NDR has over EDR is that bad actors can hide or manipulate endpoint data pretty easily, whereas network data, much harder to manipulate. So my question, Tom, for you is, is NDR the only real source for reliable, accurate, comprehensive data. >>I'm sure that's arguable, right? Depending on who you are as a vendor, but you know, it's, it's our, our answer is yes, NDR solutions also bring an analyst down to the packet level. And there's a saying, you know, the, the packet is the ultimate source or source of truth. A bad actor cannot manipulate a packet. Once it's on the wire, they could certainly manipulate it from their end point and then blast it out. But once it hits the wire, that's it they've lost control of it. And once it's captured by a network detection or, or network monitoring device, they can't manipulate it. They can't go into that packet store and, and manipulate those packets. So the ultimate source of truth is, is lies within that packet somewhere. >>Got you. Okay. So as you said in segment one EDR absolutely necessary, right. But you did point out it can't organizations can't solely rely on it for comprehensive cybersecurity. So Tom, talk about the benefits of, of this complimenting, this combination of EDR and NDR and, and how can that deliver more comprehensive cybersecurity for organizations? >>Yeah, so, so one of the things we talked about in the prior segment was where EDR, maybe can't be deployed and it's either on different types of devices like IOT devices, or even different environments. They have a tough time maybe in some of these public cloud environments, but that's where NDR can, can step in, especially in these public cloud environments. So I think there's a misconception out there that's difficult to get packet level or network visibility and public clouds like AWS or Azure or Google and so on. And that's absolutely not true. 
They have all kinds of virtual tapping capabilities that an NDR solution or network based monitoring solution could take advantage of. And one of the things that we know we spoke about before some of that growing trends of migrating workloads to the cloud, that's, what's driving that those virtual networks or virtual taps is providing visibility into the performance and security of those workloads. >>As they're migrated to public clouds, NDR can also be deployed more strategically, you know, prior segment talking about how the, in order to gain pervasive visibility with EDR, you have to deploy an agent everywhere agents can't be deployed everywhere. So what you can do with NDR is there's a lot fewer places in a network where you can strategically deploy a network based monitoring device to give you visibility into not only that north south traffic. So what's coming in and out of your network, but also the, the, the, the east west traffic too west traversing, you know, within your network environment between different points of your op your, your multi-tiered application, things like that. So that's where, you know, NDR has a, a, a little bit more advantage. So fewer points of points in the network, if you will, than everywhere on every single endpoint. And then, you know, NDR is out there continuously gathering network data. It's both either before, during, and even after a threat or an attack is, is detected. And it provides you with this network context of, of, you know, what's happening on the wire. And it does that through providing you access to, you know, layer two through layer seven metadata, or even ultimately packets, you know, the bottom line is simply that, you know, NDR is providing, as we said before, that that network context that is potentially missing or is missing in EDR. 
>>Can you talk a little bit about XDR that kind of sounds like a superhero name to me, but this is extended detection and response, and this is an evolution of EDR talk to us about XDR and maybe EDR NDR XDR is really delivering that comprehensive cybersecurity strategy for organizations. >>Yeah. So, you know, it's, it's interesting. I think there's a lot of confusion out there in the industry. What is, what is XDR, what is XDR versus an advanced SIM, et cetera. So in some cases, there are some folks that don't think it's just an evolution of EDR. You know, to me, XDR is taking, look at these, all these disparate data sources. So going back to our, when our first segment, we talked about the, the, the security operations center triad, and it has data from different perspectives, as we were saying, right? And XCR, to me is the, is, is trying to bring them all together. All these disparate data source sets or sources bring them together, conduct some level of analysis on that data for the analyst and potentially, you know, float to the top. The most, you know, important events are events that we, that you know, that the system deems high priority or most risky and so on. But as I, as I'm describing this, I know there are many advanced Sims out there trying to do this today too. Or they do do this today. So this there's this little area of confusion around, you know, what exactly is XDR, but really it is just trying to pull together these different sources of information and trying to help that analyst figure out, you know, what, where's the high priority event that's they should be looking at, >>Right? Getting those high priority events elevated to the top as soon as possible. One of the things that I wanted to ask you about was something that occurred in March of this year, just a couple of months ago, when the white house released a statement from president Biden regarding the nation's cyber security, it included recommendations for private companies. 
I think a lot of you are familiar with this, but the first set of recommendations were best practices that all organizations should already be following, right? Multifactor authentication, patching against known vulnerabilities, educating employees on the phishing attempts on how to be effective against them. And the next statement in the president's release, focus on data safety practices, also stuff that probably a lot of corporations doing encryption maintaining offline backups, but where the statement focused on proactive measures companies should take to modernize and improve their cybersecurity posture. It was vague. It was deploy modern security tools on your computers and devices to continuously look for and mitigate threats. So my question to you is how do, how do you advise organizations do that? Deploy modern security tools look for and mitigate threats, and where do the data sources, the SOC tri that we talked about NDR XDR EDR, where did they help fit into helping organizations take something that's a bit nebulous and really figure out how to become much more secure? >>Yeah, it was, it was definitely a little vague there with that, with that sentence. And also if you, if you, I think if, if you look at the sentence, deploy modern security tools on your computers and devices, right. It's missing the network as we've been talking about there, there's, there's a key, key point of, of reference that's missing from that, from that sentence. Right. But I think what they mean by deploying monitor security tools is, is really taking advantage of all these, these ways to gain visibility into, you know, the threats like we've been talking about, you're deploying advanced Sims that are pulling logs from all kinds of different security devices or, and, or servers cetera. You're, you're deploying advanced endpoint detection systems, advanced NDR systems. 
And so on, you're trying to use, you're trying to utilize XDR new technology to pull data from all those different sources and analyze it further. And then, you know, the other one we, we haven't even mentioned yet. It was the, so the security operation and automation, right. Response it's now, now what do we do? We've detected something, but now help me automate the response to that. And so I think that's what they mean by leveraging modern, you know, security tools and so on >>When you're in customer conversations, I imagine they're coming to, to Netscale looking for advice like what we just talked through the vagueness in that statement and the different tools that organizations can use. So when you're talking to customers and they're talking about, we need to gain visibility across our entire network, across all of our devices, from your perspective from net Scout's perspective, what does that visibility actually look like and deliver across an organization that does it well? >>Yeah, we, I mean, I think the simple way to put it is you need visibility. That is both broad and deep. And what I mean by broad is that you need visibility across your network, no matter where that network may reside, no matter what protocols it's running, what, you know, technologies is it, is it virtualized or, or legacy running in a hundred gigabits? Is it in a private cloud, a public cloud, a combination of both. So that broadness, meaning wherever that network is or whatever it's running, that's, that's what you need visibility into. It has to be able to support that environment. Absolutely. And the, the, absolutely when I, we talk about being deep it's, it has to get down to a packet level. It can't be, you know, as high as say, just looking at net flow records or something like that, that they are valuable, they have their role. 
However, you know, when we talk about getting deep, it has to ultimately get down to the packet level and that's, and we've said this in this time that it's ultimately that source of truth. So that, that's what that's, I think that's what we need. >>Got it. That that depth is incredibly important. Thanks so much, Tom, for talking about this in a moment, you and I are gonna be back, we're gonna be talking about why not all NDR is created equally, and Tom's gonna actually share with you some of the features and capabilities that you should be looking for when you're choosing an NDR solution. You're watching the cube, the leader in enterprise tech coverage, >>And we're clear. >>All right. >>10 45. Perfect. You guys are >>Okay. Good >>Cruising. Well, >>Welcome back everyone. This is segment three. I'm Lisa Martin with Tom gin. Kowski senior director of product marketing at nets scout. Welcome back to the growing importance of advanced NDR in this segment, Tom and I are gonna be talking about the fact that not all NDR is created equally. He's gonna impact the features, the capabilities that are most important when organizations are choosing an NDR solution. Tom, it's great to have you back on the program. >>Great, great to be here. >>So we've, we've covered a lot of content in the first two segments, but as we, as we see enterprises expanding their it infrastructure, enabling the remote workforce, which is here to stay leveraging the crowd cloud, driving innovation, the need for cybersecurity approaches and strategies that are far more robust and deep is really essential. But in response to those challenges, more and more enterprises are relying on NDR solutions that fill some of the gaps that we talked about with some of the existing tool sets in the last segment, we talked about some of the gaps in EDR solutions, how NDR resolves those. But we also know that not all NDR tools are created equally. 
So what, in your perspective, Tom are some of the absolutely fundamental components of NDR tools that organizations need to have for those tools to really be robust. >>Yeah. So we, we, we touched upon this a little bit in the previous segment when we talked about first and foremost, your NDR solution is providing you comprehensive network visibility that must support whatever your network environment is. And it should be in a single tool. It shouldn't have a one vendor per providing you, you know, network visibility in the cloud and another vendor providing network visibility in a local network. It should be a single NDR solution that provides you visibility across your entire network. So we also talked about it, not only does it need to be broadened like that, but also has to be deep too, eventually down to a packet level. So those are, those are sort of fundamental table stakes, but the NDR solution also must give you the ability to access a robust source of layer two or layer three metadata, and then ultimately give you access to, to packets. And then last but not least that solution must integrate into your existing cybersecurity stack. So in the prior segments, we talked a lot about, you know, the, the SIM, so that, that, that NDR solution must have the ability to integrate into that SIM or into your XDR system or even into your source system. >>Let's kind of double click on. Now, the evolution of NDR can explain some of the differences between the previous generations and advanced NDR. >>Yeah. So let's, let's start with what we consider the most fundamental difference. And that is solution must be packet based. There are other ways to get network visibility. One is using net flow and there are some NDR solutions that rely upon net flow for their source of, of, of visibility. But that's too shallow. You ultimately, you need to get deeper. 
You need to get down to the packet level, so you want to make sure that your advanced NDR solution is packet-based. Number two, you want to make sure that when you're pulling packets off the wire, you can do it at scale, at full line rate, and in any environment, as we spoke about previously, whether it be your local environment or a public cloud environment. Number three, you want to be able to do this when your traffic is encrypted. As we know, a lot of network traffic is encrypted today, so you have to have the ability to decrypt that traffic and then analyze it with your NDR system. >> Number four: you're not just pulling packets off the wire and throwing full packets into data storage someplace. That's going to fill up a disk in a matter of seconds, right? You want the ability to extract a meaningful set of metadata from layer 2 to layer 7 of the OSI model, look at key metrics, conduct an initial set of analysis, and then index and compress that metadata, as well as the packets, on local storage devices. So having the ability to do this packet capture at scale is really important, and storing those packets and metadata locally, versus up in a cloud, helps with compliance and confidentiality issues. And then, last but not least, when we talk about integration into that security stack, it's multiple levels of integration. Sure, we want to send alerts up into that SIEM, but we also want the ability to work with that XDR system, or that SOAR system, to drill back down into that metadata and those packets for further analysis.
And the last piece of integration is that there's a robust set of information these NDR systems are pulling off the wire, and many times, in more advanced, mature organizations, security teams, data scientists, et cetera just want access to that raw data so they can do their own analysis outside the boundaries of a vendor's user interface. So the ability to export that data is really important in advanced NDR systems. >> Got it. So essentially the breadth, the visibility across the entire infrastructure; the depth, you mentioned going down to the packet level; the scale; the metadata; encryption: is that what Netscout means when you talk about Visibility Without Borders? >> Yeah, exactly. We have been doing this for over 30 years: pulling packets off the wire and converting them, using patented technology, into a robust set of metadata, at full line rates, up to a hundred gig, in any network environment, with any protocols, et cetera. That's what we mean by that breadth and depth of visibility. >> Can you talk a little bit about smart detection? If we say advanced NDR needs to deliver threat intelligence, it also needs to enable smart detection. What does Netscout mean by that? >> You want to make sure you have multiple methods of detection, not just one. So not just doing behavioral analysis, and not just detecting threats based on known indicators of compromise; you want multiple ways of detecting threats. It could be using statistical behavioral analysis, curated threat intelligence, an open-source signature engine like Suricata, or other threat analytics. But you also want to make sure you're doing this both in real time and with the ability to do it historically.
So after a threat has been detected, for example with another product, say an EDR device, you now want the ability to drill into the network data that occurred prior to the detection. Historically, you want the ability to comb through a historical set of metadata or packets using new threat intelligence that you've gathered today: to be able to go back in time and look through it with a whole new perspective, looking for something you didn't know about 30 days ago. That's what we mean by smart detection. >> So really, what organizations need is tools that deliver a far more comprehensive approach. I want to get into a little bit more on integration. You talked about that in previous segments, but can you give us an example of what you mean by smart integration? What does that deliver for organizations, specifically? >> Yeah, it's really three things. One, we'll say, is the integration from the NDR to the SIEM, to the security operations center, and so on. When an NDR device detects something, it sends an alert to the SIEM using open standards, like syslog, et cetera. The other direction is from the SIEM or the SOAR. That SIEM or SOAR is receiving information from many different devices that are detecting threats, and the analyst now wants the ability to, one, determine whether that's a true threat or a false positive, and, if it is a true threat, get help with the remediation effort. So an example could be that an alert comes into the SIEM/SOAR, and part of the playbook is to go out and grab the metadata and packets associated with that alert, some time before and some time after the alert came in. >> So that could be part of the automation coming from the SIEM/SOAR.
And then last but not least, as we alluded to before, is having the ability to export that robust set of layer 2 through layer 7 metadata, and/or packets, to a third-party data lake, if you will, where more sophisticated analysts, data scientists, and so on can do their own correlation, enrich it with their own data, combine it with other data sets, and do their own analysis. So it's those three layers of integration, if you will, that really should be in an advanced NDR system. >> All right, Tom, take this home for me. How does Netscout deliver advanced NDR for organizations? >> We do that via a solution we call Omnis Security. This is Netscout's portfolio of multiple different cybersecurity products. It all starts with the packets. Our core competency for the last 30 years has been pulling packets off the wire at scale, using patented technologies, for example our Adaptive Service Intelligence technology, to convert those raw packets into a robust set of layer 2 through layer 7 metadata. We refer to that data as Smart Data. With that data in hand, you now have the ability to conduct multiple types of threat detection, using statistical behavioral analysis, curated threat intelligence, or even an open-source rules engine. You have the ability to detect threats both in real time and historically. But the solution goes beyond just detecting or investigating threats; it has the ability to influence the blocking of threats, too. So we have integrations with different firewall vendors, like Palo Alto, for example, where they can take the results of our investigation and then create blocking policies in the firewall. >> In addition to that, we have our own Omnis AED product, our Arbor Edge Defense. That's a product that sits in front of the firewall and protects the firewall from different types of attacks.
We have an integration there where you can also influence the policies being blocked in the AED. And last but not least, our solution supports the three methods of integration we mentioned before with an existing security system: sending alerts to it, allowing for automation and investigation from it, and having the ability to export our data for custom analysis. All of this makes that security stack we've been talking about better. All those different tools, that security operations triad, or visibility triad, we talked about: our data makes that entire triad better, makes the overall security staff better, and makes overall security just better, too. So that's our solution, Omnis Security. >> Got it, Omnis Security. And you've done a great job over the last three segments talking about the differences between the different technologies and data sources, and why their complementary and collaborative nature is so important for comprehensive cybersecurity. So Tom, thank you so much for sharing such great and thoughtful information and insight for the audience. >> Oh, you're welcome. Thank you. >> My pleasure. We want to thank you for watching the program today. Remember that all these videos are available at theCUBE.net, and you can check out today's news on SiliconANGLE.com and, of course, Netscout.com. We also want to thank Netscout for making this program possible and for sponsoring theCUBE. I'm Lisa Martin for Tom Bienkowski. Thanks for watching, and bye for now.
Tracie Zenti & Thomas Anderson | Red Hat Summit 2022
(gentle music) >> We're back at the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Tracie Zenti is here. She's the Director of Global Partner Management at Microsoft, and Tom Anderson is the Vice President of Ansible at Red Hat. Guys, welcome to theCUBE. >> Hi, thank you. >> Yep. >> Ansible on Azure, we're going to talk about that. Why do I need Ansible? Why do I need that kind of automation in Azure? What's the problem you're solving there? >> Yeah, so automation is about connecting customers' infrastructure to their end resources, whether that infrastructure is in the cloud, in the data center, or at the edge. Ansible is the common automation platform that allows customers to reuse automation across all of those platforms. >> And so, Tracie, I mean, Microsoft does everything. Why do you need Red Hat to do Ansible? >> We want that automation, right? We want our customers to have that ease of use so they can be innovative and bring their workloads to Azure. So that's exactly why we want Ansible. >> Yeah, so kind of a loaded question here, right, as we were sort of talking offline. The nature of partnerships is changing. It's about co-creating, adding value together, getting those effects of momentum. But maybe talk about how the relationship started and how it's evolving; I'd love to have your perspective on the evolving nature of ecosystems. >> Yeah, I think the partnership with Red Hat has been strong for a number of years. My predecessor was in the role for five years, and there was a person in there for a couple of years before that. So for seven or eight years, we've been working together and co-engineering. Red Hat Enterprise Linux is co-engineered. Ansible was co-engineered. We work together, right? We want it to run perfectly on our platform. We want it to be a good customer experience. I think the evolution that we're seeing is in how customers buy, right? They want us to be one company, right?
They want it to be easy. They want to be able to buy their software where they run it, on the cloud. They don't want to have to call Red Hat to buy, and then call us to buy, and then deploy. And we can do all that now, with Ansible being the first one we're doing this with together, and we'll grow that on our marketplace so that it's easy to buy, easy to deploy, and easy to keep track of. >> This is not just Ansible in the marketplace. This is actually a fully managed service. >> That's right. >> What is the value you've added on top of that? >> So it runs in the customer's account, but it acts kind of like SaaS. Red Hat gets to manage it, right? And it's in the customer's own tenant, so with a service principal, Red Hat is able to do that management. Tom, do you want to add anything to that? >> Yeah, the customers don't have to worry about managing Ansible. They just worry about using Ansible to automate their infrastructure. So it's kind of a win-win situation for us and for our customers: we manage the infrastructure for them, and the customers' resources themselves get to just focus on automating their business. >> Now, if they want to do cross-cloud automation, or automation into their hybrid cloud, will you support that as well? >> 100%. >> Absolutely. >> Yeah. >> We're totally fine with that, right? I mean, it's unrealistic to think customers run everything in one place. That isn't enterprise. That's not reality. So yeah, I'm fine with that. >> Well, that's not every cloud provider. >> No (laughing) that's true. >> You guys over here; at Amazon, you can't even say multicloud or you'll get thrown off the stage. >> Of course we'd love it to all run on Azure, but we want our customers to be happy and have choice, yeah. >> You guys have, I mean, you've been around a long time. So you had a huge on-prem estate, brought that to the cloud, and Azure Stack, I mean, it's been around forever and it's evolved.
So you've always believed in, whatever you call it, hybrid IT, and of course, you guys, that's core to your mission. >> Yeah, exactly. >> So how do you each see hybrid? Where are the points of agreement? It sounds like there's more overlap than gaps, but maybe you could talk about your perspective. >> Yeah, I don't think there are any points of disagreement. I think for us, it's meeting our customers where their center of gravity is, where they see their center of management gravity. If it's on Azure, great. If it's in their data center, that's okay, too. So they can manage to or from. If Azure is their center of gravity, they can use Ansible automation to manage all the things on Azure, things on other cloud providers, things in their data center, all the way out to their edge. So they have the choice of what makes the most sense to them. >> And Azure Arc, obviously, that's how Azure Stack is evolving, right? >> Yeah, and we have Azure Arc integration with Ansible. >> Yeah. >> So yeah, absolutely. And we also have RHEL on our marketplace, right? So you can buy the basement and you can buy the roof and everything in between. We're growing the estate on the marketplace as well, across all the other products that we have in common. So absolutely. >> How much of an opportunity, if we go inside, give us a little peek inside Microsoft: how much of an opportunity does Microsoft see in multicloud, specifically? I'm not crazy about the term multicloud, 'cause to me, multicloud means it runs on Azure, runs on AWS, runs on Google, maybe runs somewhere else. But multicloud meaning that common experience, your version of hybrid, if you will. How serious is Microsoft about that as a business opportunity? A lot of people would say, well, Microsoft really doesn't want that; they want everything in their cloud. But I'd love to hear from you on that. >> Well, we have Azure Red Hat OpenShift, which is a Microsoft-branded version of OpenShift.
We have Ansible now on our marketplace. We also, of course, have AKS. So, I mean, the container strategy runs anywhere. But we also obviously have services that enhance all these things. Our marketplace is a third-party marketplace. It is designed to let customers buy and run easily on Azure, and we want to make that experience good. So I can't speak to our strategy on multicloud, but what I can speak to is that when businesses need to do innovation, we want it to be easy to do that, right? We want it to be easy to find, buy, deploy, and manage, and that's what we're trying to accomplish. >> Fair to say you're not trying to stop it. >> No, yeah, yeah. >> Whether or not it evolves into something that you heavily lean into, we'll see. >> When we were talking before the cameras turned on, you said that you think marketplaces are the future. Why do you say that? And how will marketplaces be differentiated from each other in the future? >> Well, first of all, as you said off camera, they're now: you can buy now, right? There's nothing that stops you. But to me, it's an extension of the consumerization of IT. I've been in IT and manageability for about 23 years, and full automation is what we in IT used to always talk about, that single pane of glass. How do you keep track of everything? How do you make it easy? How do you support it? And IT is always eking out that last little bit of funding to do innovation, right? So what we can do with the consumerization of IT is make it easier to innovate, make it cheaper to innovate, right? So I think marketplaces do that. They've got gold images you can deploy, and you're also able to deploy custom images. So I think the future, particularly with ours, is that we support, I don't remember the exact number, but over a hundred countries of tax calculation. We've got like 17 currencies.
So as we progress, and customers can run from anywhere in the world and buy from anywhere in the world, and we make it simple to do those things that used to take maybe two months, to spin up services for innovation, and Ansible helps with that, that's going to help enterprises innovate faster. And I think that's what marketplaces are really going to bring to the forefront: that innovation. >> Tom, why did Ansible, I'm going to say, win? I mean, you're never done. But it was unclear a few years ago which automation platform was going to win in the marketplace, and clearly Ansible has taken a leading position. Why? What were the factors that led to that? >> Honestly, it was the strength of the community, right? And Red Hat leaning into that community to support it. When you look out at the upstream community for Ansible, the number of active participants contributing to the community just increases its value to everybody. So the number of integrations, the number of things that you can automate with Ansible, is in the thousands and thousands, and that's not because a group of Red Hat engineers wrote it all. It's because our community partners, like Microsoft, wrote their integrations for Ansible. F5 does theirs. Customers take those and expand on them. So the number of use cases that we can address through the community and through our partners is immense. >> But that doesn't just happen. I mean, what have you done to cultivate that community? >> Well, it's in Red Hat's DNA, right? To be the catalyst in a community, to bring partners and users together, to share their knowledge and their expertise and their skills, and to make the code open. So anybody can go grab Ansible from upstream and start doing stuff with it, if they want. If they want maturity, management, support, and all the other things that Red Hat provides, then they come to us for a subscription.
So it's really been about catalyzing and supporting that community, and Red Hat is a good steward of these upstream communities. >> Is Azure putting Ansible to use within your own platform, as opposed to it being a managed service? Are you adopting Ansible for automation of the Azure platform? >> I'll let you answer that. >> So two years ago, Microsoft presented at AnsibleFest, our fall conference. Budd Warrack, and I'm butchering his last name, came on and told how the networking team at Microsoft supports about 35,000 access points across hundreds of buildings, all the Microsoft campuses, using Ansible to do that. Fantastic story if you want to go on YouTube and look up that use case. So Microsoft is an avid user of the Ansible technology in their environment. >> Azure is kind of this incredible strategic platform for Microsoft. I wonder if you could talk about Azure as a honeypot for partners. I mean, the momentum is unbelievable. I pay attention to their earnings calls every quarter for Azure growth; I don't know what the exact number is, 'cause they won't give it to me, but they give me the growth rates, and it's actually accelerating. >> No lie. (Tracie laughing) >> I've got my number. It's in the tens of billions. I mean, I'm north of 35 billion, but growing in the high 30 percents. I mean, it's remarkable. So talk about the importance of that to the ecosystem, as a honeypot. >> Satya said it right: many times, partners are essential to our strategy. But if you think about it, software solves problems. We have software that solves problems; they have software that solves problems, right? So when IT and customers are thinking of solving a problem, they're thinking software, right? And we want that software to run on Azure. So partners have to be essential to our strategy. Absolutely. And again, we're one team to the customer. They want to see us working together seamlessly.
They don't want it to be hardware, Azure, plus software. So that's absolutely critical to our success. >> And if I could add, for us, the partners are super important. Some of our launch partners are F5 and CyberArk, who have certified Ansible content for Ansible on Azure. We have service provider partners like Accenture and Kyndryl that are launching with us and providing our joint customers with help to get up to speed. So it really is a partner play. >> Absolutely. >> Where are you guys taking this? Where do you want to see it go? What are some of the things that observers should pay attention to as markers of success and evolution? >> Well, certainly for us, it's obviously customer adoption, but it's also providing customers with patterns: out-of-the-box patterns that make it easy for them to get up and running and solve the use cases and problems they run into most frequently. Problems ain't the right word; challenges, or opportunities on Azure, to be able to automate things. So we're really leaning into the different use cases, whether it's edge, whether it's cloud, whether it's cloud to edge, all of those things. We want to provide users with out-of-the-box Ansible content that allows 'em to just get up and automating super fast, and doing that on Azure makes it way easier for us, because we don't have to focus on the install and the setting up and configuring of it. It's all just part of the experience. >> And Tracie, for Microsoft, it's world domination with a smile. (all laughing) >> Of course. No, of course not. No, I think it's to continue to grow the co-engineering we do across all of the Red Hat products. I can't even tell you the number of things we work on together, but it's to look forward strategically at what opportunities we have across our products and theirs to integrate, like Arc and Ansible, and then making it all easy to buy, making it available so that customers have choice and they can buy how they want to, and to simplify.
So we're just going to continue to do that. We're at that infancy right now, and as we grow, it'll just get easier and easier with more and more products. >> Well, bringing the edge into the equation is going to be really interesting. Microsoft with its gaming vector is amazing, with recent awesome acquisitions. All the gamers are excited about that, and that's a huge edge play. >> You'll have to bring my son on for that interview. >> Yeah. >> My son will interview. >> He knows more than all of us, I'm sure. What about Ansible? What's ahead for Ansible? >> Edge, so part of the Red Hat play at the edge. We're getting a lot of customer pull for industrial edge use cases in the energy sector. We've had a joint customer with Azure that has a combined edge platform. Certainly, the cloud stuff that we're announcing today is a huge growth area. And then just general enterprise automation. There's lots of room to run there for Ansible. >> And lots of industries, right? >> Yeah. >> Telco, manufacturing. >> Retail. >> Retail. >> Yeah. >> Yeah. There are so many places to go that need the help. >> The market's just, how are you going to count it anymore? It's just enormous. >> Yeah. >> It's the entire GDP of the world. But guys, thanks for coming to theCUBE. >> Yeah. >> Great story. Congratulations on the partnership and the announcements, and we look forward to speaking with you in the future. >> Yeah, thanks for having us. >> Thanks for having us. >> You're very welcome. And keep it right there. This is Dave Vellante for Paul Gillin. This is theCUBE's coverage of Red Hat Summit 2022. We'll be right back at Seaport in Boston. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tracie | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Tracie Zenti | PERSON | 0.99+ |
Tom Anderson | PERSON | 0.99+ |
Paul Satia | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Tom | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
17 currencies | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
CyberArk | ORGANIZATION | 0.99+ |
Kindra | ORGANIZATION | 0.99+ |
eight years | QUANTITY | 0.99+ |
Seaport | LOCATION | 0.99+ |
Thomas Anderson | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two months | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Red Hat Summit 2022 | EVENT | 0.99+ |
F5 | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
YouTube | ORGANIZATION | 0.98+ |
one team | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
about 23 years | QUANTITY | 0.98+ |
Red H | ORGANIZATION | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
Azure Arc | TITLE | 0.98+ |
tens of billions | QUANTITY | 0.98+ |
two years ago | DATE | 0.97+ |
Azure | TITLE | 0.97+ |
one company | QUANTITY | 0.97+ |
Azure Arc | TITLE | 0.97+ |
Edge | ORGANIZATION | 0.97+ |
OpenShift | TITLE | 0.97+ |
30% | QUANTITY | 0.97+ |
about 35,000 access points | QUANTITY | 0.97+ |
first one | QUANTITY | 0.96+ |
Red Hat | TITLE | 0.96+ |
Linux | TITLE | 0.95+ |
Azure Stack | TITLE | 0.95+ |
each | QUANTITY | 0.94+ |
Budd Warrack | PERSON | 0.94+ |
Ed Walsh, Courtney Pallotta & Thomas Hazel, ChaosSearch | AWS 2021 CUBE Testimonial
(upbeat music) >> My name's Courtney Pallotta, I'm the Vice President of Marketing at ChaosSearch. We've partnered with theCUBE team to take every one of those assets, tailor them to meet whatever our needs were, and get them out and shared far and wide. And theCUBE team has been tremendously helpful in partnering with us to make that a success. >> theCUBE has been fantastic with us. They are thought leaders in this space. And we have a unique product, a unique vision, and they have an insight into where the market's going. They've had conversations with us about data mesh, and how we fit into that new realm of data access. And with our unique vision, with our unique platform, and with theCUBE, we've uniquely come out into the market. >> What's my overall experience with theCUBE? Would I do it again, would I recommend it to others? I'd say, I recommend theCUBE to everyone. In fact, I was at IBM, and some of the IBM executives didn't want to go on theCUBE because it's a live interview. Live interviews can be traumatic. But the fact of the matter is, one, yeah, they're tough questions, but they're in line, they're what clients are looking for. So yes, you have to be on the ball. I mean, you're always on your toes, but you get your message out so crisply. So I recommend it to everyone. I've gotten a lot of other executives to participate, and they've all had a great experience. You have to be ready. I mean, you can't go on theCUBE and not be ready, but now you can get your message out. And it has such good distribution. I can't think of a better platform. So I recommend it to everyone. If I had to say ChaosSearch in one word, I'd say digital transformation, with a hyphen.
SUMMARY :
tailor them to meet And with our unique vision, I said, I recommend theCUBE to everyone.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Courtney Pallota | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
ChaosSearch | ORGANIZATION | 0.99+ |
Thomas Hazel | PERSON | 0.99+ |
theCUBE | ORGANIZATION | 0.99+ |
one word | QUANTITY | 0.97+ |
Courtney Pallotta | PERSON | 0.9+ |
theCUBE | TITLE | 0.71+ |
one | QUANTITY | 0.56+ |
AWS 2021 | ORGANIZATION | 0.55+ |
Thomas Hazel, ChaosSearch | JSON Flex on ChaosSearch
[Thomas Hazel] - Hello, this is Thomas Hazel, founder and CTO here at ChaosSearch. And tonight I'm going to demonstrate a new feature we are offering this quarter called JSON Flex. If you're familiar with JSON datasets, they're wonderful ways to represent information. You know, they're multidimensional, they have the ability to set up arrays as attributes, but those arrays are really problematic when you need to expand them or flatten them to do any type of Elasticsearch or relational access, particularly when you're trying to do aggregations. And so the common process is to exclude those arrays or pick and choose that information. But with this new JSON Flex capability, our system uniquely can index that data horizontally in a very small and efficient representation. And then with our Chaos Refinery, expand each attribute as you wish vertically, so you can do all the basic and natural constructs you would have done if you had, you know, a more straightforward, two dimensional, three dimensional type representation. So without further ado, I'm going to get into this presentation of JSON Flex. Now, in this case, I've already set up the service to point to a particular S3 account that has CloudTrail data, one that is pretty problematic when it comes down to flattening data. And again, if you know CloudTrail, one row can become 10,000 as data gets flattened. So without further ado, let me jump right in. When you first log into the ChaosSearch service, you'll see a tab called 'Storage'. This is the S3 account, and I have a variety of buckets. I have the refinery, it's a data refinery. This is where we create views or lenses into these index streams so that you can do analysis that's published via the Elasticsearch API as an index pattern, or as a relational table in SQL. Now, a particular bucket I have here holds a whole bunch of demonstration datasets that we use to show off our capabilities and our offering. In this bucket, I have CloudTrail data, and I'm going to create what we call an 'object group'.
An object group is an entry point, a filter of which files I want to index. Now, it can be static data or a live stream. These object groups have the ability to say what type of data you want to index on. Now through our wizard, you can type in, you know, a prefix. In this case, I want to type in CloudTrail, and you see here, I have a whole bunch of CloudTrail. I'm going to choose one file to make it quick and easy. But this particular CloudTrail data will expand, and we can show the capability of this horizontal to vertical expansion. So I walk through the wizard, and as you can see here, we discovered JSON, it's a gzip file. Leave flattening unlimited, 'cause we want to be able to expand infinitely. But in this case, instead of doing the default virtual, I'm going to horizontally represent this information. And this uniquely compresses the data in a way that can be stored efficiently on disk but then expanded in our data refinery upon query or search requests. So I'm going to create this object group. Now I'm going to call this, you know, 'JSON Flex test', and I could set up live indexing, SQS pub/sub, but I'm going to skip that and skip retention and just create it. Once this object group is created, you can kind of think of it as a virtual bucket, 'cause it does filter the data, as you can see here. When I look at the view, I just see CloudTrail, but within the console, I can say start indexing. Now this is static data, but it could be a live stream, and we set up workers to index this data. Whether it's one file, a million files, one terabyte, or one petabyte, we index the data. We discover all the schema, and as you see here, we discovered 104 columns. Now what's interesting is that we represent this expansion in a horizontal way. You know, if you know CloudTrail: records zero, record one, record two. This can expand pretty dramatically if you fully flatten it, but in this case we're horizontally representing it as the index. So when I go into the data refinery, I can create a view.
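The horizontal representation just described (records zero, record one, record two carried as positional columns of a single row, rather than exploded rows) can be sketched in a few lines. This is only an illustration of the idea, not ChaosSearch's actual index format:

```python
def horizontal(value, prefix=""):
    """Flatten nested dicts/lists into one flat row whose column names
    carry array positions (Records.0.eventName, Records.1.eventName, ...)."""
    out = {}
    if isinstance(value, dict):
        for k, v in value.items():
            out.update(horizontal(v, f"{prefix}{k}."))
    elif isinstance(value, list):
        for i, v in enumerate(value):
            out.update(horizontal(v, f"{prefix}{i}."))
    else:
        out[prefix[:-1]] = value  # drop the trailing dot
    return out

# A toy CloudTrail-shaped document: one file, many records.
trail = {"Records": [{"eventName": "PutObject"},
                     {"eventName": "GetObject"}]}
row = horizontal(trail)
# One row, positional columns, no row explosion:
# {"Records.0.eventName": "PutObject", "Records.1.eventName": "GetObject"}
```

The key property is that the number of rows stays one no matter how long the arrays get; only the column count grows.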
Now, if you know the data refinery of ChaosSearch, you can bring multiple data streams together. You can do transformations virtually, you can do correlations, but in this case, I'm just going to take this one particular index stream we call 'JSON Flex' and walk through a wizard (we try to simplify everything) and select a particular attribute to expand. Now, again, we represent this in one row, but if you had arrays and did all the permutations, it could go from one to 100 to 10,000. We had one JSON audit that went from one row to 1 million rows. Now, clearly you don't want to create all those permutations when you're trying to put it into a database. With our unique index technology, you can do it virtually and store it horizontally. So let me just select 'Virtual' and walk through the wizard. Now, as I mentioned, we do all these different transformations, change schema; we're going to skip all that, select the order time and records event, and say 'create this'. I'm going to call it, you know, 'JSON Flex View'. I can set up caching and do a variety of things, but I'm going to skip that. And once I create this, it's now available in the Elasticsearch API as an index pattern, as well as in SQL via our Presto API dialect. And you can use Looker, Tableau, et cetera. But in this case, we go to this 'Analytics' tab, where we've built in the Kibana, OpenSearch tooling that is Apache-licensed. And I click on discovery here and I'm going to select that particular view. Again, it looks like an index pattern, and I'm going to choose, let's see here, let's choose 15 years past and present to make sure I find where it actually was in time. And what you'll see here is, you know, sure, it's just one particular dataset with a variety of columns, but you see here, unlike that record zero, records one, now it's expanded.
And so it has been expanded like the vertical flattening you would traditionally do if you wanted to do anything in an elastic or relational construct, you know, something that fits into a table format. Now the advantage of JSON Flex: you don't have that stored as a blob where you'd use proprietary JSON APIs. You can use your native Elasticsearch API or your native SQL tooling to get access to it naturally, without the expense of that explosion, and without the complexity of ETLing it and picking and choosing before you actually put it into the database. That completes the demonstration of ChaosSearch's new JSON Flex capability. If you're interested, come to ChaosSearch.io and set up a free trial. Thank you.
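The row explosion that the demo keeps referring to is easy to reproduce: a traditional full flatten takes the cross-product of every array in the document. A minimal sketch, illustrative only and not any vendor's actual flattening code:

```python
from itertools import product

def full_flatten(record):
    """Explode one JSON object into flat rows: every combination of
    elements drawn from its top-level arrays becomes a separate row."""
    scalars = {k: v for k, v in record.items() if not isinstance(v, list)}
    arrays = {k: v for k, v in record.items() if isinstance(v, list)}
    if not arrays:
        return [scalars]
    keys = list(arrays)
    rows = []
    for combo in product(*(arrays[k] for k in keys)):
        # Each row repeats the scalars and picks one element per array.
        rows.append(dict(scalars, **dict(zip(keys, combo))))
    return rows

# A hypothetical CloudTrail-flavored record with two modest arrays...
event = {
    "eventName": "PutObject",
    "resources": ["bucket-a", "bucket-b", "bucket-c"],
    "tags": ["prod", "pii"],
}
rows = full_flatten(event)  # 3 x 2 = 6 rows from a single record
# Two 100-element arrays would already yield 10,000 rows per record.
```

This multiplicative growth is why "one row can become 10,000 as data gets flattened", and why people historically excluded arrays before loading.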
SUMMARY :
and as you see here, we
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Thomas Hazel | PERSON | 0.99+ |
10,000 | QUANTITY | 0.99+ |
one terabyte | QUANTITY | 0.99+ |
one file | QUANTITY | 0.99+ |
104 columns | QUANTITY | 0.99+ |
one petabyte | QUANTITY | 0.99+ |
1 million rows | QUANTITY | 0.99+ |
JSON Flex | TITLE | 0.99+ |
ChaosSearch | ORGANIZATION | 0.99+ |
one row | QUANTITY | 0.99+ |
a million files | QUANTITY | 0.99+ |
tonight | DATE | 0.98+ |
Tableau | TITLE | 0.98+ |
each attribute | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
SQL | TITLE | 0.98+ |
S3 | TITLE | 0.98+ |
100 | QUANTITY | 0.98+ |
JSON | TITLE | 0.98+ |
15 years | QUANTITY | 0.98+ |
Presto | TITLE | 0.97+ |
one | QUANTITY | 0.96+ |
Looker | TITLE | 0.95+ |
two | QUANTITY | 0.93+ |
JSON Flex View | TITLE | 0.92+ |
JSON API | TITLE | 0.91+ |
Flex | TITLE | 0.87+ |
zero | QUANTITY | 0.87+ |
SQS | TITLE | 0.86+ |
ChaosSearchJSON | ORGANIZATION | 0.8+ |
this quarter | DATE | 0.8+ |
CloudTrail | COMMERCIAL_ITEM | 0.79+ |
Apache Tonetto | ORGANIZATION | 0.72+ |
JSON | ORGANIZATION | 0.69+ |
Chaos Flex | TITLE | 0.69+ |
CloudTrail | TITLE | 0.6+ |
ChaosSearch | TITLE | 0.58+ |
ChaosSearch.io | TITLE | 0.57+ |
data set | QUANTITY | 0.56+ |
Kibana | ORGANIZATION | 0.45+ |
Ed Walsh and Thomas Hazel, ChaosSearch
>> Welcome to theCUBE, I am Dave Vellante. And today we're going to explore the ebb and flow of data as it travels into the cloud and the data lake. The concept of data lakes was alluring when it was first coined last decade by CTO James Dixon. Rather than be limited to highly structured and curated data that lives in a relational database in the form of an expensive and rigid data warehouse or a data mart, a data lake is formed by flowing data from a variety of sources into a scalable repository, like, say, an S3 bucket, that anyone can access, dive into, extract water, A.K.A. data, from, and analyze data that's much more fine-grained and less expensive to store at scale. The problem became that organizations started to dump everything into their data lakes with no schema on write, no metadata, no context, just shoving it into the data lake and figuring out what's valuable at some point down the road. Kind of reminds you of your attic, right? Except this is an attic in the cloud. So it's too big to clean out over a weekend. Well look, it's 2021 and we should be solving this problem by now. A lot of folks are working on this, but often the solutions add other complexities for technology pros. So to understand this better, we're going to enlist the help of ChaosSearch CEO Ed Walsh, and Thomas Hazel, the CTO and Founder of ChaosSearch. We're also going to speak with Kevin Miller, who's the Vice President and General Manager of S3 at Amazon Web Services. And of course they manage the largest and deepest data lakes on the planet. And we'll hear from a customer to get their perspective on this problem and how to go about solving it, but let's get started. Ed, Thomas, great to see you. Thanks for coming on theCUBE. >> Likewise. >> Face to face, it's really good to be here. >> It is nice face to face. >> It's great. >> So, Ed, let me start with you. We've been talking about data lakes in the cloud forever.
Why is it still so difficult to extract value from those data lakes? >> Good question. I mean, data analytics at scale has always been a challenge, right? So, we're making some incremental changes. As you mentioned, we need to see some step function changes. In fact, it's the reason ChaosSearch was really founded. But if you look at it, it's the same challenge around a data warehouse or a data lake. Really it's not just flowing the data in, it's how to get insights out. So it kind of falls into a couple of areas, but the business side will always complain, and it's kind of uniform across everything in data lakes, everything in data warehousing. They'll say, "Hey, listen, I typically have to deal with a centralized team to do that data prep, because it's data scientists and DBAs." Most of the time they're a centralized group. Sometimes they're in business units, but most of the time, because they're scarce resources, they're together. And then it takes a lot of time. It's arduous, it's complicated, it's a rigid process to deal with the team. It's hard to add new data, it's very hard to share data, and there's no way to do governance without locking it down. And of course they'd like it to be more self-serve. So you hear that from the business side constantly. Now underneath, there are some real technology issues: we haven't really changed the way we're doing data prep since the two thousands, right? So if you look at it, it falls into two big areas. One is how to do data prep. A request comes in from a business unit: I want to do X, Y, Z with this data, I want to use this type of tool set to do the following. Someone has to be smart about how to put that data in the right schema, as you mentioned. You have to put it in the right format that the tool sets can analyze before you do anything. And then the second thing, I'll come back to that 'cause that's the biggest challenge.
But the second challenge is how these different data lakes and data warehouses are now persisting data, the complexity of managing that data, and also the cost of computing it. And I'll go through that. But basically the biggest thing is actually getting it from raw data. The rigidness and complexity the business sides face is that literally someone has to do this ETL process: extract, transform, load. A request comes in: I need so much data put together in this type of way. They're literally physically duplicating data and putting it together in a schema. They're stitching together almost a data puddle for all these different requests. And anytime they have to do that, someone has to do it, and those very skilled resources are scarce in the enterprise, right? It's the DBAs and data scientists. And then when they want new data, you give them a data set, and they're always saying, what can I add to this data now that I've seen the reports? I want to add this data, more fresh. And the same process has to happen. This takes about 60% to 80% of the data scientists' and DBAs' time to do this work. It's kind of well-documented. And this is what actually stops the process. That's what is rigid. They have to be rigid because there's a process around that. That's the biggest challenge of doing this. And it takes an enterprise weeks or months. I always say three weeks or three months, and no one challenges me beyond that. It also takes the same skill set of people that you want to drive digital transformation, data warehousing initiatives, modernization, being data driven, all these data scientists and DBAs they don't have enough of. So this is not only hurting you getting insights out of your data lakes and warehouses; this resource constraint is hurting you, period. >> So that smallest atomic unit is that team, that super specialized team, right? >> Right. >> Yeah. Okay.
So you guys talk about activating the data lake. >> Yep. >> For analytics. What's unique about that? What problems are you all solving, you know, when you guys created this magic sauce? >> Basically, there are a lot of things. I highlighted the biggest one, which is how to do the data prep, but there's also how you're persisting and using the data. In the end, there are a lot of challenges in how to get analytics at scale, and this is really where Thomas and I founded the team to go after it. But I'll try to say it simply. I'll compare and contrast what we do with what you'd do with, maybe, an Elasticsearch cluster or a BI cluster. What we do is we simply put your data in S3: don't move it, don't transform it. In fact, we're against data movement. We literally point at that data set, index the data, and make it available in a data representation from which you can give virtual views to end-users. And those virtual views are available immediately over petabytes of data, and presented to the end-user as an open API. So if you're an Elasticsearch user, you can use all your Elasticsearch tools on this view. If you're a SQL user: Tableau, Looker, all the different tools. Same thing with machine learning next year. So we make it very simple: simply put it there, it's already there anyway, and point us at it. We do the hard work of indexing and making it available, and then we publish the open API, so your users can use exactly what they use today. So that's, dramatically, I'll give you a before and after. Let's say you're doing Elasticsearch, you're doing log analytics at scale. They're landing their data in S3, and then they're ETLing it: physically duplicating and moving data, and typically deleting a lot of data to get it into a format that Elasticsearch can use. They're persisting it in a data layer called Lucene.
It's physically sitting in memory, CPUs, SSDs, and it's not one of them, it's a bunch of those. In the cloud, you have to set them up because they're persistent EC2 instances. They're standing up seven by 24, not a very cost-effective way to do cloud computing. What we do in comparison is literally point at the same S3. In fact, you can run in complete parallel while the data is being ETLed out; we're just one more use case, read-only, allowing you to take that data and make these virtual views. So we run in complete parallel, but we just give a virtual view to the end users. We don't need this persistence layer, this extra cost layer, this extra time, cost, and complexity. So when you look at what happens in Elasticsearch, they have a constraint, a trade-off, of how much you can keep and how much you can afford to keep. And it also becomes unstable at times, because you have to build out a schema. It's on a server; the more the schema scales out, guess what, you have to add more servers. Very expensive, and they're up seven by 24. And also they become brittle. You lose one node, and the whole thing has to be put back together. We have none of that cost and complexity. You keep whatever you want in S3 as the single persistence, very cost-effective. And on cost, we save 50 to 80%. Why? We don't go with the old paradigm of setting it up on servers, spinning them up for persistence, and keeping them up 7 by 24. We literally ask, per query, what do you need? We bring up the right compute resources, and then we release those resources after the query is done. So we can do queries they can't imagine at scale, and we're able to do the exact same query at 50 to 80% savings. And they don't have to do any of that ETL of moving data or managing that layer of persistence, which is not only expensive, it becomes brittle. And then, I'll be quick.
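Ed's 50-to-80% claim rests on simple arithmetic: an always-on cluster bills for every hour of the month, while per-query compute bills only for the minutes actually used. A toy model, with purely hypothetical prices and workload numbers:

```python
def always_on_cost(nodes, price_per_node_hour, hours=24 * 30):
    """Monthly cost of a cluster that stays up seven by 24."""
    return nodes * price_per_node_hour * hours

def per_query_cost(queries, minutes_per_query, nodes_per_query,
                   price_per_node_hour):
    """Monthly cost when compute spins up per query and is released after."""
    return queries * (minutes_per_query / 60) * nodes_per_query * price_per_node_hour

# Hypothetical workload: a 6-node cluster vs 2,000 queries/month at 2 min each.
cluster = always_on_cost(nodes=6, price_per_node_hour=1.0)              # 4320.0
ephemeral = per_query_cost(queries=2000, minutes_per_query=2,
                           nodes_per_query=6, price_per_node_hour=1.0)  # 400.0
savings = 1 - ephemeral / cluster  # roughly 0.91 under these toy numbers
```

The exact percentage obviously depends on query volume and duration; the structural point is that idle hours dominate an always-on bill.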
Once you go to BI, it's the same challenge, but with the BI systems the requests are constantly coming from a business unit down to the centralized data team: give me this flavor of data, I want to use this analytic tool on that data set. So they have to do all this pipelining. They're constantly saying, okay, I'll give you this data and this data; I'm duplicating that data, moving it, and stitching it together. And the minute you want more data, they do the same process all over. We completely eliminate that. >> And those requests queue up. Thomas, help me here: you don't have to move the data. That's kind of the exciting piece, isn't it? >> Absolutely. You know, the data lake philosophy has always been solid, right? The problem is we had that Hadoop hangover, right? Where, let's say, we were using that platform in a few too many varieties of ways. And so I always believed in the data lake philosophy. When James came and coined that, I'm like, that's it. However, HDFS wasn't really a service. Cloud object storage is a service; the elasticity, the security, the durability, all those benefits are really why we founded on cloud object storage as a first move. >> So you were talking, Thomas, about being able to shut off essentially the compute so you don't have to keep paying for it. But there are other vendors out there doing something similar, separating compute from storage; they're famous for that. And you have Databricks out there doing their lakehouse thing. Do you compete with those? How do you participate, and how do you differentiate? >> Well, you know, you've heard this term data lake, warehouse, now lakehouse. And what everybody wants is simple in, easy in. However, the problem with data lakes was the complexity of out, of driving value. And I said, what if, what if you have the easy in and the value out?
So if you look at, say, Snowflake as a warehousing solution, you have to do all that prep and data movement to get into that system, and it's rigid, static. Now Databricks, that lakehouse, has the exact same thing. Sure, they have a data lake philosophy, but their data ingestion is not data lake philosophy. So I said, what if we had that simple in, with a unique architecture and index technology, making it virtually accessible, publishable dynamically at petabyte scale? And so our service connects to the customer's cloud storage. You stream the data in, set up what we call a live indexing stream, and then go to our data refinery and publish views that can be consumed via the Elasticsearch API, using Kibana or Grafana, or as SQL tables via Looker or, say, Tableau. And so we're getting the benefits of both sides: schema-on-read flexibility with schema-on-write performance. If you can do that, that's the true promise of a data lake. You know, again, nothing against Hadoop, but schema-on-read with all that complexity of software made for a bit of a data swamp. >> Well, they did start it, okay, so we've got to give them props. But everybody I talk to has got this big bunch of Spark clusters, now saying, all right, this doesn't scale, we're stuck. And so, you know, I'm a big fan of Zhamak Dehghani and her concept of the data mesh, and it's early days. But if you fast forward to the end of the decade, what do you see as being the sort of critical components of this notion people call data mesh, to get to that analytics stack? You're a visionary, Thomas; how do you see this thing playing out over the next decade? >> I love her thought leadership, to be honest. Our core principles were her core principles, going back 5, 6, 7 years. And so this idea of decentralized data as a product, self-serve, and federated computational governance, I mean, all that was our core principle. The trick is, how do you enable that mesh philosophy?
I can say we're mesh-ready, meaning we can participate in a way that very few products can. If there are gates getting data into your system, the ETL, the schema management, that's the problem. My argument with the data mesh is that producers and consumers should have the same rights. I want the consumers, the people, to choose how they want to consume that data, as well as the producer publishing it. I can say our data refinery is that answer. You know, shoot, I'd love to open up a standard, right? Where we can really talk about producers and consumers and the rights each of them have. But I think she's right on the philosophy. I think as products mature in this cloud, in these data lake capabilities, the trick is those gates. If you have to structure up front, if you set those pipelines, the chance of getting your data into a mesh is the weeks and months that Ed was mentioning. >> Well, I think you're right. I think the problem with data mesh today is the lack of standards. You know, when you draw the conceptual diagrams, you've got a lot of lollipops, which are APIs, but they're all unique primitives. So there aren't standards by which, to your point, the consumer can take the data the way he or she wants it and build their own data products without having to tap people on the shoulder to say, how can I use this, where does the data live? And being able to add their own data. >> You're exactly right. So I'm an organization, I'm generating data, and I can continuously stream it into a lake. And then with the ChaosSearch service, the data is discoverable and configurable by the consumer. Let's say you want to go to the corner store. I want to make a certain meal tonight, I want to pick and choose what I want, how I want it. Imagine if the data mesh could truly have that: the producer of information offering, you know, all the things you can buy at a grocery store, and you choosing what you want to make for dinner.
And if it's static, if you have to call up your producer to make the change, was it really a data-mesh-enabled service? I would argue not. >> Ed, bring us home. >> Well, maybe one more thing with this. >> Please, yeah. >> 'Cause some of this, we're talking 2031, but largely these principles are what we have in production today, right? So even the self-service, where you can actually have a business context on top of a data lake, we do that today. We talked about how we get rid of the physical ETL, which is 80% of the work, but the last 20% is done by this refinery, where you can do virtual views, do all the transformation needed, and make it available. But also, you can give that as a role-based access service to your end-users, actual analysts, who don't want to be a data scientist or DBA. In the hands of a data scientist or DBA it's powerful, but the fact of the matter is, all of our employees, regardless of seniority, whether they're in finance or in sales, actually go through and learn how to do this. So you don't have to be IT. And they can come up with their own view, which is one of the things about data lakes: the business unit wants to do it themselves. More importantly, because they have the context of what they're trying to do, instead of queuing up a very specific request that takes weeks, they're able to do it themselves. >> And if I don't have to move data to different data stores and ETL it, I can do things in real time or near real time. And that's game changing, and something we haven't been able to do, ever. >> And then maybe just to wrap it up. Listen, you know, 8 years ago, Thomas and his group of founders came up with the concept: how do you actually get after analytics at scale and solve the real problems? And it's not one thing, it's not just getting to S3. It's all these different things.
And what we have in market today is the ability to literally, simply stream it to S3. What we do is automate the process of getting the data into a representation that you can now share and augment, and then we publish an open API, so you can actually use the tools you want. First use case: log analytics. Hey, it's easy to just stream your logs in, and we give you Elasticsearch-type services. Same thing with SQL, and you'll see mainstream machine learning next year. So listen, I think we have the data lake, you know, 3.0 now, and we're just stretching our legs right now, going to have fun. >> Well, you had to say it: log analytics. But I really do believe in this concept of building data products and data services, because I want to sell them, I want to monetize them, and being able to do that quickly and easily, so others can consume them, is the future. So guys, thanks so much for coming on the program. Really appreciate it.
SUMMARY :
and Thomas Hazel, the CTO really good to be here. lakes in the cloud forever. And the same process has to happen. So you guys talk about You know, when you guys crew founded the team to go after this, That's kind of the exciting service that the elasticity, And you have Databricks out there And if you can do that, end of the decade, you know, the chance of you getting your on the shoulder to say, all the things you can buy a grocery store So even the self service where you can actually have And if I have to put it is the ability to literally Well, and you have
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Kevin Miller | PERSON | 0.99+ |
Thomas | PERSON | 0.99+ |
Ed | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
Ed Walsh | PERSON | 0.99+ |
50 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
James | PERSON | 0.99+ |
Thomas Hazel | PERSON | 0.99+ |
ChaosSearch | ORGANIZATION | 0.99+ |
three months | QUANTITY | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
2021 | DATE | 0.99+ |
two thousands | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
24 | QUANTITY | 0.99+ |
James Dixon | PERSON | 0.99+ |
last decade | DATE | 0.99+ |
7 | QUANTITY | 0.99+ |
second challenge | QUANTITY | 0.99+ |
2031 | DATE | 0.99+ |
Jamag Dagani | PERSON | 0.98+ |
S3 | ORGANIZATION | 0.98+ |
both sides | QUANTITY | 0.98+ |
S3 | TITLE | 0.98+ |
8 years ago | DATE | 0.98+ |
second thing | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
about 60% | QUANTITY | 0.98+ |
tonight | DATE | 0.97+ |
first | QUANTITY | 0.97+ |
Tableau | TITLE | 0.97+ |
two big areas | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
SQL | TITLE | 0.94+ |
seven | QUANTITY | 0.94+ |
6 | DATE | 0.94+ |
CTO | PERSON | 0.93+ |
CQL | TITLE | 0.93+ |
7 years | DATE | 0.93+ |
first move | QUANTITY | 0.93+ |
next decade | DATE | 0.92+ |
single | QUANTITY | 0.91+ |
DBS | ORGANIZATION | 0.9+ |
20% | QUANTITY | 0.9+ |
one thing | QUANTITY | 0.87+ |
5 | DATE | 0.87+ |
Hadoop | TITLE | 0.87+ |
Looker | TITLE | 0.8+ |
Grafana | TITLE | 0.73+ |
DPA | ORGANIZATION | 0.71+ |
one more thing | QUANTITY | 0.71+ |
end of the | DATE | 0.69+ |
Vice President | PERSON | 0.65+ |
petabytes | QUANTITY | 0.64+ |
cabana | TITLE | 0.62+ |
CEO | PERSON | 0.57+ |
HTFS | ORGANIZATION | 0.54+ |
house | ORGANIZATION | 0.49+ |
theCUBE | ORGANIZATION | 0.48+ |
Ed Walsh and Thomas Hazel, ChaosSearch | JSON
>> Hi everybody, this is Dave Vellante. Welcome to this CUBE conversation with Thomas Hazel, the founder and CTO of ChaosSearch. I'm also joined by Ed Walsh, who's the CEO. Thomas, good to see you. >> Great to be here. >> Explain JSON. First of all, what is it? >> JSON is a powerful data representation, a data source. But let's just say that when we try to drive value out of it, it gets complicated. ChaosSearch activates customers' data lakes. So, you know, customers stream their JSON data to the cloud stores that we activate. Now, the trick is the complexity of a JSON data structure. You can do all this complexity of representation. And here's the problem: putting that representation into an Elasticsearch database or a relational database is very problematic. So what people choose to do is pick and choose what they want, and/or they just store it as a blob. And so I said, what if we create a new index technology that could store it as a full representation, but dynamically, in our data refinery, publish access to all the permutations that you may want? Because if you do a full-on flattening of JSON, one row theoretically could become a million rows, and in a relational database the data sort of explodes.
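That one-row-to-a-million figure is just the product of array lengths. A quick estimator, assuming a full cross-product flatten as the hypothetical target:

```python
from math import prod

def flattened_rows(doc):
    """Rows a full flatten of one JSON document would produce:
    sibling arrays multiply, elements within one array add."""
    if isinstance(doc, dict):
        return prod(flattened_rows(v) for v in doc.values())
    if isinstance(doc, list):
        return sum(flattened_rows(v) for v in doc) or 1
    return 1

small = {"user": "alice", "a": [1, 2, 3], "b": [1, 2]}
print(flattened_rows(small))                      # 6
print(flattened_rows({"a": list(range(1000)),
                      "b": list(range(1000))}))   # 1000000: one row in, a million out
```

Two sibling arrays of a thousand elements each are all it takes to hit a million rows, which is why loading the full representation into a relational table is rarely attempted.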
So if you're using proprietary APIs on JSON data, you're not using Looker, you're not using Tableau. You're doing some type of proprietary tooling on the backend. >>Okay. So you're saying all the tools that you've trained everybody on, you can't really use them; you've got to build some custom stuff. Okay, so maybe bring that home then: what's the money? Why do the suits care about this stuff? >>The reason this is so important is, think about anything cloud native: Kubernetes, your different applications. What you're doing in Mongo is all JSON. It's very powerful but painful if you're not keeping the data. What data scientists are doing is leveling it down; they're saying, I'm going to keep only the first four things. So think about it: it's Kubernetes, it's your app logs. They're trying to figure out, for Black Friday, what happened. It's literally saying, hey, every minute they'll cut a new log, and you're able to say, listen, these are the users that were in the system for an hour, and here are the different things they did. The fact of the matter is, if you cut it off, you lose all that fidelity, all that data. So it's really important to have. So whether you're trying to figure out what happened for security, what happened for performance, or, if you're the VP of product or growth, how do I cross-sell things, you need to know what everyone's doing. If you're not handling JSON natively, like we're doing, then on Black Friday all of a sudden the logs get huge, and the next day they're not. But it's really powerful data that you need to harness for business value. It's what's going to drive growth; it's what's going to drive the digital transformation. So without the technology, you're kind of blind. And to be honest, you don't know, because the data scientist kind of deleted the data on you.
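The "keep only the first four fields" trade-off can be sketched in a few lines. The log events below are hypothetical, but they show the point: if you keep the raw JSON, a field that first appears mid-stream is still there when you need it, and a schema can be inferred at read time instead of being fixed up front:

```python
import json

# Hypothetical app-log events; a new field shows up mid-stream.
raw_events = [
    '{"user": "u1", "action": "view", "ms": 120}',
    '{"user": "u2", "action": "buy",  "ms": 340, "promo": "blackfriday"}',
]

def inferred_schema(blobs):
    """Schema-on-read: derive field names and types from the data itself."""
    schema = {}
    for blob in blobs:
        for key, value in json.loads(blob).items():
            schema.setdefault(key, type(value).__name__)
    return schema

print(inferred_schema(raw_events))
# A fixed up-front schema, decided before Black Friday, would have
# silently dropped the "promo" field that only appeared later.
```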
So this is big for the business and digital transformation. But also, it was such a pain that the data scientists and DBAs were forced to just basically make it simple, so it didn't blow up their system. We allow them to keep it simple, and keep it all. >>Both are powerful. It reminds me of when you go on vacation and you've got your video camera. Somebody breaks into your house; you go back to look and see who did it, and the data's gone. The video's gone, because you weren't able to save it; it's too expensive. >>Well, it's funny. This is the first data source that's driving the design of the database, because of all the value. We should be designing the database around the information it stores, not the structure of how it's been organized. And so our viewpoint is, you get to choose your structure, yet contain all that content. >>So if a vendor says to a customer, hey, we've got JSON support, what questions should I ask to really peel the onion? >>Well, in particular: is it relational access to that data? Now, you could say, oh, I've ETLed that JSON into it. But chances are, with the explosion of JSON permutations, one row to a million, they're probably not doing the full representation. So from our viewpoint, either you're doing blob-type access through proprietary JSON APIs, or you're picking and choosing. Those are the choices; that's the market's thought. However, what if you could take all the permutations and design your schema based on how you want to consume it, versus how you could store it? That's the big difference with us. >>So I should be asking: how do I consume this data? Do you ETL it in? How much data explosion is going to occur once I do this? And you're saying, for ChaosSearch, the answers to those questions are? >>The answer is, again, our philosophy: simply stream your data into your cloud object storage, your data lake, and with our index technology and our data refinery.
You get to create views dynamically, in an instant, whether it's a terabyte or a petabyte, and describe how you want your data to be consumed, in a relational way or an Elasticsearch way. Both are consumable through our data refinery. >>That's key for us. The refinery gives you the view. So what happens if someone wants a different view? I want to actually unpack different columns or different matrices. You're able to do that in a virtual view, available immediately, over petabytes of data. You don't have that episode where you come back, look at the video camera, and there's no data left. >>We do appreciate the time and the explanation on really understanding JSON. Thank you. All right, and thank you for watching this Cube conversation. This is Dave Vellante. We'll see you next time.
Ed Walsh and Thomas Hazel V1
>>Welcome to the Cube. I'm Dave Vellante. Today we're going to explore the ebb and flow of data as it travels into the cloud and the data lake. The concept of the data lake was alluring when it was first coined last decade by CTO James Dixon. Rather than being limited to highly structured and curated data that lives in a relational database, in the form of an expensive and rigid data warehouse or data mart, a data lake is formed by flowing data from a variety of sources into a scalable repository, like, say, an S3 bucket, that anyone can access and dive into. They can extract water, er, data, from that lake and analyze data that's much more fine-grained and less expensive to store at scale. The problem became that organizations started to dump everything into their data lakes with no schema on it, right? No metadata, no context; just shove it into the data lake and figure out what's valuable at some point down the road. Kind of reminds you of your attic, right? Except this is an attic in the cloud, so it's too big to clean out over a weekend. Look, it's 2021 and we should be solving this problem by now. A lot of folks are working on it, but often the solutions add other complexities for technology pros. So to understand this better, we're going to enlist the help of ChaosSearch CEO Ed Walsh and Thomas Hazel, the CTO and founder of ChaosSearch. We're also going to speak with Kevin Miller, who's the vice president and general manager of S3 at Amazon Web Services, and of course they manage the largest and deepest data lakes on the planet. And we'll hear from a customer to get their perspective on this problem and how to go about solving it. But let's get started. Ed, Thomas, great to see you. Thanks for coming on the Cube. >>Likewise. It's really good to be in this nice space. >>Great. So Ed, let me start with you. We've been talking about data lakes in the cloud forever. Why is it still so difficult to extract value from them? >>Good question.
I mean, data analytics at scale has always been a challenge, right? We're making some incremental changes, but as you mentioned, we need to see some step-function changes. In fact, that's the reason ChaosSearch was founded. If you look at it, it's the same challenge around a data warehouse or a data lake: it's not just flowing the data in, it's how to get insights out. So it falls into a couple of areas. The business side will always complain, and it's kind of uniform across everything in data lakes. They'll say, hey, listen, I typically have to deal with a centralized team to do that data prep, because it's data scientists and DBAs. Most of the time they're a centralized group, sometimes they're in business units, but most of the time, because they're scarce resources, they're together. And then it takes a lot of time. It's arduous, it's complicated. It's a rigid process to deal with that team, it's hard to add new data, it's very hard to share data, and there's no way to do governance without locking it down. And of course they'd like it to be more self-service. So that's what you hear from the business side constantly. Now, underneath that are some real technology issues: we haven't really changed the way we do data prep since the two-thousands. Right? So if you look at it, it falls into two big areas. One is data prep: how do you take a request that comes in from a business unit, I want to do X, Y, Z with this data, I want to use these tool sets to do the following. Someone has to be smart about how to put that data in the right schema. >>You mentioned you have to put it in the right format so the tool sets can analyze that data before you do anything. >>And then secondly, I'll come back to that, because that's the biggest challenge.
But the second challenge is how these different data lakes and data warehouses are persisting data, the complexity of managing that data, and also the cost of computing on it. And I'll go through that. But basically the biggest thing is actually getting from raw data to the rigid format and complexity that the business sides are using: literally, someone has to do this ETL process, extract, transform, load. They're taking each data request that comes in, I need so much data shaped this particular way, and they're literally, physically duplicating data and putting it together in a schema. They're stitching together almost a data puddle for all these different requests.
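One way to see why this per-request duplication hurts: the same raw store can serve every request through read-time projections instead of physical copies. Here's a toy sketch with hypothetical records and consumers (not the actual product API):

```python
import json

# One copy of the raw data, landed as-is; nothing is duplicated per request.
raw_store = [
    json.dumps({"user": "u1", "action": "view", "meta": {"page": "home", "ms": 120}}),
    json.dumps({"user": "u2", "action": "buy",  "meta": {"page": "cart", "ms": 340}}),
]

def make_view(projection):
    """A 'virtual view': each consumer picks its own shape at read time."""
    def view():
        return [
            {name: extract(json.loads(blob)) for name, extract in projection.items()}
            for blob in raw_store
        ]
    return view

# Two business units, two shapes, zero ETL copies or data puddles.
perf_view  = make_view({"page": lambda r: r["meta"]["page"], "ms": lambda r: r["meta"]["ms"]})
sales_view = make_view({"user": lambda r: r["user"], "action": lambda r: r["action"]})
print(sales_view())  # [{'user': 'u1', 'action': 'view'}, {'user': 'u2', 'action': 'buy'}]
```

When a consumer wants a different shape, they define a new projection; the raw store is untouched and no pipeline has to be rebuilt.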
So you guys talk about activating the data lake. Yep, sure. Analytics, what what's unique about that? What problems are you all solving? You know, when you guys crew created this, this, this magic sauce. >>No, and it basically, there's a lot of things I highlighted the biggest one is how to do the data prep, but also you're persisting and using the data. But in the end, it's like, there's a lot of challenges that how to get analytics at scale. And this is really where Thomas founded the team to go after this. But, um, I'll try to say it simply, what are we doing? I'll try to compare and stress what we do compared to what you do with maybe an elastic cluster or a BI cluster. Um, and if you look at it, what we do is we simply put your data in S3, don't move it, don't transform it. In fact, we're not we're against data movement. What we do is we literally pointed at that data and we index that data and make it available in a data representation that you can give virtual views to end users. >>And those virtual views are available immediately over petabytes of data. And it re it actually gets presented to the end user as an open API. So if you're elastic search user, you can use all your lesser search tools on this view. If you're a SQL user, Tableau, Looker, all the different tools, same thing with machine learning next year. So what we do is we take it, make it very simple. Simply put it there. It's already there already. Point is at it. We do the hard of indexing and making available. And then you publish in the open API as your users can use exactly what they do today. So that's dramatically. I'll give you a before and after. So let's say you're doing elastic search. You're doing logging analytics at scale, they're lending their data in S3. And then they're,, they're physically duplicating a moving data and typically deleting a lot of data to get in a format that elastic search can use. >>They're persisting it up in a data layer called leucine. 
It's physically sitting in memory, CPUs, SSDs, and it's not one of them, it's a bunch of those. In the cloud, you have to set them up, because they're persisting it; they stand up seven-by-24, not a very cost-effective way to use cloud computing. What we do in comparison is literally point at the same S3. In fact, you can run us in complete parallel: the data is being ETLed anyway, and we're just one more use case, read-only, that lets you take that data and make these virtual views. So we run in complete parallel, but we just give a virtual view to the end users. We don't need that persistence layer, that extra cost layer, that extra time, cost, and complexity. So when you look at what happens in Elastic, they have a constraint, a trade-off of how much you can keep versus how much you can afford to keep. It also becomes unstable at times, because you have to build out a schema. It's on a server, and as the schema scales out, guess what, you have to add more servers, very expensive, and they're up seven-by-24. And they also become brittle: as you lose one node, the whole thing has to be put back together. We have none of that cost and complexity. You keep whatever you want to keep in S3, a single persistence, very cost-effective. And on cost, we save 50 to 80%. Why? We don't go with the old paradigm of setting it up on servers, spinning them up for persistence, and keeping them up seven-by-24. We literally ask, per query, what do you want to compute? We bring up the right compute resources and then release those resources after the query is done. So we can do some queries they can't imagine at scale, but we're able to do the exact same query at 50 to 80% savings. And you don't have to do any of the toil of moving that data or managing that layer of persistence, which is not only expensive, it becomes brittle.
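A back-of-the-envelope on the seven-by-24 versus query-time compute point. All the numbers below are illustrative assumptions, not ChaosSearch or cloud-provider pricing:

```python
# Always-on search cluster: nodes persist the data, so they run around the clock.
node_hour_cost = 0.50          # $/node-hour, assumed
cluster_nodes = 6
hours_per_month = 24 * 30
always_on = cluster_nodes * hours_per_month * node_hour_cost

# Query-time compute: bigger bursts per query, but only while queries actually run.
queries_per_month = 2000
avg_query_minutes = 3
burst_nodes = 12
on_demand = queries_per_month * (avg_query_minutes / 60) * burst_nodes * node_hour_cost

savings = 1 - on_demand / always_on
print(f"${always_on:,.0f}/mo vs ${on_demand:,.0f}/mo -> {savings:.0%} saved")
```

With these assumed numbers, that's about $2,160 versus $600 a month, roughly 72% saved, which lands inside the 50 to 80% range quoted above; the exact figure obviously shifts with query volume and node sizing.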
And then, I'll be quick: once you go to BI, it's the same challenge. With the BI systems, the requests are constantly coming from a business unit down to the centralized data team: give me this flavor of data, I want to use this analytic tool on that data set. So they have to do all this pipelining. They're constantly saying, okay, I'll give you this data, and they're duplicating that data, moving it, and stitching it together. And then the minute you want more data, they do the same process all over. We completely eliminate that. >>The questions queue up. Thomas, that had me at "you don't have to move the data." That's kind of the key piece here, isn't it? >>Absolutely. I think, you know, the data lake philosophy has always been solid, right? The problem is we had that Hadoop hangover, right, where, let's say, we were using that platform in a few too many varieties of ways. So I always believed in the data lake philosophy; when James came and coined it, I was like, that's it. However, HDFS wasn't really a cloud service. Object storage is a service, and the elasticity, the security, and the durability, all those benefits, are really why we founded the company on cloud object storage as a first move. >>So Thomas was talking about being able to shut off essentially the compute so you don't have to keep paying for it, but there are other vendors out there doing something similar, separating compute from storage; they're famous for that. And yet Databricks is out there doing their lakehouse thing. Do you compete with those? How do you participate, and how do you differentiate? >>I know you've heard these terms: data lake, warehouse, now lakehouse. What everybody wants is simple, easy in; however, the problem with data lakes was the complexity of driving value out. And I said, what if you could have the easy-in and the value-out?
So if you look at, say, Snowflake as a warehousing solution, you have to do all that prep and data movement to get into that system, and it's rigid, static. Now, Databricks, that lakehouse, has the exact same thing: they have a data lake philosophy, but their data ingestion is not data lake philosophy. So I said, what if we had that simple-in, with a unique architecture and index technology, making it virtually accessible, publishable, dynamically, at petabyte scale? And so our service connects to the customer's cloud storage, streams the data in, sets up what we call a live indexing stream, and then you go to our data refinery and publish views that can be consumed through the Elasticsearch API, using Kibana or Grafana, or as SQL tables, using Looker or, say, Tableau. And so we're getting the benefits of both sides: schema-on-read flexibility with schema-on-write performance. If you can do that, that's the true promise of a data lake. Again, nothing against Hadoop, but schema-on-read with all that complexity of software was what made a bit of a data swamp. >>Okay, so everybody I talk to has got this big bunch of Spark clusters now, saying, all right, this doesn't scale, we're stuck. And, you know, I'm a big fan of the concept of the data mesh, and it's early days. But if you fast-forward to the end of the decade, what do you see as the critical components of this notion people call data mesh? You've got the analytics stack. You're a visionary, Thomas; how do you see this thing playing out? >>I love her thought leadership. To be honest, her core principles were our core principles, you know, five, six, seven years ago. This idea of decentralization, data as a product, self-serve, federated computational governance: all of that was our core principle.
The trick is, how do you enable that mesh philosophy? I could say we're mesh-ready, meaning that we can participate in a way that very few products can. If there are gates to data getting into your system, the ETL, the schema management... My argument with the data mesh is that producers and consumers should have the same rights. I want the consumers to choose how they want to consume that data, just as the producer chooses how to publish it. I'd say our data refinery is that answer. You know, shoot, I'd love to open up a standard, right, where we can really talk about producers and consumers and the rights each of them has. But I think she's right on the philosophy. I think as products mature in the cloud, in these data lake capabilities, the trick is those gates. If you require the structure up front, it gates those pipelines; the chance of getting your data into a mesh is the weeks and months that Ed was mentioning. >>Well, I think you're right. I think the problem with data mesh today is the lack of standards. When you draw the conceptual diagrams, you've got a lot of lollipops, which are APIs, but they're all unique primitives. So there aren't standards by which, to your point, the consumer can take the data the way he or she wants it and build their own data products, without having to tap people on the shoulder to say, how can I use this? Where does the data live? And being able to add their own. >>You're exactly right. So in an organization, generally, data will be streamed to a lake, and then with the ChaosSearch service the data is discoverable and configurable by the consumer. Let's say you want to go to the corner store. You know, I want to make a certain meal tonight; I want to pick and choose what I want, how I want it.
Imagine if the data mesh truly had that: the producer of information publishing everything, like all the things you can buy at a grocery store, and you choosing what you want to make for dinner. If it's static, if you have to call up your producer to make a change, was it really a data-mesh-enabled service? I would argue not. >>Bring us home. >>Well, maybe one more thing on this, because some of what we're talking about is 2031, but largely these principles are what we have in production today, right? So even the self-service, where you can actually have business context on top of a data lake, we do that today. We talked about how we get rid of the physical ETL, which is 80% of the work; the last 20% is done by this refinery, where you can create virtual views, with the right RBAC, do all the transformation needed, and make it available. And you can give that as a role-based access service to your end users, actual analysts. You don't have to be a data scientist or DBA; in the hands of a data scientist or DBA it's powerful, but the fact of the matter is, all of our employees, regardless of seniority, whether they're in finance or in sales, can actually go through and learn how to do this. So you don't have to be IT. And they can come up with their own view, which is one of the things about data lakes: the business units want to do it themselves, but more importantly, because they have the context of what they're trying to do, instead of queuing up a very specific request that takes weeks, they're able to do it themselves. >>And to find that, with no different data stores and no ETL, I can do things in real time or near real time. That's game-changing, and something we haven't been able to do, ever. >>And then maybe just to wrap it up: listen, eight years ago a group of founders came up with a concept: how do you actually get after analytics at scale and solve the real problems?
And it's not one thing; it's not just getting to S3, it's all these different things. What we have in market today is the ability to literally, simply, stream it to S3. What we do is automate the process of getting the data into a representation that you can now share and augment, and then we publish open APIs so you can actually use the tools you want. First use case, log analytics: hey, it's easy to just stream your logs in, and we give you Elasticsearch-compatible services. Same thing with SQL, and you'll see mainstream machine learning next year. So listen, I think we have data lake 3.0 now, and we're just stretching our legs right now. >>Well, and you say log analytics, but I really do believe in this concept of building data products and data services, because people want to sell them, they want to monetize them, and being able to do that quickly and easily, so that others can consume them, is the future. So guys, thanks so much for coming on the program. Really appreciate it. All right, in a moment, Kevin Miller of Amazon Web Services joins me. You're watching the Cube, your leader in high-tech coverage.
Thomas Hansen, UiPath & Jason Bergstrom, Deloitte | UiPath FORWARD IV
>>From the Bellagio hotel in Las Vegas, it's the Cube, covering UiPath FORWARD IV, brought to you by UiPath. >>Hey, welcome back to Las Vegas. Lisa Martin here with Dave Vellante; the Cube is live at UiPath FORWARD IV, very excited to be here in person. Next topic: the smart factory. We have a couple of guests here to unpack that for us. Jason Bergstrom joins us, the smart factory lead at Deloitte, with Thomas Hansen, the CRO of UiPath. Gentlemen, welcome to the program. >>Thank you. >>Thank you for having us. >>Great to have you, great to be in person. Let's talk about the smart factory. What is it, from Deloitte's perspective and then UiPath's? >>So if you think about the smart factory, it's really that transition from the old, analog manufacturing environment to the digital operating environment that we see today. Technology has really changed in the last three or four years, and as a result of that elevation of technology, we're able to do a lot more on the manufacturing floor than we ever could. So what used to be analog, or hybrid with a little bit of technology, is now starting to shift to end-to-end integrated manufacturing operations based on digital platforms, and we're loving it. It's a great place to be. >>Great. Thomas, what's your perspective? >>Well, first of all, it's great to be here; thank you for the invite. It's so nice to be away from Zoom calls, or other types of calls, right, and be in person. Look, we have an amazing partnership with Deloitte. We have worked together for years; we've done more than 400 joint engagements with large companies across the world.
And in that process, we've really gone deeper from a vertical and industry perspective, and smart factory is really the starting point of going super specific and figuring out what automation, or rather how automation, plays into a smart factory. >>Like a beautiful trombone, the music from a beautiful trombone. So years ago, we wrote a piece talking about the cloud as an opportunity and how to take advantage of it. The premise of the piece was that you've got to build ecosystems, maybe within an industry or within a practice, and build data in different disciplines, because of the power of many versus the capabilities of one. This smart factory initiative that you guys have going feels like an ecosystem play. Can you describe that ecosystem? Who's involved? I know SAP, Infor, and AWS, but tell us more about the ecosystem. >>Yeah, sure. Your hunch there is a great one, right? We learned early on that trying to do this as Deloitte, or Deloitte plus one, just wasn't going to get it done. You really need to harness the power of the many. So at the core of what we're doing at the smart factory at Wichita that you alluded to is bringing an ecosystem to life. We have 21 partners that are going to be participating out of the gate with the smart factory at Wichita, and the intent is to show a seamless solution, an actual end-to-end production facility, that showcases 21 amazing technologies and partners. We're just really thrilled about what we're able to show our clients. >>Yep. So Koch Industries owns Infor; obviously that's the Wichita connection, is that right? So they've got to be involved in this. I mean, they're an amazing company, but what can you tell us about their involvement? >>Yep. So Koch, obviously, is the Infor connection, and Dragos, which is another Infor company, is a founder within the ecosystem, which is fantastic. They play at the core.
They're also an incredibly important client, right? So the Koch business on the whole is critical to how we think about manufacturing across a whole range of industries, from discrete production to scale process. Um, they're fantastic partners, and we've had a great time working with them. >> And you guys are just about to launch, or soft launch. Can you tell us more, where are you in the progression? >> Sure. So soft launch started two days ago. >> Oh, wow. >> So the building, we have the keys. Uh, we are doing some visits with a handful of friends and family, the ecosystem partners that you mentioned, they'll be coming out, uh, to see it and to provide some feedback. And then we go live in earnest in January. >> And Thomas, where does UiPath fit? >> Well, we fit in as a key part in this initiative. Um, look, we, as a company, we are partner first. We do all our business together with partners, and we have right about almost 5,000 partners now, globally. And then there's a few, then there's a few in that 5,000 that are unique, that really stand out. And Deloitte, of course, is one of those very, very special partners that we work with globally, but also locally here in the US, across all the states, across all the industries. So we're thrilled to be part of this. Automation plays a key, key part of smart factory. When you think about it, the evolution of work, there's so much boring, mundane work out there. Humankind is better served spending their time and effort on the non-mundane, on the innovative, on the creative. And that's what we try to ensure, that the humans in the loop, so to speak, are focused on the innovative work, the great work, and we have software robots, RPA, automation handle all that boring and mundane work. >> Right, letting the folks focus on the value-add to themselves, the value-add to the organization, more strategic investments.
Thomas, question for you: in terms of, you talked about this being horizontal across industries, but I'm curious about what some of the feedback is from some of your customers, 8,000 customers now. You've got a very large, what, $726 million ARR, a huge lot of customers over a hundred million ARR. What's been the feedback from some of those guys? >> Well, so first of all, uh, personally, I, I've been in enterprise software for more than 20 years. And what I've experienced over the years is, most large-scale enterprise software projects tend to be multi-year in nature, be rather complex, and the failure rate can be rather high. Then in comes RPA and automation, which is a complete different kettle of fish, in the sense that from conceptualization of identifying a process, to getting it built, getting it tested, getting it into production, you're talking days and weeks only. So the path to seeing value is so fast. What I've learned yesterday and today from the 15, 16 customer meetings I've had so far is the same unique trend or learning across all industries, and also from various parts of the world. And that is very fast realization of value, perhaps starting initially with 5, 10, 20 processes, and then scaling super fast, because they find that return on investment incredibly quickly with our solution. So that's what unifies it across geographies and across industries. >> What about the smart factory? And one of the things we've learned during COVID is there's so much unknown. So sometimes these processes aren't linear, like a trombone, you know, going back and forth, in and out. But is there unknown in the smart factory processes, or is it pretty well known, and you can do the process mining on that known base? What's the dynamic there? >> So there's a few different dimensions to it.
So yes, it is well known, because it's a controlled environment. But one of the things that we're doing is we're actually actively introducing a lot of unknown factors, to try to let the bots and the process mining kick into effect, right? So we're artificially, let's just say, injecting opportunity for us to do that. The other thing that we're doing is, and what's really unique about the smart factory at Wichita, is it's one of four across the globe for Deloitte. And so we're bringing data in from the other three sites, which is data that'll be less controlled. We're going to do process mining on that, just try to take advantage of some of the, some of the capabilities associated with the solutions. >> Okay. So, so when you think about process mining, do you start there? Or do you start with, I sometimes call it paving the cow path, you know, taking what you've known, that linear process, hit that as the quick win, and then worry about the process mining? Or do you step back and say, wait a minute, we have to rethink the entire factory experience? Where do you start? >> I think it depends. In the case of the smart factory, we've got a few different places. So we're using it to do ingestion of orders, so that's obviously a very controlled environment. We're then using it to do a lot of work around inventory management and optimization, as well as month-end close plays, which will be a lot more, we're learning as we go, right? So I think on the spectrum, it could be on either end. My personal belief, if you look at it more long-term, or actually out in the real world, is that this is all about learning new things. It's about generating insights from data that, frankly, you don't want human beings to have to go do. And so having the ability to take advantage of an intelligent automation solution as powerful as UiPath is really a great advantage.
>> One of the things that's misunderstood, I think, about UiPath is, they look at what happened post, let's say 2015, 2016, and say, oh, just like, just like every other Silicon Valley company, double, double, triple, triple. And that's not how you guys started. You sort of let things bake for the better part of a decade, and then got product-market fit, and then exploded. Um, and so that's, that to me was a key to your success in scaling this. I feel like you guys are building a new offering here. This is not just doing a one-off, the product-market fit. It's not like a point product. It's a, it's a big thing. So can you talk about the go-to-market, your product-market fit? You're testing it out now. Your goals, are you trying to scale this up? What, what are some of the things that you can share about your aspirations? >> So the partnership from a UiPath perspective to Deloitte is a critical partnership, one of the select few on a global level. Uh, we have enjoyed a tremendous, uh, amount of engagements together. I mentioned earlier more than 400, and I believe we, we now have together right about 1,000 developers trained within your organization on UiPath, right? >> That's right. Yep. >> So we have a strong base that, of course, we want to build on, and hopefully put a zero behind the thousand, to 10,000. And over time, we want to make sure that it's globally inclusive, that we can serve all the markets across the world where we have a giant presence. And there's a select number of verticals and industries where we really have had success together, that we of course want to specifically hone in on, which would of course now be manufacturing, together. And of course, a classic vertical we've been very strong in together is BFSI, banking and financial services industry. So those are good areas. >> Well, Jason, you're building a business out of this, right? I mean, you've got a business plan around it, and you're going to scale this thing. >> Oh, absolutely. Yeah. That's 100% the case.
So we have smart factory at Wichita. That is part of our positioning in the marketplace. What we found is that telling people about tech and about solutions is one thing; showing it to them in a production environment is altogether different, right? Giving clients the opportunity to explore the art of the possible in a real setting like that is incredibly impactful. And so you talked about go-to-market: we see this relationship with the ecosystem and what we're trying to do in Wichita as sort of the epicenter of building an entire business, which ultimately will have huge global potential. >> Let's talk about speed for a minute, and the growth trajectory that UiPath, Thomas, has been on for the last five years or so. I think I was reading, I think it was an analysis that Dave wrote, that in 2015 revenue was 1 million, 2020, 600 million. So massive growth very quickly. My question, Jason, is for you, in terms of the speed. How quickly are you looking to see the smart factory for Deloitte really impacting organizations around the globe? Because these guys are on a fast bullet train. >> Yeah. So I wish we had those growth rates. I will say, though, selling and delivering these solutions holistically to manufacturers takes more time. So we think of our cycle as being measured certainly in many months, certainly not years. We are starting to see an acceleration of that entire sales cycle and delivery cycle, just because of things like the pandemic driving organizations to just need to move faster. Frankly, if you're not moving towards digital manufacturing operations right now, you're probably behind. And so we're seeing that urgency from the market start to pick up, but we don't have that kind of growth rate, unfortunately. >> What's interesting about Deloitte to me is, you guys, here I think of you as a virtual company. I mean, I know you've got a lot of bodies out there, but it's not like you've got a lot of physical locations, right?
And so now, but now you're just, you're investing in a physical plant, essentially. >> Which is extremely exciting. We, we keep telling ourselves, when we talk to folks, they own lots of buildings. So just because we're excited about our building doesn't mean they are. But you're exactly right, right? We're obviously a global services and products company. So this is one of a handful of buildings that are going to start to represent us as an organization. And we're really excited about it. >> What should we watch? Kind of milestones for progress, success? What are the markers that we should be paying attention to? >> I think specifically on this, um, rapid experiment together, I think one of the key learnings we can take away is what we can apply to other companies in the manufacturing industry specifically. Look, from a UiPath perspective, we work with many large-scale manufacturers around the world, but we've seen amazing, fast progress with Bridgestone, for example. We implemented a smaller set of, uh, uh, bots that helped them reduce their paperwork by 85% across their branches. With a Turkish e-commerce retailer called Arçelik, I think I got the pronunciation correct, they put 85 processes in place with our bots and are now, to date, running, I think it's 3 million e-commerce transactions with our processes. So the impact we can have in manufacturing, together with the learnings from this smart factory, I think is just so exciting, really. >> Yeah. The impact, the potential there is, is unlimited. Guys, thank you for joining Dave and me, talking to us about the smart factory at Forward IV, what it means for both businesses, how the partnership is evolving. It sounds like music from a beautiful trombone. Thank you so much for joining Dave and me today. >> Thank you. >> For Dave Volante, I'm Lisa Martin. theCUBE is live in Las Vegas at the Bellagio at UiPath Forward IV. We'll be right back.
SUMMARY :
UI path forward for brought to you by UI path. the smart factory, a couple of guests here to unpack that for us, Jason Brixton joins us the So technology has really changed in the last three or four years. Tell us what's your perspective? smart factory is really the starting point of going super specific and figuring out what does automation initiative that you guys have going, it feels like an ecosystem play. So we have 21 partners that are going to be participating out of the gate with the smart So obviously that's the Wichita connection, So the Coke business on Can you tell us more where you are in the progression? So the building, the loop so to speak are focused on the innovative work, the graded work, and we have software Letting the folks focus on the value, add to themselves a value add to the organization, So the path to seeing value is so fast. And one of the things we've learned during COVID is there's so much unknown. So there's a few different dimensions to it. and then worry about the process money, or do you step back and say, wait a minute, we have to rethink the entire And so having the ability talk about the go to market, your product market fit? One of the select few on a global level, uh, we have enjoyed tremendous, I mean, you've got a business plan around it and you're going to scale this thing. Giving clients the opportunity to And the growth trajectory that UI path Thomas has been on for to pick up, but we don't have that kind of growth rate, unfortunately. What's interesting about Deloitte to me is you guys here, I think of you as a virtual company. And we're really excited about what should we watch? What are the markers that we should be paying So the impact we can have in manufacturing together with the learnings Vegas at the Bellagio at UI path forward for we'll be right back.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Jason | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Jason Brixton | PERSON | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
Coke | ORGANIZATION | 0.99+ |
January | DATE | 0.99+ |
5 | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Thomas | PERSON | 0.99+ |
1,000,020 | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
2015 | DATE | 0.99+ |
21 partners | QUANTITY | 0.99+ |
Jason Bergstrom | PERSON | 0.99+ |
85 processes | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
726 million | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
85% | QUANTITY | 0.99+ |
8,000 customers | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Wichita | LOCATION | 0.99+ |
more than 20 years | QUANTITY | 0.99+ |
UiPath | ORGANIZATION | 0.99+ |
two days ago | DATE | 0.99+ |
Thomas Hanson | PERSON | 0.99+ |
iPod | COMMERCIAL_ITEM | 0.99+ |
Thomas Hansen | PERSON | 0.99+ |
Dragos | ORGANIZATION | 0.99+ |
both businesses | QUANTITY | 0.99+ |
15 | QUANTITY | 0.99+ |
20 | QUANTITY | 0.98+ |
Bridgestone | ORGANIZATION | 0.98+ |
1516 customer meetings | QUANTITY | 0.98+ |
three sites | QUANTITY | 0.98+ |
Archer | ORGANIZATION | 0.98+ |
5,000 | QUANTITY | 0.98+ |
Silicon valley | LOCATION | 0.98+ |
more than 400 joint engagements | QUANTITY | 0.98+ |
10,000 | QUANTITY | 0.98+ |
UI path | ORGANIZATION | 0.97+ |
One | QUANTITY | 0.97+ |
Inforce | ORGANIZATION | 0.97+ |
Koch | ORGANIZATION | 0.97+ |
21 amazing technologies | QUANTITY | 0.97+ |
400 | QUANTITY | 0.97+ |
thousand | QUANTITY | 0.96+ |
Wichitan | ORGANIZATION | 0.96+ |
four | QUANTITY | 0.96+ |
about 1000 developers | QUANTITY | 0.95+ |
Bellagio | LOCATION | 0.93+ |
over a hundred million | QUANTITY | 0.93+ |
almost 5,000 partners | QUANTITY | 0.9+ |
double | QUANTITY | 0.89+ |
3 million e | QUANTITY | 0.89+ |
four years | QUANTITY | 0.87+ |
BFSI bank | ORGANIZATION | 0.86+ |
Dato | ORGANIZATION | 0.86+ |
years ago | DATE | 0.85+ |
Bellagio | ORGANIZATION | 0.85+ |
one thing | QUANTITY | 0.84+ |
Richard Henshall & Thomas Anderson, Red Hat | AnsibleFest 2021
(upbeat music) >> Welcome to AnsibleFest 2021, the virtual version. This is The Cube, and my name is Dave Volante. We're going to dig into automation and its continuing evolution. Tom Anderson is here. He's the vice president of Red Hat Ansible, the automation platform. And Richard Henshall is also here, Senior Manager of Ansible Product Management, of course, at Red Hat. Guys, welcome to the cube. Good to see you. >> Thanks for having us. >> Thank you for having us, Dave. >> You're welcome. So Rich, with this latest release of the Ansible Automation Platform, AAP, we'll get the acronyms out of the way. The focus seems to be on expanding the reach of automation and its potential use cases. I mean, I'll say automation everywhere, not to be confused with the RPA vendor, but the point is, you're trying to make it easier to automate things like provisioning, configuration management, application deployment, throw in orchestration and all these other IT processes. Now, you've talked about this theme in previous releases of AAP. So what's new in this release? What can customers do now that they couldn't do before? >> Yeah, it's a good question, thank you. So, we look at this in two dimensions. So, the first dimension we have is like where automation can happen, right? So, you know, we always have the traditional data center, clouds having been very prevalent for us for the last, you know, sort of five, 10 years in most people's view. But now we have the Edge, right? So now we have Edge computing, which is sometimes a lot more of the same, but also it comes with a different dynamic of how it has to be sort of used and utilized by different use cases, different industry segments. But then, while you expand the use cases to make sure that people can do automation where they need to do it, whether that's close to the Edge or close to the data center, based on where the technology needs to be run, you also have to think about who's now using automation.
So, the second dimension is making sure that different users can take access. You mentioned like application deployment, or infrastructure, or network configuration. We expand the number of different users we have that are starting to take advantage of Ansible. So how do we get more developers? How do we get into the developer workflow, into the development workflow, for how Ansible is created, as well as how we help with the operational, the post-deployment stage that people do operating automation, as well as then the running of Ansible Automation Platform itself. >> Excellent, okay. So, in thinking about some of those various roles or personas, I mean, I think about product leads. I would see developers, obviously you're going to be in there. Managers, I would think, want that view. You know, the thrust seems to be, you're trying to continue to enhance the experience for these personas and others, I suppose, with new tooling. Maybe you could add some color to that. And what's happening in the market? Tom, if you take this, and Rich, chime in: what's happening in the market that makes this so important? Who are the key roles and personas that you're targeting? >> Yeah. So, there's a couple of things happening here. I mean, traditionally the people that had been using Ansible to automate their subsystems were the domain experts for that subsystem, right? I'm the storage operations team. I'm the network operations team. I'm using this tool to automate the tasks that I do day to day to operate my piece of the subsystem. Now, what they're being asked to do is to expose that subsystem to other constituencies in the organization, right? So they're not waiting for a call to come in to say, can I have a network segment? Can I have this storage allocated to me? Can I deploy these servers so I can start testing or building or deploying my application? Those subsystems need to be exposed to those different audiences.
And so the type of automation that is required is different. Now, we need to expose those subsystems in a way that makes those domain owners comfortable, so they're okay with another audience having access to their subsystem. But at the same time, they're able to ensure the governance and compliance around that, and then give that third party, that developer, that QE person, that line of business manager, whoever it might be that's accessing that resource, an interface that is friendly and easy enough for them to use. It's kind of the democratization, I know it's a cliche, but the democratization of automation within organizations, giving them role-specific experiences of how they can access these different subsystems, and speeding their access to these systems to deploy applications. >> So if we could stay on that for a second, 'cause that's a complicated situation. You're now opening this up. You, Richard, mentioned the Edge. So you've got to make sure that the person that's getting access has access, but then you also have to make sure that that individual can't screw it up, do things that you don't want that individual to do. And it's probably a whole other set of compliance issues and policy things that you have to bake in. Is that, am I getting that right? >> Yeah. And that's the aspect of it. When you start to think, you know, Tom listed off there, you know, 10, you can just keep adding different sorts of personas that individuals that work in roles identify with as themselves. I'm a network person, I'm a storage person. To us they're all just Ansible users, right? They may be using it a slightly different way, maybe using it in slightly different places, but they're just an Ansible user, right? And so as those people just, like, organically accumulate, you've now got thousands potentially of Ansible users inside a large enterprise organization, or, you know, a couple of hundred if you're smaller.
But you then go, well, what do I do with Ansible, right? And so at that point, you then start to say, now we try to look at it as, what's their use of Ansible itself? Because it's not just a command line tool. It's got a management interface, it's got analytics, we've got content management, we've got operational runtime, we've got responsiveness to, you know, disaster recovery scenarios for when, you know, when you need to be able to do certain actions. You may use it in different ways at different places. So we start, try and break out, what is the person doing with Ansible Automation Platform at this part of their workflow? Are they creating content, right? Are they consuming content, or are they operating that automation content for those other constituent users that Tom referred to? >> Yeah, that's really helpful, because there's context, there are different roles, different personas need different contexts, you know, trying to do different things. Sometimes somebody just wants to see the analytics to make sure it's, you know, hey, everything's green, oh, we got a yellow, versus, hey, I actually want to make some changes and I'm authorized to do so. Let's shift gears a little bit and talk about containers. I want to understand how containers are driving change for customers, maybe what new tools you're providing to support this space. What about the Edge? Yeah, how real is that in terms of tangible pockets or patterns that you can identify that require new types of capabilities that you're delivering? Maybe you can help us unpack that a little bit. >> Okay, so I think there's two ways to look at containers, right? So the first is, how are we utilizing the container technology itself, right? So containers are a package, right? So there's the amount of work we've been doing, as Ansible's become more successful in the last couple of years, separating content out with Ansible collections.
The ability to bring back, manage, control a containerized runtime of Ansible, so that you can lifecycle it, you can deploy it, it becomes portable. Edge is important there. How do I make sure I have the same automation running in the data center as the same automation running out on the Edge, if I'm looking at something that needs to be identical? The portability that the packaging of the container gives us is a fantastic advantage, given you need to bring together just that automation you want. Smaller footprint, more refined footprint, lifecycle-managed footprint. But at the same time, containers are also a very useful way of scaling the operation, right? And so as Red Hat puts things like OpenShift out in all these different locations, how can we leverage those platforms to push the runtime of Ansible, the execution component, the execution plane of Ansible, into anywhere that's hospitable for it to run? And as you move out towards Edge, as you move further away from the data center, you need a more ubiquitous sort of runtime plane that you can put these things on, so they can just spin up as and when you need to. Potentially even, at the end, actually being on the device, because at the same time, with Edge, you also have different limits around how Edge works. It's not just about, hey, I'm wifi points in an NFL stadium; actually, you're talking about, I'm at the end of a 2,000 mile, you know, piece of cable on an oil pipeline, or potentially I'm a refinery out in the Gulf of Mexico. You know, you've got a very different dynamic to how you interact with that endpoint than you do when it's a nice, big, controlled network, you know, powered location, which is well-governed and well-orchestrated. >> That's good. Thank you, Rich.
So Tom, think about automation, you know, back in the day. Seems like a long time ago, but it really wasn't. Automation used to scare some IT folks, because, you know, sometimes it created unintended consequences, or maybe it was a cultural thing, that you didn't want to automate yourself out of a job. But regardless, the cloud has changed that mindset, you know, showing us what's possible. You guys obviously had a big role in that, and the pandemic and digital initiatives, they really have made, I call it, the automation mandate. It was like the forced march to digital, at least that's how I see it. I wonder if you could talk about how you see your users approaching automation as it relates to their business goals. Do you think automation is still being treated sometimes with trepidation, or as a side project for some organizations, or is it really continuing to evolve as a mainstream business imperative? >> Yes, so Dave, we see it continuing to evolve as a strategic imperative for our customers. I mean, you'll hear some of the keynote folks that are speaking here today. I've done an interview, or am doing an interview, with Joe Mills from Discover, talking about extreme automation throughout Discover's organization. You'll hear representatives from JPMC talk about 22,000 JPMC employees contributing automation content in their environment, across 20 or 22 countries. I mean, just think about that scale, and the number of people that are involved in automation now and their tasks. So I think it's, I think we are, we have moved beyond, or are moving beyond, that idea that automation is just there to replace people's jobs. And it's much more about automation replacing the mundane, increasing consistency, increasing security, increasing agility, and giving people an opportunity to do more and more interesting stuff. So that's what we hear from our customers, this idea of them building.
And it's not just the technology piece, but it's the cultural piece inside organizations, where they're building these guilds or communities of practice, bringing people together to share best practices and experience with automation, so that they can feel comfortable learning from others and sharing with others and driving the organization forward. So we see a lot of that, and you'll hear a lot of that, at some of the Ansible Fest sessions this week. >> Well, I mean, I think that's a really important point, the last point you made about the skills, because I think you're right. I think we have moved beyond it's just job replacement. I don't know anybody who loves provisioning LUNs and says, oh, I'm the best in the world at that. It's just kind of something that was maybe important 10, 15, 20 years ago, but today, you should let the machines do that. So that's the whole skills transformation, is obviously a big part of digital transformation, isn't it? >> It absolutely is. And frankly, we still hear it's an impediment, that skills shortages are still an impediment to our customers' success. They are still skilling up. I mean, honestly, that's one of the differentiators for Ansible, as a language, a human-readable language, that is easy to learn, easy to use, easy to share across an organization. So that's why you see job boards and whatnot with so many opportunities that require, or ask for, Ansible skills out there. It's just, it's become sort of a ubiquitous automation language in organizations, because it can be shared across lots of different roles. You don't have to be a Ruby software developer or a Python software developer to create automation with Ansible. You can be Tom Anderson or Rich Henshall. You don't have to, you don't have to be the, you know, the, the sharpest software developer in the world to take advantage of it.
So anyway, that's one of the things that's kind of overcoming some of the skills apprehension and bringing people into this, into the kind of new environment, of thinking about automation as code, not software code, but thinking of it like code. >> Got it. Guys, we've got to leave it there, but Rich, how about you bring us home? We'll give you the last word. >> I mean, I think, you know, what Tom just said there, about the skills side of things, is I think the part that resonates the most. I mean, I was a customer before I joined Red Hat, and trying to get large numbers of people onto the same path, to try and achieve that outbound objective that an organization has. The objective of an organization is not to automate, it's to achieve what is needed by what the automation facilitates. So how do we get those different groups to go from, hey, this is about me, to, this is actually about what we're trying to achieve as a business, what we're trying to facilitate as a business? And how do we get those people easier access, a reduced barrier of entry, to the skills they need to help make that successful, that complements what they do in their primary role, with a really strong secondary skill set that helps them do all the bits and pieces they need to do to make that job work? >> That's great. I mean, you guys have done a great job. I mean, it wasn't clear, you know, a decade ago, or maybe half a decade ago, who was going to win this battle. Ansible clearly has market momentum and has become the leader. So guys, congratulations on that, and good job. Keep it going. I really appreciate your time. >> Thank you. >> Thank you. Thanks. >> Okay. This is the cube's continuous coverage of Ansible Fest 2021. Keep it right there for more content that educates and inspires. Thanks for watching. (upbeat music)
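Tom's description of Ansible as a human-readable automation language is easiest to see in a playbook. The sketch below is a generic illustration, not taken from the interview or from this release; the `webservers` inventory group and the nginx package are placeholder choices:

```yaml
# Minimal Ansible playbook: install and start nginx on a group of web servers.
# "webservers" is a placeholder inventory group; swap in your own hosts.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Even a non-programmer can read that top to bottom and tell roughly what it does, which is the point Tom is making about skills.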
SUMMARY :
the automation platform. not to be confused with the RPA vendor, needs to be run, you You know the thrust seems to be, the tasks that I do day to So you got to make sure that the person or if you know, a couple to make sure it's, you know, I'm at the end of a 2000 mile, you know, and that you didn't want to automate and the number of people that are involved So that's the whole skills transformation, have to be the, you know, how about you bring us home. it's to achieve what is needed and has become the leader. Thank you. more content that educates
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe Mills | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Tom Anderson | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Richard Henshall | PERSON | 0.99+ |
Rich Henshall | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.99+ |
Richard | PERSON | 0.99+ |
JPMC | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Discovers | ORGANIZATION | 0.99+ |
Discover | ORGANIZATION | 0.99+ |
two dimensions | QUANTITY | 0.99+ |
Gulf of Mexico | LOCATION | 0.99+ |
today | DATE | 0.99+ |
22 countries | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
20 | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
first | QUANTITY | 0.98+ |
decade ago | DATE | 0.98+ |
Thomas Anderson | PERSON | 0.98+ |
second dimension | QUANTITY | 0.98+ |
10 years | QUANTITY | 0.98+ |
two ways | QUANTITY | 0.98+ |
five | QUANTITY | 0.98+ |
Rich | PERSON | 0.98+ |
10 | DATE | 0.97+ |
thousands | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
Red Hat Ansible | ORGANIZATION | 0.97+ |
first dimension | QUANTITY | 0.97+ |
half a decade ago | DATE | 0.95+ |
Ruby | TITLE | 0.94+ |
this week | DATE | 0.94+ |
AnsibleFest | ORGANIZATION | 0.94+ |
fourth March | DATE | 0.92+ |
2000 mile | QUANTITY | 0.91+ |
15 | DATE | 0.91+ |
about 22,000 | QUANTITY | 0.9+ |
Ansible Fest | EVENT | 0.89+ |
10 | QUANTITY | 0.89+ |
20 years ago | DATE | 0.88+ |
Edge | TITLE | 0.86+ |
NFL | EVENT | 0.83+ |
Ansible Fest | EVENT | 0.82+ |
AnsibleFest 2021 | EVENT | 0.75+ |
a second | QUANTITY | 0.73+ |
last couple of years | DATE | 0.72+ |
Ansible Automation | ORGANIZATION | 0.65+ |
hundred | QUANTITY | 0.63+ |
AAP | TITLE | 0.57+ |
pandemic | EVENT | 0.51+ |
couple | QUANTITY | 0.51+ |
Rajiv Mirani and Thomas Cornely, Nutanix | .NEXTConf 2021
(upbeat electronic music plays) >> Hey everyone, welcome back to theCube's coverage of .NEXT 2021 Virtual. I'm John Furrier, host of theCube. We have two great guests, Rajiv Mirani, who's the Chief Technology Officer, and Thomas Cornely, SVP of Product Management. Day Two keynote: product, the platform, announcements, news. A lot of people, Rajiv, are super excited about the platform, uh, moving to a subscription model. Everything's kind of coming into place. How are the customers, uh, seeing this? How are they adopting hybrid cloud? It's hybrid, hybrid, hybrid, data, data, data. That's where the puck is right now. You guys are there. How are customers seeing this? >> Mirani: Um, great question, John, and by the way, great to be back here on theCube again this year. So when we talk to our customers, pretty much all of them agreed that for them, the ideal state that they want to be in is a hybrid world, right? That they want to essentially be able to run both on the private data center and the public cloud, and sort of have a common platform, common experience, common, uh, skillset, the same people managing workloads across both locations. And unfortunately, most of them don't have that tooling available today to do so, right. And that's where the platform, the Nutanix platform, has come a long way. We've always been great at running in the data center, running every single workload, and we continue to make great strides on our core, with increased performance for the most demanding, uh, workloads out there. But what we have done in the last couple of years is also extend this platform to run in the public cloud and essentially provide the same capabilities, the same operational behavior, across locations.
And that's when you're seeing a lot of excitement from our customers, because they really want to be in that state, to have the common tooling across locations, and as you can imagine, we're getting traction. Customers who want to move workloads to public cloud don't want to spend the effort to refactor them. Or there are customers who really want to operate in a hybrid mode, with things like disaster recovery, cloud bursting, workloads like that. So, you know, I think we've made a great step in that direction. And we look forward to doing more with our customers. >> Furrier: What is the big challenge that you're seeing with this hybrid transition from your customers, and how are you solving that specifically? >> Mirani: Yeah. If you look at how public and private operate today, they're very different in the kind of technologies used. And most customers today will have two separate teams, like one for their on-prem workloads, using a certain set of tooling, and a second, completely different team managing a completely different set of workloads, with different technologies. And that's not an ideal state in some senses, that's not true hybrid, right? It's like creating two new silos, if anything. And our vision is that you get to a point where both of these operate in the same manner, you've got the same people managing all of them, the same workloads anyway, with similar performance, similar SLAs. So they're literally going to get to a point where applications and data can move back and forth. And that's where I think the real future is for hybrid. >> Furrier: I have to ask you a personal question. As the CTO, you've got to be excited with the architecture that's evolving with hybrid and multi-cloud. I mean, it's pretty exciting from a tech standpoint. What is your reaction to that? >> Mirani: 100%, and it's been a long time coming, right? We have been building pieces of this over years.
And if you look at all the product announcements Nutanix has made over the last few years, and the acquisitions we've made and so on, there's been a purpose behind them. There's been a purpose to get to this model where we can operate a customer's workloads in a hybrid environment. So really, really happy to see all of that come together. Years and years of work finally bearing fruit. >> Furrier: Well, we've had many conversations in the past, but congratulations, and there's a lot more to do with so much more action happening. Thomas, you get the keys to the kingdom, okay, and on the product management side you've got to prioritize, you've got to put it together. What are the key components of this Nutanix cloud platform, the hybrid cloud, multi-cloud strategy that's in place? Because there's a lot of headroom there, but take us through the key components today and then how that translates into hybrid multi-cloud for the future. >> Cornely: Certainly, John, thank you again, and great to be here. Rajiv said it really nicely here: if you look at our portfolio at Nutanix, what we have is great technologies. They've been sold as a lot of different products in the past, right. And what we've done over the last few months is bring things together, simplify and streamline, and align everything around a cloud platform, right? And this is really the messaging that we're going after: it's not about the pieces of our solutions, but business outcomes for customers. And so we're focusing on pushing the cloud platform, which encompasses five key areas for us, which we refer to as cloud infrastructure, which is for running your workloads; cloud management, which is how you're going to go and actually manage, operate, automate, and get governance; and then services on top that center all around data, right? So we have unified storage, files and objects, data services. We have database services.
Now we have a set of desktop services, which is for EUC. So all of this, the big change for us, is something that, you know, you can consume in terms of solutions and consume on premises. As Rajiv discussed, you know, we can take the same platform and deploy it in public cloud regions now, right? So you can now get a seamless hybrid cloud, same operating model. But increasingly what we're doing is taking these solutions and re-targeting issues and problems for workloads running in native public clouds. So think of this as going after automation, governance, security, you know, files and objects, database services, wherever your workload is running. So this is taking this portfolio and reapplying it, targeting on prem, at the edge, in hybrid, and increasingly the public cloud natively. >> Furrier: That's awesome. I've been watching some of the footage, and I was noticing quite a lot of innovation around virtualized networking, disaster recovery, security, and data services. It's all good, and this is in your wheelhouse. I know you guys have been doing this for many, many years. I want to dive deeper into that, because the theme right now that we've been reporting on, and you guys are hitting right here in the keynote, is cloud scale is about faster development, right? Cloud native is about speed. It's about not waiting for these old departments, IT or security, to get back to them in days or weeks, and responding to either policy or some changes. You've got to move faster. And data, data is critical in all of this. So we'll start with virtualized networking, because networking again is a key part of it. The developers want to go faster. They're shifting left. Take us through the virtualization piece and how important that is. >> Mirani: Yeah, that's actually a great question as well. So if you think about it, virtual networking is the first step towards building a real cloud-like infrastructure on premises that extends out to include networking as well.
So one of the key components of any cloud is automation. Another key component is self service, and with the APIs we give you on virtual networking, all of that becomes much simpler, much more possible than having to, you know, work with someone to reconfigure physical networks and switches. We can do that in a self service, much more automated way. But beyond that, the notion of virtual networks is really powerful because it helps us to now essentially extend networks and replicate networks anywhere, on the private data center, but in the public cloud as well. So now when customers move their workloads, we'd already made that very simple with our clusters offering. But if you peek behind the layers a little bit, it's like, well, yeah, but the network's not the same on the other side. So it means I have to go re-IP my workloads, create new subnets, and all of that. So there was a little bit of complication left in that process. With virtual networking, that goes away also. So essentially you can repeat the same network in both locations. You can literally move your workloads, no redesign of your network required, and still get those self service and automation capabilities, which is a great step forward. It really helps us complete the infrastructure as a service stack. We had great storage capabilities before, we had great compute capabilities before, and networking is sort of the third leg of all of that. >> Furrier: Talk about the complexity here, because I think a lot of people will look at the devops movement and say, infrastructure as code. When you go to one cloud, it's okay, you can, you know, make things easier, programmable. When you start getting into data centers, private data centers, or essentially edges now, because if it's a distributed cloud environment, cloud operations, it's essentially one big cloud operation. So the networks are different. As you said, this is a big deal. Okay.
Making infrastructure as code happen in multiple environments across multiple clouds is not trivial. Could you talk about the main trends, how you guys see this evolving, and how you solve that? >> Mirani: Yeah. Well, the beauty here is that we are actually creating the same environment everywhere, right? From the point of view of networking, compute, and storage, but also things like security. So when you move workloads, the security posture also moves, which is also super important. It's a really hard problem, and something a lot of CIOs struggle with, and having the same security posture in public and private clouds helps there as well. So with this clusters offering and our on-prem offering completing the infrastructure as a service stack, you now have this capability where your operations really are unified across multicloud, hybrid cloud, anywhere you run. >> Furrier: Okay, so if I have multiple cloud vendors, there are different vendors. You guys are creating a connection unifying those three. Is that right? >> Mirani: Essentially, yes. We're running the same stack on all of them and abstracting away the differences between the clouds, so that you can run operations. >> Furrier: And the benefits to the customers are what? What's the main benefit there? >> Mirani: Essentially, they don't have to worry about where their workloads are running. They can pick the best cloud for their workloads, seamlessly move them between clouds, move their data over easily, and essentially stop worrying about getting locked into a single cloud, either in a multi-cloud scenario or in a hybrid cloud scenario, right. There are many, many companies now that started with a cloud first mandate, but over time realized that they want to move workloads back to on-prem, or the other way around.
They have traditional workloads that they started on prem and want to move to public cloud now. And we make that really simple. >> Furrier: Yeah. It's kind of a trick question. I wanted to tee that up for Thomas, because I love that kind of horizontal scale, that's what the cloud's all about, but when you factor data into it, this is the sweet spot, because this is where, you know, I think it gets really exciting and complicated too, because, you know, data can get unwieldy pretty quickly. You've got state, you've got multiple applications. Thomas, what can you share on the data aspect of this? This is super, super important. >> Absolutely. It's, you know, it's really our core source of differentiation, when you think about it. That's what makes Nutanix special, right? In the market. When we talk about cloud, right, actually, if you've been following Nutanix for years, you know, we've been talking a lot about making infrastructure invisible, right? The new way for us to talk about what we're doing, with our vision, is to make clouds invisible so that in the end, you can focus on your own business, right? So how do you make cloud invisible? Lots of the technology is at the application layer, to go and containerize applications, you know, make them portable, modernize them, make them cloud native. That's all fine when you're talking about stateless containers, they're the simplest thing to move around. Right. But as we all know, you know, applications at the end of the day rely on data, and managing the data across all of these different locations, that's almost a given when you're talking about distribution. You can go straight from edge to on-prem to hybrid, to different public cloud regions. You know, how do you go and keep control of that and get consistency across all of this, right? So part of it is being aware of where your data is, right?
But the other part is the consistency of your data services, regardless of where you're running. And so this is something that we look at with the cloud platform, where we provide you the cloud infrastructure to go and run the applications, but we also built into the cloud platform all of your core data services, whether you want to consume file services, object services, or database services to really support your application. And that will move with your application. That is the key thing here: by bringing everything onto the same platform, you now can see all operations, regardless of where you're running the application. The last thing that we're adding, and this is a new offering that we're just launching as a service, is called Data Lens, which is a solution that gives you visibility and allows you to go and get better governance around all your data, wherever it may live, across on-prem, edge, and public clouds. That's a big deal again, because to manage data, you first have to make sense of it and get control over it. And that's what Data Lens is going to be all about. >> Furrier: You know, one of the things we've been reporting on is data is now a competitive advantage, especially when you have workflows involved, um, super important. Um, how do you see customers going to the edge? Because if you have this environment, how does the data equation, Thomas, go to the edge? How do you see that evolving? >> Cornely: So, yeah, I mean, edge is not one thing. And that's actually the biggest part of the challenge, defining what the edge is, depending on the customer that you're working with. But in many cases you get data ingested or treated at the edge that you then have to go move to either your private cloud or your public cloud environment to go and basically aggregate it, analyze it, and get insights from it. Right?
So this is where a lot of our technologies, whether it's the Objects offering with streaming built in, allow you to go and do the ingest over great distances over the network, right? And then have your common data lake to actually do analytics on, over our own object store. Right? Again, with the announcements we brought into our storage solutions here, we want to then actually analyze it directly on the object store solution, using things like S3 Select built into our protocols. So again, make it easy for you to go and ingest anywhere, consolidate your data, and then get value out of it, using some of the latest announcements on the platform. >> Furrier: Rajiv, databases are still the heart of most applications in the enterprise these days, but it's not just the data, there's a lot of different data moving around. You have a lot of new data engineering platforms coming in. A lot of customers are scratching their heads, and they want to be ready, and be ready today. Talk about your view of the database services space and what you guys are doing to help enterprises operate and manage their databases. >> Mirani: Yeah, it's a super important area, right? I mean, databases are probably the most important workload customers run on premises, and pretty close on the public cloud as well. And if you look at it, the tooling that's available on premises has been fairly traditional, but the clouds brought in a wave of innovation. We're looking at things like Amazon's Relational Database Service, which makes it an order of magnitude simpler for customers to manage the database. At the same time, there's also a proliferation of databases. You have the traditional Oracle and SQL Server, but you also have open source MongoDB and MySQL and a lot of Postgres, a lot of different kinds of databases that people have to manage. And now it just becomes unwieldy: I have bespoke tooling for each one of them.
So with our Era product, what we're doing is essentially creating a data management layer, a database management layer, that unifies operations across your databases and across locations, public clouds and private clouds. So all the operations that you need, which are very complicated with traditional tooling: provisioning of databases, backing them up and restoring them, providing true time machine capabilities so you can roll back transactions, copy data management for your databases. All of that is built into Era for a wide variety of database engines, your choice of database engine at the back end. And the new capabilities we're adding sort of extend the lead that we have in that space, right? So one of the things we announced at .NEXT is one-click storage scaling. One of the common problems with databases is that as they grow over time, they end up running out of storage capacity. Now, re-provisioning storage for a database and migrating all the data over, that's weeks and months of work, right? Well, guess what? With Era, you can do that in one click. It uses the underlying AOS scale-out architecture to provision more storage, and it does it with zero downtime. So on the fly, you can resize your databases. Beyond that, we're adding some security capabilities, we're adding some capabilities around resilience. Era continues to be a very exciting product for us. And one of the things that we are really excited about is that it can really unify database operations between private and public. So in the future, we can also offer a version of Era which operates on native public cloud instances, and we're really excited about that. >> Furrier: Yeah. And you guys got that 2x performance on scaling up databases and analytics. Now the big point there, since you brought up security, I've got to ask you, how are you guys talking about security? Obviously it's embedded in from the beginning.
I know you guys continue to talk about that, but talk about, Rajiv, the security, that's on everyone's mind. It keeps evolving. You're seeing ransomware continuing to happen more and more, and that's just the tip of the iceberg. How are you guys helping customers stay secure? >> Mirani: Security is something that you always have to think about as defense in depth, right? There's no one product that's going to do everything for you. That said, what we are trying to do is essentially cover the gamut of detection, prevention, and response with our security, and ransomware is a great example of that, right. We've partnered with Qualys to essentially be able to do a risk assessment of your workloads, to basically look into your workloads, see whether they have been patched, whether they have any known vulnerabilities, and so on, to try and prevent malware from infecting your workloads in the first place, right? So that's the first line of defense. Now, no system will be perfect. Some malware will probably get in anyway, but then you detect it, right. We have a database of 4,000 ransomware signatures that you can use to detect ransomware if it does infect the system. And if that happens, we can prevent it from doing any damage by putting your file systems and storage into read-only mode, right. We can also prevent lateral spread of ransomware through micro-segmentation. And finally, if malware were to evade all those defenses and actually encrypt data on a filer, we have immutable snapshots that let you recover from those kinds of attacks. So it's really a defense in depth approach.
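The detect-and-respond step Rajiv outlines, matching content against known ransomware signatures and flipping storage to read-only on a hit, can be sketched in a few lines. This is an illustrative toy, not Nutanix's implementation; the signature set and the `Filer` class are hypothetical:

```python
import hashlib

# Hypothetical signature set; a real product ships thousands of entries.
# This example entry is the SHA-256 of the bytes b"foo".
KNOWN_RANSOMWARE_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def sha256(data: bytes) -> str:
    """Hash a blob so it can be compared against the signature database."""
    return hashlib.sha256(data).hexdigest()

class Filer:
    """Toy file server that restricts writes when malware is detected."""

    def __init__(self):
        self.read_only = False

    def scan_and_respond(self, blobs):
        """Count known-bad blobs and flip storage to read-only on any hit."""
        hits = [b for b in blobs if sha256(b) in KNOWN_RANSOMWARE_HASHES]
        if hits:
            self.read_only = True  # stop further encryption damage
        return len(hits)
```

A real product would match on many more indicators (file entropy, mass-rename patterns, honeypot files), but the shape is the same: detect, then immediately restrict writes so immutable snapshots stay recoverable.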
And in keeping with that, you know, we also have a rich ecosystem of partners, Qualys is one of them, along with others in that market sector that we work with closely, to make sure that our customers have the best tooling around and the simplest way to manage security of their infrastructure. >> Furrier: Well, I've got to say, I'm very impressed, guys, by the announcements from the team. We've been following Nutanix from the beginning, as you know, and now it's at the next phase of the inflection point. I mean, looking at my notebook here from the announcements: the VPC virtual networking, DR observability, zero trust security, workload governance, performance, expanded availability, and AWS elastic DR. Okay, we'll get to that in a second. Clusters on Azure preview, cloud native ecosystem, cloud control plane. I mean, besides all the buzzword bingo that's going on there, this is cloud, this is a cloud native story. This is distributed computing. This is virtualization, containers, cloud native, kind of all coming together around data. >> Cornely: What you see here is, I mean, it is clear that it is about modern applications, right? And this is about shifting strategy in terms of focusing on the pieces that we're going to be great at. And a lot of these are around data: giving you data services, data governance, giving you an invisible platform that can be running in any cloud. And then partnering, right. And this is just recognizing what's going on in the world, right? People want options, customers want options. When it comes to cloud, they want options as to where they're running their workloads, and options in terms of what they're using to build their modern applications. Right? So our big thing here is being the best platform to go and actually support developers coming in to build and run their new and modern applications.
That means, for us, supporting a broad ecosystem of partners on our platform. You know, we announced our partnership with Red Hat a couple of months ago, right? And this is going to be a big deal for us because, again, we're bringing together two leaders in the industry that are eminently complementary when it comes to providing you a complete stack to go and build, run, and manage your cloud native applications. You can do that on premises, utilizing us as the preferred environment to run Red Hat OpenShift, or you can do this out in the public cloud, and again, make it seamless and easy to move the applications, and the data services that support them, whether they're running on prem, in hybrid, or in a public cloud. So cloud native is a big deal, but when it comes to cloud native, the way we look at this, it's all about giving customers choice: choice of platform services and choice of infrastructure services. >> Furrier: Yeah. Let's talk about the Red Hat folks, Rajiv. You know, they're an operating system thinking company. You look at the internet now, the cloud and edge and on-premise, it's essentially an operating system. You need your backup and recovery, you need disaster recovery, you need the HCI, you need to have all of these elements part of the system. It's building on top of the existing Nutanix legacy, the roots and the ecosystem, with new stuff. >> Mirani: Right? I mean, in fact, the Red Hat partnership is a great example of, you know, the perfect marriage, if you will, right? It's the best in class platform for running cloud-native workloads and the best in class infrastructure platform, two really great companies coming together. So really happy that we could get that done. You know, the point here is that cloud native applications still need infrastructure to run on, right?
And then on that infrastructure, if anything, the demands are growing, since it's no longer that world of, I have some block storage, I have some filers, and that's about it. People are using things like object stores, they're using databases increasingly. They're using Kafka and MapReduce and all kinds of data stores out there. And the platform must be great at supporting all of that. And that's where, as Thomas said earlier, data services, data storage, those are our strengths. So that's certainly building from the platform up, and from there onwards, platform services, great to have right out of the box. >> Furrier: People still forget, you know, it's still hardware and software working together behind the scenes. The old joke we have here on theCube is serverless is running on a bunch of servers. So, you know, that's the way it's going. It's really the innovation. This is truly infrastructure as code. What's happening is super exciting. Rajiv, Thomas, thank you guys for coming on. Always great to talk to you guys. Congratulations on the amazing platform you guys are developing. Looks really strong. People are giving it rave reviews, and congratulations on your keynotes. >> Cornely: Thank you for having us
SUMMARY :
How are the customers, uh, seeing this? the effort to refactor them. the same workloads anyway, As the CTO, you've got be excited with the And if you look at all get the keys to the kingdom, of different products in the because the theme right now So one of the key components So the networks are different. the beauty here is that we Is that right? between the clouds that you They don't have to the data aspect of this? Lots of technology is at the application layer to go and one of the things we've the edge that you then have are still the heart of So on the fly, you can resize Now the big part point there, since you of all the 4,000 ransomware of the inflection point. the way we look at this, now in the cloud and edge, the perfect marriage, if you will, right? Always great to talk to you guys. This is theCube's coverage
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cornely | PERSON | 0.99+ |
Mirani | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Thomas | PERSON | 0.99+ |
Thomas Cornely | PERSON | 0.99+ |
Rajiv | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Qualys | ORGANIZATION | 0.99+ |
two separate teams | QUANTITY | 0.99+ |
Rajiv Mirani | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
4,000 ransomware | QUANTITY | 0.99+ |
two leaders | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
one click | QUANTITY | 0.99+ |
both locations | QUANTITY | 0.98+ |
first line | QUANTITY | 0.98+ |
red hat | ORGANIZATION | 0.98+ |
first mandate | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
this year | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
three | QUANTITY | 0.97+ |
SQL | TITLE | 0.97+ |
one-click | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.96+ |
each one | QUANTITY | 0.96+ |
second | QUANTITY | 0.96+ |
two great guests | QUANTITY | 0.96+ |
Kafka | TITLE | 0.96+ |
Azure | TITLE | 0.95+ |
two new silos | QUANTITY | 0.95+ |
EMC | ORGANIZATION | 0.95+ |
both locations | QUANTITY | 0.94+ |
Map Reduce | TITLE | 0.94+ |
one cloud | QUANTITY | 0.93+ |
Devers | ORGANIZATION | 0.91+ |
AOS | TITLE | 0.91+ |
third leg | QUANTITY | 0.91+ |
Day Two | QUANTITY | 0.91+ |
single | QUANTITY | 0.9+ |
five key areas | QUANTITY | 0.89+ |
Arab | OTHER | 0.88+ |
single cloud | QUANTITY | 0.87+ |
great companies | QUANTITY | 0.86+ |
couple of months ago | DATE | 0.85+ |
2021 | DATE | 0.84+ |
Mongo | TITLE | 0.82+ |
Thomas Hazel, ChaosSearch & Jeremy Foran, BAI Communications | AWS Startup Showcase
(upbeat music) >> Hey everyone, I'm John Furrier with theCube. We're here in Palo Alto, California for a remote interview and session for theCube presents AWS Startup Showcase, the next big thing in AI, security, and life sciences. I'm John Furrier. We're here with a great segment on cloud, the next big thing in cloud, with Chaos Search: Thomas Hazel, Chief Technology and Science Officer of Chaos Search, joined by Jeremy Foran, the head of data analytics, the bad boy of data analytics as they say, at BAI Communications. Jeremy, Thomas, great to have you on. >> Great to be here. >> Pleasure to be here. >> So we're going to be talking about applying large scale log analytics to building the future of the transit industry. Obviously Telco's a big part of that, smart cities, you name the use case: self-driving trucks, cars, you name it, everything's now edge. The edge is super valuable, it's a new kind of last mile if you will, it's moving fast, it's mobile. This is a huge deal. Let's get into it, Thomas. What's the big story around this session? >> Well, we provide the unique ability to take all that edge data and drive it into a data lake offering where we provide data analytics, both in logs and BI, and coming out with ML this year into next. So our unique play is transforming customers' cloud object storage into an analytical platform. And really, I think with BAI it's log analytics specifically, where, you know, there's a lot of data streams from all those devices going into a lake, and we transform their lake into analytics for driving, I guess, operational analysis. >> You know, Jeremy, I remember back in the day, I'm old enough to remember when the edge was the remote switch or campus hub or something. And then even on the Telco side, there was no wifi back in 2000, and you know, if someone was driving in a car and got any signal, they were lucky. Now, you know, there's no perimeter, you have unlimited connectivity everywhere.
This has opened up more of an omnichannel data problem. How do you see that world? Because you still got more devices pushing out at this edge and it's getting super local, right? Even on the body, even on people in the car. So certainly a lot of change on the infrastructure side. What does that pose for a data challenge? >> Yeah, I would say that, you know, users always want more, more bandwidth, more performance, and that requires us to create more systems that require more complexity to deliver that user experience that we're very proud of. And with that complexity comes, you know, exponentially more data. And so on one of the wifi networks we offer in the Toronto subway system, TCONNECT, you know, we see 100,000 to 200,000 unique users a day, and you can imagine just the amount of infrastructure to support that, so that everyone has a seamless experience and can get their news and emails and even stream media while they're waiting for the subway. >> So you guys provide state-of-the-art infrastructure for cell, wifi, broadcast, radio, IP networks, basically, I mean, I call it the smart city kind of go-to. But that's basically anything involving kind of that edge piece. This is a huge thing. So as smart cities are on the table, and you're seeing 5G being called more of an enterprise app where it's feeding large, dense areas of people, this is now a new, modern version of what I would call the smart city blueprint. What's changed in your mind on this whole modernization of this smart city infrastructure concept? What's new? What's cutting edge? >> Yeah, I would say that, you know, there was an explosion of data, and a lot of our insights aren't coming from one system anymore. They're coming from collecting data from all of the different pieces, the different infrastructure, whether that's your fiber infrastructure or your wireless infrastructure, and then to solve problems you need to correlate data across those systems.
So we're seeing more and more technologies that allow you to do that correlation. And that's really where we're finding tons of value, right? >> Thomas, take us through what you guys do as a product, the value proposition, the secret sauce, and why I'm here with Jeremy. Why is this conversation important for the folks watching? What's the connection between ChaosSearch and BAI Communications? >> Well, it's data, right? And lots of it. So our unique platform allows people like Jeremy to stream all this data, right? In, you know, today's world, terabytes go to petabytes really easily, billions go to trillions really easily, and so providing the analysis of that data for their operations is challenging, particularly based on technology and architectures that have been around for a long time. So what we do here at ChaosSearch is give BAI the ability to stream all these devices, all these services into one centralized data lake on their cloud object storage, where we connect to that cloud object storage and transform it into an analytical database to do, in this case, log analytics, and do it seamlessly, easily, where a new workload, a new stream, just streams into that lake. And we, as a service, take over: we discover it, we index it, and we publish well-known open APIs and visualizations so that they can focus on their business, not all the operational data pipeline, database, and data engineering type work that, again, at these types of scales is frankly a nightmare. >> You know, one of the things that we've always observed on The Cube when you see new things come out that are really cool, groundbreaking products like you guys are doing, it's always a challenge to manage the cost and complexity of bringing in the new. So Jeremy, take us through this tech stack here, because you know, sometimes it might be unwieldy just from a tech stack perspective, never mind the business logic or the business processes that have got to be either unwound or changed.
Can you take us through the IT stack that's critical to support your area? >> Yeah, absolutely. So with all the various different equipment, you know, to provide our public wifi and our DAS, carrier-agnostic LTE and 5G networks, you know, we need to be able to adhere to PCI compliance and ISO 27001, and that, you know, requires us to keep a tremendous amount of our data. And the challenge we were facing is how do we do that cost-effectively, and not have to make any sort of compromises on how we do that? A lot of times you'll find you don't know the value of your data today until tomorrow. An example would be COVID. You know, when we were storing data two years ago we weren't planning for a pandemic, but now that we were able to retain that data and look back, we can see a tremendous amount of value in trying to forecast how our systems will recover when things get back to normal. And so when I met Thomas and we were sort of talking about how we were going to solve some of these data retention problems, he started explaining to me their compression and some of the performance metrics of their compression. And, you know, I said, oh, middle-out compression. And it's been a bit of a running joke between me and him, and I'm sure others, but it's incredibly impressive, the amount of data we're able to store at that kind of cost, right? >> What problem did he solve for you? Because I mean, these guys, honestly, you know, the startups have a lot and the cloud's enabling more value now, we're seeing this, but when you look at this, what was your core problem that you had? >> Yeah, so, I mean, primarily this is for our syslog server. And syslog servers today aren't what they were 10, 15 years ago, where you just sort of had a machine and if something broke you went and looked, right? Now they're very complex, that data is feeding various systems and third-party software.
So, you know, we're actively looking for changes in patterns, and we have our, you know, security teams auditing these for penetration testing and such. And then getting that data to S3 so that we could have it, you know, for two, three years of storage. Well, the problem we were facing is that all of these different systems we needed to feed and retain data for, we couldn't do that on site. We wanted to use S3, but when we were doing some projections, it's like, we don't really have the budget for all of these places. Meeting Thomas and working with ChaosSearch, you know, using their compression brought those costs down drastically. And then as we've been working with them, the really exciting thing is they're bringing more and more features to that service or offering. So, you know, first it was just storing that data away. And now we're starting to build solutions off of that sitting in storage. So that's where it gets really exciting, because you know, it's nothing to start getting anomaly detection off those logs, which, you know, originally it was just, we need to store them in case somebody needs them two, three years from now. >> So Thomas, if I get this right, then what I'm hearing is, obviously, I've put aside the complexity and the governance side, the regulations, for a minute, just generally: data retention as a key value proposition, and having data available when you need it, and doing it in a very cost-effective, simple way. It sounds like what you guys are offering. Is that right? >> Yeah, I mean, one key aspect of our solution is retention, right? Those are a lot of the challenges, but at the same time we provide real-time notification like a classic log analytics type platform: alerting, monitoring. The key thing is bringing both those worlds together and solving that problem.
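The log-shipping workflow Jeremy describes (batching syslog lines, compressing them, and landing them in a date-partitioned S3 bucket) can be sketched in a few lines. The bucket name, host name, and key layout here are illustrative assumptions, not BAI's or ChaosSearch's actual pipeline, and the upload call itself is left as a comment:

```python
import gzip
from datetime import datetime, timezone

def make_object_key(host: str, ts: datetime) -> str:
    # A date-partitioned key layout keeps the lake organized (not "swampy"):
    # one prefix per host per day makes later discovery and indexing cheap.
    return f"syslog/{host}/{ts:%Y/%m/%d}/{ts:%H%M%S}.log.gz"

def pack_batch(lines: list[str]) -> bytes:
    # Compress a batch of syslog lines before shipping; even plain gzip
    # cuts transfer and storage cost before any index-level compression.
    return gzip.compress("\n".join(lines).encode("utf-8"))

# The ship itself would be a single PUT, e.g. with boto3 (not run here):
#   boto3.client("s3").put_object(Bucket="my-log-lake", Key=key, Body=body)

ts = datetime(2021, 6, 1, 12, 30, 0, tzinfo=timezone.utc)
key = make_object_key("subway-ap-17", ts)
body = pack_batch(["<134>Jun  1 12:30:00 ap17 assoc: client joined",
                   "<134>Jun  1 12:30:01 ap17 dhcp: lease granted"])
print(key)  # syslog/subway-ap-17/2021/06/01/123000.log.gz
```

In practice an off-the-shelf shipper (Fluentd, Logstash, and the like, as Jeremy notes) does this batching for you; the point is only that landing logs in object storage is a small, cheap step.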
And so this, you know, middle-out, well, to be frank, we created a new technology called what we call Chaos Index, that is a database index that is wonderfully small, as we're indicating, but also provides all the features that make cloud object storage high performance. And so the idea is that you use this lake offering to store all your data in a cost-effective way, but our service allows you to analyze it both from a long-retention perspective as well as a real-time perspective, and bringing those two worlds together is so key, because typically you have siloed solutions, and whether it's real-time at scale or retention at scale, the cost, complexity, and time to build out those solutions, I know Jeremy knows as well, a lot of folks come to us to solve those problems, because you know, when you're dealing with, you know, terabytes and up, you know, these things get complicated and, to be frank, fall over quite often. >> Yeah. Let me just ask you the question that's probably on the mind of everyone who's watching, and you guys probably have both heard this many times, because a lot of people just throw the data lake solution around like, you know, they whitewash their kind of old legacy solutions with data lake, store it on a data lake. It's been called a data swamp. So people are fearful that, okay, I love this idea of a data lake, who doesn't like throwing data into a repository, having it available at will with notifications, all these secret magic beans that just magically create value. But the doubt is, I don't want it to turn into a data swamp. So Thomas and Jeremy, talk about that concern. How do you mitigate that? How do you talk to that? Because if done properly, there's huge value in having a control plane or some sort of data system that is going to be tied in with signals and just storage retention. So I see the value. How do you manage the concern that people might say, hey, I don't want a data swamp? >> Yeah, I'll jump into that.
So, you know, let's just be frank, Hadoop was a great tool for a very narrow scenario. I think that data swamp idea came out because people were using the tooling in an incorrect way. I've always had the belief that data lakes are the future. You just have to have the right service, the right philosophy, to leverage it. So what we do here at ChaosSearch is we allow you to organize it, discover it, automatically index that data, so that swamp doesn't get swampy. You know, when you stream data into your lake, how do you organize it such that it has a nice structure? How do you transform that data into value? So with our service we actually start where the storage begins, not at an endpoint, not an archive. So we have tooling and services that keep your lake from being swampy, to be clear. But the key value is the benefits of the lake: the cost-effectiveness, the reliability, security, the scale, those are all the benefits. The problem was that no one really made cloud object storage a first-class citizen, and we've done that. We've addressed the swamp nature but provided all the value of analysis. And at that cost metric, at that scale, no one can touch cloud object storage, you just can't. But what we've done is cracked the code of how you make it analytical. >> Jeremy, I want to get your thoughts on this too, on your side, I mean, as a practitioner and customer of these solutions, you know, the concern is, am I missing anything? And I've been a big proponent of data retention for many, many years. You know, Dave and everybody in our Cube know that I bang on the table all the time: store your data, be a data hoarder, because it's going to come back and be valuable. Costs are going down, so I'm a big fan of data retention. But the fear might be, what am I missing? Because machine learning starts to come in down the road, you got AI, and the more data you have that's accessible in real time, the more effective machine learning is.
Do you worry about missing anything, or do you just store everything? >> We store everything. Sometimes it's interesting where the value and insights come from in your data. Something that might seem trivial today, down the road offers tremendous, tremendous value. So one of the things we do, because we have wifi in the subway infrastructure, you know, is take that wifi data, and we can start to understand the flow of people in and out of the subway network. And we can take that and provide insights to the rail operators, which get them from A to B quicker. You know, when we built the wifi, it wasn't with the intention of getting Torontonians across the city faster. But that was one of the values that we were able to get from the data. In terms of, you know, Thomas's solution, I think one of the reasons we engaged him in the first place is because I didn't believe his compression. It sounded a little too good to be true. And so when it was time to try them out, you know, all we had to do was ship data to an S3 bucket. You know, there's tons of solutions to do that, and data shippers right out of the box. It took, you know, a few minutes, and then to start exploring the data we were in Kibana, or their dashboard, which is, you know, an interface that's easy to use. So we were, you know, within two days, getting the value out of that data that we were looking for, which is, you know, phenomenal. We've been very happy. >> Thomas, sounds like you've got a great testimonial here, and it's not like an easy problem that he's living with there. I mean, I think, you know, I was mentioning this earlier, and we're going to get into it now. There's regulations and there's certain compliance issues. First of all, everyone has this problem now, it's not just within that space.
But just the technical complexities of packets moving around: I'm on my wifi at this stop here, I'm jumping over here, and there's a ton of data, it's all over the place, it's totally unstructured. So it's a tough, tough test for you guys, ChaosSearch. So yeah, it's almost like the Mount Everest of customer testimonials. It's a big use case here. How does this translate to other clients? And talk about the governance and security controls, because I know this is highly regulated, and there's penalties involved on his side of the world, in Telco, for the providers that have these edge devices there's actually penalties and whatnot. So, not just commercial, it's maybe, you know, risk management, but here there's actually penalties. >> Absolutely. So, you know, centralizing your data has a real benefit of not getting in trouble, right? So you have one place, you store in one place, that's a good thing. But what we've done, and this was a key aspect of our offering, is we as ChaosSearch folks, we don't own the customer's data. We don't own BAI's data. They own the data. They give us access rights, in a very standard way, with cloud object storage roles and policies from Amazon: read-only access rights to their data. And so not owning a customer's data is a big selling point, not only for them, but for us, from a compliance, regulatory perspective. So, you know, unlike a lot of solutions where you move the data into them and now they are responsible, actually BAI owns everything. They provide access so that we can provide analysis, and they can turn that off at any point in time. We're also SOC 2 Type 1 and Type 2 compliant. You've got to do it, you know, in this world. You know, when we were young we ran at this because of all of these compliance scenarios that we would be in. But, you know, the long and short of it is, we're a transient service.
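The "customer owns the data" model Thomas describes rests on a standard S3 bucket-policy grant: the customer keeps the bucket and hands the analytics service only read access. A minimal sketch of that policy shape, with placeholder account and role names (this is the generic AWS policy structure, not ChaosSearch's actual policy):

```python
import json

def read_only_policy(bucket: str, reader_role_arn: str) -> dict:
    # Standard S3 bucket-policy shape: the owner grants only Get/List to
    # the analytics service's role. Removing the policy revokes access at
    # any point in time, which is the "turn off" behavior described above.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AnalyticsReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": reader_role_arn},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = read_only_policy("example-log-lake",
                          "arn:aws:iam::111122223333:role/analytics-reader")
print(json.dumps(policy, indent=2))
# Attaching it would be one call (not run here):
#   boto3.client("s3").put_bucket_policy(Bucket="example-log-lake",
#                                        Policy=json.dumps(policy))
```

Note there are no Put or Delete actions in the grant: the service can read and list, nothing else, which is what makes the ownership split clean from a compliance standpoint.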
The storage, cloud storage, is the source of truth where all data resides, and, you know, think about it: it's architecturally smart, it's cost-effective, it's secure, it's reliable, it's durable. But from a security perspective, having the customer own their own data is a big differentiation in the market, a big differentiation. >> Jeremy, talk about, on your end, the security controls surrounding the log management environments that span across countries with different regulations. Now you've got all kinds of policy dimensions and technical dimensions and topology dimensions. >> Yeah, absolutely. So how we approach it is we look at where we have offerings across the globe, and we figure out the sort of highest watermark level of adherence we need to hit, and then we standardize across that. And by shipping to S3, it allows us to enforce that governance really easily, and right to Tom's point, you know, we manage the data, which is very important to us, and we don't have to be worried about a third party, or if we want to change providers years down the road. Although I don't think anyone's coming out with 81% compression anytime soon (laughs). But yeah, so for us, it's about meeting those high standards and having the technologies that enable us to do it. And ChaosSearch is a very big part of that right now. >> All right, let me ask you a question, for the folks watching that are really interested in this topic: what would you say to them when evaluating ChaosSearch? Obviously, your use case is complex, but so are others, as enterprises start to have an edge. Obviously the security posture shifts, everything shifts. There's no more perimeter, and the data problem becomes acute to them. So the enterprises are going to start seeing what you've been living in your world. What's your advice to people watching? >> My advice would be to give them a try. You know, it's been really quite impressive.
The customer service has been hands-on, and they've been, you know, under-promising and over-delivering, which, when you have the kind of requirements to manage solutions in these very complex environments, cloud, local, you know, various data centers and such, that kind of customer service is very important, right? It enables us to continue to deliver those high-quality solutions. >> So Thomas, give us the overview of the secret sauce. You've got a great testimonial here. You got people watching. What's different now in the world that you're going after? What wave are you on? Talk to the people who are watching this and saying, okay, why ChaosSearch? Why are you relevant? Obviously there's some cool things you're doing. I love that. What's cool, what's relevant, and what's in it for them if they work with you? >> Yeah. So you know, that whole Silicon Valley reference, I actually got that from my patent attorney when we were talking. But yeah, no, we, you know, focused on: if we can crack this code of making data small, store small, move small, process small, but then make it multimodal access, make it virtual transformation, if we could do that, and we could transform cloud object storage into a high-performance analytical database, all these heavy, heavy problems, all that complexity, that scaffolding that you build to do these types of scale, would be solved. Now, what we had to focus on, and this has been my, I guess you'd say, life's passion, is working on a new data representation. And that's our secret sauce that enables a new architecture, a new service, where the customer focuses on their tooling, their APIs, their visualizations that they know and love. What we focus on is taking that data lake and, again, transforming it into an analytical database, both for log analytics, think of it like an Elasticsearch replacement, as well as a BI replacement for your SQL warehousing database.
And coming out later this year into 2022, ML support on one representation. You don't have to silo your information, you don't have to re-index your data for both. So Elasticsearch, SQL, and actually ML TensorFlow actions on the exact same representation. So think about the data retention, doing some post-analysis on all those logs of data, months, years, and then maybe setting up some triggers if you see some anomaly that's happening within your service. So you think about it: the hunt, with BI reporting, with predictive analysis, on one platform. Again, it sounds a little unicorn, I agree with Jeremy, maybe it doesn't sound true, but it's been a life's work. So it didn't happen overnight. And you know, it's eight years, at least, in the making, but I guess a life journey in the end. >> Well, you know, the timing is great. You know, all the database geeks out there who have been following the data industry know that, you know, there's a good point for structured data, but when you start getting into mechanisms, and they become a bottleneck or a blocker to innovation, you know, you're starting to see this idea of a data lake being, let the data kind of form, let it be. You know, I hate the word control plane, but more of a connective tissue between systems has become an interesting thing. So now you can store everything, so you know, no worries there, no blind spots, and then let the magic of machine learning, in the future, come around. So Jeremy, with that, I got to ask you, since you're the bad boy of data analytics at BAI Communications, head of data analytics, what do you look for in the future as you start to set this up? Because I can almost imagine, connecting the dots here in the interview: you got the data lake, you're storing everything, which is good. Now you have to create more insights and get ahead of the curve and provide some prescriptive and automated ways to do things better. What's your vision?
>> First I would just like to say that, you know, when astrophysicists talk about, you know, dark energy, dark matter, I'm convinced that's where Thomas is hiding the ones and zeros to get that compression, right? I don't know that to be fact, but I know it to be true. And then in terms of machine learning and these sort of future technologies which are becoming available, you know, starting from scratch and trying to build out, you know, models that have value, you know, that takes a fair amount of work. And that landscape keeps changing, right? Being able to push our data into an S3 bucket, and then, you know, retain that data and get anomaly detection on top of it, that's, I mean, that's something special, and that unlocks a lot of ability for, you know, our teams to very easily deliver anomaly detection, machine learning to our customers, without having to take on a lot of work to understand the latest and greatest in machine learning. So, I mean, it's really empowering to our team, right? And a tool that we're drawn to. >> Yeah, and I love the name, ChaosSearch, Thomas. I got to say, you know, it brings up the inside baseball around Chaos Monkey, which everyone knows was a DevOps tool to simulate day-two operations and disruptions in DevOps. But what you're really getting at is a whole new architecture that's beyond the DevOps movement, it's like next-gen architecture. Talk about that to the people watching who have a lot of legacy and want to transform over to a more enabling platform that's going to give them some headroom for their data. What do you say to them? How do they get started? What's their mindset? What are some first principles you can share? >> Well, you know, I always start with first principles, but you know, I like to say we're the next next gen. The key thing with the ChaosSearch offering is you can start today, without even ChaosSearch.
Stream your data to S3. We're going to make data lakes hip and cool again. And actually, Google it now: data lakes are hip and cool. So start streaming now, start managing your data in a well-formed, centralized viewpoint with security, governance, and cost-effectiveness. Then call the ChaosSearch shop, and we'll make access to it easy, simple, to ultimately solve your problems, whether it's a security issue, whether it's more performance issues at scale, right? And so when workloads can be added instantaneously in your data lake, it's game-changing, it's mind-changing. So for the DevOps folks where, you know, you're up all night trying to say, how am I going to scale from, you know, one terabyte today to 50 terabytes: don't. Stream it to S3. We'll take over, we'll worry about that scale pain. You worry about your job of security, performance, operations, integrity. >> That really highlights the cloud scale, the value proposition, as apps start to be using data as an input, not just as part of a repo. So great stuff. Thomas, thanks for sharing your life's work and your technology magic. Jeremy, thanks for coming on and sharing your use cases with us and how you are making it all work. Appreciate it. >> Thank you. >> My pleasure. >> Okay. This is The Cube's coverage, presenting the AWS Startup Showcase, the next big thing, here with ChaosSearch. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Thomas Scheibe | Cisco Future Cloud
(upbeat music) >> Narrator: From around the globe, it's theCUBE. Presenting Future Cloud. One event, a world of opportunities. Brought to you by Cisco. >> Okay. We're here with Thomas Scheibe, who's the vice president of Product Management, aka VP of all things Data Center Networking: SDN, cloud, you name it in that category. Welcome Thomas, good to see you again. >> Hey, same here. Thanks for having me on. >> Yeah, it's our pleasure. Okay. Let's get right into observability. When you think about observability, visibility, infrastructure monitoring, problem resolution across the network, how does cloud change things? In other words, what are the challenges that networking teams are currently facing as they're moving to the cloud and trying to implement hybrid cloud? >> Yeah. (scoffs) Yeah. Visibility, as always, is very, very important, and it's quite frankly not just the networking team, it's actually the application team too, right? And as you pointed out, the underlying impetus to what's going on here is, the data center is wherever the data is, and I think we said this a couple of years back. And really what happens is, the applications are going to be deployed in different locations, right? Whether it's in a public cloud, whether it's on-prem, and they're built differently, right? They're built as microservices, so the same application might actually be distributed as well. And so what that really means is you need, as an operator as well as actually a user, better visibility: "where are my pieces?", and you need to be able to correlate between where the app is and what the underlying network is that is in place in these different locations. So you actually have a good knowledge of why the app is running so fantastic, or sometimes not. So I think that's really the problem statement, what we're trying to go after with observability. >> Okay. Let's double click on that.
So a lot of customers tell me that you got to stare at log files until your eyes bleed, then you've got to bring in guys with lab coats who have PhDs to figure all this stuff out. >> Thomas: Yeah. >> So you just described, it's getting more complex, but at the same time, you have to simplify things. So how are you doing that? >> Correct. So what we basically have done is we have this fantastic product that is called ThousandEyes. And so what this does is basically (chuckles), as the name, which I think is a fantastic, fantastic name: you have these sensors everywhere, and you can have a good correlation on links between, if I run from a site to a site, from a site to a cloud, from cloud to cloud. And you basically can measure what the performance of these links is. And so what we're doing here is we're actually extending the footprint of the ThousandEyes agent, right? Instead of just having an agent on a virtual machine in the cloud, we are now embedding them in the Cisco network devices, right? We announced this with the Catalyst 9000. And we're extending this now to our 8000 Catalyst product line for the SD-WAN products, as well as to the data center products, the Nexus line. And so what you see is, you know, you have a ThousandEyes, you get a million insights, and you get a billion dollars of improvements for how your applications run. And this is really the power of tying together the footprint of what the network is with the visibility of what is going on. So you actually know the application behavior that is attached to this network. >> I see. So, okay. So as the cloud evolves, it expands, it connects, you're actually enabling ThousandEyes to go further, not just confined within a single data center location, but out to the network, across clouds, et cetera. >> Thomas: Correct.
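The agent idea Thomas describes (sensors that repeatedly time a link and report the distribution) can be illustrated with a toy probe. This is purely illustrative and not the ThousandEyes API or agent; a real sensor would time a TCP connect or an HTTP GET toward the remote site or cloud region rather than a local call:

```python
import statistics
import time

def probe(action, samples: int = 5) -> dict:
    # Time an operation repeatedly and summarize, the way an agent-style
    # sensor reports link latency. Here `action` stands in for whatever
    # network operation (connect, GET) the real agent would measure.
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        action()
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return {"min_ms": min(timings_ms),
            "p50_ms": statistics.median(timings_ms),
            "max_ms": max(timings_ms)}

# Demo with a stand-in "link": a 1 ms sleep.
report = probe(lambda: time.sleep(0.001))
print(report["min_ms"] <= report["p50_ms"] <= report["max_ms"])  # True
```

Running many such probes from many vantage points, and correlating the summaries centrally, is the essence of the site-to-site, site-to-cloud, and cloud-to-cloud measurement described above.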
>> Wherever the network is, you're going to have a ThousandEyes sensor, and you can bring this together, and you can quite frankly pick, if you want to say, hey, I have my application in public cloud provider A, domain one, and I have another one in domain two, I can monitor that link. I can also monitor, I have a user that has a campus location or a branch location, I can put an agent there, and then I can monitor the connectivity from that branch location all the way to, let's say, the corporation's data center or headquarters, or to the cloud. And I can have these probes and just have visibility, in saying, hey, if there's a performance issue, I know where it is. And then I obviously can use all the other tools that we have to address those. >> All right, let's talk about the cloud operating model. Everybody tells us that, you know, it's really the change in the model that drives big numbers in terms of ROI. And I want you to maybe address how you're bringing automation and DevOps to this world of hybrid, and specifically, how is Cisco enabling IT organizations to move to a cloud operating model as that cloud definition expands? >> Yeah, no, that's another interesting topic beyond the observability. So really what we're seeing, and this has been going on for, I want to say, a couple of years now, is this transition from operating infrastructure as a networking team to operating it more like a service, like what you would expect from a cloud provider, right? This is really around the networking team offering services like a cloud provider does. And that's really what the meaning of cloud operating model is, right? Whether this is infrastructure running in your own data center, whether that's linking that infrastructure with whatever runs in the public cloud, you're operating it like a cloud service. And so we have been on this journey for a while.
So one of the examples that we have: we're moving some of the control software assets that customers today can deploy on-prem to an instance that they can deploy in a cloud provider, and just basically instantiate things there and then just run it that way, right? And so the latest example for this is that our Identity Services Engine is now in limited availability on AWS, and it will become available mid this year, both on AWS and Azure, as a service. You can just go to the Marketplace, you can load it there, and you can start running your policy control in the cloud, managing your access infrastructure in your data center, in your campus, wherever you want to do it. And so that's just one example of how we see our customers' network operations teams taking advantage of a cloud operating model and basically deploying their tools where they need them and when they need them. >> Dave: So, what's the scope of, I hope I'm saying it right, ISE, right? I-S-E, I think you call it ISE. What's the scope of that? Like, for instance, can it, in effect, you know, address, simplify my security approach? >> Absolutely. That's now coming to what is the beauty of the product itself, yes. What you can do is really, a lot of people are talking about, how do I get to a Zero Trust approach to networking? How do I get to a much more dynamic, flexible segmentation in my infrastructure, again, whether this is campus access as well as the data center? And ISE helps you there. You can use it as a point to define your policies and then interconnect from there, right. In this particular case, with ISE in the cloud as a software load, you now can connect and say, hey, I want to manage and program my network infrastructure in my data center or my campus, going to the respective controller, whether it's DNA Center for the campus or whether it's the ACI policy controller.
And so, yes, what you get as an effect out of this is a very elegant way to automatically manage, in one place, "what is my policy," and then drive the right segmentation in your network infrastructure. >> Yeah. Zero Trust. Pre-pandemic it was kind of a buzzword; now it's become a mandate. I wonder if we could talk about... >> Thomas: Yes. >> Yeah, right. I mean, so... >> Thomas: Pretty much. >> I wonder if we could talk about cloud native apps. You've got all these developers that are working inside organizations; they're maintaining legacy apps, they're connecting their data to systems in the cloud, they're sharing that data. These developers are rapidly advancing their skillsets. How is Cisco enabling its infrastructure to support this world of cloud native, making infrastructure more responsive and agile for application developers? >> Yeah. So, we talked about the visibility, and we talked about the operating model, how our network operators actually want to use tools going forward. Now, the next step to this is, it's not just the operator. Where do they actually want to put these tools, and how do they interact with these tools? As well as, quite frankly, how, let's say, a DevOps team, or an application team, or a cloud team also wants to take advantage of the programmability of the underlying network. And this is where we're moving into this whole cloud native discussion, right, which has really two angles. There's the cloud native way applications are being built, and then there's the cloud native way you interact with infrastructure, right? And so what we have done is, A, we're putting in place the on-ramps between clouds, and then, on top of it, for all these tools we're exposing APIs that can be used and leveraged by standard cloud tools or cloud-native tools, right? And one example, or two examples, we always have.
And again, we're on this journey for a while: both Ansible scripting capabilities from Red Hat, as well as HashiCorp Terraform capabilities, that you can orchestrate across infrastructure to drive infrastructure automation. And what really stands behind it is what either the networking operations team wants to do, or even the app team: they want to be able to describe the application as code, and then automatically or programmatically drive the instantiation of the infrastructure needed for that application. And so what you see us doing is providing all these capabilities as an interface for all our network tools, right? Whether this is ISE, which I just mentioned, whether this is our DCN controllers in the data center, or whether these are the controllers in the campus, for all of those we have cloud-native interfaces. So an operator or a DevOps team can actually interact directly with that infrastructure the way they would do today with everything that lives in the cloud, or with everything in how they built the application. >> Yeah, this is key. You can't even have the conversation of a cloud operating model that includes and comprises on-prem without programmable infrastructure. So that's very important. Last question, Thomas: are customers actually using this? You made the announcement today. Are there any examples of customers out there doing this? >> We do have a lot of customers out there that are moving down the path and using the Cisco high-performance infrastructure, both on the compute side as well as on the Nexus side. One of the customers, and this is an interesting case, is Rakuten. Rakuten is a large telco provider, a mobile 5G operator in Japan, and expanding into different countries. And so some people think, "Oh, cloud, you must be talking about the public cloud providers, the big three or four."
But if you look at it, a lot of the telco service providers are actually cloud providers as well, and expanding very rapidly. And so we're actually very proud to work together with Rakuten and help them build high-performance data center infrastructure to drive the deployment of their 5G mobile cloud infrastructure, which is where the whole world, frankly, is going. And so it's really exciting to see this development, and to see the power of automation and visibility, together with the high-performance infrastructure, becoming a reality and actually delivering services. >> Yeah, some great points you're making there. Yes, you have the big four clouds, they're enormous, but then you have a lot of actually quite large clouds, telcos, that are either proximate to those clouds or they're in places where those hyperscalers may not have a presence, and they're building out their own infrastructure. So that's a great case study. Thomas, hey, great having you on. Thanks so much for spending some time with us. >> Yeah, same here. I appreciate it. Thanks a lot. >> All right. And thank you for watching, everybody. This is Dave Vellante for theCUBE, the leader in tech event coverage. (upbeat music)
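The "describe the application as code" workflow from the interview above, where a team declares what an application needs and tooling drives the instantiation, boils down to diffing desired state against actual state. A generic Python sketch of that plan step, with hypothetical resource names, not Cisco's or HashiCorp's actual engine:

```python
def plan(desired, actual):
    """Compute a Terraform-style plan from two mappings of
    resource-name -> attributes: what to create, destroy, or update."""
    create = sorted(set(desired) - set(actual))
    destroy = sorted(set(actual) - set(desired))
    update = sorted(
        name for name in set(desired) & set(actual)
        if desired[name] != actual[name]
    )
    return {"create": create, "destroy": destroy, "update": update}

# The application's network needs, declared rather than scripted (made-up names):
desired = {
    "vlan-app-frontend": {"id": 110, "segment": "web"},
    "vlan-app-backend": {"id": 120, "segment": "db"},
    "policy-web-to-db": {"allow": ["tcp/5432"]},
}
actual = {
    "vlan-app-frontend": {"id": 110, "segment": "web"},
    "policy-web-to-db": {"allow": ["tcp/443"]},  # drifted from declared intent
    "vlan-legacy": {"id": 99, "segment": "old"},
}
print(plan(desired, actual))
# {'create': ['vlan-app-backend'], 'destroy': ['vlan-legacy'], 'update': ['policy-web-to-db']}
```

Exposing cloud-native APIs on the controllers is what lets real tools apply a plan like this to the network itself instead of stopping at the cloud boundary.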
SUMMARY :
Brought to you by Cisco. Welcome Thomas, good to see you again. Thanks for having me on. as they're moving to the cloud And so what that really means is you need, that you got to stare at log but at the same time, you And so what you see is, is So as the cloud evolves, and you can bring this together And I want you to maybe address how And so the latest example What's the scope of I, And so yes, what you get was kind of a buzzword, I mean, so- to support this world And so what you see us You can't even have the conversation of and see the power of but then you have a lot of I appreciate it. And thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Thomas | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Japan | LOCATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Rakuten | ORGANIZATION | 0.99+ |
Thomas Scheibe | PERSON | 0.99+ |
two examples | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ThousandEyes | ORGANIZATION | 0.99+ |
one example | QUANTITY | 0.99+ |
mid this year | DATE | 0.99+ |
two angles | QUANTITY | 0.99+ |
ACI | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.98+ |
HANA Gig | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
One event | QUANTITY | 0.97+ |
8000 | COMMERCIAL_ITEM | 0.96+ |
four | QUANTITY | 0.96+ |
ISE | TITLE | 0.95+ |
one place | QUANTITY | 0.94+ |
Data Center Networking | ORGANIZATION | 0.91+ |
billion dollar | QUANTITY | 0.91+ |
Cisco Future Cloud | ORGANIZATION | 0.9+ |
STN | ORGANIZATION | 0.87+ |
a million insights | QUANTITY | 0.86+ |
a couple years back | DATE | 0.86+ |
three | QUANTITY | 0.85+ |
pandemic | EVENT | 0.82+ |
Catalyst 9000 | COMMERCIAL_ITEM | 0.82+ |
RedHat | TITLE | 0.81+ |
double | QUANTITY | 0.8+ |
theCUBE | ORGANIZATION | 0.78+ |
single data center | QUANTITY | 0.76+ |
Hashi Terraform | TITLE | 0.75+ |
couple | QUANTITY | 0.75+ |
DevOps | ORGANIZATION | 0.73+ |
Azure | TITLE | 0.71+ |
half a thing | QUANTITY | 0.66+ |
Thomas.Hey | PERSON | 0.64+ |
Marketplace | TITLE | 0.62+ |
years | QUANTITY | 0.6+ |
Catalyst | ORGANIZATION | 0.58+ |
two | QUANTITY | 0.58+ |
domain | QUANTITY | 0.56+ |
Nexus | COMMERCIAL_ITEM | 0.47+ |
Ansible | ORGANIZATION | 0.38+ |
Jamie Thomas, IBM | IBM Think 2021
>> Narrator: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to IBM Think 2021, the virtual edition. This is theCUBE's continuous, deep-dive coverage of the people, processes, and technologies that are really changing our world. Right now we're going to talk about modernization, and what's beyond, with Jamie Thomas, general manager, strategy and development, IBM Enterprise Security. Jamie, always a pleasure. Great to see you again. Thanks for coming on. >> It's great to see you, Dave. And thanks for having me on theCUBE; it's always a pleasure. >> Yeah, it is our pleasure. And listen, we've been hearing a lot about IBM's focus on hybrid cloud. Arvind Krishna says we must win the architectural battle for hybrid cloud. I love that. We've been hearing a lot about AI. And I wonder if you could talk about IBM Systems and how it plays into that strategy? >> Sure, well, it's a great time to have this discussion, Dave. As you all know, IBM systems technology is used widely around the world, by many, many thousands of clients, in the context of our IBM System Z, our Power systems, and storage. And what we have seen is really an uptake of modernization around those workloads, if you will, driven by the hybrid cloud agenda, as well as an uptake of Red Hat OpenShift as a vehicle for this modernization. So it's pretty exciting stuff. What we see is many clients taking advantage of OpenShift on Linux to really modernize these environments, and then stay close, if you will, to the systems-of-record databases and the transactions associated with them. So they're seeing a definite performance advantage to taking advantage of OpenShift. And it's really fascinating to see the things that they're doing. So if you look at financial services, for instance, there's a lot of focus on risk analytics.
So things like fraud, anti-money laundering, and mortgage risk, those types of applications being done in this context. When you look at our retail industry clients, you also see a lot of customer-centricity solutions, if you will, being deployed on OpenShift. And once again, having Linux close to those traditional LPARs of AIX, IBM i, or z/OS. So those are some of the things we see happening. And it's quite real. >> Now, you didn't mention Power, but I want to come back and ask you about Power, because a few weeks ago we were prompted to dig in a little bit when Arvind was on with Pat Gelsinger of Intel, talking about the relationship you guys have. And so we dug in a little bit. We thought originally, we said, oh, it's about quantum. But we dug in, and we realized that the POWER10 is actually the best out there, the highest performance, in terms of disaggregating memory. And we see that as a future architecture for systems, and we're actually really quite excited about the potential that brings, not only to go beyond system-on-a-chip and system-on-a-package, but to start doing interesting things at the Edge. You know, what's going on with Power? >> Well, of course, when I talked about OpenShift, we're doing OpenShift on Power Linux as well as Z Linux, but you're exactly right in the context of the POWER10 processor. We couldn't be more excited about this processor. First of all, it's our first delivery with our partner Samsung with a seven-nanometer form factor. The processor itself has 18 billion transistors, so it's got a few transistors there. But one of the cool inventions, if you will, that we have created is this expansive memory region as part of this design point, which we call memory inception. It gives us the ability to reach memory across servers, up to two petabytes of memory.
Aside from that, this processor has generational improvements in core and thread performance, and improved energy efficiency. And all of this, Dave, is going to give us a lot of opportunity with new workloads, particularly around artificial intelligence, and inferencing around artificial intelligence. I mean, that's another critical innovation that we see here in this POWER10 processor. >> Yeah, processor performance is just exploding. We're blowing away the historical norms. I think many people don't realize that. Let's talk about some of the key announcements that you've made in quantum since the last time we spoke; I think last year we did a deeper dive on quantum. You've made some announcements around hardware and software roadmaps. Give us the update on quantum, please. >> Well, there is so much that has happened since we last spoke on the quantum landscape. And the key thing that we've focused on in the last six months is really an articulation of our roadmaps: the roadmap around hardware, the roadmap around software, and we've also done quite a bit of ecosystem development. So in terms of the roadmap around hardware, we've put ourselves out there: we've said we're going to get to an over-1,000-qubit machine in 2023, so that's our milestone. And we've got a number of steps we've outlined along that way. Of course, we have to make progress, frankly, every six months in terms of innovating around the processor, the electronics, and the fridge associated with these machines. So lots of exciting innovation across the board. We've also published a software roadmap, where we're articulating how we improve circuit execution speeds. So we plan to show shortly a 100-times improvement in circuit execution speeds. And as we go forward in the future, we're modifying our Qiskit programming model to not only allow easy use by all types of developers, but to improve the fidelity of the entire machine, if you will.
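For a feel of what the Qiskit programming model Jamie mentions expresses, the canonical first circuit is a two-qubit Bell state: a Hadamard gate followed by a CNOT, leaving the qubits entangled. Below is a from-scratch statevector sketch in plain Python; it is illustrative only, not Qiskit itself:

```python
import math

# State of two qubits as four amplitudes over [|00>, |01>, |10>, |11>],
# with the left bit as qubit 0. Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_q0(s):
    """Hadamard on qubit 0: puts it in an equal superposition,
    mixing the |0x> and |1x> amplitude pairs."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_q0_q1(s):
    """CNOT with qubit 0 as control: flips qubit 1 when qubit 0 is 1,
    i.e. swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

state = cnot_q0_q1(hadamard_q0(state))
probs = {bits: round(abs(a) ** 2, 3)
         for bits, a in zip(["00", "01", "10", "11"], state)}
print(probs)  # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

Measuring yields 00 or 11 with equal probability and never 01 or 10, the entanglement signature; real hardware progress is about keeping results this clean (the "fidelity" she refers to) as circuits grow.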
So all of our innovations go hand in hand; our hardware roadmap and our software roadmap are all very critical in driving the technical outcomes that we think are so important for quantum to become a reality. We've deployed, I would say, over 20 machines in our quantum cloud over time. We never quite identify the precise number because, frankly, as we put up a new-generation machine, we often retire one that's older. So we're constantly updating them out there, and every machine that comes online in that cloud, in fact, represents a sea change in hardware and a sea change in software. So they're all the latest and greatest that our clients can have access to. >> That's key, the developer angle. You got OpenShift running on quantum yet? >> Okay, I mean, that's a really good question. You know, as part of that software roadmap, in terms of the evolution and the speed of that circuit execution, there is really this interesting marriage between classical processing and quantum processing, bringing those closer together. And in the context of our classical operations that are interfacing with that quantum processor, we're taking advantage of OpenShift running on that classical machine to achieve that. And once again, as you can imagine, that'll give us a lot of flexibility in terms of where that classical machine resides, and how we continue the evolution of the great marriage that does exist, and will exist, between classical computing and quantum computing. >> I'm glad I asked; it was kind of tongue in cheek. But that's a key thread to the ecosystem, which is critical to, obviously, you know, such a new technology. How are you thinking about the ecosystem evolution? >> Well, the ecosystem here for quantum is infinitely important. We started day one on this journey with free access to our systems, for that reason: because we wanted to create easy entry for anyone that really wanted to participate in this quantum journey.
And I can tell you, it really fascinates everyone, from high school students, to college students, to those that are PhDs. During this journey, we have reached over 300,000 unique users, and we now have over 500,000 unique downloads of our Qiskit programming model. And really, that is backed by this ongoing educational thrust that we have. So we've created an open source textbook around Qiskit that allows organizations around the world to take advantage of it from a curriculum perspective. We have over 200 organizations that are using our open source textbook. Last year, when we realized we couldn't do our in-person programming camps, which were so exciting around the world (you can imagine doing an in-person programming camp in South Africa or Asia, and all those things we did in 2019), well, just like you all, we had to go completely virtual. And we thought that we would have a few hundred people sign up for our summer school; we had over 4,000 people sign up. And so one of the things we had to do was really pedal fast to be able to support that many students in a summer school that grew beyond our expectations. The neat thing was, once again, seeing all the kids and students around the world taking advantage of this and learning about quantum computing. And then, at the end of last year, Dave, to really top this off, we did something fundamentally important: we set up a quantum center for historically black colleges and universities, with Howard University being the anchor of this quantum center. And we're serving 23 HBCUs now, to be able to reach a new set of students, if you will, with STEM technologies and, most importantly, with quantum. And I find, you know, the neat thing about quantum is that it's very interdisciplinary.
So we have quantum physicists, we have electrical engineers, we have engineers on the team, we have computer scientists, we have people with biology and chemistry and financial services backgrounds. So I'm pretty excited about the reach that we have with quantum into HBCUs and even beyond. I think we can have some phenomenal results and help a lot of people on this journey to quantum, and, you know, obviously help ourselves, but help these students as well. >> What do you see people doing with quantum, and maybe some of the use cases? I mean, you mentioned there's sort of a connection to traditional workloads, but obviously some new territory. What's exciting out there? >> Well, there have been a number of use cases that I think are top of mind right now. One of the most interesting to me has been one that we talked about in the press a few months ago, which is with ExxonMobil. And they really started looking at logistics, in the context of maritime shipping, using quantum. And if you think of logistics, logistics are really, really complicated. Logistics in the face of a pandemic are even more complicated, and logistics when things like the Suez Canal shut down are even more complicated. So think about it: when the Suez Canal shut down, it was kind of like the equivalent of several major airports around the world shutting down, and then you have to reroute all the traffic. And that traffic in maritime shipping has to be very precise, has to be planned; the stops are planned, the routes are planned. And the interest that ExxonMobil has had in this journey is not just more effective logistics, but how do they get natural gas shipped around the world more effectively, because their goal is to bring energy to organizations and to countries while reducing CO2 emissions. So they have a very grand vision that they're trying to accomplish.
And this logistics operation is just one of many. We can think of logistics, though, as being applicable to anyone that has a supply chain, so to other shipping organizations, not just maritime shipping. And a lot of the optimization logic that we're learning from that set of work also applies to financial services. So if we look at optimization around portfolio pricing and everything, a lot of the similar characteristics will also be applicable to the financial services industry. So that's one big example. And I guess our latest partnership, which we announced with some fanfare about two weeks ago, was with the Cleveland Clinic. We're doing a special discovery acceleration activity with the Cleveland Clinic, which starts prominently with artificial intelligence: looking at chemistry and genomics, and improved speed around machine learning for all of the critical healthcare operations that the Cleveland Clinic has embarked on. But as part of that journey, they, like many clients, are evolving from artificial intelligence and then learning how they can apply quantum as an accelerator in the future. And so they also indicated that they will buy the first commercial on-premises quantum computer for their operations and place that in Ohio in the years to come. So it's a pretty exciting relationship. These relationships show the power of the combination, once again, of classical computing, using that intelligently to solve very difficult problems, and then taking advantage of quantum for what it can uniquely do in a lot of these use cases. >> That's a great description, because it is a strong connection to things that we do today. It's just going to do them better, but then it's going to open up a whole new set of opportunities. Everybody wants to know when. You know, it's all over the place: some people say, oh, not for decades; other people say, I think it's going to be sooner than you think. What are you guys saying about timeframe?
>> We're certainly determined to make it sooner than later. Our roadmaps, if you noticed, go through 2023, and we think 2023 will be a pivotal year for us in terms of delivery around those roadmaps. But it's these kinds of use cases and this intense work with these clients, 'cause when they work with us, they're giving us feedback on everything that we've done: how does this programming model really help me solve these problems? What do we need to do differently? In the case of ExxonMobil, they've given us a lot of really great feedback on how we can better fine-tune all elements of the system to improve it. It's really allowed us to chart a course for how we think about the programming model, in particular in the context of users. Just last week, in fact, we announced some new machine learning applications, and these applications are really to allow artificial intelligence users and programmers to take advantage of quantum without being a quantum physicist or expert, right? So it's really an encapsulation of composable elements, so that they can start to use an interface that allows them access through PyTorch into the quantum computer, and take advantage of some of the things we're doing around neural networks and things like that, once again, without having to be experts in quantum. So I think those are the kinds of things we're learning how to do better, fundamentally, through this co-creation and development with our quantum network. And our quantum network is now over 140 unique organizations: commercial, academic, national laboratories, and startups that we're working with. >> The picture has started to become more clear. We're seeing emerging AI applications; a lot of work today in AI is in modeling. Over time, it's going to shift toward inference and real-time, practical applications. Everybody talks about Moore's law being dead.
Well, yes, I guess, technically speaking, but the premise, or the outcome, of Moore's law is actually accelerating: we're seeing processor performance quadrupling every two years now, when you include the GPU along with the CPU, the DSPs, the accelerators. And so that's going to take us through this decade, and then quantum is going to power us, you know, well beyond what anyone can even predict. It's a very, very exciting time. Jamie, I always love talking to you. Thank you so much for coming back on theCUBE. >> Well, I appreciate the time. And I think you're exactly right, Dave. You know, we talked about POWER10 just for a few minutes there, but one of the things we've done in POWER10 as well is we've embedded AI into every core of that processor, so you reduce that latency. We've got a 10 to 20 times improvement over the last generation in terms of artificial intelligence. You think about the evolution of a classical machine like that, state of the art, and then combine that with quantum and what we can do in the future; I think it's a really exciting time to be in computing. And I really appreciate your time today to have this dialogue with you. >> Yeah, it's always fun, and it's of national importance as well. Jamie Thomas, thanks so much. This is Dave Vellante with theCUBE. Keep it right there; our continuous coverage of IBM Think 2021 will be right back. (gentle music) (bright music)
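The classical/quantum interplay Jamie describes, a parameterized circuit steered by a classical optimizer, eventually through interfaces like PyTorch, can be sketched in miniature. What follows is a toy, stdlib-only illustration, not IBM's stack and not the Qiskit or PyTorch API: the one-qubit gate math and the finite-difference loop are stand-ins for the real components.

```python
import math

# Toy sketch of the classical/quantum "marriage" described above.
# NOT IBM's stack (no Qiskit, no PyTorch) -- just stdlib Python
# simulating one qubit, so two ideas are concrete: (1) a gate
# rotates a qubit's state, and (2) a classical optimizer tunes a
# circuit parameter in a hybrid loop.

def ry(theta, state):
    """Apply an Ry(theta) rotation to a one-qubit state [amp0, amp1]."""
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [c * a - s * b, s * a + c * b]

def expectation_z(theta):
    """<Z> for Ry(theta)|0>; analytically this equals cos(theta)."""
    a, b = ry(theta, [1.0, 0.0])
    return a * a - b * b

def train(target, theta=0.1, lr=0.2, steps=300, eps=1e-4):
    """Classical finite-difference loop driving <Z> toward `target`,
    standing in for what an autograd bridge (e.g. via PyTorch) does."""
    loss = lambda t: (expectation_z(t) - target) ** 2
    for _ in range(steps):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

theta = train(target=0.0)  # an equal superposition has <Z> = 0
print(round(expectation_z(theta), 3))
```

In a real hybrid system the `expectation_z` call would be a job dispatched to quantum hardware while the gradient loop runs classically, which is the division of labor the OpenShift-on-classical-machine comment points at.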
Rob Thomas, IBM | IBM Think 2021
>> Voice Over: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Okay. Welcome back, everyone, to theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier, host of theCUBE. We've got a great segment here on the power of hybrid cloud and AI. And I'm excited to have Rob Thomas, Senior Vice President of IBM's Cloud and Data platform, CUBE alumni, been on going back years and years talking about data. Rob, great to see you, a leader at IBM. Thanks for joining. >> John, great to see you. Hope everybody is safe and well, and great to be with you again. >> Yeah, love the progress, love the Hybrid Cloud: distributed computing meets operating systems, meets modern applications, and at the center of it is the new cloud equation. And of course data continues to be the value proposition, as the platform. And as you've said many times, and I love your favorite quote, there's no AI without IA. So you've got to have the architecture. That still rings true today; it's just so evergreen and so relevant, and cooler than ever with machine learning and AI operations. So let's just jump in. IBM's announced a host of new products and updates at Think. Tell us what you're most excited about and what people should pay attention to. >> Maybe I'll connect two thoughts here. There is no AI without IA, still true today, meaning customers that want to do AI need an information architecture. There was an IDC report just last year that said, "Despite all the progress on data, still 90% of data in organizations is either unused or underutilized." So what's amazing is, after all the time we've been talking, John, we're still really just getting started. Then that kind of connects to another thought, which is, I still believe that AI is not going to replace managers, but managers that use AI will replace the managers that do not. And I'd say that's the backdrop for all the announcements that we're doing this week. It's things like Auto SQL.
How do you actually automate the creation of SQL queries in a large distributed data warehouse? It's never been done before; now we're doing it. It's things like Watson Orchestrate, which puts superpowers in the hands of any business user: just ask for something to get done, just ask for a task to get completed, and Watson Orchestrate will do that for you. It's Maximo Mobile, so anybody working in the field now has access to an AI system on their device for how they're managing their assets. So this is all about empowering people, and users that use these products are going to have an advantage over the users that are not. That's what I'm really excited about. >> So one of the things that's coming out is Cloud Pak for Data and AI-powered automation; these are kind of two that you touched upon with the SQL thing there. Cloud Pak is there, you've got it for Data, and there's this automation trend. What is that about? Why is it important? Can you share with us the relevance of those two things? >> Let's talk broadly about automation. There's two huge markets here. There's the market for RPA, business process, a $30 billion market. There's the market for AIOps, which is growing 22%; that's on its way to $40 billion. These are enormous markets. Probably the biggest bet IBM has made in the last year is in automation, explicitly in Watson AIOps. Last June at Think we announced Watson AIOps, then we did the acquisition of Instana, then we announced our intent to acquire Turbonomic. At this point, we're the only company that has all the pieces for automating how you run your IT systems. That's what I mean when I say AIOps. So really pleased with the progress that we've made there. But again, we're just getting started. >> Yeah. Congratulations on the Turbonomic. I was just commenting on that when it was announced. IBM buying into the Cloud, and the Hybrid Cloud, is interesting because the shift has happened. It's Public Cloud, it's on-premises, it's Edge.
Those two things work as a system; it's more important than ever with the modernization of the apps that you guys are talking about, and having the under-the-cover capabilities. So as Cloud and Data merge, there's this kind of control plane concept, this architecture; as you said, IA: you can't have AI without IA. What does that architecture look like? Can you break down the elements of what's involved? I know there's predictive analytics, there's automation and security. What are the pillars of this architecture? What are the four concepts? If you can explain that. >> Yeah, let's start with the basics. So Hybrid Cloud is about this: you build your software once, and you run it anywhere you want, any public cloud, any private cloud. That assumes containers are important to the future of software. We are a hundred percent convinced that is true. OpenShift is the platform that we build on, and that many software companies in the world are now building on, because it gives you portability for your applications. So then you start to think about, if you have that common fabric for Hybrid Cloud, how do you deliver value to customers in addition to the platform? To me, that's four big things. It's automation; we talked about that. It's security. It's predictions: how do you actually make predictions on your data? And then it's modernization, meaning how do you actually help customers modernize their applications and get to the Cloud? So those are the things we always talk about: automate, secure, modernize, predict. I think those are the four most important things for every company that's thinking about Cloud and AI. >> Yeah, it's interesting. I love it; the security side is one of the big conversations in AIOps, and day-two operations, or whatever it's called, is shifting left, getting security into the Cloud-native kind of development pipeline. But speaking of secure, you have a customer that was talking about this, Dow Chemical, about IBM powering Dow's zero trust architecture.
Could you explain that deal and how that's working? Because that's, again, a huge enterprise customer, very big scale, and zero trust is a big part of it. What is this? >> Let's start with the basics. So what does zero trust mean? It means that, to have a secure business, you have to start with the assumption that nothing can be trusted. That means you have to think about all aspects of your security practice. How do you align on a security strategy? How do you protect your data assets? How do you manage security threats? So we always talk about align, protect, manage, and back to modernize, which is how do you bring all your systems forward to do this. That's exactly what we're doing with Dow, as you heard in that session. They've kind of done that whole journey: from how they built a security strategy that was designed with zero trust in mind, to protecting data assets, to managing cyber threats in real time with a relatively low number of false positives, which are the issue that most companies have. They're a tremendous example of a company that jumped on this and has had a really big impact. And they've done it without interfering with their business operations; meaning, anybody can lock everything down, but then you can't really run your business if you're doing that. They've done it, I think, in a really intelligent way. >> That's awesome. We always talk about the big waves; you always give great color commentary on the trends. Right now, though, the tsunami seems to be a confluence of many things coming together. What are some of the big trends and waves you're seeing now, specifically on the technology side, as well as the business side right now? 'Cause coming out of COVID, it's pretty clear cloud-native is powering a new growth strategy for customers. Dow was one of them, you just commented on it, but there's a bigger wave happening here, both in the tech theater and in the business theater.
Can you share your views and your opinions, and your vision, on these trends? >> I think there are three profound trends that are actually pretty simple to understand. One is, technology is going to decentralize again. We've always gone from centralized architectures to decentralized: mainframe was centralized; internet and mobile, decentralized. The first version of public cloud was centralized, meaning bringing everything to one place. Technology is decentralizing again, with Hybrid Cloud, with Edge; pretty straightforward. I think that's a trend that we can ride and lead for the next decade. Next is around automation, which we talked about. There was a McKinsey report that said 120 billion hours a year are going to be automated with things like Watson Orchestrate and Watson AIOps. With what we're doing around Cloud Pak for automation, we think that time is now. We think you can start to automate in your business today, and you may have seen the--example where we're doing customer care, and they're now automating 70% of their inbound customer inquiries. It's really amazing. And then the third is around data. The classical problem, as I mentioned: 90% is still unused or underutilized. This trend on data is not about to slow down, because the data being collected is still multiplying 10X every year, and companies have to find a way to organize that data as they collect it. >> You know, I just kind of pinch myself sometimes, hearing you talk. In some of our earlier conversations on theCUBE, people who have been on this data mindset have really been successful, because it's evolving and growing and changing, and it's adding more input into the system, and the technology is getting better. There's more cloud scale. You mentioned automation, and scale is huge. And I think this really kind of wakes everyone up.
And certainly the pandemic has woken everyone up to the fact that this is driving new experiences for users and businesses, right? And then those experiences become expectations. This is the classic UX paradigm that grows from new things. So I've got to ask you: with the pandemic, what have been the most compelling ways you've seen people operate and create new expectations? Because new things are coming, new big things, and new incremental things are happening. So evolutionary and revolutionary capabilities. Can you share some examples and your thoughts? >> We've collected a decent bit of data on this. And what's interesting is how much AI has accelerated since the pandemic started. And it's really in five areas. It's customer care, which we talked about: virtual agents, customer service, how you do that. It's employee experience; similar to customer care, but how do you take care of your employees using AI? Third is around AIOps; we talked about that. Fourth is around regulatory compliance, and fifth is around financial planning and budgeting. These are the five major use cases of AI that got into production in companies over the last year, and that's going to continue to accelerate. So I think it's actually fairly clarifying now that we really understand these are the five big things. I encourage anybody watching: pick one of these and get started, then pick the second, then pick the third. If you are not doing all five of these, then 12, 18, 24 months from now, you are going to be behind. >> So give us an example of some things that have surprised you in the pandemic, and things that blew you away. Like, wow, I didn't see that coming. Can you share on things that you've seen evolve? 'Cause you head up the Cloud and Data business units, a big part of IBM, and you see customer examples.
Just quickly share some notable use cases, or just anecdotal examples of things that jumped out at you, that said, wow, that's going to be a double-down moment, or that's not going to matter anymore. The pandemic exposes the good, the bad, and the ugly. I mean, people got caught off guard; some got a tailwind, some had a headwind, some are retooling. What are your thoughts? Can you share any examples? >> Like everybody, many things have surprised me in the last year. I am encouraged at how fast many companies were able to adjust and adapt to this world. So that's a credit to all the resiliency that they built into their processes, their systems, and their people over time. Related to that, the thing that really sticks out to me, again, is this idea of using AI to serve your customers and to serve your employees. We had a hundred customers that went live with one of those two use cases in the first 35 days of the pandemic. Just think about that acceleration. I think without the pandemic, for those hundred it might've taken three years, and it happened in 35 days. It's proof that the technology today is so powerful. Sometimes it just takes the initiative to get started and to do something. And all those companies have really benefited from this. So it's great to see. >> Great. Rob, great to have you on, great to have your commentary on theCUBE. Could you just quickly share, in 30 seconds, what is the most important thing people should pay attention to at Think this year, from your perspective? What's the big aha moment that you think they could walk away with? >> We have intentionally made this a very technology-centric event. Just go look at the demos, play with the technology. I think you will be impressed, and you'll start to see, let's say, a bit of a new IBM in terms of how we're making technology accessible and easy for anybody to use. >> All right. Rob Thomas, Senior Vice President of IBM's Cloud and Data platform.
Great to have you on, and looking forward to seeing more of you this year, and hopefully in person. Thanks for coming on theCUBE virtual. >> Thanks, John. >> Okay. I'm John Furrier with theCUBE. Keep it right here for our coverage of IBM Think 2021. Thank you for watching. (soft music)
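As a back-of-the-envelope check on the compounding Rob describes, data multiplying 10X every year, the arithmetic is worth making explicit; the function and the horizons below are illustrative, and only the 10X-per-year rate comes from the interview.

```python
# Compound growth at a fixed multiplier per period. Rob's figure is
# data multiplying 10X every year; the 3-year horizon is just an
# example, not a number from the interview.
def compound(factor, periods):
    return factor ** periods

print(compound(10, 3))  # 10X/year sustained for 3 years = 1000X
```

At that rate, whatever fraction of data an organization leaves unused or underutilized today compounds into a vastly larger absolute pile in only a few years, which is why the 90% figure he cites keeps mattering.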
IBM 34 Rob Thomas VTT
(soft music) >> Voice Over: From around the globe. It's theCUBE with digital coverage of IBM Think 2021 brought to you by IBM. >> Okay. Welcome back everyone. To theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier, host of theCUBE. We've got a great segment here on the power of hybrid cloud and AI. And I'm excited to have Rob Thomas, Senior Vice President of IBM's cloud and Data platform, CUBE alumni. Been on going back years and years talking about data. Rob, great to see you, a leader at IBM. Thanks for joining. >> John. Great to see you hope everybody is safe and well and great to be with you again. >> Yeah, love the progress, love the Hybrid Cloud distributed computing, meets operating systems, meets modern applications at the center of it is the new cloud equation. And of course data continues to be the value proposition as the platform. And as you quoted many times and I love your favorite quote. There's no AI without IA. So you got to have the architecture. So that still rings true today and it's just so evergreen and so relevant and cooler than ever with machine learning and AI operations. So let's just jump in. IBM's announced, host a new products and updates at Think. Tell us what you're most excited about and what should people pay attention to. >> Maybe I'll connect two thoughts here. There is no AI without IA, still true today. Meaning, customers that want to do AI need an information architecture. There was an IDC report just last year that said, "Despite all the progress on data, still 90% of data in organizations is either unused or underutilized." So what's amazing is after all the time we've been talking John, we're still really just getting started. Then that kind of connects to another thought, which is I still believe that AI is not going to replace managers, but managers that use AI will replace the managers that do not. And I'd say that's the backdrop for all the announcements that we're doing this week. It's things like auto SQL. 
How do you actually automate the creation of SQL queries in a large distributed data warehouse? It's never been done before, now we're doing it. It's things like Watson Orchestrate which is super powers in the hands of any business user, just to ask for something to get done. Just ask for a task to get completed. Watson Orchestrator will do that for you. It's Maximo Mbo. So anybody working in the field now has access to an AI system on their device for how they're managing their assets. So this is all about empowering people and users that use these products are going to have an advantage over the users that are not, that's what I'm really excited about. >> So one of the things that's coming out as Cloud Pak for Data, AI powered automation these are kind of two that you kind of touched upon the SQL thing their. Cloud Pak is there, you got it for Data and this automation trend. What is that about? Why is it important? Can you share with us the relevance of those two things? >> Let's talk broadly about automation. There's two huge markets here. There's the market for RPA business process, $30 billion market. There's the market for AIOps, which is growing 22%, that's on its way to $40 billion. These are enormous markets. Probably the biggest bet IBM has made in the last year is in automation. Explicitly in Watson AIOps. Last June in Think we announced Watson AIOps, then we did the acquisition of Instana, then we announced our intent to acquire Turbonomic. At this point, we're the only company that has all the pieces for automating how you run your IT systems. That's what I mean when I say AIOps. So really pleased with the progress that we've made there. But again, we're just getting started. >> Yeah. Congratulations on the Turbonomic. I was just commenting on that when that announced. IBM buying into the Cloud and the Hybrid cloud is interesting because the shift has happened. It's Public Cloud, it's on premises as Edge. 
Those two things as a system, it's more important than ever, with the modernization of the apps that you guys are talking about and having the under-the-cover capabilities. So as Cloud and Data merge, this kind of control plane concept, this architecture, as you said, IA. You can't have AI without IA. What does that architecture look like? Can you break down the elements of what's involved? I know there's predictive analytics, there's automation and security. What are the pillars of this architecture? What are the four concepts? If you can explain that. >> Yeah, let's start with the basics. So Hybrid Cloud is about: you build your software once, and you run it anywhere you want, any public cloud, any private cloud. That assumes containers are important to the future of software. We are a hundred percent convinced that is true. OpenShift is the platform that we build on, and that many software companies in the world are now building on, because it gives you portability for your applications. So then you start to think about, if you have that common fabric for Hybrid Cloud, how do you deliver value to customers in addition to the platform? To me, that's four big things. It's automation, we talked about that. It's security. It's predictions: how do you actually make predictions on your data? And then it's modernization, meaning, how do you actually help customers modernize their applications and get to the Cloud? So those are the things we always talk about: automate, secure, modernize, predict. I think those are the four most important things for every company that's thinking about Cloud and AI. >> Yeah, it's interesting. I love the security side. One of the big conversations in AIOps and day two operations, or whatever it's called, is shifting left, getting security into the cloud native kind of development pipeline. But speaking of secure, you have a customer that was talking about this, Dow Chemical. About IBM powering Dow's zero trust architecture.
Could you explain that deal and how that's working? Because that's, again, a huge enterprise customer, very big scale, at scale; zero trust is a big part of it. What is this? >> Let's start with the basics. So what does zero trust mean? It means to have a secure business, you have to start with the assumption that nothing can be trusted. That means you have to think about all aspects of your security practice. How do you align on a security strategy? How do you protect your data assets? How do you manage security threats? So we always talk about align, protect, manage, back to modernize, which is how do you bring all your systems forward to do this? That's exactly what we're doing with Dow. As you heard in that session, they've kind of done that whole journey, from how they built a security strategy that was designed with zero trust in mind, to protecting data assets, to managing cyber threats in real time with a relatively low number of false positives, which are the issue that most companies have. They're a tremendous example of a company that jumped on this and has had a really big impact. And they've done it without interfering with their business operations; meaning, anybody can lock everything down, but then you can't really run your business if you're doing that. They've done it, I think, in a really intelligent way. >> That's awesome. We always talk about the big waves. You always give great color commentary on the trends. Right now though, the tsunami seems to be a confluence of many things coming together. What are some of the big trends and waves you're seeing now, specifically on the tech side, on the technology side, as well as the business side right now? 'Cause coming out of post-COVID, it's pretty clear cloud native is powering a new growth strategy for customers. Dow was one of them, you just commented on it, but there's a bigger wave happening here, both in the tech theater and in the business theater.
Can you share your views and your opinions and vision on these trends? >> I think there's three profound trends that are actually pretty simple to understand. One is, technology is going to decentralize again. We've always gone from centralized architectures to decentralized. Mainframe was centralized; internet and mobile decentralized. The first version of public cloud was centralized, meaning bringing everything to one place. Technology is decentralizing again, with Hybrid Cloud, with Edge. Pretty straightforward; I think that's a trend that we can ride and lead for the next decade. Next is around automation, that we talked about. There was a McKinsey report that said, "120 billion hours a year are going to be automated." With things like Watson Orchestrate, Watson AIOps, and what we're doing around Cloud Pak for automation, we think that time is now. We think you can start to automate in your business today, and you may have seen the CVS example, where we're doing customer care and they're now automating 70% of their inbound customer inquiries. It's really amazing. And then the third is around data. The classical problem: I mentioned 90% is still unused or underutilized. This trend on data is not about to slow down, because the data being collected is still multiplying 10X every year, and companies have to find a way to organize that data as they collect it. So that's going to be a trend that continues. >> You know, I just kind of pinch myself sometimes, hearing you talk, with some of our earlier conversations in theCUBE. People who have been on this data mindset have really been successful, because it's evolving and growing, and it's changing, and it's adding more input into the system, and the technology is getting better. There's more cloud scale. You mentioned automation, and scale is huge. And I think this really kind of wakes everyone up.
And certainly the pandemic has woken everyone up to the fact that this is driving new experiences for users and businesses, right? And then those experiences become expectations. This is the classic UX paradigm that grows from new things. So I've got to ask you, with the pandemic, what have been the most compelling ways you've seen people operate and create new expectations? Because new things are coming, new big things, and new incremental things are happening. So evolutionary and revolutionary capabilities. Can you share some examples and your thoughts? >> We've collected a decent bit of data on this. And what's interesting is how much AI has accelerated since the pandemic started. And it's really in five areas. It's customer care, that we talked about: virtual agents, customer service, how you do that. It's employee experience, so similar to customer care, but how do you take care of your employees using AI? Third is around AIOps, we talked about that. Fourth is around regulatory compliance, and fifth is around financial planning and budgeting. These are the five major use cases of AI that are getting into production in companies over the last year, and that's going to continue to accelerate. So I think it's actually fairly clarifying now that we really understand these are the five big things. I encourage anybody watching: pick one of these, get started, then pick the second, then pick the third. If you are not doing all five of these 12, 18, 24 months from now, you are going to be behind. >> So give us an example of some things that have surprised you in the pandemic, and things that blew you away. Like, wow, I didn't see that coming. Can you share some things that you've seen evolve? 'Cause you're at the head of the business units of Cloud and Data, a big part of IBM, and you see customer examples.
Just quickly share some notable use cases, or just anecdotal examples, of things that jumped out at you that said, "Wow, that's going to be a double-down moment, or that's not going to be around anymore." The pandemic exposes the good, the bad and the ugly. I mean, people got caught off guard; some got a tailwind, some had a headwind, some are retooling. What are your thoughts? Can you share any examples? >> Like everybody, many things have surprised me in the last year. I am encouraged at how fast many companies were able to adjust and adapt to this world. So that's a credit to all the resiliency that they built into their processes, their systems and their people over time. Related to that, the thing that really sticks out to me, again, is this idea of using AI to serve your customers and to serve your employees. We had a hundred customers that went live with one of those two use cases in the first 35 days of the pandemic. Just think about that acceleration. I think without the pandemic, for those hundred it might've taken three years, and it happened in 35 days. It's proof that the technology today is so powerful. Sometimes it just takes the initiative to get started and to do something. And all those companies have really benefited from this. So it's great to see. >> Great. Rob, great to have you on. Great to have your commentary on theCUBE. Could you just quickly share, in 30 seconds, what is the most important thing people should pay attention to at Think this year, from your perspective? What's the big aha moment that you think they could walk away with? >> We have intentionally made this a very technology-centric event. Just go look at the demos, play with the technology. I think you will be impressed, and start to see, let's say, a bit of a new IBM, in terms of how we're making technology accessible and easy for anybody to use. >> All right. Rob Thomas, Senior Vice President of IBM Cloud and Data Platform.
Great to have you on and looking forward to seeing more of you this year and hopefully in person. Thanks for coming on theCUBE virtual. >> Thanks, John. >> Okay. I'm John Furrier with theCUBE. Keep coverage of IBM Think 2021. Thank you for watching. (soft music)
Robyn Bergeron, Red Hat and Thomas Anderson, Red Hat | Red Hat Summit 2021 Virtual Experience
(upbeat electronic music) >> Hello, welcome back to the Red Hat Summit 2021 virtual coverage. I'm John Furrier with theCUBE, in Palo Alto with the remote interviews for our virtual conference here. We've got two great guests, CUBE alumni: Tom Anderson, VP of the Ansible Automation Platform, and Robyn Bergeron, who's the Senior Manager of the Ansible Community, community architect, and all the great things involved. Robyn, great to see you. Tom, thanks for coming back on Red Hat Summit, here, virtual. Good to see you. >> Always great. >> Thanks for having us. >> So since last Summit, what are the updates on the Ansible Community and the Automation Platform? Tom, we'll start with you. Automation Platform, what are the big updates? >> Yeah. So since last Summit, a lot has happened in Ansible land, if you will. So last time, I remember talking to you about content collections, a packaging and distribution format for Ansible content. We put a lot of effort into bringing all of the Ansible content into collections, for the community as well as the commercial users. And we launched last year a program for certified content, working with our partners to certify the content collections that they create. We co-certify them, where we work together to make sure that they're developed against, and tested against, a product spec, so that both of us can provide them to our customer bases with the confidence that they're going to be working and performing properly, and that we at Red Hat, and our partners, co-support those out in our customers' production environments. That was a big deal. The other thing that we announced, late last fall, was the private automation hub. And that's the idea where our customers obviously appreciate the idea of being able to go to Ansible Galaxy, or to the Ansible automation hub, to go and grab these content collections, these integrations, and bring them down into their environment.
They wanted a way, they wanted a methodology, or a repository, where they can curate content from different sources, and then manage the automation across their environment. Kind of leaning into a little bit of automation content as code, if you will. And so we launched the private automation hub, where that sits in our customers' infrastructure, whether that's in the cloud, or on premise, or both, and allows them to grab content from Galaxy, from the Ansible automation hub on cloud.redhat.com, as well as their internally developed content, and be able to manage and provide that across their organization, governed by a set of policies. So lots of stuff that's going on. Really advanced, considering the amount of content that we provide, the amount of collections that we provide and have certified for our customers, and the ability to curate and manage that content across the teams. >> I want to do a drill-down on some of the unification of teams, which is a big message as well, and operating at scale, 'cause that's a super value proposition you guys have. And I want to get into that, but Robyn, I want to come back to you on the community. So much has gone on. We're now into the pandemic for almost a year and a half now. It's been a productivity boom. Developers have been working at home for a long time, so it's not a new workflow for them, but you've seen a lot more productivity. What has changed in the community since last Summit, again, virtual to virtual, between the event windows here? You guys have a lot going on. What's new in the community? Give us an update. >> Yeah, well, I mean, if we go back to Summit, you know, this time-ish, you know, last year, we were wrapping up, more or less. It was, you know, we used to have everything: you would install Ansible, you would get all the modules, you had everything, you know.
It was all altogether, which, you know, was great for new users, who don't want to have to figure things out. It helps them to really get up and running quickly. But, you know, from a community perspective, trying to manage that level of complexity turned out to be pretty hard. So the move to collections was actually great, you know, not just from an end user perspective, but also from a community perspective. And we came out with Ansible 2.10, that was last fall, I believe. And that was the first real release of Ansible where collections were fully instantiated. You know, they were available on Galaxy, but you could also get them as part of the Ansible community distribution. Fast forward to now, you know, we just had the Ansible 3.0 release, here in February, and we're looking to Ansible 4.0 here in early May. So, you know, there's been a lot of activity. A lot has improved, honestly, as a result of the changes that we've made. It's made it a lot easier for contributors to get in with a smaller group, that's more of their size, and, you know, be able to get started and identify, you know, who are their interested peers in the community. So it's been a boon for us, honestly. You know, the pandemic otherwise has, you know, I think taught all of us, you know, certainly you, John, about the amazing things that we can do virtually. So we've had a lot of our meetups pivot to being virtual meetups, and things like that. And it's been great to see how easily the community has been able to pivot around, you know, this sort of event. I hope that we don't have to keep practicing it forever, but in the meantime, you know, it's enabled us to continue to get things done. Thank goodness for every video platform on Earth. >> Yeah. Well, we appreciate it.
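The collections layout Robyn describes shows up directly in playbook syntax. A minimal sketch, assuming the widely used community.general collection is installed (the host group and timezone value are illustrative):

```yaml
# With Ansible 2.10+, modules are addressed by fully qualified
# collection name (FQCN), so collection content can release on its
# own cadence, independent of ansible-core.
- name: Demonstrate collection-based module addressing
  hosts: all
  become: true
  tasks:
    - name: Module shipped with ansible-core itself
      ansible.builtin.ping:

    - name: Module from the separately versioned community.general collection
      community.general.timezone:
        name: America/New_York
```

The collection itself would be installed beforehand with `ansible-galaxy collection install community.general`.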
We're going to come back and talk more about that in the future: the best practices, what we all learned, and stories. But I think I want to come back to you on the persona side of Ansible, because one of the things we talked about last time, that seems to be gaining a lot of traction, is the multiple personas. So I want to just hold on to that; we'll come back. Tom, back to you. We're at Red Hat Summit. You guys have AnsibleFest, which is your own event where you guys drill down on this, so users watching know you have your own community. But now we're part of Red Hat, part of IBM; IBM Think is also happening soon as well. Red Hat Summit is still a unique event. How is Ansible fitting into the big picture? Because the value proposition of unifying teams is really consistent now with Red Hat's overarching theme, which is operating at scale: OpenShift, as Robyn just mentioned. Where's the Automation Platform going this year? What's the story here at Red Hat Summit for the Automation Platform? >> Yeah, no, that's a great question. We've talked a little bit about the pandemic, and how it has accelerated some existing trends that we already saw. And one of those is really around the democratization of application delivery: more people delivering infrastructure and applications, independent of each other. Which is great: faster and more agile, and all those other good words that apply to that. But what that does bring up is the opportunity for duplication of work, replication of effort, not reusing things that are in existence already, that other teams may have, and maybe not complying with all of the policies, if you will, the configuration and compliance policies. And so it's really kind of brought Ansible out into focus even more here now, because of the kind of common backplane that Ansible provides: a common language and common automation backplane across these different teams, and across these different personas.
The great thing about what we supply for these different personas, whether it's application developers, infrastructure owners, network engineers, SecOps teams, GitOps teams. There are so many of these personas out there who now all want independent access to infrastructure, and to deploying infrastructure. And Ansible has the kinds of interfaces that each of those communities can leverage, whether it's APIs or CLIs, or event-based automation, or webhooks, et cetera, et cetera, you know, service catalogs. All of those interfaces, if you will, or modalities, are accessible in Ansible automation. So it's really allowed us to be this sort of connective tissue, or glue, across these different silos or domains of the organization. Tying it to OpenShift specifically: one of the things that we talked about last fall, at our AnsibleFest, was our integration between the Ansible Automation Platform, our advanced cluster management product, and our OpenShift platform, that allows cloud native applications running on OpenShift to talk to an Ansible automation operator running on that same platform, to do things off platform, where their customers are already using Ansible. So connecting their cloud native platforms with their existing systems and infrastructures: systems of record, network systems, ticketing systems, you name it. So with all of those sorts of integrations, Ansible's become the connective glue across all of these different environments, tying traditional IT, cloud IT, cloud native, you name it. So it's really been fun, and it's been an exciting time for us, inside the portfolio and out. >> That's a great point. Connective tissue is a great way to describe some of these platform benefits, 'cause you guys have been on this platform for a really long time. And the benefits are kind of being seen in the market, certainly as people have to move faster with agility. Robyn, I want to come back to you, because he brought up this idea of personas.
I mean, we all know DevOps, infrastructure as code; it's been our religion for over a decade or more, but now the word DevSecOps is more prevalent in all the conversations. Security's now woven in here. How are you seeing that play out in the community? And then, Tom, if you can give some color commentary too, on the Automation Platform, how security fits in. So DevOps, everything's being operationalized at scale, we get that. That's one of the value propositions you have, but DevSecOps has a persona. More people want more sec. Dev is great, more ops and standardization, more developers, agile standards, and then security. DevSecOps. What's your take? >> I thought it was DevNetSecOps? (man chuckling) >> Okay, I forgot net. Put net in there. Well, the network's abstracted away, you know, as we say. >> Yeah! Well, you know, from my perspective, you know, there are people in their jobs all over the place, right? Like, you know, the more they can feel like they're efficient, and doing great stuff at their work, like, they're happy to bring as many people into the fold as possible. Right? And you know, normally, security's always been this, you know, it's sort of like networking, right? It's always been this sort of isolated, special group over here; that's, you know, traditionally one of the IT bottlenecks that causes us to not be able to get anything done. But, you know, on a community level, we see folks who are interested in security, you know, all the time. I know we've certainly done quite a bit of work with some folks at IBM around one of their products, which I assume Tom will get more into here in just a moment. But from, you know, a community perspective, I mean, we've seen people who've been writing, you know, playbooks and roles and, you know, now collections for, you know, all of the traditional government compliance testing, you know, NIST standards, all of that kind of stuff.
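As an illustration of the compliance-style content Robyn is referring to, a hardening task often looks something like the following. This is a hypothetical sketch, not taken from any specific certified collection; the SSH rule shown is just a common hardening baseline:

```yaml
# Hypothetical compliance-style play: enforce one hardening rule,
# validate the config before saving, and restart the service only
# when the file actually changed.
- name: Enforce SSH hardening baseline
  hosts: all
  become: true
  tasks:
    - name: Disable direct root login over SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
        validate: /usr/sbin/sshd -t -f %s
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Real compliance collections typically bundle hundreds of such rules, mapped to the relevant control identifiers.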
And, you know, it's one of those things, it's part of network effects. And it's actually a great place for automation hub. I think, you know, folks who are on prem, or, you know, any of our customers, are really going to start to see lots of value in how it will be able to connect folks inside the organization, you know, organically; just the place where I'm doing my Ansible things allows them to find each other, really. And, you know, take it from being silos of automation everywhere into a really sort of networked, you know, internal network of Ansible friends and Ansible power users that, you know, can work together and collaborate, you know, just the same way that we do in open source. >> Yeah. And Tom, so IT modernization requires security. What's your take on this? Because, you know, you've got clusters, a lot of clusters, advanced cluster management issues. You've got to deal with the modern apps that are coming. IT's got to evolve. What's your take on all this? >> Yeah. Not only does IT have to evolve, but it's the integration of IT into the rest of the environment, to be able to respond. So, one of the areas that we've put a lot of effort into is the advancement and curation of solutions around security automation. And we've talked about that in the past: the idea of connecting SecOps teams that are doing intrusion detection, or threat hunting, and then responding in an automated way to those threat detections. Right? So connecting SecOps with IT, which has traditionally been siloed operations and siloed teams. And now with this curated Ansible security automation solution that we brought to market with our partners, that connects those two teams in a seamless sort of way. And we've got a lot of work with our friends at IBM around this area, because they have a big focus on security in the products in their portfolio. So we've done a lot of work with them.
We've done a lot of work with lots of our partners, whether it's CyberArk or Microsoft, or whoever. In those areas, traditionally, Ansible's done a great job on sort of compliance around configuration enforcement, right? Setting configuration. Now we've moved into connecting SecOps with IT: security automation. Now, with our acquisition of StackRox, along with our advanced cluster management integration with Ansible, we're starting to say, what are the things inside that DevSecOps workflow that may require integration or automation, or packaged automation, with other parts of the environment? So we're bringing all of those pieces together as we move forward, which is really exciting for us. >> Okay, I've got to ask you guys the number one question that I get all the time, and that I see in the marketplace, kind of a combo question, which is: how do I accelerate the automation of my cloud native development with my traditional infrastructure? Because as people put in greenfield cloud native projects, and then integrate the cloud with the on-premises traditional infrastructure, how do I accelerate those two environments? How do I automate, how do I accelerate the automation? >> It's a great story for us, and it's what we were talking about at last AnsibleFest: what we are bringing together with our advanced cluster management product and the OpenShift platform. Ansible has just been in widespread use in all of the automation of both traditional and cloud native infrastructures, whether it's cloud infrastructure, on-premise storage, compute, network, you name it. Customers are using Ansible to automate all kinds of pieces of infrastructure.
Being able to tie that to their new cloud native initiatives, without having to redo all of that work that they've already done, integrating that infrastructure automation with their cloud native stuff, substantially accelerates what I call the operationalization of their cloud native platforms with their existing IT infrastructure and the existing IT ecosystem. I believe the Ansible Automation Platform plays a key role in connecting those pieces together, without having to redo all that work that's been done and invested. >> Robyn, what's your take on this? This is what people are working on in the trenches. They realize cloud benefits. They've got some cloud native action, and then they've also got the traditional environment, and they've got to get them connected and automated. >> Yeah, absolutely. I mean, you know, the beauty of Ansible, you know, from an end user perspective, is, you know, how easy it is to learn and how easy the language is to learn. And I think, you know, that portability, you know, it doesn't matter, like, how much of a rocket scientist you are, you know? Everybody appreciates simplicity. Everybody appreciates being able to hand something simple to somebody else, and letting other people get things done, and having it be, more or less, it's not quite English, but it's definitely, you know, Ansible's quite readable. Right? And you know, when we started to work on all the Ansible operators, you know, one of the main pieces there was making sure that that simplicity that we have in Ansible is brought over directly into the operator. So, just because it's cloud native doesn't mean you suddenly have to learn, you know, a whole set of new languages. Ansible's just as portable there as it is in any other part of your IT organization, infrastructure, whatever it is that you have going on. >> Well, there's a lot of action going on here at Red Hat Summit 2021.
Things I wanted to bring up, in the context of the show, is the success, and the importance, of you guys having Ansible collections. This has come up multiple times, as we talked about those personas, and you've got these new contributors. You've got people contributing content, as open source continues to grow and be phenomenal. Value proposition. Touch on this concept of collections. What are the updates? Why is it important? Why should folks pay attention to it, and continue to innovate with collections? >> From a commercial perspective, or from a product perspective, collections have made it a lot easier for contributors to create, and deploy, and distribute content. As Robyn mentioned earlier, previous iterations of Ansible had all of those integrations, all of those collections, all within one big group. We called it "batteries included" back in the day, right? That meant that contributors who delivered content with the base Ansible distribution had to wait for the next version of Ansible to come out; that's when their content would get redistributed, with the next version of Ansible. By de-coupling the platform, or engine, and putting that content into collections, individual elements of related integrations, those can move at their own pace. So users, new customers, can get the content they need at the pace their contributors can keep up with. So customers won't have to wait for the next version of the shipping product to get a new version of the integration they really like; they can get it now. So again, de-coupling those things allows them to move at different paces. The engine, or the platform itself, needs to be stable, performant, secure; it's going to move at a certain lifecycle. The content itself, all the different content from the hub, and the network providers, platforms, all of those things can now move at their own pace. Each of those has their own life cycle. It allows us to get more functionality into our customers' hands a lot quicker.
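The decoupled lifecycles Tom describes are typically captured in a requirements file that pins collection versions independently of the ansible-core release. A minimal sketch (the collection names are real, but the version ranges are illustrative):

```yaml
# requirements.yml: collections are versioned and installed
# independently of the ansible-core release cycle.
# Install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: community.general
    version: ">=3.0.0,<4.0.0"
  - name: ansible.posix
    version: "1.2.0"
  # Content can also be pulled from a private automation hub by
  # configuring an additional Galaxy server in ansible.cfg.
```

This is how an organization upgrades one integration today without waiting for, or disturbing, the rest of the platform.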
And then launching our certified program with partners, where we support that content, certified and supported content, helps meet the value that we bring to our customers with the subscription. It's that ecosystem of partners that we work with, who certify and support the stuff that we ship and support with our customers. They benefit both from access to the technology, as well as from the value added in terms of integration, testing and support. >> Robyn, what's your take on the community? I see custom automation and content here. A lot of action going on with collections. >> Yeah, absolutely. You know, it's been interesting, you know? Tom just mentioned, you know, how everything, previously, all had to be released all at once, right? And if you think about, you know, sure, I have Ansible installed, but you know, how often do I have to, you know, just even as a regular, I'm-not-a-system-administrator-these-days type person, like how often do I have to, you know, click that button to update, you know, my Mac or my Linux machine? Or, you know, my Windows machine, or, you know, the operating system on my telephone, right? Every time one of these devices, or programs, or whatever it is that Ansible connects to, connects to something, those things are all operating and, you know, developing themselves at their own paces. Right? So when a new version of, you know, we'll call it Red Hat Enterprise Linux, when a new version of Red Hat Enterprise Linux comes out, if there are new changes, or new features that, you know, we want to be able to connect to, that's not really helpful when we're not releasing for another six months. Right? So it's really helped us, you know, from a community angle, to be able to have each of these collections working in concert with, you know, for example, the Linux subsystems that are actually making the things that will in turn be turned into collections, right? Like, SELinux, or systemd, right?
Like, those things move at their own pace. We can update those at our own pace in collections, and then people can update those collections without having to wait another six months, or eight months, or whatever it is, for a new version of Ansible to come out. It's really made it easier for all of those developers of content to work on their content and their Ansible relationships almost in sync. And make sure that it's not, "I'm going to do it over here, and then I'm going to come back over here and fix everything later." It's more of a continuous development process. >> So, the experience. So the contributor experience is better then? You'd say? >> I'm sorry? >> The contributor experience is better then? >> Oh, absolutely. Yeah, 100%. I mean, I wouldn't say it's like instant satisfaction, but certainly the ability to have a little bit more independence, and be able to release things as you see fit, and not be gated by the entire rest of the project, is amazing for those folks. >> All right. So I'll put you on the spot, Robin. If I'm a developer, bottom line me, what's in it for me? Why should I pay attention to collections? What's the bottom line? >> Well, Ansible is a platform, and Ansible benefits from network effects. The reason that we've gotten as big as we have is sort of like the snowball rolling downhill, right? The more people that latch onto what you're doing, the more people benefit, and the more additional folks want to join in. So, if I was working on any other product that I would consider having automated with Ansible, the biggest thing that I would look at is, well, what are those people also using? Are they automating it with Ansible? And I can guarantee you, 99% of the time, everything else that people are using is also being automated with Ansible.
So you'd be crazy not to want to participate, and make sure that you're providing the best Ansible experience for your application, 'cause for every application or device that we can connect to, there are probably 20 other competitors that make similar applications that folks might consider in lieu of you if you're not providing Ansible content for it. >> Hey, make things easier, simple to use, and you reduce the steps it takes to do things. That's a winning formula, Tom. I mean, when you make things that good, then you get the network effect. But this highlights what you mentioned earlier, about connective tissue. When you use words like "connective tissue," it implies an organizational thing, not just a mechanism. It's not just software, it's people. It's a people experience here in the automation platform. >> Robin: Yep. >> This seems to be the bottom line. What's your take? What's your bottom line view? I'm a developer, what's in it for me? Why should I pay attention to the automation platform? >> What Robin just said is key to me: more people using it. The automation platform, crossing those domains and silos as kind of connective tissue across those teams and personas, means those contributors, those developers creating automation content, get it into the hands of more people across the organization, in a more simplified way, by using Ansible automation. Those personas get access to the automation itself faster, and they get value quicker, without local folks having to reinvent the wheel in terms of automation. (man speaking faintly) They don't want to know about the details of what it takes to configure the network, or configure the storage elements. They rely on those automation developers and contributors to provide that for them. That's one of the powers of the platform, across those teams, across those silos.
So ITOps, SecOps, and NetworkOps can do all of these tasks with the same language and the same automation content, moving faster, and monitoring their core responsibilities without worrying. >> Robin, you wanted to talk about something in the community, any updates? I think Navigator, you mentioned you wanted to mention a plug for that? >> Absolutely! So, much like any other platform in the universe, if you don't have really great tools for developing content, you're kind of dead in the water, right? Or you're leaving it to fate. So we've been working on a new project, not part of the product yet, but it's sort of in a community, exploratory phase. A release-early, release-often, or, you know, minimum viable product, I guess, might be the other way to describe it currently. It's called Ansible Navigator. It's a TUI, which is like a GUI, but it's got sort of a terminal user interface look to it. It's a sort of interface where you can develop content all in one window. Have your documentation accessible to you. Have all of your test results available to you in one window, rather than, "I'm going to do something here, and then I'm going to go over here, and now I'm not sure, so now I'm going to go over here and look at docs instead." It's all in one place. I know the folks who have seen it already have been like, (woman squealing) but, you know, it's definitely in early community stages right now. We can give you the link, it's github.com/ansible/ansible-navigator >> A TUI versus a GUI, versus a command line interface. >> Yeah! >> How do you innovate on the command line? It's a "CUI," or a? >> Yeah!
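For reference, the project Robin mentions reads its settings from an ansible-navigator.yml file next to your content. A minimal sketch, assuming current ansible-navigator conventions (exact keys may vary by version):

```yaml
# ansible-navigator.yml -- hypothetical minimal settings file
ansible-navigator:
  mode: interactive        # the TUI mode, vs. plain stdout output
  playbook-artifact:
    enable: true           # keep run results reviewable afterward
```

Running a playbook with `ansible-navigator run site.yml` then opens the content, documentation, and results in the single-window interface described above.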
>> It's, you know, there are so many IDEs out there, and I think Tom can probably talk to some of this, how that might relate to VS Code or many of the other traditional developer IDEs that are out there. But the goal is certainly to be able to integrate with some of those other pieces. And it's one of those things where, if everybody's using the same tool and we can start to enforce higher levels of quality and standards through that tool, there are benefits for everyone. Tom, I don't know if you want to add on to that in any way? >> Yeah, it's just one of our focus areas here, which is making it as easy as possible for contributors to create Ansible automation content. And part of that is tooling, meaning an SDK, an SDK for Ansible, that enables developers and contributors to use IDEs to build and deploy automation content. So, I'm really focused on making that contributor's job easier. >> Well, thanks for coming on, Tom and Robin. Thanks for sharing the insight here at Red Hat Summit 21, virtual. You guys continue to do a great job with the success of the platform, which has been consistently growing and having great satisfaction with developers, and now ops teams, and sec teams, and net teams. Unifying these teams is certainly a huge priority for enterprises, because at the end of the day, cloud-scale is all about operating, which means more standards, more operations. That's what you guys are doing. So congratulations on the continued success. Thanks for sharing. >> Thanks for having us. >> Okay, I'm John Furrier here in theCUBE. We are remote with CUBE Virtual for Red Hat Summit 2021. Thanks for watching. (upbeat electronic music)